Reusable domain models

Reusable software components let you focus on your task. At any given moment you need to accomplish something that is important for the customer, and for all the not-so-important tasks you need a basic solution - one that can be improved later, or left as is if it is good enough.

When working with domain models, the problem is the same. You don't want to start from an empty page. Every real business application works in a context that is assumed to be known - there are people, organizations, products, sales, contracts, tasks, schedules, etc. When you start working on the domain model, you are unlikely to want to focus on these entities - you would rather spend your time on your core tasks. So a library of basic domain classes would help you start your work.

XAF allows you to extend such a library with UI pieces, so you will have not just a library of domain objects, but a complete application built around it that you can extend and customize.

So our team has this task at hand - to provide a library of domain classes and related XAF modules for you to reuse. What could such a library look like?

First, there will be sub-domains of classes that encapsulate a particular part of the functionality. I will call them Modules. For instance, if you want to work with tasks, you would add the Task module to your domain model. But Task is not an isolated thing. Tasks are assigned to people or groups of people, so the Task module references another module (let's call it Party).

public class Task : XPObject {
  Party assignedTo;
  public Party AssignedTo {
    get { return assignedTo; }
    set { assignedTo = value; }
  }
}

While the Task module references the Party module, it does not really care how Party is implemented. In fact, there can be several Party modules for different organization structures. Some developers may be happy with just a plain list of users to assign the tasks to.

At the same time, the Party class should not be aware of Task, because a particular application may not use Task at all. But after adding the Task module, it will be nice to have something like a Party.AssignedTasks collection.

public class Party : XPObject {
  public XPCollection AssignedTasks {
    get { return GetCollection("AssignedTasks"); }
  }
}

It is not a problem to create a library that contains both modules. If you don't need the Task part, just don't use those properties and classes. But there is still a problem with module variations. If you have two variants of the Party implementation and two variants of Task, you will end up with four libraries - and for a real library this would mean thousands of combinations.

I see several possible solutions:

1. Use an untyped reference.

If we don't know the referenced type - we can make it untyped - something like this:

public class Task : XPObject {
  XPWeakReference assignedTo;
  public object AssignedTo {
    get { return assignedTo == null ? null : assignedTo.Target; }
    set { assignedTo = new XPWeakReference((IXPSimpleObject)value); }
  }
}

In addition, the UI should somehow be configured to show only the allowed candidates for the assignment.

To show the tasks assigned to a given party, we have to use filtering criteria. Since there is no association defined, the criteria will sound like "find all tasks whose AssignedTo refers to this Party instance".
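As a minimal sketch of that lookup (the helper name and the in-memory scan are my assumptions, not a committed API), it could start out like this; a real implementation would want to push the comparison into a server-side criterion instead of loading every task:

```csharp
using System.Collections;
using DevExpress.Xpo;

public class TaskQueries {
    // Hypothetical helper: find all tasks whose weak reference
    // resolves to the given party instance. This scans in memory;
    // a criteria expression against the weak reference's stored key
    // would be preferable if XPWeakReference exposes it for querying.
    public static IList GetAssignedTasks(Session session, Party party) {
        ArrayList result = new ArrayList();
        foreach (Task task in new XPCollection(session, typeof(Task)))
            if (ReferenceEquals(task.AssignedTo, party))
                result.Add(task);
        return result;
    }
}
```

The cost of the untyped reference shows up here: without a declared association there is no ready-made collection, so every consumer has to re-implement this query.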

2. Dynamically define the associations.

XPO allows you to extend its metadata dynamically, so instead of hard-coding the association between Task and Party, it can be created dynamically when the modules are registered. XAF will be able to show these members as normal members, but you will have no typed access to them - only through the GetMemberValue and SetMemberValue methods.
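A sketch of how that registration could look (the member name "AssignedTo" and the registration entry point are my assumptions; CreateMember is the XPO metadata API, and GetMemberValue/SetMemberValue are the untyped accessors mentioned above):

```csharp
using DevExpress.Xpo;
using DevExpress.Xpo.Metadata;

public class TaskModuleRegistration {
    // Called once when the Task module is registered, assuming the
    // Party module is also present in the application.
    public static void RegisterAssociation(XPDictionary dictionary) {
        XPClassInfo taskInfo = dictionary.GetClassInfo(typeof(Task));
        // Guard against registering the member twice.
        if (taskInfo.FindMember("AssignedTo") == null)
            taskInfo.CreateMember("AssignedTo", typeof(Party));
    }
}

// Later, access is untyped only:
//   task.SetMemberValue("AssignedTo", someParty);
//   Party assignee = (Party)task.GetMemberValue("AssignedTo");
```

The loss of typed access is the trade-off: the compiler can no longer check the member name or the value type for you.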

3. Use persistent interfaces.

Interfaces are a natural solution for non-persistent objects, so a persistent solution could look like this:

public class Task : XPObject {
  ITaskTarget assignedTo;
  public ITaskTarget AssignedTo {
    get { return assignedTo; }
    set { assignedTo = value; }
  }
}

public interface ITaskTarget {
  XPCollection AssignedTasks { get; }
}

Since Party does not know anything about Task, I have to create a new party class that is aware of it:

public class MyParty : Party, ITaskTarget {
  public XPCollection AssignedTasks {
    get { return GetCollection("AssignedTasks"); }
  }
}

Note that XPO does not currently implement persistent interfaces, and I don't know if this can be done at all. The main problem I see is that in a given application several classes can implement ITaskTarget, so XPO would need a way to perform queries against interfaces.

4. Create several reference implementations and give away sources that developers can combine.

This approach may seem simple, but at the same time I have no idea how we could possibly support it. It won't be a well-tested component that plugs into your system and just works. Instead of having less code to maintain, you will have more.

These are the approaches I am currently considering. We have not chosen one yet, so your feedback will help us a lot. Please leave a comment if you prefer one of the approaches I listed, or if you have something different in mind.

5 comments
Steven Rasmussen
Given that this is probably the hardest to implement (and maybe impossible as stated above), I think that supporting persistent interfaces would be the best approach.
30 August, 2006
Ryan Britton
A very difficult conundrum. Given the complication of the third case and the lack of usability of the first two, would an alternative be to create a modelling tool within XAF that the domain objects can be added to, which labels the places where domain objects need to be coupled/decoupled to form associations?

So then when the Party object is added to the modelling tool, it marks the AssignedTasks property with a (funky red) "modelling error" until the property is associated with another type or hidden from the model (in the case where there is no Task type in the domain). This implies that a complement of metadata tags (attributes) would be needed to describe "open relations" and the like within each object, but this may be easier to model and extend than the previously mentioned options.

Just a thought....

31 August, 2006
Dan Vanderboom
Sounds much like an idea I came up with that I call Interlocking Smart Schemas, the idea being that they are treated like jigsaw puzzle pieces.  Having built a plug-in architecture from scratch, I have seen the dramatic improvements in flexibility and extensibility.  There is a fundamental problem with requiring a whole data model to be correct the first time around, or for sections of it to be so entangled that they can't evolve independently.

That being said, the challenge is a great one!  But it can be done.  What we're talking about here are optional associations to non-implemented domain classes.  In add-in systems, interfaces save the day.  My controller add-in can tell the AddInService that it requires an IBarcodeDevice, without writing the logic for that until later or committing to a specific device.  Several implementations will do, so long as the right interface is used.  The fact that a domain class maps to a set of database tables presents challenges, but doesn't change the basic design pattern.

I'm not sure that you need to "make interfaces persistent", or that such a statement even makes sense.  The interface is used as a placeholder for the real class, and when all of the real domain classes load into your AppDomain, references will point to real objects.

What questions will get us closer to a solution?

What aspects of a schema need to be specified to meet the requirements of a business logic controller?

What schema variability is allowed that won't cause problems?

How must XPO be modified to build (or rebuild) metadata only after all schema assemblies are loaded?  What other assumptions has XPO made that get in the way, and what alternatives are there?

Should domain classes be packaged with the controllers that specify their need for them?  Or should controllers contain interfaces as a way of specifying the schemas that they need?

Can a physical schema be derived/created by merging the declarative requirements (attributes?) for multiple controllers that have overlapping schema needs?

Beyond the physical impact of the schema footprint on the database, in the form of tables, indices, and constraints, what validation and other logic within the domain model assembly could negatively impact the controller assembly that attempts to use it?  In other words, some schema assemblies are Smart Schemas and encapsulate schema-centralized business logic to raise the level of abstraction of the data access platform for a whole software system.  If you're only thinking of piecing together "dumb data containers", you may lose out on a lot of value.  On the other hand, it might be a necessary first step.

If only one class for each interface type were allowed, you probably wouldn't need to piece them together manually at all in a designer.  They'd be able to find each other once they load and link up automatically (that's what the add-ins in my current architecture do).  But you'd have to evaluate whether this is practical with interlocking schemas.

Once you have the right set of questions, the answers will come.  :)
18 September, 2006
Roman Eremin (DevExpress)
Dan, thank you for your detailed comment.

In general, using interfaces solves only half of the problem: I can reuse the schema, but I cannot easily reuse the business logic. After all, my reusable Person is not just FirstName and LastName fields, but also some logic behind them (like formatting the FullName). Of course, the logic can be moved to some helper class, but that makes implementing Person awkward.
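To make this concrete, here is a trivial sketch of the kind of "logic behind the fields" I mean:

```csharp
using DevExpress.Xpo;

public class Person : XPObject {
    string firstName;
    string lastName;
    public string FirstName {
        get { return firstName; }
        set { firstName = value; }
    }
    public string LastName {
        get { return lastName; }
        set { lastName = value; }
    }
    // Reusing Person should mean reusing this logic too,
    // not re-implementing it in every application.
    public string FullName {
        get { return firstName + " " + lastName; }
    }
}
```

An interface can describe the FirstName and LastName members, but the FullName implementation lives in the class - and that is the part I want developers to get for free.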

I am still thinking about which approach to take. Templates and code generation (DSL tools, partial classes) are good candidates too.

I want domain model reuse to be both easy and natural for the developer, meaning that the result should be a typed, final domain model.

I'm preparing a post that illustrates all fundamental problems we are trying to solve.
21 September, 2006
Dan Vanderboom
Roman, one possibility would be to create mock classes for the types you expect to be supplied elsewhere, instead of using an interface.  One way to do this is to create some additional attributes for the class and its members.  Perhaps like this:

public class InventoryUnit : XPObject {
  // ...
}

This could signal XPO that the class is being used as a stub, but the actual type used (maybe same class name, but different namespace) would be supplied in a separate assembly.  (If no separate assembly is supplied, the stub class itself could be used.)

The other thought I had was to supply attributes such as MinimumSize for string properties.  By declaring what minimum requirements must be met for the schema chunks to be usable, the framework could check against the actual schema classes for compatibility.  Something analogous could be used for other types, specifying minimum ranges, etc.

At first glance, it appears a hassle that you would have to define classes that you hoped would be supplied in other assemblies, but not if you think of these XPInterface classes as boundary classes.  If each persistent class is represented by a letter in the diagram below, with hyphens being object relationships (foreign keys), and each line of text represents a separate assembly:

Assembly1:   A-B-C
Assembly2:   C-D-E-F-G-H
Assembly3:   H-I-J

... they could be combined to form a single schema:

Combined:    A-B-C-D-E-F-G-H-I-J

Separation of logic needs to be heeded here.  Business logic can be found in XPO schema assemblies, but I think this calls for an investigation of and agreement on what kind of logic appears in these schemas.  Business logic comes in various flavors: there is the business transaction logic itself, which is very specific to that use case, and there is also model verification and update logic which is really a behavior of a reusable platform.

From a design perspective, the model should only contain the logic necessary to enforce the validity of an object and its relationship to other persistent objects in the data store, or to create or edit other objects to maintain model state correctness.  For example: when InventoryUnits are moved from one location to another, a new InventoryUnitActivity object needs to be created to track that activity.  This is handled transparently within the data model assembly, and it occurs quietly so that no transaction using the object need know about it.  Other logic validates property value changes and checks the full object state before saving.  When any of this validation fails, the reason why should bubble up to whichever object is using or binding to this data object, where it can respond appropriately.  If the logic in these classes is limited to model validation and model update logic, I can't think of any reason it should present any difficulties.  The model assembly can be stored along with a comment describing the schema and any model logic in it for the developer stitching them together.  The schemas may provide different validation, but that's why the software architect chooses one version or flavor over another.

However, data related to the transaction itself should be located within the transaction controller class.  As soon as business- and transaction-specific logic creeps into the model, the model starts losing reusability.  The data model assembly stops being just a platform, and now becomes entangled with the application using it.

There seems to be a good case for making these schema assemblies open source.  It would be quite powerful to have a community of people creating and sharing pieces of schemas, which could be linked creatively to other schemas.  Being open source, it would be easy to make modifications to the logic within them during that process.

The base definition for the class as a dumb data container could be placed in one code file, but the validation logic might be better supplied in a separate file using partial classes.  Another route would be to take advantage of inheritance: create data classes with no logic in them (just structure), and create separate assemblies that inherit from those data classes and supply the model's logic there.  This would provide separation of structure from logic on the data model layer.

The other important aspect of this is discoverability of model compatibility for transaction controllers.  Transaction A might require the combined standard schemas X, Y, and Z, at structure compatibility level, or full validation compatibility.

Overall, this is not a simple undertaking.  But I have hope because 1) a lot of the complexity can be absorbed into the framework, 2) new design tools will address discoverability, and 3) the extra work needed to create interlocking schemas will be worth it: this could launch us into a whole new era of componentized data modeling, making systems design exponentially more flexible and extensible.  The opportunity for application integration on an entirely new level comes within reach.  Do you have a transaction controller plugin that needs some subset of schema?  Does it specify that in a programmatically-discoverable way?  Can it query a data store for compatibility?  Could it not then integrate into a shared data model?
25 September, 2006
