The DCI software pattern

If we were to ask about your fond remembrances of your last account funds transfer, what would you report? A typical response to such a question takes the form, "Well, I chose a source account and a transfer amount, and then I chose a destination account, and I asked the system to transfer that amount between the accounts."

Notice that few people will say, "I first picked my savings account, and then an amount, and then picked my investment account." Some respondents may actually say that, but going to that level artificially constrains the problem.

If we look at such scenarios for any pair of classes, they will be the same, modulo the classes of the two accounts. The fact is that we all carry, in our heads, a general model of what a funds transfer means, independent of the types of the accounts involved.

It is that model—that interaction—that we want to mirror from the user's mind into the code. So the first new concept we introduce is roles. Whereas objects capture what objects are, roles capture collections of behaviors that are about what objects do.

Actually, it isn't so much that the concept of roles is new as that it is unfamiliar. The interactions that weave their way through the roles are also not new to programming: we call them algorithms, and they are probably the only design formalism that predates data in having its own vocabulary and rules of thumb.

What's interesting is that we consciously weave the algorithms through the roles. It is as if we had broken down the algorithm using good old procedural decomposition and drawn the lines of decomposition along role boundaries. We do the same thing in old-fashioned object modeling, except that we draw the lines of procedural decomposition—the methods—along object boundaries.

Unfortunately, object boundaries already mean something else: they are loci of encapsulated domain knowledge, of the data. There is little that suggests that the stepwise refinement of an algorithm into cognitive chunks should match the demarcations set by the data model. Old-fashioned object orientation forced us to use the same mechanism for both demarcations, and this mechanism was called a class.

One or the other of the demarcating mechanisms is likely to win out. If the algorithmic decomposition wins out, we end up with algorithmic fragments landing in one object but needing to talk to another, and coupling metrics suffer.

If the data decomposition wins out, we end up slicing out just those parts of the algorithm that are pertinent to the topic of the object to which they are assigned, and we end up with very small, incohesive methods. Old-fashioned object orientation explicitly encouraged the creation of such fine-grained methods; for example, a typical Smalltalk method is three statements long. Roles provide natural boundaries to carry collections of operations that the user logically associates with each other.

If we talk about the Money Transfer example and its roles of Source Account and Destination Account, the Use Case describes the transfer in terms of those two roles. The designer's job is to transform this Use Case into an algorithm that honors design issues such as transactions.

The resulting algorithm is almost a literal expansion of the Use Case. That makes it more understandable than if the logic were spread over many class boundaries that are arbitrary with respect to the natural organization of the logic—as found in the end user's mental model.
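The article's own listing is not reproduced here. A minimal Python sketch of such a methodful-role algorithm might look like the following; all identifiers (`MoneySource`, `decrease_balance`, and so on) are illustrative assumptions, not the original code, and the guard stands in for the transactional design issues the text mentions:

```python
class InsufficientFunds(Exception):
    pass

class MoneySource:
    """Methodful role: whatever object plays the source side of a transfer.

    The role assumes the player supplies balance, decrease_balance, and
    increase_balance (hypothetical names for the 'dumb' account operations).
    """

    def transfer_to(self, destination, amount):
        # Almost a literal expansion of the Use Case: check funds,
        # withdraw from the source, deposit into the destination.
        if self.balance < amount:
            raise InsufficientFunds("source balance too low")
        self.decrease_balance(amount)
        destination.increase_balance(amount)
```

Note that the role's method speaks only in terms of the small operations any player of the role must supply; it never names a concrete account class.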

We call this a methodful role—a concept we explore more thoroughly in the next section. At their heart, roles embody generic, abstract algorithms. They have no flesh and blood and can't really do anything. At some point it all comes down to objects—the same objects that embody the domain model.

The fundamental problem solved by DCI is that people have two different models in their heads of a single, unified thing called an object.

They have the what-the-system-is data model that supports thinking about a bank with its accounts, and the what-the-system-does algorithm model for transferring funds between accounts.

Users recognize individual objects and their domain existence, but each object must also implement behaviors that come from the user's model of the interactions that tie it together with other objects, through the roles it plays in a given Use Case. End users have a good intuition about how these two views fit together. That, too—the mapping between the role view and the data view—is part of the user's cognitive model. We call it the Context of the execution of a Use Case scenario.

We depict the model in Figure 3. These artifacts capture the basic architectural form, to be filled in as requirements and domain understanding grow. At the top we find roles that start as clones of the role abstractions on the right, but whose methods are filled in. For a concept like a Source Account in a Money Transfer Use Case, we can define some methods independently of the exact type of object that will play that role at run time.

These two artifacts together capture the end user model of roles and algorithms in the code. On the left we have our old friends, the classes. Both the roles and classes live in the end user's head. The two are fused at run time into a single object. Since objects come from classes in most programming languages, we have to make it appear as though the domain classes can support the business functions that exist in the separate source of the role formalisms.

At compile time programmers must face the end user's models both of Use Case scenarios and the entities they operate on. We want to help the programmer capture those models separately in two different programming constructs, honoring the dichotomy in the end user's head.

We usually think of classes as the natural place to collect such behaviors or algorithms together. But we must also support the seeming paradox that each of these compile-time concepts co-exists with the other at run time in a single thing called the object. This sounds hard, but even end users are able to combine parts of these two views in their heads.

That's why they know that a Savings Account—which is just a way of talking about how much money I can access right now through a certain key called an account number—can be asked to play the role of a Source Account in a Money Transfer operation. So we should be able to snip operations from the Money Transfer Use Case scenario and add them to the rather dumb Savings Account object.

Figure 3 shows such gluing together of the role logic (the arcs) and the class logic (the rounded rectangles). Savings Account already has operations that allow it to carry out its humble job of reporting, increasing, or decreasing its balance.

These latter operations it supports at run time from its domain class (a compile-time construct). The more dynamic operations related to the Use Case scenario come from the roles that the object plays.

The collections of operations snipped from the Use Case scenario are called roles. We want to capture them in closed form source code at compile time, but ensure that the object can support them when the corresponding Use Case comes around at run time.

So, as we show in Figure 4, an object of a class supports not only the member functions of its class, but also can execute the member functions of the role it is playing at any given time as though they were its own.

That is, we want to inject the roles' logic into the objects so that they are as much part of the object as the methods that the object receives from its class at instantiation time.

Here, we set things up so each object has all possible logic at compile time to support whatever role it might be asked to play. However, if we are smart enough to inject just enough logic into each object at run time, just as it is needed to support its appearance in a given role, we can do that, too.
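As a hedged sketch of the second option, the following Python fragment injects a role's methods into a single object at run time, just before it is needed in a Use Case. The names (`TransferSource`, `inject_role`) are assumptions for illustration; the article itself postpones its actual mechanism to the next section.

```python
import types

class TransferSource:
    """Role logic, written once at compile time."""
    def send_transfer(self, destination, amount):
        self.balance -= amount
        destination.balance += amount

class Account:
    """Dumb domain object: knows only its balance."""
    def __init__(self, balance):
        self.balance = balance

def inject_role(obj, role):
    """Bind the role's methods to one object, just for the Use Case at hand.

    A sketch only: a fuller version would also retract the methods once
    the Use Case scenario completes.
    """
    for name, member in vars(role).items():
        if isinstance(member, types.FunctionType):
            setattr(obj, name, types.MethodType(member, obj))
    return obj
```

Only the one object acquires `send_transfer`; other Account instances, and the Account class itself, remain untouched.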

When I go up to an ATM to do a money transfer, I have two objects in mind (let's say that they are My Savings Account and My Investment Account), as well as a vision of the process, or algorithm, that takes money from some Source Account and adds it to some Destination Account in a way that is agreeable to both me and the bank.

It's probably true that My Savings Account isn't actually an object in a real bank, but it probably is an object within the realm of the ATM. Even if it isn't, there are some nice generalizations in DCI that cause it not to matter.

I also have a notion of how to map between these two. I establish that mapping, or context, as I interact with the ATM. First, I probably establish that I want to do a funds transfer. That puts a money-transfer scenario in my mind's "cache," as well as bringing some kind of representation of the roles and algorithms into the computer memory. We can capture these scenarios in terms of roles, as described above. Second, I probably choose the Source Account and Destination Account for the transfer.

In the computer, the program brings those objects into memory. They are dumb, dumb data objects that know their balance and a few simple things like how to increase or decrease their balance. Neither account object alone understands anything as complex as a database transaction: that is a higher-order business function related to what-the-system-does, and the objects individually are about what-the-system-is.
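As a hedged illustration, such a dumb data object might amount to no more than the following (the class and attribute names are assumptions, not the article's code):

```python
class Account:
    """What-the-system-is: a deliberately dumb data object.

    It knows its balance and how to change it; it knows nothing about
    transfers, roles, or database transactions.
    """

    def __init__(self, account_id, balance=0):
        self.account_id = account_id
        self.balance = balance

    def increase_balance(self, amount):
        self.balance += amount

    def decrease_balance(self, amount):
        self.balance -= amount
```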

The higher-level knowledge doesn't live in the objects themselves but in the roles that those objects play in this interaction. Now I want to do the transfer. Imagine that we could magically glue the member functions of the roles onto their respective objects, and then just run the interaction.

Each role "method" would execute in the context of the object into which it had been glued, which is exactly how the end user perceives it. In the next section of this article we'll look at exactly how we give the objects the intelligence necessary to play the roles they must play; for the time being, imagine that we might use something like delegation or mix-ins or Aspects.

In fact, each of these approaches has at least minor problems, and we'll use something else instead, but the solution is nonetheless reminiscent of all of these existing techniques.

The arrow from the Controller and Model into the Context just shows that the Controller initiates the mapping, perhaps with some parameters that give hints about the mapping, and that the Model objects are the source of most mapping targets.

The Methodless Roles are identifiers through which application code in the Controller and in Methodful Roles accesses objects that provide services available through identifiers of that type. This becomes particularly useful in languages with compile-time type checking, as the compiler can provide a modicum of safety that ensures, at compile time, that a given object can and will support the requested role functionality.
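In Python, the closest analogue of a methodless role is a structural type. The sketch below uses `typing.Protocol` (the role and class names are assumptions): a static checker such as mypy then provides the modicum of compile-time safety the paragraph describes, verifying that an object can support the requested role functionality.

```python
from typing import Protocol

class SourceAccount(Protocol):
    """Methodless role: only the identifier/interface through which the
    Controller and methodful roles see whatever object plays the role."""
    balance: float
    def decrease_balance(self, amount: float) -> None: ...

class Savings:
    """Any object that structurally satisfies the role can play it."""
    def __init__(self, balance: float) -> None:
        self.balance = balance
    def decrease_balance(self, amount: float) -> None:
        self.balance -= amount

def debit(source: SourceAccount, amount: float) -> None:
    # A static checker verifies, before run time, that the argument
    # supports the role's operations; at run time this is plain duck typing.
    source.decrease_balance(amount)
```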

By this time, all the objects necessary to effect the transfer are in memory. As indicated above, the end user also has a process or algorithm in mind to do the money transfer in terms of the roles involved.

We need to pick out code that can run that algorithm, and then all we have to do is line up the right objects with the right roles and let the code run. As shown in Figure 5, the algorithm and the role-to-object mapping are owned by a Context object. The Context "knows" how to find or retrieve the objects that become the actual actors in this Use Case, and "casts" them into the appropriate roles in the Use Case scenarios (we use the term "cast" at least in the theatrical sense, and conjecturally in the sense of some programming language type systems).

In a typical implementation there is a Context object for each Use Case, and each Context includes an identifier for each of the roles involved in that Use Case.

All that the Context has to do is bind the role identifiers to the right objects. Then we just kick off the trigger method on the "entry" role for that Context, and the code just runs.
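A minimal Python sketch of such a Context follows; all names (`MoneyTransferContext`, `transfer_to`, the `_cast` helper) are assumptions for illustration, since the article does not show an implementation here:

```python
import types

class TransferMoneySource:
    """Methodful role: the transfer algorithm, expressed in role terms."""
    def transfer_to(self, destination, amount):
        self.balance -= amount
        destination.balance += amount

class Account:
    def __init__(self, balance):
        self.balance = balance

class MoneyTransferContext:
    """One Context per Use Case: it binds role identifiers to the objects
    that will play them, then kicks off the trigger method on the
    'entry' role."""

    def __init__(self, source, destination, amount):
        # Bind the role identifiers to the right objects ("casting").
        self.source = self._cast(source, TransferMoneySource)
        self.destination = destination
        self.amount = amount

    @staticmethod
    def _cast(obj, role):
        # Glue the role's methods onto the object for this Use Case.
        for name, fn in vars(role).items():
            if isinstance(fn, types.FunctionType):
                setattr(obj, name, types.MethodType(fn, obj))
        return obj

    def execute(self):
        # Trigger method on the entry role; the code "just runs".
        self.source.transfer_to(self.destination, self.amount)
```

A Controller would simply construct the Context with the chosen accounts and call `execute()`.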

It might run for nanoseconds or years—but it reflects the end user model of computation. As shown in Figure 5, we can think of the Context as a table that maps a role member function (a row of the table) onto an object method (the table columns are objects). The table is filled in based on programmer-supplied business intelligence in the Context object, which knows, for a given Use Case, what objects should play what roles.

A method of one role interacts with other role methods in terms of their role interfaces, and is also subject to the role-to-object mapping provided by the Context.

The code in the Controller can now deal with business logic largely in terms of Contexts: any detailed object knowledge can be written in terms of roles that are translated to objects through the Context. One way of thinking about this style of programming is that it is a higher-order form of polymorphism than that supported by most programming languages.

The goal of our paper is to present some practical experiences with the DCI design pattern. We used the DCI pattern in an exemplary application.

Some remarks on the use of the DCI design pattern in software development are also given.



