Friday, November 11, 2005

What is meant by “Service Opacity” and “Managed Transparency”?

Okay – so you all know about Service Oriented Architecture (SOA) and have your own ideas of what it is. While we are all likely to disagree on the finer points of SOA, most will agree on a few core tenets. If you want to discuss this with me, the standard contribution of a vintage Bordeaux applies.

Core to SOA is the existence of services as autonomous entities that act as a boundary between some “functionality” and the entity that consumes that functionality via the service. Service implementers should take care to design services to be as opaque as possible. This means that as a service consumer, you should talk to the service but not really care about how the service implements the functionality it provides. This black-box aspect of SOA is really a specialized notion of the definition of software architecture in the great green book “Documenting Software Architectures: Views and Beyond” by Clements, Bachmann et al. In this book, software architecture is defined as “the structure or structures of the system, which consist of elements and their externally visible properties, and the relationships among them”. The key words here are “externally visible properties”. A service provider adhering to this basic axiom should strive to reveal only those externally visible properties that are critical for the consumer to know, but no more.

Critique: That is a broad statement. Why? As with anything in technology, don’t run out and do it just because I said so. Question everything. Critique this if you don’t agree.

Rationale: Services provide a healthy abstraction between functionality and those using the functionality. By not revealing the specific implementation behind the service, no consumer of that service has to create a tighter integration than necessary. This frees the service provider to implement, maintain, replace, or update the functionality behind the service with the least amount of concern for dependencies from consumers. As long as the replacement or update still supports the existing service interface, the consumer will not notice the change. Please note that I qualified my statement with “than necessary” – the exact level is specific to each implementation, and it is unlikely anyone could create a hard and fast rule for it.

Specific details of how the service is fulfilling its functions are generally not relevant to the consumer either. One should not make assumptions about specific interactions that may happen behind the service and should stick only to the externally visible real-world effect (RWE). A service is a black box with an interface. As long as the interface allows you to do what you want, you shall be happy. Like this blog: all you need to know is that you enter a URI into the location window of your browser and you get this content back, formatted in HTML. You should not care whether this is static text residing on an Apache server or whether I just typed it into a BlackBerry really fast to give it to you in real time.
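To make the black-box idea concrete, here is a minimal sketch in Python. All names here (ContentService and the two backends) are hypothetical, invented purely for illustration; the point is that the consumer codes against the externally visible interface and never sees which implementation sits behind it.

```python
from abc import ABC, abstractmethod

class ContentService(ABC):
    """The externally visible property: a URI goes in, HTML comes out."""

    @abstractmethod
    def get_content(self, uri: str) -> str:
        ...

class StaticFileBackend(ContentService):
    # One possible implementation detail: serve a stored copy.
    def get_content(self, uri: str) -> str:
        return "<html>stored copy of %s</html>" % uri

class LiveAuthorBackend(ContentService):
    # Another implementation detail: generate the content on the fly.
    def get_content(self, uri: str) -> str:
        return "<html>typed in real time for %s</html>" % uri

def consumer(service: ContentService, uri: str) -> str:
    # The consumer talks only to the interface. Swapping the backend
    # requires no change on this side of the boundary.
    return service.get_content(uri)
```

Either backend can be dropped behind the interface; the consumer function is unchanged, which is exactly the freedom the rationale above argues for.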


Some people have advocated that a service must provide a guarantee of delivery assurances between the service itself and the ultimate (application) destination. This delivery assurance effectively suggests that a service must deliver messages it receives to another destination, possibly with additional guarantees such as “in order” and “eliminate duplicates”, amongst other functions. I strongly disagree with this for a number of reasons. Is this really an externally visible property of a service we should expose? Perhaps in some cases it can be justified, but I have trouble making any normative statement along these lines that applies to all cases. Functionality similar to this may be a higher-level matter, which I will try to explain at the end of my rant below.


First, even specifying in a standard or protocol that there is an “application destination” behind a service is an error, since it implies a specific model for the infrastructure implemented behind the service interface. If this were accepted within the standard, would it imply that all implementations must now have a specific architecture where nothing can be processed locally by the service?

Second, assume you are going to send a sequence of six messages to a specific service and invoke a delivery assurance/guarantee that all messages in that sequence are delivered “in order”. This implies a cardinality of 1:1 between the service and the “thing” that must process the messages. Does this disallow the service itself from processing part of the message?

Third, this model constrains implementers from dividing up an incoming message and distributing it to several processing classes or applications. The mechanism as it stands today essentially requires a sequence number for each incoming message, which must monotonically increase by one. The intention is that the service will forward incoming messages to the application destination in the same order it received them. A cardinality of 1:1 is sub-optimal for scaling, IMO. What about pooling? Does a pool count as one application or several? And why should the service consumer care about this?
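As a strawman, the strict model just described – one destination, sequence numbers increasing by one, messages forwarded whole in arrival order – can be sketched as follows. The class and method names are hypothetical, not from any specification.

```python
class InOrderForwarder:
    """The implied infrastructure: one service, exactly one destination."""

    def __init__(self, destination):
        self.destination = destination  # 1:1 cardinality baked in
        self.next_seq = 1               # sequence numbers increase by one
        self.pending = {}               # out-of-order arrivals wait here

    def receive(self, seq, message):
        self.pending[seq] = message
        # Forward each message intact, as a whole unit, only once every
        # lower-numbered message has arrived.
        while self.next_seq in self.pending:
            self.destination.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
```

Note that the service here does nothing but buffer and forward; it cannot process anything locally or split work across destinations, which is precisely the constraint I am objecting to.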

The UML model above captures the implied infrastructure. It implies that all incoming messages in a sequence flagged with “in order” delivery MUST be forwarded intact, as whole units, to one application destination. Why should an implementer not be able to forward messages 1 and 3 to one application destination and messages 4 and 6 to another, then simply coordinate those activities to ensure they are “processed” in order? Is the second diagram below illegal if a guarantee of “in order” delivery is requested? Note that in the following diagram, I have replaced Service.forward() with Service.coordinateProcessing(), since this is really what the service consumer is after. After all, even if messages *are* delivered “in order”, there is no guarantee that they have been processed in order.

Note: Please excuse my poor UML style of multiple ApplicationDestination classes but I felt compelled to depict it this way to illustrate the splitting of various messages to different endpoints.
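The coordination idea behind Service.coordinateProcessing() can be sketched like this. It is a minimal illustration with invented names, assuming one worker thread per destination: messages may be routed to different destinations, but a shared turnstile ensures the real-world effect happens in sequence order.

```python
import threading

class ProcessingCoordinator:
    """Messages may land on any destination; processing is still ordered."""

    def __init__(self):
        self.next_seq = 1
        self.cond = threading.Condition()
        self.log = []  # the ordered real-world effect

    def process(self, seq, message, destination_id):
        with self.cond:
            # Each destination waits for its turn, regardless of which
            # destination the message was routed to.
            self.cond.wait_for(lambda: self.next_seq == seq)
            self.log.append((destination_id, message))
            self.next_seq += 1
            self.cond.notify_all()
```

Here the 1:1 cardinality disappears: any number of destinations can share the work, and the consumer still gets the only guarantee it actually cares about – in-order processing.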

An operation such as “forward()” (from the first diagram) should not be a mandatory externally visible property of a service. It implies a specific model for implementing a service, and specifications that deal with service interoperability should not try to impose specific implementation styles on service providers. My argument does not mean that implementers cannot do this if they see fit or determine that it is necessary, but it is not a one-size shoe into which all should be forced to fit.

A higher level look at the goal.

The real goal is to allow a service consumer to flag that a sequence of messages is meant to be “processed sequentially”. The consumer should not care how that is done, just that it is done or, failing that, that the service generates a fault. What mechanisms do we need for this?

1. We need to ensure that the message on the wire between the service consumer and the service carries all the tokens that enable the service to understand what type of serial/cardinal processing assurances the consumer requires.

2. We also need a messaging mechanism that allows a service to reassemble the complete stream representing the message in a deterministic manner matching the stream exactly as it was when it left the service consumer.

3. Another requirement is that services must have some form of service description that allows them to declare what their exact capabilities are WRT serial processing of multiple calls.

Does it look something like this?

Note that the cardinality between Service and ApplicationDestination is “any”. It is possible that a service may simply process an invocation request locally. Not all attributes and operations have been explicitly called out to avoid confusion but hopefully this will give you the idea.
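The three mechanisms above might be modeled roughly as follows. All field and class names are made up for illustration – no actual WS-* schema or API is implied – and, matching the “any” cardinality, nothing here forces the service to forward anything anywhere.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sequence_id: str        # groups related messages into one logical stream
    sequence_number: int    # position within the stream (requirement 1)
    requires_in_order: bool # the processing assurance the consumer requests
    payload: bytes

@dataclass
class ServiceDescription:
    # Requirement 3: the service declares its capabilities up front.
    supports_in_order_processing: bool
    supports_duplicate_elimination: bool

def accept(msg: Message, desc: ServiceDescription) -> str:
    # Fault when the consumer requests an assurance the service
    # has not declared it can honor.
    if msg.requires_in_order and not desc.supports_in_order_processing:
        raise ValueError("fault: in-order processing not supported")
    return "accepted"

def reassemble(parts: list) -> bytes:
    # Requirement 2: rebuild the stream deterministically, exactly as it
    # was when it left the service consumer.
    ordered = sorted(parts, key=lambda m: m.sequence_number)
    return b"".join(m.payload for m in ordered)
```

Whether the service then processes the invocation locally, forwards it, or splits it across several destinations is an implementation detail the consumer never sees – which is the whole point.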


Managed transparency should be a core consideration for a service provider. Services should remain as opaque as possible but may need to expose some aspects of their operations to facilitate their use by consumers. Those implementing services must take both of these into account, lest we risk building a worldwide network of interdependent, tightly bound applications – the very thing that SOA is attempting to save us from.

What do you think?


  1. Do I need to analyse the service itself, or the trust vector associated with that service? That is, can I make an argument from inference about a service I am going to use?

    Do I trust my Indian outsourcing provider because I inspect every line of code individually, or because they deserve their reputation, having always done good work for me before? Reputation and 100% transparency are not necessarily the same thing.
