The SOA spin cycle is now churning at full speed, generating significant froth in the market around tooling, consulting, and support for making SOA a reality. Many end users find themselves lost in all this turbulence, bobbing from one vendor’s SOA marketing pitch to another and struggling to distinguish among competing implementation and architectural approaches, leaving them dizzied, dazed, and confused.
Recently, ZapThink highlighted (and bemoaned) the fact that too much of the conversation around SOA today is really a conversation about SOA infrastructure — focusing not on the architectural part of making SOA work, but rather on the plumbing of getting Services to communicate with each other. While it is true that companies should focus less on infrastructure and more on the so-called Governance-Quality-Management (GQM) aspects of SOA, it is of course also true that an architecture based on heterogeneity faces little chance of success if the underlying plumbing refuses to make the abstraction of heterogeneity a reality. So, while SOA infrastructure is far from sufficient to make SOA work, it is a necessary component that underlies any architectural approach.
Do You Need SOA Infrastructure?
Any discussion of SOA infrastructure must first answer the most obvious question: why is any SOA infrastructure needed at all? The main idea behind SOA is that organizations can map their continuously changing business processes and requirements to IT capabilities represented as Services implemented in an abstracted, loosely coupled manner such that they can be composed with other Services. Web Services adds to the SOA story by standardizing various aspects of Service interface and communication so as to abstract the implementation details of individual Services. The combination of the two implies that integration is a side effect of composition, and that Services are discrete entities abstracted such that they can communicate and interact with any other Service, regardless of how it is implemented.
While we have gone to great pains to explain that Web Services and SOA are distinct concepts, neither of which requires the other, most SOA implementations utilize Web Services precisely as the mechanism to make loose coupling a reality in heterogeneous environments. So, given that SOA provides the architectural precepts for building composable Services on the one hand and Web Services provides a technological means for isolating implementations on the other, shouldn’t Service end points simply be smart enough to communicate with any other end point, without requiring any additional infrastructure between them?
The problem is that the devil’s in the differences. Proper abstraction of Service endpoint implementations requires dealing with differences in protocol, semantics, policy, and availability, as well as with the security, management, quality, and governance needed to guarantee reliable communications. While it is very possible to build smart end points that ameliorate those differences, companies often find that they can more easily (and cheaply) make the abstraction a reality by inserting some intelligence between those endpoints. And this is where the confusion over SOA infrastructure begins.
One can observe a few patterns for SOA infrastructure (many branded as “Enterprise Service Buses” [ESB], discussed in much greater detail in other ZapFlashes): application server runtime environments that host Service containers, hub-and-spoke middleware that treats Services as endpoints to be integrated with, and centralized messaging infrastructure that focuses on message passing and handling. The majority of these patterns are championed by vendors who built their current Service infrastructure from technology that preceded the movement to loosely coupled Service-oriented Architecture. Each of these approaches has merits, to be sure, but lost in the conversation is the fact that there are other patterns for SOA infrastructure that don’t borrow from past technologies and offer a different way of thinking about Service interaction and composition.
Existing SOA Infrastructure Patterns
It is important to note that a discussion of SOA infrastructure patterns doesn’t map cleanly to vendor product offerings, since many such offerings in fact implement a number of different infrastructure patterns or hybrids of SOA infrastructural approaches. However, it is imperative that end users understand the differences and merits of each infrastructure pattern before committing to a style of SOA infrastructure that might not meet the goals of the architecture.
The Service Container Infrastructure Pattern
Since SOA does not introduce a new programming language or runtime environment, one has to implement code that underpins and exposes a Service interface somehow. Since implementation matters to computers as much as architecture matters to people, it makes sense to consider the runtime environment of the implementation to be a good place to coordinate Service interactions. In this infrastructure pattern, developers implement their Services in a “container” that provides both a runtime environment for marshalling requests to and from the Service as well as a control environment for providing security, asynchronous message handling, Service composition, management, and other activities.
The advantage of the Service container pattern is that it provides a consistent implementation infrastructure for running Services across all the Services in an organization’s immediate control. The obvious challenge with this approach is that it can only guarantee Service quality for Services that execute within its runtime containers. Services that run in the containers are managed, whereas those that are external to the environment are managed only if a Service proxy is created to interact with them. This approach borrows from existing application server platforms that manage runtime code such as servlets. Implementing a Service container-style infrastructure in an environment of significant heterogeneity, distributed Service runtimes, and wide ranges of interaction types is often a deployment challenge. However, for many organizations this approach is the easiest to implement, especially if they already depend on the same vendors for non-SOA infrastructure.
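As a rough sketch of the container idea described above (all class and token names here are hypothetical illustrations, not any vendor’s API), a container hosts Service implementations and applies cross-cutting concerns — security, logging, management — to every invocation, while external Services are reachable only through a registered proxy:

```python
class ServiceContainer:
    """Hosts Services and applies cross-cutting concerns on every invocation."""

    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def invoke(self, name, request, credentials=None):
        # Security, logging, and management live in the container, so
        # every hosted Service gets them without implementing them itself.
        if credentials != "valid-token":        # stand-in security check
            raise PermissionError("unauthorized")
        print(f"[container] invoking {name}")   # stand-in management hook
        return self._services[name](request)


def external_service_proxy(request):
    """A local proxy for a Service running outside the container; only
    through such a proxy can the container manage an external Service."""
    return f"external result for {request}"


container = ServiceContainer()
container.register("getQuote", lambda req: f"quote for {req}")
container.register("getRate", external_service_proxy)

print(container.invoke("getQuote", "ACME", credentials="valid-token"))
```

The key point of the sketch is that the container, not the Service code, is the enforcement point — which is also why Services outside the container fall outside its guarantees.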
The Hub-and-Spoke Infrastructure Pattern
Another commonly seen SOA infrastructure pattern borrows from Enterprise Application Integration (EAI) techniques: the use of integration middleware servers that interact with endpoints through adapters and other interaction mechanisms and alleviate differences through centralized mapping and coordination logic. In essence, the integration middleware server acts as the coordination point for all interactions between end points. One advantage of this approach is that end points can run on a wide array of runtime environments, as long as an adapter or other access mechanism exists to reach the functionality or data. Another advantage is that integration logic is centralized within a single runtime environment.
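The hub-and-spoke mechanism can be sketched as follows (a minimal illustration with invented adapter names, not any EAI product’s API): every interaction flows through a central hub, which uses per-endpoint adapters and centralized mapping logic to translate between formats:

```python
class Hub:
    """Central integration hub: all interactions pass through it, and the
    adapters and mapping logic live here rather than in the endpoints."""

    def __init__(self):
        self._adapters = {}

    def add_adapter(self, system, adapter):
        self._adapters[system] = adapter

    def route(self, source, target, message):
        # Centralized mapping: translate the source system's format into
        # a canonical form, then into whatever the target expects.
        canonical = self._adapters[source].to_canonical(message)
        return self._adapters[target].deliver(canonical)


class CsvAdapter:
    """Adapter for a hypothetical legacy system that speaks CSV strings."""
    def to_canonical(self, message):
        name, qty = message.split(",")
        return {"name": name, "qty": int(qty)}

    def deliver(self, canonical):
        return f"{canonical['name']},{canonical['qty']}"


class DictAdapter:
    """Adapter for a hypothetical system that accepts structured records."""
    def to_canonical(self, message):
        return dict(message)

    def deliver(self, canonical):
        return canonical


hub = Hub()
hub.add_adapter("legacy_csv", CsvAdapter())
hub.add_adapter("erp", DictAdapter())
print(hub.route("legacy_csv", "erp", "widget,3"))   # {'name': 'widget', 'qty': 3}
```

Note how the endpoints themselves know nothing about each other — which is precisely the centralization that, as discussed next, can harden into tight coupling.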
Of course, the big challenge with hub-and-spoke infrastructure patterns is that the point of centralization all too often becomes a point of tight coupling. In our Economics of Integration ZapFlash, we explored how traditional approaches to integration fail to provide the flat cost of change so desired by proponents of SOA. If the infrastructure forces centralization of integration capability such that the end points really are Service interfaces and not heterogeneous, loosely-coupled, composable Services, then you haven’t really achieved SOA at all. In fact, all you have built is standards-based integration using Web Services. So, the challenge with such infrastructure patterns is to make sure you really are getting integration as a side-effect of composition, and haven’t just been sold old wine in new bottles.
The Centralized Messaging Infrastructure Pattern
One other popular approach to facilitating inter-Service communication and runtime management is to leverage message-oriented middleware and messaging infrastructure to coordinate messages between Services, on the premise that managing the messages matters more than managing the specific runtime endpoints. So, rather than placing Services in a managed container or connecting to Service endpoints through adapters in a hub-and-spoke approach, one simply needs to instrument the end points to utilize a particular message bus or publish/subscribe infrastructure. As such, the bus serves as a Service highway, and all that’s needed are the on and off ramps to get the communications onto and off the bus. The bus messaging protocols and servers then handle all the requirements for communication.
This approach has been popularized by firms that were formerly in the Message-Oriented Middleware or Message Queue markets, and for good reason — the approach lends itself well to loosely coupled, event-driven, and message-oriented styles of communication. Of course, the main challenge with this approach is that it can only manage those interactions in which messages pass through the bus. Much as the Service container approach requires proxies to work with third-party Services, messaging infrastructure approaches require some mechanism for marshalling requests from other systems onto the bus. In addition, the message bus approach tends to imply a platform-centricity in much the same way as the Service container approach does.
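A minimal publish/subscribe sketch makes the “Service highway” idea concrete (topic names and handlers here are invented for illustration): endpoints never address each other directly, they only publish to and subscribe on topics, and the bus is the sole point where messages can be managed:

```python
from collections import defaultdict

class MessageBus:
    """A minimal publish/subscribe bus: the on/off ramps are subscribe()
    and publish(); everything in between belongs to the bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The bus is the management point: only messages passing through
        # here can be logged, secured, or monitored.
        for handler in self._subscribers[topic]:
            handler(message)


bus = MessageBus()
received = []
bus.subscribe("orders.created", received.append)          # one Service's on-ramp
bus.subscribe("orders.created", lambda m: print("audit:", m))
bus.publish("orders.created", {"id": 42})
print(received)   # [{'id': 42}]
```

The sketch also shows the pattern’s limit: a message handed between two systems outside `publish()` is invisible to the bus, which is why off-bus interactions need marshalling onto it.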
The Network Intermediary as Infrastructure Pattern
One can easily make the argument that we already have a loosely coupled infrastructure that supports heterogeneity: the TCP/IP network. The challenge is that while we have agreement on a single standard for system interoperability, this agreement sits only at layers 3 and 4 of the OSI model network stack, and all the interactions we discuss here are at layer 7 of this model. This means that if we want the network to be able to intermediate the interactions of Services, we must focus all our attention on making that seventh layer more specific, more intelligent, and more Service-enabled. If we can do that, then we can simply use the more intelligent network as our SOA infrastructure — and nothing more.
In this approach, Service requests are routed through content-based routers, which then use late-binding rules to determine how to route, manage, or enforce policy on Service interactions. In the same manner that routers, firewalls, load balancers, gateways, and caches facilitate all kinds of complex network interactions ranging from email to Voice-over-IP without requiring single-vendor middleware, so too can Service interactions be facilitated by a wide variety of Service intermediaries deployed in software or hardware form.
The ecosystem of SOA infrastructure technologies required to make this vision work includes content-based routers that inspect layer 7 messages, registries that provide policy metadata to guide content-based routers and help determine how to bind Service consumers to providers, and active SOA management tools that provide runtime exception management, policy enforcement, and state management for long-lived, asynchronous interactions.
There are two key aspects that make this pattern work: ensuring that all Service interactions pass through at least one intermediary between the Service consumer and provider, and using late-binding as a mechanism to determine how to handle and manage Service requests. The easiest way to address the former concern is to leverage WS-Addressing as a mechanism for loosely coupling the references to Service endpoints and forcing Service requests to resolve to network intermediaries. Using hard-coded URLs with HTTP addresses in Service contracts is not appropriate for the late-binding mechanism required of a network intermediary infrastructure pattern. This is an area that we cover in greater depth in our LZA boot camps.
Finally, the network infrastructure pattern works by leveraging the Service registry as a mechanism for enabling late binding. In essence, the registry stores not only the locations of Service end points, but also all the policy and contract metadata required to successfully interact with each Service. A Service consumer makes a request to an abstract WS-Address; the content-based routers resolve the address and pass the request to a SOA management intermediary, which in turn consults a registry to determine how and where to bind to the appropriate Service. Of course, this is all cached and distributed for performance. This combination of routers, management intermediaries, and registries makes up a highly distributable, loosely coupled, and reliable infrastructure that has a very “un-ESB” feel, although it provides the very same capabilities as the SOA infrastructure patterns discussed above.
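The late-binding flow described above can be sketched as follows (the registry API, the `urn:` name, and the policy flag are hypothetical illustrations, not the UDDI or WS-Addressing specifications): the consumer holds only an abstract name, and the intermediary resolves the concrete endpoint and its policy at request time:

```python
class ServiceRegistry:
    """Maps abstract Service names to concrete endpoints plus the policy
    metadata required to interact with them."""

    def __init__(self):
        self._entries = {}

    def register(self, abstract_name, endpoint, policy):
        self._entries[abstract_name] = {"endpoint": endpoint, "policy": policy}

    def resolve(self, abstract_name):
        return self._entries[abstract_name]


def invoke(abstract_name, registry, request):
    # Late binding: the concrete endpoint is looked up at request time,
    # never hard-coded into the consumer's Service contract.
    entry = registry.resolve(abstract_name)
    if entry["policy"].get("requires_encryption"):
        request = f"encrypted({request})"       # stand-in policy enforcement
    return f"sent {request} to {entry['endpoint']}"


registry = ServiceRegistry()
registry.register("urn:example:QuoteService",
                  "https://host-a.example/quote",
                  {"requires_encryption": True})

print(invoke("urn:example:QuoteService", registry, "get ACME"))
```

Because the binding happens in the intermediary, the provider can be moved to a new host — or its policy changed — by updating only the registry entry, with no change to any consumer.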
The ZapThink Take
So, what’s the best approach to SOA infrastructure? Of course, experienced architects know that there’s no such thing as best. A good architect will realize that the right answer is always “it depends”. The problem is that not enough architects have knowledge of all the possible ways to implement SOA infrastructure, and simply assume one approach will suit all needs. Good architects have a tool belt with multiple architectural, infrastructural, organizational, and methodological approaches to suit a wide variety of needs. Trying to shoehorn one particular infrastructural approach into all scenarios is a recipe for disaster.
The purpose of this ZapFlash is not to brand any one infrastructural approach as inherently “bad” or another as inherently “best”, but simply to highlight the fact that SOA has no infrastructural bias. Indeed, one could implement a network of smart end points that communicate with each other in a peer-to-peer manner, abstracting all the differences in protocol, policy, semantics, and availability without requiring anything between them. However, it is precisely the need to mediate the differences between end points — end points that may lack the sophistication to deal with interoperability-killing differences on their own — that calls for something intelligent between them.