Don’t Forget Your Developers!

In our collective desire to equip our organizations or clients with the tools and technologies required to support SOA strategy and enablement, selection processes understandably tend to center on enterprise-level concerns. Interoperability, standards compliance, reuse, and the like lead the charge through the evaluation process. All of these are important and valid selection criteria, to be sure. However, we find that organizations frequently overlook another critical consideration: developer friendliness. No matter how sophisticated a BPMN modeling tool, ESB route builder, or SCA composite creator may be, at the end of the day some poor individual is attempting to turn a specification or requirement into usable software.

Over the years, the software engineering community has developed many best practices, tools, and techniques for coding in Java or C# or whatnot that consistently deliver high-quality code. However, these practices operate at the code level, rather than at the middleware level necessary to support Service abstractions and compositions in a SOA deployment. A frequent issue we find when an organization adopts a given middleware suite is that development teams, while gaining a toolset that lets them abstract away less important details and focus on the specifics of creating Services and integrations, may in fact lose some of their ability to effectively engineer high-quality solutions based on well-established best practices. Tools, in fact, are part of the problem; the solution is more organizational than technical.

Cycle Time

A key concept in effective development is cycle time. No matter what overall process style is governing a development project, whether it be year-long waterfalls or two-week sprints, the personal development process is inherently incremental and iterative. An implementer writes, compiles, tests, and debugs code in an ongoing cycle that gradually converges on a deliverable implementation.  A great deal of thought (and silicon) has been dedicated to ensuring that this process allows the implementer to focus on the problem-solving aspects of software development, as opposed to burning valuable time waiting for compiles, external builds, or other delays. As a result, good development environments provide the implementer with a sandbox that minimizes or eliminates external dependencies and maximizes the percentage of a developer’s day that is spent on development as opposed to overhead.

Unfortunately, many SOA tools force developers to take a step backward. Rather than interactively running an ESB route in the development or authoring environment, an implementer may need to first deploy that route to a middleware server. In many cases, the server's resource requirements dictate that it run on shared infrastructure rather than locally. So much for the sandbox!
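The difference is easy to see at the code level. As a minimal sketch, assuming a hypothetical route expressed as plain Python functions (not any particular ESB's API), exercising integration logic in-process keeps the whole cycle to a single function call:

```python
# A minimal in-process "sandbox" sketch: the route here is a hypothetical
# pipeline of plain functions, not any specific ESB's API.

def enrich(msg):
    # Hypothetical enrichment step: attach a region code based on country.
    msg["region"] = "EMEA" if msg.get("country") in {"DE", "FR", "UK"} else "OTHER"
    return msg

def to_canonical(msg):
    # Hypothetical transformation to a canonical field layout.
    return {"customerId": msg["id"], "region": msg["region"]}

def run_route(msg, steps=(enrich, to_canonical)):
    # Running the whole route is an ordinary function call: no server,
    # no deployment, so the write/run/debug cycle stays in milliseconds.
    for step in steps:
        msg = step(msg)
    return msg

if __name__ == "__main__":
    out = run_route({"id": 42, "country": "DE"})
    print(out)  # {'customerId': 42, 'region': 'EMEA'}
```

The same logic, once it must be deployed to a shared server before each run, turns a millisecond loop into a minutes-long one.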

Debugging

We all strive for bug-free systems, of course, but the fact remains that no matter the level of abstraction we use to develop a system, it will not always do exactly what we expect. While at first blush the nice boxes and lines in a BPMN diagram look simpler than a chunk of Java code, there will always be times when actual and expected results don't jibe. Especially as an organization's use of SOA tooling becomes more sophisticated, there will be a corresponding increase in the sophistication and complexity of its processes, metadata, and other artifacts.

Traditional development uses debugging capabilities not only to track down issues, but also to gut-check new or modified code, just to validate that it behaves as people expect. At the code level, developers are accustomed to a wide variety of tools that let them step through code execution, view and change the values of variables, set breakpoints based on arbitrary conditions, and so on. These capabilities have come a long way from the days of littering code with print statements to figure out what was going on.
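At the code level, a conditional breakpoint really is a one-liner. A sketch using Python's standard pdb module, with hypothetical order data:

```python
import pdb

def process(orders):
    total = 0
    for order in orders:
        # Conditional breakpoint: pause only when the suspicious case
        # occurs, instead of littering the loop with print statements.
        if order["qty"] < 0:
            pdb.set_trace()  # inspect 'order' and 'total' interactively
        total += order["qty"] * order["price"]
    return total

print(process([{"qty": 2, "price": 5.0}, {"qty": 1, "price": 3.0}]))  # 13.0
```

The debugger only engages when the bad data actually appears; the rest of the time, the code runs at full speed with zero instrumentation noise.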

However, some SOA toolkits fall short in this area, forcing teams to first deploy to servers and then depend on server-based logging and tracing capabilities (in essence, glorified print statements) to try to understand how a process or route is working (or not). This extra step increases both cycle time and the conceptual distance between what an implementer is developing and what's actually executing in the middleware infrastructure. There is far more to a SOA deployment than the code, but the development tooling is still invariably code-centric.

Testing

Frequent, continuous, and automated testing is a hallmark of good development practice. There's no shortage of testing frameworks and continuous integration tools that give teams a foundation for testing early and often at the unit, integration, and functional levels of abstraction. There's also a rich ecosystem of tools that support stubbing or mocking of resources such as databases, allowing tests to zero in on functionality with minimal external dependencies.
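For instance, Python's standard unittest.mock makes this kind of stubbing nearly free; the database interface below is hypothetical, invented for illustration:

```python
from unittest import mock

def premium_customers(db):
    # Function under test: depends on an external database interface
    # (hypothetical query and schema).
    rows = db.query("SELECT id, tier FROM customers")
    return [r["id"] for r in rows if r["tier"] == "premium"]

# Stub the database so the test zeroes in on the filtering logic alone --
# no connection strings, no test data loads, no shared infrastructure.
db = mock.Mock()
db.query.return_value = [
    {"id": 1, "tier": "premium"},
    {"id": 2, "tier": "basic"},
]
assert premium_customers(db) == [1]
db.query.assert_called_once()
```

The test runs in milliseconds and never touches a real database, which is precisely what lets it run on every save and in every CI build.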

At the level of abstraction addressed by SOA tooling, the need for this kind of testing support actually increases. Business Services and Service compositions depend upon other Services and resources that may reside beyond departmental or even enterprise boundaries. Furthermore, coordinating all involved organizations for the kind of repeated, ongoing testing that high-quality SOA deployments require poses an entirely different set of challenges from traditional QA, ones that center more on governance than testing. To adequately address the testing part of the quality story, your SOA development environment must include the ability to interact with virtualized external interfaces and resources in order to assess the quality of your systems under development.
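A minimal sketch of the idea, using only Python's standard library; the endpoint and payload are invented, and real service virtualization tools offer far richer behavior (latency injection, fault simulation, recorded conversations):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A virtualized stand-in for an external partner Service. The endpoint
# and response payload here are hypothetical.
class StubPartnerService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"creditLimit": 5000}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def check_credit(base_url, amount):
    # Code under test: calls the (virtualized) external Service.
    with urllib.request.urlopen(base_url + "/credit") as resp:
        limit = json.load(resp)["creditLimit"]
    return amount <= limit

# Spin up the stub on an ephemeral local port -- no partner coordination.
server = HTTPServer(("127.0.0.1", 0), StubPartnerService)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d" % server.server_address[1]

ok_small = check_credit(url, 2500)
ok_big = check_credit(url, 9000)
print(ok_small, ok_big)  # True False
server.shutdown()
```

The point is not the twenty lines of stub, but that the team can exercise cross-boundary behavior on every build without scheduling anything with the partner organization.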

Builds and Deployment

We’ve all seen the slick vendor demo where a custom integration is whipped up in 15 minutes, deployed directly to a middleware server, and executed to the oohs and aahs of an appreciative audience. However, in the back of our minds, all of us in the audience know that in the real world, we never deploy systems in that manner. Instead, we must deploy solutions to multiple environments for various testing, QA, and other assessment phases, and once the solution leaves the development shop, the individuals involved in the deployment are not the same people who use the slick IDE. It’s the configuration management and operational personnel who have to deploy to multiple environments, across server farms and Cloud infrastructures.

It would be wonderful if today’s ESBs and other SOA platform solutions addressed this problem. Unfortunately, middleware solutions do not always provide the rich tooling the realities of the deployment environment demand, whereas traditional solutions are highly scriptable and readily integrated into existing build and deployment infrastructures. For instance, some middleware solutions require operational personnel to use specific, development-oriented tools in order to deploy a new business process into a production environment. This limitation can add time and complexity to a SOA deployment.
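What operations teams want instead is the ability to drive deployment from scripts. A sketch of the shape of such automation; the mwctl command-line tool and the environment list here are entirely hypothetical:

```python
# A sketch of the scriptability operations teams expect: one artifact,
# promoted through environments by a repeatable script rather than an IDE.
# "mwctl" is a hypothetical vendor CLI; the environments are invented.

ENVIRONMENTS = ["dev", "qa", "staging", "prod"]

def deployment_plan(artifact, version, envs=ENVIRONMENTS):
    """Build the ordered commands a CI/CD pipeline would execute."""
    plan = []
    for env in envs:
        plan.append("mwctl deploy --env %s --artifact %s --version %s"
                    % (env, artifact, version))
        plan.append("mwctl smoke-test --env %s" % env)
    return plan

for cmd in deployment_plan("order-routing.esb", "1.4.2"):
    print(cmd)
```

When the vendor's only deployment path is a GUI wizard, nothing like this can be built, and every promotion from QA to production becomes a manual, error-prone event.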

The ZapThink Take

The easiest way to address these issues is to make sure that actual implementers, as well as configuration management and operational personnel, are all involved in developing the assessment criteria as well as executing the overall evaluation of your technology. This involvement will help ensure that the biggest consumers of your shiny new middleware, the people who actually develop solutions with it, are able to deliver the highest quality solutions in the shortest possible time, helping you realize the return on your significant investment in middleware infrastructure.

You should also not be afraid to perform an extended evaluation, in order to give all of your implementation stakeholders the opportunity to put your investment through its paces in real-world scenarios. A typical approach is a preliminary narrowing of middleware candidates, followed by a longer evaluation (three months or more) of the remaining candidates in your organization’s production environment.

The big picture, finally, is governance. SOA requires full lifecycle governance that drives policies across design, development, QA, and deployment activities. Vendors typically focus their tools on individual phases, and even the large vendors, with their supposedly comprehensive SOA suites, typically fall short in how well they coordinate activities across lifecycle phases. Don’t expect technology to solve this problem. Only people can do that.