Quality SOA
Quality means far more than simply reducing defects. Fundamentally, quality means building something that meets the requirements of its users, now and into the future. Being defect-free is a necessary but by no means sufficient criterion for a quality product. Software quality is no different. While many software quality assurance efforts focus on eliminating bugs, the bug-hunting process is only the starting point for software quality.
The real challenge with software quality, as with any other quality effort, is in guaranteeing that the software meets the requirements set out for it. In an ideal world, quality assurance (QA) personnel would simply take the requirements document, use it to build a test plan, and run tests against that plan. Once the project passes all the tests, it’s ready to go live. But in the real world, requirements continue to evolve, both during projects and once the projects are complete. And nothing throws a wrench into the most carefully laid QA plans like evolving requirements.
Environments of continually changing business requirements, of course, are the perfect breeding ground for Service-Oriented Architecture (SOA). SOA leverages a metadata-driven Service abstraction to provide greater power and flexibility to business users, with the clear purpose of enabling IT to respond to changing requirements in an agile manner. This core agility benefit of SOA collapses like a house of cards, however, if the Services or the applications that consume and compose them are of poor quality.
The Increasing Sophistication of SOA Testing Tools
As companies embark on their SOA initiatives, therefore, quality in the face of change should be a top priority. More often than not, however, quality receives short shrift in such projects, especially when an organization takes a bottom-up approach to SOA, beginning by building Web Services interfaces to its existing legacy systems. For such organizations, the SOA project is light on architecture and heavy on software development, as they hammer out the details of their Service interfaces. As a result, they typically limit their QA efforts to the testing of those Service interfaces.
Web Services testing, after all, is the most basic capability of SOA testing tools. In fact, many tools on the market that claim to be SOA testing tools are really little more than Web Services testing tools that enable companies to put their Web Services (as well as Service consumers) through their paces in an effort to reduce defects and ensure that the Services meet the initial requirements set out for them. Many such tools are updated versions of Web page testing tools that generalize the Web interface, basically considering a Web Service to be a Web page without the user interface. While such Web Services testing tools serve an important role, they are insufficient for providing the QA required for true SOA implementations.
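To make the basic case concrete, here is a minimal sketch of the kind of interface test such tools automate: post a SOAP request to a Service endpoint and assert on the response. The endpoint URL, operation, and namespaces below are hypothetical stand-ins, not references to any particular product or Service.

```python
# A minimal sketch of a Web Services interface test: post a SOAP request
# and assert on the response. Endpoint, operation, and namespace are
# hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "http://example.com/services/QuoteService"  # hypothetical endpoint
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

request_body = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/quotes">
      <Symbol>ZAPT</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=request_body.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/quotes/GetQuote"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    assert resp.status == 200, f"unexpected HTTP status {resp.status}"
    envelope = ET.fromstring(resp.read())

# The test passes only if the response carries a non-empty SOAP body.
body = envelope.find(f"{{{SOAP_ENV}}}Body")
assert body is not None and len(body) > 0, "empty SOAP body"
print("GetQuote interface test passed")
```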
More sophisticated SOA testing tools take into account the fact that Services are more than standards-based interfaces — rather, they’re abstractions of capabilities from multiple, disparate sources. Such tools approach the SOA quality problem as an integration testing challenge, and it’s not surprising that many of the tools currently on the market have evolved from integration testing products. These tools simulate Service requests and other events as they wind their way through the Service interface and underlying middleware, applications, and data sources, uncovering subtle defects that arise from the complex interactions among the various moving parts in today’s distributed systems. While such integration testing is a critical part of any SOA quality regimen, it is fundamentally a design time activity, as is Web Services testing. Neither approach provides much value after the Services go live.
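A toy sketch illustrates the difference in emphasis: an integration-style test follows a simulated request through stubbed layers and asserts on the interactions along the path, not merely on the final response. All of the layer names here are illustrative.

```python
# Toy sketch of integration-style testing: trace one simulated request
# through stubbed layers (Service interface -> middleware -> data source)
# and assert on the path, not just the final response.
def make_layer(name, trace, downstream=None):
    """Return a handler that records its hop and delegates downstream."""
    def handle(request):
        trace.append((name, request["id"]))
        return downstream(request) if downstream else {"id": request["id"], "status": "ok"}
    return handle

trace = []
data_source = make_layer("data_source", trace)
middleware = make_layer("middleware", trace, downstream=data_source)
service = make_layer("service_interface", trace, downstream=middleware)

response = service({"id": "req-001"})
assert response["status"] == "ok"
# The defect-finding value lies in the path assertions: every hop saw the
# request, in order, exactly once.
assert [hop for hop, _ in trace] == ["service_interface", "middleware", "data_source"]
```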
The most sophisticated of SOA quality tools take into account the full Service lifecycle — design time, runtime, and change time. No longer is it sufficient to run a project through acceptance testing immediately before launching it into production, because SOA implementations are by their very nature continually changing. Instead, SOA quality must be an ongoing process that continually confirms that the existing configuration of Services meets the business requirements du jour.
The tooling necessary to implement such advanced quality measures must focus on testing SOA metadata, because metadata are at the core of any SOA implementation’s ability to respond dynamically to changing business requirements. The changes that occur during change time are metadata changes, including Service-Oriented Business Application (SOBA) configuration changes, policy changes, and Service contract changes. Today’s most advanced SOA quality tools must provide for the testing of changes to these metadata in a production environment.
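As one hedged example of what change-time metadata testing can look like, the sketch below checks whether a new version of a Service contract would break consumers built against the old one. The contract representation (operations with required inputs) is deliberately simplified and hypothetical.

```python
# A sketch of one change-time metadata check: flag contract changes that
# would break existing consumers. The contract structure is a simplified,
# hypothetical representation, not any particular contract format.
def breaking_changes(old_contract, new_contract):
    problems = []
    for op, spec in old_contract["operations"].items():
        new_spec = new_contract["operations"].get(op)
        if new_spec is None:
            problems.append(f"operation removed: {op}")
            continue
        # A newly *required* input breaks callers built against the old
        # contract; dropping a required input is backward compatible.
        added = set(new_spec["required_inputs"]) - set(spec["required_inputs"])
        problems.extend(f"{op}: new required input '{p}'" for p in added)
    return problems

v1 = {"operations": {"GetQuote": {"required_inputs": ["Symbol"]}}}
v2 = {"operations": {"GetQuote": {"required_inputs": ["Symbol", "Currency"]}}}
assert breaking_changes(v1, v2) == ["GetQuote: new required input 'Currency'"]
```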
SOA Quality Best Practices
Simply increasing the sophistication of your testing tools, however, doesn’t mean your SOA will be of any higher quality. In fact, if a QA team treats a SOA project as though it were a traditional software project, its quality will likely suffer, because traditional projects don’t have the business requirement of agility — what ZapThink calls the “meta-requirement” for SOA, namely the ability to support changing requirements. Testing for today’s requirements without testing for this meta-requirement leaves an enormous hole in the QA process.
To address the meta-requirement, SOA inherits key best practices from the Agile Movement, including iterative, test-first development. In this approach, architects break up a SOA implementation into individual iterations or “mini-projects” with specific, narrow scope. Starting with the contract and policy metadata associated with each iteration, the project team creates a test plan that drives the development work necessary to implement the Services. The team then continues to work on the iteration until all the tests in the test plan pass. The final step is to re-evaluate the remaining requirements (which may have changed during the previous iteration) to plan the next iteration, and repeat as necessary.
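A minimal sketch of that loop, under the assumption that contract metadata can be read programmatically: test cases are derived from the iteration’s contract before the Service code exists, and the iteration isn’t done until every derived test passes. All of the structures below are illustrative.

```python
# Sketch of the contract-first, test-first loop: derive one test per
# contract operation, then iterate on the implementation until all pass.
def tests_from_contract(contract):
    """One test per operation: the implementation must handle its sample input."""
    def make_test(op, sample):
        def test(implementation):
            result = implementation(op, sample)
            return result is not None and "fault" not in result
        return test
    return {op: make_test(op, spec["sample_input"])
            for op, spec in contract["operations"].items()}

contract = {"operations": {"GetQuote": {"sample_input": {"Symbol": "ZAPT"}}}}
test_plan = tests_from_contract(contract)

def implementation(op, inputs):  # stand-in for the Service under development
    return {"Price": 42.0} if op == "GetQuote" else {"fault": "unknown operation"}

failures = [op for op, test in test_plan.items() if not test(implementation)]
assert not failures, f"iteration not done: {failures}"
```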
Perhaps the greatest SOA quality challenge, however, involves maintaining quality throughout the Service lifecycle, especially once the SOA implementation is in place. The problem is, the more mature the SOA implementation is — that is, the better the Service abstraction maintains an agile separation between business users and the underlying IT capabilities — the more impractical traditional QA approaches are likely to be. In many of today’s IT shops, there are separate, identical QA and production environments. QA personnel can load any new or changed code into the QA environment and test it to their heart’s content before giving the changes the thumbs up for promotion to the production environment. In a mature SOA environment, it’s practically impossible to maintain a useful duplicate of the running system, because Services, configurations, and associated metadata continually change. As a result, maintaining a parallel QA environment rapidly becomes an exercise in futility.
The solution to this quality conundrum is to test new and changed Services and Service configurations in the live, production environment. The only way to ensure that all aspects of the new configuration continue to meet the requirements set out for it is to run test messages through production Services. Now, saying you should test in production is tantamount to proposing rewiring your house with the power on — it’s possible, but you have to be especially careful, know what you’re doing, and plan ahead. In the case of SOA, planning ahead means that Services (as well as Service consumers) must be able to support a testing mode.
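What “supporting a testing mode” can mean in practice is sketched below: a Service front end that runs test-flagged requests through the full production path but diverts them from systems of record. The header name and the sandbox backend are illustrative design choices, not a standard.

```python
# Sketch of a Service front end that honors a test mode. Requests flagged
# as tests exercise the full request path but never reach systems of
# record. The "X-Test-Mode" header and the backends are illustrative.
def make_service(live_backend, sandbox_backend):
    def handle(message, headers):
        is_test = headers.get("X-Test-Mode", "").lower() == "true"
        # Same parsing, validation, and routing for test and live traffic...
        validated = dict(message)  # stand-in for real validation logic
        # ...but test messages are diverted from real side effects.
        backend = sandbox_backend if is_test else live_backend
        response = backend(validated)
        response["test"] = is_test  # consumers can tell test responses apart
        return response
    return handle

live = lambda m: {"status": "committed"}     # writes to real systems
sandbox = lambda m: {"status": "simulated"}  # side-effect-free stand-in
service = make_service(live, sandbox)
assert service({"order": 1}, {"X-Test-Mode": "true"})["status"] == "simulated"
assert service({"order": 1}, {})["status"] == "committed"
```

The key design point is that test traffic exercises exactly the same parsing, validation, and routing logic as live traffic; only the side effects differ.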
To illustrate how Service testing in production should work, let’s step through an example of putting a Service modification from version 1 to version 2 into production (a sketch of this flow as simple automation appears after the list):
- While version 1 is running, put version 2 into production in test mode.
- Send test messages through version 2 from the test harness (a testing tool configured for this purpose), or ideally, from production Service consumers that are placed into test mode themselves.
- Once the tests pass, set the mode of version 2 to production mode (either manually or automatically).
- Notify the Service registry that any Service requests that conform to version 2’s contract are now to go to version 2.
- Set the mode of version 1 to deprecated, if your deprecation policy calls for this step.
- If a problem with version 2 crops up, revert to version 1 (again, if your policy calls for it). Fix version 2 and repeat this process.
- Take version 1 out of production as per your deprecation policy.
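Here is the flow above sketched as policy-driven automation; every callable is a hypothetical stand-in for a deployment, testing, or registry operation in your own environment, and the rollback and removal steps are left to the deprecation policy.

```python
# The deployment flow above, sketched as policy-driven automation. Every
# callable here is a hypothetical stand-in, not a real API.
def promote(v1, v2, deploy, run_tests, set_mode, update_registry, policy):
    deploy(v2, mode="test")                  # v2 runs alongside v1, in test mode
    if not run_tests(v2):                    # test messages through v2
        set_mode(v2, "retired")              # v1 keeps serving live traffic
        return False
    set_mode(v2, "production")               # promote v2 (manually or automatically)
    update_registry(contract=v2, target=v2)  # route conforming requests to v2
    if policy.get("deprecate_old", True):    # honor the deprecation policy
        set_mode(v1, "deprecated")
    # Rollback on a later production failure, and final removal of v1,
    # are likewise governed by policy and omitted from this sketch.
    return True

# Illustrative invocation with no-op stand-ins:
ok = promote(
    "QuoteService/1", "QuoteService/2",
    deploy=lambda version, mode: None,
    run_tests=lambda version: True,
    set_mode=lambda version, mode: None,
    update_registry=lambda contract, target: None,
    policy={"deprecate_old": True},
)
assert ok
```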
There are a few important notes about running the QA process through a production system. First, the QA process is policy-driven. As a result, the testing process is itself Service-oriented, which goes a long way to satisfying the meta-requirement of SOA. Second, it’s resilient. Even if a fully tested Service still fails in production, the QA process responds in a way that minimizes the business impact — and thus maximizes the loose coupling of the Services. But most importantly, SOA quality requires planning ahead. Architects must plan for test modes and deprecation policy support as an essential part of designing Services.
The ZapThink Take
The “planning ahead” step on the ZapThink SOA Roadmap appears as the Build a Governance Framework milestone. Before you implement any Services as part of a SOA project, the governance framework should include details about the policies your Services will need to support down the road, including deprecation and testing policies. (ZapThink covered Service versioning and deprecation in our “Grappling with SOA Change and Version Management” ZapFlash.) If you haven’t built quality into your architecture, then it doesn’t matter how sophisticated your SOA testing tools are. As with so many other aspects of SOA, the tools don’t give you the best practices. Instead, the best practices of SOA help you get the most out of your tools.
In fact, ZapThink frequently talks to companies that haven’t planned ahead sufficiently and now face a quandary: how do they version their Services without breaking Service consumers? In most such cases, there’s no easy answer. They simply have to rework their SOA and start again, chalking their early adopter efforts up to experience. That’s fine for those early adopters who were blazing the SOA best practices trail. Organizations that are only now framing their SOA plans for the first time, however, have no excuse for not building quality SOA right the first time.