The Human in the Machine
In the late 1700s, the Hungarian inventor Wolfgang von Kempelen built a mechanical device, known as the Mechanical Turk, that could play a mean game of chess. It was so good, in fact, that people such as Benjamin Franklin and Napoleon Bonaparte traveled to play the machine, only to lose to it. The Turk was certainly a curiosity of its day, an era with no computers, microprocessors, hard drives, or networks to speak of. So, how did this curious device work? It turned out to be an elaborate hoax: a chess master hid inside the machine, operating an intricate set of springs and magnets to move the mannequin attached to the device.
Playing chess at the grandmaster level, of course, is extraordinarily difficult, and it was only in the last decade that computers surpassed the best human players. Ironically, computers still cannot perform many far simpler tasks as well as humans can. Even activities as basic as identifying a person's gender from a photograph, reading text embedded in images, and translating phrases accurately are tasks humans still do far better than machines.
When people wax futuristic about technology, they inevitably gravitate toward the idea that computers will somehow become smart enough to surpass the capabilities of ordinary people. Examples abound in science fiction books and movies such as Star Trek, The Terminator, 2001: A Space Odyssey, and countless other works of fiction that anthropomorphize computers. However, as much as we wish that such machines would finally be smart enough to handle all of our tedious tasks so that we could spend all of our time doing the more enjoyable things in life, the truth of the matter is that humans are still far better at doing a wide range of tasks than computers, including many trivial ones.
In fact, Amazon.com has taken the idea of the Mechanical Turk and applied it directly to the concept of Web Services-based SOA. The Amazon Mechanical Turk is a Web Service that enables the use of human intelligence to perform tasks that computers are ill-suited for. These “Human Intelligence Tasks” include such menial operations as choosing the best photograph from a set of photos of a single subject, writing product descriptions, or identifying performers from the covers of music CDs. This Web Service appears to Service consumers as if it were the interface to automated operations rather than human activities. Customers pay the human responders a small amount for each request (usually pennies) and place thousands of requests into the Mechanical Turk system, making it economically worthwhile for all parties. Someone who queries the results of the operation that the Service provides would never know that a person, or a group of people, is behind it.
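To make the requester side of this model concrete, here is a minimal sketch of how a consumer might submit such a Human Intelligence Task programmatically. It uses the present-day boto3 MTurk client rather than the SOAP interface the service originally exposed, and the sandbox endpoint, question text, and reward amount are illustrative assumptions rather than details drawn from this article.

```python
# A minimal sketch of submitting a Human Intelligence Task (HIT) to the
# Amazon Mechanical Turk Web Service. Uses today's boto3 client rather than
# the original SOAP API; the question, reward, and sandbox endpoint are
# illustrative assumptions.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A simple free-text question, expressed in MTurk's QuestionForm XML schema.
question_xml = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>best_photo</QuestionIdentifier>
    <QuestionContent><Text>Which of these product photos is clearest?</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>
"""

# The consumer pays the human responder a few cents per completed assignment.
hit = mturk.create_hit(
    Title="Choose the best product photograph",
    Description="Pick the clearest photo from a set for a single product.",
    Reward="0.05",                     # dollars per assignment
    MaxAssignments=1,
    AssignmentDurationInSeconds=600,   # how long a worker may hold the task
    LifetimeInSeconds=86400,           # how long the HIT stays available
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

From the consumer's point of view, this is just another Service call that returns an identifier; nothing in the interface reveals that a person will ultimately produce the answer.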
This Mechanical Turk approach to tasks that are easy for humans but difficult for computers works so well that attempting to automate 100% of the operation is nowhere near as cost-effective. Even NASA is now using the general public to help it find useful bits of information in complex pictures. These facts raise the question: how can enterprises take advantage of the principle of the Mechanical Turk in their own organizations? If anything, companies face just as significant a problem in making the most cost-effective use of their resources to handle tasks such as analyzing purchase order requests. And involving the human in the enterprise architecture affects the way we go about building the most effective SOA.
Revisiting Heterogeneity and the Service Interface
The answer to the question of how the Mechanical Turk can be relevant to today’s enterprise IT depends, not surprisingly, on Service-Oriented Architecture (SOA). One of the core tenets of SOA is that Services abstract their underlying functionality. An interesting side-effect of considering humans as responders to Service requests is that it pushes the notion of heterogeneity to new levels. Rather than treating technology alone as the universe of infrastructure in a heterogeneous SOA implementation, the Service abstraction lets us treat people as just as much a part of the infrastructure as computers. In fact, the ability to abstract whether the implementation of a Service contains computers or humans significantly impacts the way we build Service interfaces and deal with the issues of Service reliability, asynchrony, business process, and security, to name just a few.
In particular, the best way to construct Service interfaces is quite different when you can’t assume the responder is a computer at all, let alone one running on a specific platform or written in a specific language. Service consumers can’t assume that the Service will respond in milliseconds or through an intricate set of fine-grained, back-and-forth operations. Rather, human-based Service providers will operate over longer durations and are better suited to coarser-grained operations that deal with larger chunks of data and span multiple data types. Certain data transformation and manipulation tasks that might require dozens of separate computer operations across multiple back-end systems might instead need only a single request to a human-based Service, which can turn those tasks around in less time, and with greater accuracy, than even the most sophisticated and highly automated of systems.
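A consumer-side sketch of what such a coarse-grained, long-running interaction might look like appears below. The request type, the submit/poll operations, and the polling interval are hypothetical; the point is simply that the consumer sends one document-sized request and correlates a later response instead of holding a fine-grained, millisecond-scale conversation.

```python
# Hypothetical consumer-side sketch of a coarse-grained, long-running Service
# interaction. One request carries the whole work item; the consumer then
# correlates a response later instead of expecting an answer in milliseconds.
# 'service' is assumed to expose enqueue() and fetch_result() operations.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class EnrichCatalogEntryRequest:
    """One coarse-grained document: everything needed in a single round trip."""
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    product_photos: list[str] = field(default_factory=list)   # photo URLs
    raw_description: str = ""
    target_languages: list[str] = field(default_factory=list)

def submit(service, request: EnrichCatalogEntryRequest) -> str:
    """Fire-and-record: returns a correlation ID, not the result."""
    service.enqueue(request)
    return request.request_id

def await_result(service, correlation_id: str, poll_seconds: int = 300):
    """Poll on a human timescale (minutes, not milliseconds)."""
    while True:
        result = service.fetch_result(correlation_id)
        if result is not None:
            return result
        time.sleep(poll_seconds)
```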
In fact, Service interfaces that abstract human-based operations can actually simplify the developer’s task compared to implementing fully automated, machine-based interactions. For example, humans prefer messages over remote procedure call (RPC)-style invocations. In addition, developers can build Service interfaces that specify less than they otherwise would, since humans can “fill in the blanks”, which makes the Service interfaces even more loosely coupled than is possible with machine-based systems. Furthermore, developers can make no assumptions about the provider’s runtime infrastructure if a person might provide the response to a Service request. Indeed, it might be a best practice for Service interface, policy, and contract design to assume that a human might be the responder to, or initiator of, any Service request, and to design the system accordingly.
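The contrast between a rigid RPC signature and a looser, document-style message might look something like the following. The field names are invented for illustration; the message version deliberately leaves optional blanks that a human responder can fill in from context.

```python
# Illustration only: a rigid RPC-style signature versus a looser,
# document-style message that a human responder can complete.
from dataclasses import dataclass
from typing import Optional

# RPC style: every argument pinned down up front, nothing left to judgment.
def classify_product_rpc(sku: str, category_code: int, locale: str) -> int:
    ...

# Message style: a document with optional fields. A human (or a machine)
# behind the Service can "fill in the blanks" from context, so the consumer
# is coupled only to the message shape, not to one implementation's needs.
@dataclass
class ClassifyProductMessage:
    sku: str
    photo_url: Optional[str] = None        # helpful to a human, optional
    free_text_notes: Optional[str] = None  # "whatever else you know"
    suggested_category: Optional[str] = None

@dataclass
class ClassificationResult:
    sku: str
    category: str
    confidence_note: Optional[str] = None  # humans can explain their choice
```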
Security, reliability, and process composition present even greater challenges when humans are part of the mix. Notions of identity clearly connect to individuals when a human is responding to or generating a Service request, but not necessarily when machines act as proxies for people, or when they operate on a fully automated basis. In addition, parties to Service interactions can’t hold the same expectations about the synchrony of responders when a human is involved. A human might be fast, but then again, the operation might take longer than expected. Good SOA design will also take into account the variability in how long a provider takes to respond to a particular Service request, and whether or not that variability will materially impact the business process at hand.
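One way a consumer might absorb that variability is to attach explicit deadlines to each outstanding request and decide, per business process, whether a slow response merely warrants patience, a reminder, or rerouting. The thresholds and triage policy below are illustrative assumptions, not a prescribed standard.

```python
# Sketch: tolerating variable response times when a human may be the provider.
# The deadlines and the triage policy are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class PendingRequest:
    correlation_id: str
    submitted_at: datetime
    soft_deadline: timedelta = timedelta(hours=4)   # "slow is fine" threshold
    hard_deadline: timedelta = timedelta(hours=24)  # process is materially impacted

def triage(request: PendingRequest, now: Optional[datetime] = None) -> str:
    """Decide whether variability in responder latency matters yet."""
    now = now or datetime.now(timezone.utc)
    age = now - request.submitted_at
    if age < request.soft_deadline:
        return "wait"              # well within a human timescale
    if age < request.hard_deadline:
        return "remind_provider"   # nudge, but the process can absorb it
    return "escalate_or_reroute"   # hand off to another provider, human or machine
```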
And, speaking of business processes, when humans are involved, it makes very little sense to have a centralized, computer-based system coordinating business processes on behalf of humans. The notion of centralized business process runtime engines only works for fully automated processes. When a human is responsible for making decisions about where and how to fulfill Service requests, a centralized, orchestrated runtime process engine only gets in the way. Indeed, each Service requester or provider might know more about what the next step in the process is than some central flow-control engine, since a human might be responsible for fulfilling or generating Service requests.
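A decentralized alternative is to let the process travel with the message: each participant, human or machine, performs its step and forwards the work item to whoever it judges should act next, with no central engine in the loop. The structure below is a hypothetical sketch of that idea.

```python
# Hypothetical sketch of a decentralized ("choreographed") process: the work
# item carries its own remaining steps, and each participant decides locally
# where it goes next instead of a central process engine dictating the flow.
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    payload: dict
    remaining_steps: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)

def handle(work: WorkItem, performed_by: str, send) -> None:
    """Perform this participant's step, then forward the item onward."""
    if not work.remaining_steps:
        return  # process complete; nothing left to route
    current = work.remaining_steps.pop(0)
    work.history.append(f"{current} done by {performed_by}")
    if work.remaining_steps:
        # The participant (possibly a person) chooses the next recipient;
        # 'send' is assumed to deliver the work item to that participant.
        send(work.remaining_steps[0], work)
```

The design choice here is that knowledge of "what happens next" lives with the participants and the message itself, which is exactly where it already resides when people are doing the work.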
The ZapThink Take
In order for companies to truly hide whether the underlying implementation of a Service depends upon computers or humans, we must first see significant maturation both in the way organizations develop their Enterprise Architecture and in Service-based business models. Amazon.com figured out that it needed to pay humans on a per-transaction basis to make its model work. Most of today’s Service infrastructure and third-party Services, however, don’t work on such a transaction model. Rather, they use the increasingly obsolete idea of per-server or per-CPU licensing as a way of regulating infrastructure costs.
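A back-of-the-envelope comparison shows why the pricing model matters; every figure below is an illustrative assumption, not data from Amazon.com or any vendor.

```python
# Back-of-the-envelope comparison of pricing models; every number here is an
# illustrative assumption used only to show how the two models scale.
requests_per_month = 50_000

# Per-transaction model: pay human responders per completed request.
cost_per_request = 0.05          # dollars, "usually pennies"
per_transaction_cost = requests_per_month * cost_per_request

# Per-CPU licensing model: fixed infrastructure cost regardless of volume.
monthly_per_cpu_license = 4_000  # dollars, amortized license plus operations
cpus = 2
per_cpu_cost = monthly_per_cpu_license * cpus

print(f"Per-transaction: ${per_transaction_cost:,.0f}/month")  # $2,500/month
print(f"Per-CPU license: ${per_cpu_cost:,.0f}/month")          # $8,000/month
# The per-transaction cost rises and falls with demand; the per-CPU cost does
# not, which is why it fits poorly when humans are paid per request.
```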
Companies will need to reconsider the economics of running Services when humans are involved. Furthermore, involving humans as part of the Service infrastructure radically changes the concept of outsourcing. Indeed, companies should assume that a Service request might call upon anyone located anywhere on the planet, whether or not that person works for the company, and as such, they must handle issues of reliability, security, and economics differently than if they were simply dealing with machines sitting on a network somewhere.
So, let’s revisit the predictions of the futurists of the past 100 years. While making the computer into the equivalent of a person might have seemed the ultimate vision of the future of computing, that might not really be the case. The future of computing might hold more for the human than the automaton, and so the future of enterprise architecture, which until now has done little to consider the role of people as parties to Service interactions, might hinge entirely on the human in the machine.