Web Services seem to have grown from their simple XML-RPC roots to an ever more complicated and contradictory pile of acronyms, from SOAP to UDDI to WSDL to WSCI to BPEL4WS to WSCL to WS-whatever. Despite the intense hype that Web Services have received, it's hardly clear that this was a good idea.
Giving computer programs greater status as consumers and creators of Web-accessible content seems like a good idea on the face of it. The HTTP protocol is agnostic about humans and computers - it doesn't matter whether there's a human using a browser or a program following its own logic at the client side. XML provides a uniform syntax for markup, readable and editable by humans and by computer programs. The basic foundations seem fairly sound.
The problems seem to arise from the competing ambitions of Web Services developers and from the serious differences between sharing information among humans and sharing information among computers.
The original Web began in obscurity, a minor project at a research facility that proved useful to a much wider audience. The commercialization of the Web began only after the foundations had been laid and understood, providing a clear set of basic tools that worked regardless of the situation. Vendors certainly contributed to the development of the Web, but controlling the Web was plainly beyond the reach of any single organization, and development initiatives could appear from any direction. Web Services have had no such time for quiet and enthusiastic ferment, having spent their entire existence as a series of business-driven projects with concurrent marketing hype. Technical excellence often seems secondary to the business needs of the participants.
The differences between humans and computers are also creating problems. Many of the Web Services specifications have to recreate tasks normally performed by human readers. Humans are excellent at finding information through a variety of channels and discriminating between similar possibilities to find a few that actually do what they want. While many humans are limited to a single language, most are at least tolerant of variations within that language, and many can even read and write multiple languages, opening additional options for translation. Computers can help with all these tasks, certainly, but humans provide an extra level of interpretation that computers do not yet provide.
Much of the promise of Web Services is that programs can be connected to each other without creating much extra work for human programmers. In the idealized world Web Services purveyors often propose, programs can discover each other through directories, configure themselves to conform to other programs' expectations, and open channels of communication when necessary. In practice, these are all tasks that humans are much better at than computers, and recasting these tasks for computers has resulted in a huge pile of specifications which aren't particularly appealing to anyone except perhaps tools vendors.
Reaching solutions to these problems is extremely difficult, especially if the goal is transparent program-to-program communication. The Web does have some answers to offer its odd progeny, though they may not be the ones Web Services developers are looking for.
The strange notion that universal computing communications plumbing can be laid in just a few years of discussion and implementation probably needs to be discarded. The level of communications that the Web itself provides is intrinsically limited by the ability of different participants to publish and understand information, and failure is a frequent result. In the human world, this is acceptable. Different communities may need to communicate with each other, but universal communications aren't that important to solving local problems. A common document transfer protocol and a foundation document format are useful, but a common understanding of documents is not a problem that the Web tries to solve. Understanding document meaning remains a local issue, solved locally.
Another aspect of Web Services that doesn't really fit the Web that well is the notion of communications between programs. While the HTTP protocol is certainly about connecting a client computer and a server computer, the kinds of communication that take place over the Web are rarely well-defined one-on-one communications. Publication has been the dominant approach on the Web, with individuals and organizations posting material on the chance that someone else might like to read it. While this lacks the kind of efficiencies that one-on-one communications provide, it also substantially reduces the coordination costs that plague Web Services today.
It seems like there's another route to making the Web useful for computers, taking a very different path from the SOAP-based approach currently encouraged by the main participants in "Web Services" today. Instead of defining APIs for accessing information, return to the publishing (and custom-publishing) models that have worked well for Web sites so far. The Web already has tools for interacting with sites, and re-inventing them to fit a programming model doesn't seem necessary. Instead of defining Web Services as something different from traditional web development, integrate Web Services with the existing Web. Encourage developers to create more machine-readable information resources - pretty much XML formats - but don't expect them to create systems which are universally understood, any more than we expect English readers to understand Chinese web sites. Local understandings will do fine, and translation services can appear as necessary. Discovery and choreography can be left safely in the hands of human programmers.
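To make the publishing approach concrete, here is a minimal sketch of what such a "service" might look like from the consumer's side. The catalog format, element names, and URL are all hypothetical, invented for illustration: the point is that the service is just an XML document retrieved with an ordinary HTTP GET, and the understanding of its format lives in local code written by a human programmer, not in a WSDL-generated stub.

```python
import urllib.request
import xml.etree.ElementTree as ET

# A hypothetical machine-readable resource a site might publish,
# shown inline here so the sketch is self-contained.
SAMPLE = """<catalog>
  <product><name>widget</name><price>9.95</price></product>
  <product><name>gadget</name><price>19.50</price></product>
</catalog>"""

def parse_catalog(xml_text):
    # The "local understanding" of the format: this code embodies what
    # the programmer learned by reading the site's documentation.
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): float(p.findtext("price"))
            for p in root.iter("product")}

def fetch_catalog(url):
    # An ordinary HTTP GET -- no SOAP envelope, no discovery layer.
    # Any existing Web tool (a browser, wget) can retrieve the same document.
    with urllib.request.urlopen(url) as resp:
        return parse_catalog(resp.read())

prices = parse_catalog(SAMPLE)
```

If the publisher changes formats, or a consumer needs data from a differently structured site, a small translation layer like `parse_catalog` is rewritten locally, which is exactly the division of labor the paragraph above argues for.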
While this complicates the supposedly simple process of slapping a web service on top of a legacy API, it also seems like a lower-cost and lower-risk approach to creating new Web Services, even if it doesn't have the glamor and potential tool revenue that the many-acronym approach offers. The Web has proven itself repeatedly in a wide variety of circumstances, from small-scale publishing to "enterprise information integration" and the like. There is no need to change the fundamental models it uses; all we need to do is make the information it carries digestible by a wider range of consumers.
Copyright 2002 by Simon St.Laurent.