Socio-political and Commercial Motivations for WS-*

Posted in SOA by AST on Saturday, December 9th, 2006


I can appreciate Gervas’ position as a “neutral, non-technical observer” of the whole ROA/SOA thread, but I think the root of the problem, namely that bright people have difficulty clarifying basic issues about REST, is entirely one of “what they know” and “where they are coming from”.

I have tremendous respect for Steve and everyone else on the list [the service-oriented-architecture list] that I’ve interacted with, so this isn’t personal in any way. I think it is important to understand a bit of industry history in light of lots of smart people and vendors trying to figure out how to field an SOA that works.

A lot of us on this list have been doing distributed computing for a long time. Most of us did a lot of CORBA or DCOM (or both) before RMI/EJB came on the scene, and certainly before XML-RPC and SOAP did (and some people were doing distributed computing even earlier than that).

The thing about a programming paradigm is that getting any good at it takes a lot of time and effort: you have to learn how to think and design in ways that take advantage of it. CORBA, DCOM, EJB and the like are about extending the local programming model to remote systems in a more-or-less coherent way.

All of them are object-oriented in that you create a service with a defined set of capabilities and a given interface. That interface is normally designed much like a local one: it exposes a fairly rich, domain-specific API for interaction between clients and servers. Most of the early mistakes people make on CORBA, DCOM and EJB projects are in the granularity of those interfaces, because they forget (or never consider) how much more a call over the network costs than a call within the same address space, i.e. between “normal” objects.

Learning how to balance a rich, domain-specific interface against an efficient one is one of the keys to designing and developing successful distributed object systems.
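
To make that granularity tradeoff concrete, here is a minimal sketch in Java of the kind of RMI-style interfaces this era produced. The names are hypothetical; the point is that every method on the chatty interface is a separate network round trip, while the coarse-grained variant crosses the wire once:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Chatty, fine-grained interface: reading a customer's name, address
    // and balance costs three separate network round trips.
    interface CustomerChatty extends Remote {
        String getName(String customerId) throws RemoteException;
        String getAddress(String customerId) throws RemoteException;
        long getBalanceCents(String customerId) throws RemoteException;
    }

    // Coarse-grained alternative: one call returns a value object that is
    // serialized across the wire, so the network is crossed only once.
    interface CustomerCoarse extends Remote {
        CustomerSummary getSummary(String customerId) throws RemoteException;
    }

    // Plain data carrier passed by value to the client.
    class CustomerSummary implements java.io.Serializable {
        String name;
        String address;
        long balanceCents;
    }

Within a single address space the chatty version is harmless; over a network it can be orders of magnitude slower, which is exactly the granularity mistake described above.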

If you look at the history of these systems, formalization of CORBA started in 1990 at the OMG, DCOM surfaced around 1993, and RMI and EJB emerged in 1997. Getting all of these technologies implemented took a lot of work, because most of them are naturally fairly complex: it isn’t easy to make a remote system look like a local one. Lots of vendors produced a lot of products, and some companies were founded around these technologies.

While each of these technologies is good (to varying degrees) at providing a distributed object computing platform within a local physical environment, none of them scaled very well over long distances or between enterprises. Most required a large number of proprietary ports to be opened in company networks, which has security implications, not to mention the operational overhead of just making it happen.

On the other hand, HTTP and Web pages nicely sailed through port 80 which, in most cases, was already open. Both vendors and customers said, “Wouldn’t it be great if we could do things like CORBA, but using HTTP?” Enter XML, XML-RPC and SOAP in 1999-2000.
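
To give a flavor of what that looked like in practice, here is a hedged sketch (the endpoint, namespace and operation are invented for illustration) of an RPC-style SOAP call tunneled through an ordinary HTTP POST on port 80, using only the standard java.net classes:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SoapOverPort80 {
        public static void main(String[] args) throws Exception {
            // A SOAP request body: an RPC-style operation wrapped in XML.
            // The service, namespace and operation are all hypothetical.
            String envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
              + "  <soap:Body>"
              + "    <getBalance xmlns=\"urn:example:accounts\">"
              + "      <customerId>42</customerId>"
              + "    </getBalance>"
              + "  </soap:Body>"
              + "</soap:Envelope>";

            // A plain HTTP POST on port 80 (the same port the firewall
            // already allows for Web pages) carries the RPC call.
            URL url = new URL("http://example.com/services/accounts");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "urn:example:accounts#getBalance");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(envelope.getBytes("UTF-8"));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }

From the firewall’s point of view this is just more Web traffic, which is precisely why the approach was so attractive.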

Now, if you were a vendor that had spent millions in R&D getting distributed object computing working in CORBA, DCOM and EJB, but had come up against limiting factors such as complexity of deployment (all those ports), lack of interoperability between CORBA, DCOM and EJB, and the way the Web was influencing application development, what would you do?

I bet you’d figure out how to take all those things you’d been doing and make them work over ubiquitous Web protocols. I’m not saying this is necessarily bad or that it doesn’t have its place, but there are two other big reasons why it would seem reasonable to do this:

  • it is the way major software vendors had been developing systems since as early as 1990, which meant
  • there was a legion of software developers who already understood how to develop distributed systems using those concepts and mechanisms

Vendors are protecting their investment: they need to stay in business and keep their shareholders happy, but they also need to somehow make their distributed computing technologies work together, because more and more customers are running heterogeneous environments, not only internally but across trading partners.

The Web is different, however.

Just as messaging-oriented middleware (MOM) embodies a different way of thinking about distributed computing problems than distributed objects do, building successful distributed hypermedia applications with REST, whether for human/computer or computer/computer interaction, requires a shift in how you think about the problem.
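
As one illustration of that shift (the URI and resource below are invented), contrast the distributed-object habit of invoking a service-specific verb on a stub with the REST habit of dereferencing a resource through HTTP’s uniform interface and working with the representation that comes back:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestStyleClient {
        public static void main(String[] args) throws Exception {
            // Distributed-object thinking: look up a stub and invoke a
            // service-specific verb on it, e.g.
            //   OrderService svc = lookup("OrderService");
            //   Status s = svc.getOrderStatus(42);
            //
            // REST thinking: the order is a resource with its own URI, and
            // the uniform interface (GET/PUT/POST/DELETE) is the whole API.
            URL order = new URL("http://example.com/orders/42"); // hypothetical
            HttpURLConnection conn = (HttpURLConnection) order.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/xml");

            // The representation carries the resource state and, ideally,
            // links to the next legal interactions (hypermedia).
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }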

If you can’t suspend your assumptions about how things ought to work long enough to understand how they do work in a different environment, e.g. MOM or REST, you’ll be forever frustrated and never understand the advantages and disadvantages of that approach relative to any other. From my perspective, this is where the recent ROA/SOA thread stands, and it is why reaching any sort of common understanding is, and will continue to be, so difficult.

1 Comment »

  1. Insights » Is “REST API” an Oxymoron? said,

    December 16, 2006 at 12:41 pm

    […] Even though I had to temporarily drop out of the ongoing discussion on the service-orientated-architecture Yahoo group/mailing list, which prompted my last post, to focus on a few high-priority interrupts for a while, my brain hasn’t fully disengaged from the discussion. […]
