In Search of Portable Interoperability

Posted in SOA by AST on Thursday, November 10th, 2005


Interoperability is a pretty hot topic these days as more and more people are either implementing or talking about implementing Web services and especially SOA environments. However, if something isn’t done about it, the current initiatives in the WS-* space may unwittingly hurl us back into the days of worrying about how we were going to port applications across Windows, Motif and the Mac.

It is a common misconception in the Java world, at least among people who haven't had a chance to read the fine print, that JMS provides an interoperable solution for asynchronous, reliable messaging. What JMS actually provides is a consistent programming interface across a wide variety of Message Oriented Middleware (MOM) implementations. According to the JMS Specification, version 1.1, the objective of JMS is to:

“[define] a common set of enterprise messaging concepts and facilities. It attempts to minimize the set of concepts a Java language programmer must learn to use messaging products. It strives to maximize portability [emphasis added] of messaging applications.”

Note that there is nothing said in the above objectives about the interoperability of JMS implementations. The topic of portability is discussed further in the specification:

“The primary portability objective is that new, JMS only, applications are portable across products within the same messaging domain [emphasis added].”

Finally, under the section What JMS Does Not Include is the crucial point about JMS and interoperability:

“Wire Protocol - JMS does not define a wire protocol for messaging.”

This statement is perfectly in line with the goals of JMS as a portable API for enterprise messaging. It is much the same approach as the JDBC API. JDBC means that you can write an application to talk to any database supplying a compliant JDBC driver. JMS means that you can write an application to talk to any messaging system supplying a compliant JMS implementation—but it doesn't mean that you can use JMS to bridge messaging implementations. It is strange that so many people inherently understand this issue with JDBC, yet miss this crucial point about JMS. Most people know that Java runs on lots of different platforms, but they typically don't have much exposure to MOM implementations, and so they don't see the distinction. I think Sun could have chosen a better name for what they called the JMS provider, to equate it more closely with the JDBC driver concept, but that ship sailed on April 12, 2002 (the date on the 1.1 specification).

So, to stress the point (there will be a quiz later): JMS provides a compatible API for enterprise messaging, but it does not provide an interoperable message transport protocol.
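To make the distinction concrete, here is a minimal, purely illustrative sketch in plain Java. The two "vendors" (AcmeMQ and ZenithMQ are invented names, not real MOM products) expose an identical API, so application code is portable between them; but each puts different bytes on the wire, so their brokers could never exchange messages with each other:

```java
// A minimal sketch of "portable API, non-interoperable wire protocol".
// All names here (WireSender, AcmeMQ, ZenithMQ) are hypothetical.
interface WireSender {
    byte[] frame(String payload); // what actually goes on the wire
}

// One hypothetical vendor frames messages as length-prefixed text...
class AcmeMQ implements WireSender {
    public byte[] frame(String payload) {
        return ("ACME|" + payload.length() + "|" + payload).getBytes();
    }
}

// ...another frames them as an XML-ish envelope.
class ZenithMQ implements WireSender {
    public byte[] frame(String payload) {
        return ("<zmsg>" + payload + "</zmsg>").getBytes();
    }
}

public class PortabilityDemo {
    public static void main(String[] args) {
        // Application code is portable: it compiles against WireSender only,
        // so swapping vendors is a one-line change.
        WireSender a = new AcmeMQ();
        WireSender b = new ZenithMQ();
        System.out.println(new String(a.frame("hello")));
        System.out.println(new String(b.frame("hello")));
        // But the wire formats are incompatible: an AcmeMQ broker
        // cannot parse a ZenithMQ frame, and vice versa.
    }
}
```

The application is portable across both classes, yet nothing built on AcmeMQ can interoperate with anything built on ZenithMQ—which is precisely the JMS situation.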

Enter Web Services

Historically, people solved the organizational interchange problem using the ubiquitous comma-separated values (CSV) file, which is itself ironic given that there is no official format specification for CSV. Faced with the challenge of exposing or bridging MOM implementations, it makes sense to do so in a way that uses an interoperable wire protocol rather than relying on yet another proprietary solution (YAPS). The architecture specified by the W3C's Web Services Architecture (WSA) attempts to address these issues through the use of WSDL, SOAP and additional protocols for reliable messaging. The current players are WS-Reliability from Fujitsu, Hitachi, NEC, Oracle, Sonic and Sun, and WS-ReliableMessaging from BEA, IBM, Microsoft and TIBCO.

In theory, both of these specifications do the same thing; in the details, however, each shows a few subtle differences. Those differences aren't terribly important to this discussion. They center mainly on whether there is a requirement for other WS-* specifications and on how batches of messages can be transmitted.

If you’ve already subscribed to the WS-* approach to Web services, these are pretty much the choices. The ebXML Message Service Specification (a.k.a. ebMS) is related closely to WS-Reliability, but it includes some things which are specific to its role in ebXML. According to the XML Cover Pages description, ebMS will probably be updated to position it more closely with WS-Reliability in the next version.

Leaving aside the argument of how complex it is to implement these specifications, they have been proven to varying degrees to be interoperable between vendor implementations.

“Great, so we’ve solved our problems. All we have to do is provide a Web services endpoint connected to our JMS vendor and we’re in business, right?”

Ah, Grasshopper… You have much yet to learn.

Inverting the Problem

The real issue with the WS-R* specifications is the same as for JMS: the devil is in the detail. While WS-R* may standardize the wire protocol into sending XML documents over HTTP, which should be interoperable as long as the XML is conformant to the specification, we have a problem at the next layer of the application: the API.

Anyone who has attempted to write cross-platform C or C++ code (and sometimes even Java code, though to a much lesser degree) that did anything very complex has run into the problem that while both of these languages have formal specifications, the libraries for doing useful things on a given platform aren't always formally specified. With the latest POSIX specification, this has gotten a lot easier in the UNIX world than it used to be, but something as critical as a graphical user interface still has many proprietary (or at least divergent) variants, depending on your environment. Some of these are now open source, like GTK+ and Qt, but there is still a very active demand for a cross-platform UI toolkit (e.g. FOX and wxWidgets, formerly wxWindows).

The reason is that pragmatic programmers really only want to write things once and support the widest possible range of platforms. Even today, the user interface is still one of the biggest obstacles to actually accomplishing this goal. While it's true that Java provides JFC/Swing, that isn't always the right solution for every application. There are still advantages to writing UI applications in C++ over Java, depending on the type of application you're developing.

Many of the patterns in the venerable Gang of Four book are framed with examples describing how they may be employed to minimize the dependence of an application on a particular UI toolkit. As illustrated in the book, this isn’t done just to cause more work and write more code; it is done to encapsulate the parts of your application that may change often (like the UI) from the parts of it that won’t (or shouldn’t, like the business logic).

This lesson, which many of today’s Java programmers have never been exposed to in the way that people who programmed before Java and JFC were, is still a critical aspect of successful software design. Part of the problem with today’s sophisticated developer tools is that it is very easy to let the tools do things for you. This approach can seem tempting for many types of tasks: refactoring simple name changes and automating the creation of boilerplate Java Bean attribute accessors, for example. However, today’s developer needs to be very wary of what’s going on behind the “magic curtain” of the sophisticated IDE. These are the things that will make a difference between your web service being deployable across various application servers and one which is stuck in the tendrils of YAPS.

It doesn't need to be this way. While BEA's implementation of WS-ReliableMessaging makes extensive use of the JDK 1.5 annotation feature, the open source implementation of WS-Reliability from Fujitsu, Hitachi and NEC, RM4GS, provides very neat integration into a J2EE container environment as a JCA component. Yes, the implementation of the specification is proprietary (even if it is open source), but the semantics of using it are exactly the same as for any other JCA component. That's the point of the specification.

The important thing about this to a developer is that their implementation code is relatively clean:

public static void main(String[] args) throws Exception {
  InitialContext ctx = new InitialContext();

  // Obtain a Connection.
  // Specify the JNDI name of the connection factory.
  ConnectionFactory cf = (ConnectionFactory)ctx.lookup("eis/rm4gs");
  Connection conn = cf.getConnection(true, 0);

  // Obtain a P2P destination. Specify the name of the queue.
  P2PDestination dest = (P2PDestination)ctx.lookup("eis/SimpleQueue");

  // Create a sending session.
  Policy[] policies = new Policy[] { Policy.EXACTLY_ONCE };
  SendingSession session = conn.createSendingSession(dest, policies);

  // Start the RM4GS transaction.

  // Create a message.
  TextMessage msg1 = (TextMessage)conn.createMessage(MessageType.TEXT);

  // Send the message.

  // Complete the RM4GS transaction.

  // Release the message.

  // Release the session.

  // Release the connection.
}

You still have the proprietary implementation imports, but at least the model is relatively straightforward. Also, these imports are actually interfaces, so if you ever really needed to, you could just bridge another implementation’s classes with the Adapter pattern.
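For illustration, a bridge of that kind is a textbook Gang of Four Adapter. Everything below is hypothetical: SendingSession stands in for an RM4GS-style interface, and OtherVendorClient for some other vendor's proprietary class. The point is only that the application keeps compiling against a single interface while the adapter absorbs the vendor-specific calls:

```java
// Hypothetical target interface, standing in for an RM4GS-style session.
interface SendingSession {
    void send(String textMessage);
}

// Hypothetical proprietary class from a different vendor,
// with its own incompatible method names and types.
class OtherVendorClient {
    private final StringBuilder log = new StringBuilder();
    void transmitReliably(byte[] data) {
        log.append(new String(data));
    }
    String delivered() { return log.toString(); }
}

// The Adapter: application code keeps talking to SendingSession,
// while the adapter translates each call into the other vendor's API.
class OtherVendorSessionAdapter implements SendingSession {
    private final OtherVendorClient client;
    OtherVendorSessionAdapter(OtherVendorClient client) { this.client = client; }
    public void send(String textMessage) {
        client.transmitReliably(textMessage.getBytes());
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        OtherVendorClient vendor = new OtherVendorClient();
        SendingSession session = new OtherVendorSessionAdapter(vendor);
        session.send("hello"); // application code is unchanged
        System.out.println(vendor.delivered());
    }
}
```

Swapping vendors then means writing one new adapter rather than touching every call site.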

The Apache Sandesha implementation of WS-ReliableMessaging is based on Axis and works a bit closer to the metal. From the User Guide, using it isn’t terribly complex:

public static void main(String[] args) {
  try {
    Service service = new Service();
    Call call = (Call) service.createCall();
    SandeshaContext ctx = new SandeshaContext();
    ctx.initCall(call, targetUrl, "urn:wsrm:Ping", Constants.ClientProperties.IN_ONLY);
    call.setOperationName(new QName("", "ping"));
    call.addParameter("arg1", XMLType.XSD_STRING, ParameterMode.IN);
    call.invoke(new Object[]{"Sandesha Ping 1"});
    call.invoke(new Object[]{"Sandesha Ping 2"});
    call.invoke(new Object[]{"Sandesha Ping 3"});
    ctx.endSequence();
  } catch (Exception e) {
    e.printStackTrace();
  }
}

The BEA implementation, present in WebLogic 9, looks a bit different:

package examples.webservices.reliable;

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.Oneway;
import weblogic.jws.WLHttpTransport;
import weblogic.jws.ReliabilityBuffer;
import weblogic.jws.BufferQueue;
import weblogic.jws.Policy;

/**
 * Simple reliable Web Service.
 */
@WebService(name="ReliableHelloWorldPortType",
            serviceName="ReliableHelloWorldService")
@WLHttpTransport(contextPath="ReliableHelloWorld",
                 serviceUri="ReliableHelloWorld")
@Policy(uri="ReliableHelloWorldPolicy.xml",
        direction=Policy.Direction.inbound,
        attachToWsdl=true)
@BufferQueue(name="webservices.reliable.queue")
public class ReliableHelloWorldImpl {

  @WebMethod
  @Oneway
  @ReliabilityBuffer(retryCount=10, retryDelay="10 seconds")
  public void helloWorld(String input) {
    System.out.println(" Hello World " + input);
  }
}
I personally don't like the use of the JDK annotations—especially when there are more annotations than Java code, but that isn't the point either. The three examples above are all supposed to accomplish the same thing: reliable delivery of a message from point A to B, or in WSA-speak, between a requester agent and a provider agent. However, if you were the one implementing the service, or in our case, a simple Messaging Bridge between JMS and something else (maybe another JMS implementation), your code is intrinsically tied to the vendor implementation. Change vendors, change your code. You've just inverted the JMS interoperability problem and have interoperability without compatibility rather than compatible interoperability.

Learning from the Past

To solve this problem requires a look at some historical technologies: Berkeley Sockets and CORBA.

The Berkeley socket API was in a similar position to JMS: it provided an API on top of a message transport protocol. The key difference is that the transport protocol in question was TCP/IP. It had already been standardized from an interoperability perspective with RFC 793, so what Berkeley sockets provided was a widely implemented interface for interacting with TCP/IP to send messages (at least, it was eventually widely implemented). Sun was not in a similar position to do this with JMS, because that would have required getting all of the MOM vendors on the planet to agree on a particular message exchange transport protocol. While that wouldn't necessarily be a bad thing (and would make our problem evaporate nicely), it was something that wasn't likely to happen.
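That layering is still visible in Java today: java.net.Socket is the portable API, and standardized TCP/IP carries the bytes underneath, which is why the peer on the other end of the connection can just as well be a C program written against Berkeley sockets. A minimal loopback sketch:

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Opens a loopback TCP connection, sends one line, returns the echo.
    static String echoOnce(String msg) throws Exception {
        final ServerSocket server = new ServerSocket(0); // 0 = any free port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println(in.readLine()); // echo the line straight back
            } catch (IOException ignored) {
            }
        });
        t.start();

        // The client sees only the portable socket API; because the wire
        // protocol (TCP) is standardized, the server could be replaced by
        // any conformant implementation in any language.
        String reply;
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            reply = in.readLine();
        }
        t.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping"));
    }
}
```

Portable API plus standardized wire protocol is exactly the combination the WS-R* world has only half delivered.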

CORBA is also relevant here because it took the position of defining its own, end-to-end world: an interoperability-focused transport protocol in IIOP, interoperable operation interface specification in IDL and interoperable programming API specifications in the various language bindings. In all, it covered a lot of ground and was quite ambitious. However, I think the people behind CORBA knew that they wouldn’t really have portable distributed objects without specifying all of these things. At the time, none of these existed on which to build, so while what Berkeley sockets did was put a popular, simple API on top of TCP/IP, CORBA drew a box around a problem and said: “This is the box. These are the toys. Now, go have fun.”

Unfortunately, it didn't work as well as was hoped. While it could be argued that using XML to specify the semantics of reliable message delivery in the WS-R* specifications provides interoperability, very few programmers are going to generate that XML directly—especially with all the composable layers and namespaces those specifications require. So the first thing people do with a complex, error-prone task is automate it—by putting an API around it. This is exactly what the three examples above demonstrate, but by specifying only the wire and “close to the wire” aspects, the specifications invite vendors to fill in the gaps. And, being motivated to make money so they can survive, vendors will fill those gaps with proprietary APIs.

The problem with proprietary APIs is slightly different from what it was in the past. Given today's market volatility, the number of mergers and acquisitions in the software industry is somewhat alarming. While your vendor may provide an implementation of feature X today, tomorrow they may have bought that feature, or be OEM-ing it, from someone else—with a different API. You're caught in the middle. If you don't change your application, your underlying tools won't be supported for long (unless they have a truly massive installed base—ask IBM how many Informix database servers are still out there). If you do change your application, it costs you money: both in additional licensing fees (the “upgrade” is unlikely to be free) and in the time it takes to change the code and completely regression test it against the new tools. It will probably also introduce new bugs, because it is a natural impulse to “enhance” software while porting it, causing still more time and money to be spent.

I believe there is a very real danger, with all of the hype and the speed of adoption of Web services, that little thought is being given to the longer-term (or even medium-term) compatibility and interoperability of the solution implementations. The motivation of the WS-* camp is similar to, but the opposite of, Sun's with JMS. Given the trouble the vendors are having in agreeing on core specifications, it is extremely unlikely that they will settle on a common API for implementing them. They wouldn't want to be seen as cooperating with the competition too much, or there wouldn't be enough potential ROI in custom tools to offset the costs of hammering out the interoperability standards in the first place.

Unfortunately, it is the adopters who suffer the consequences. Regardless of what people actually think about WSA and the other specifications, they are trying to do something good; so was CORBA. The problem is that if enough people blindly follow the easy path to “speedy deployment” of Web services through a reliance on incompatible vendor tools (not the underlying specifications), Web services risks some of the same bad press CORBA received when people eventually wanted to migrate from one implementation or vendor to another. The lesson here is that the parts that were specified and agreed upon actually worked and were interoperable; the parts left as “implementation details” are always the parts of a system that end up causing pain and cost to the customer.

I believe that the JCA approach taken by Fujitsu and company in implementing RM4GS is probably about the best we can hope for in the Java world for achieving both compatibility and interoperability for reliable messaging. Wrapping these specifications in a standardized JCA interface is not overly complicated (see the JavaWorld article Connect the enterprise with the JCA, Part 2). Now that I've seen the RM4GS implementation, I'm somewhat surprised no one else has thought to put these two things together yet. Even an article on IBM's developerWorks, Choosing among JCA, JMS and Web services for EAI, fails to consider the power of combining JCA and Web services. As the customers responsible for building systems based on these technologies, we should insist that it is not only desirable but also possible to provide portable interoperability when implementing or interacting with Web services. Vendors won't do it unless we ask for it, but I suppose there aren't many people looking at Web services for asynchronous messaging yet. Most people see Web services as a way to invoke remote objects over HTTP rather than as a way to embrace asynchronous messaging on the scale of the Internet, so that hurdle needs to be jumped first.

Remember, as Yoda says: “Once you start down the dark path, forever will it dominate your destiny.” Don’t let your vendors steer you onto that path. Demand portable interoperability in your software solutions. If you don’t invest in it today, you’ll end up paying for it tomorrow.