A Loose Coupling Strategy

Getting things done when you’re only an architect*

11 minute read


When tactical or political pressures mean you can’t rewrite or restructure your applications in line with a best-practice architecture, or apply strategic measures with much effectiveness, you don’t have many options left for making an architecture function meaningful. One approach that I have employed with some success is to apply a refactoring strategy to legacy applications, with the aim of creating a situation that supports the ‘right’ kind of change later.

The majority of refactoring affects code structure and the density of software cruft, making software more readable, maintainable and so on, and is therefore more the preserve of development teams than architects. But there is one area with an architectural impact: adopting strategies that promote loose coupling. It’s quite possible to do this without getting all big picture or donning your astronaut’s helmet. Applied pragmatically, it needn’t add lots of time to projects either (the aim, in fact, is to add zero time by making margin calls very early in the lifecycle).

Coupling is a measure of how anonymous applications and components are. The more any one component knows about others, the more tightly coupled it is to them. Effectively, it’s a measure of information redundancy. If application B requires prior knowledge of some operational detail of application A, then that knowledge must exist in two places - in B as data, and in A as implementation. If the implementation of application A changes, then application B must update its data. The situation gets very complex when lots of other applications also need to know about that detail. One of the key drivers for middleware has been to address this, because with application portfolios being so heterogeneous and dynamic it’s nigh on impossible to fix this in the applications themselves in a consistent manner.
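To make that redundancy concrete, here’s a minimal sketch (all names invented for illustration): application A’s choice of price unit is an implementation detail, yet application B must hold the same knowledge as data, so a change in A forces a change in B.

```java
public class CouplingSketch {
    // Application A: the unit of price is an implementation detail here.
    static int getPriceForProduct(int productId) {
        return 1099; // 10.99 GBP, expressed in pence
    }

    // Application B: the very same knowledge, held redundantly as data.
    static final int PENCE_PER_POUND = 100; // B must 'know' A returns pence

    public static void main(String[] args) {
        int raw = getPriceForProduct(42);
        // If A switches to decimal pounds, this line silently shows nonsense.
        System.out.printf("Price: %.2f GBP%n", (double) raw / PENCE_PER_POUND);
    }
}
```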

One very important thing to understand about the concept of coupling is that there are only looser and looser ways to be coupled, but no way to totally ‘decouple’, as it were. If application A communicates with application B, there is no way for them to be completely disconnected. There are ways for them to be coupled to something that isn’t each other though, and we’ll come on to that in a bit.

Coupling occurs at many levels and in many ways. It takes a lot of effort to address them all, and for the large part you may not want to. Like many aspects of architecture, you have to take a pragmatic view on where to fight your battles, otherwise you’ll find yourself in an ivory tower with only PowerPoint slides as your friends. The obvious rule of thumb is that you want to be decoupled from things that change frequently, and to understand any design debt you may be incurring where you end up more tightly coupled than you’d like.

These trade-offs are critical when you start looking at things like SOA. Vendors will spout phrases like ‘loose coupling’ all through their demos, but you won’t get it in the right areas unless you govern what the implementation looks like in a way that fits your business. No tool provides loose coupling for free.

You could probably spend a week talking about this subject and I don’t have that long, but the majority of the concepts can be covered simply by examining how one application might exchange some data with another. Once you do this, you’ll have the right kind of mindset to spot other danger areas.

So let’s say that we’ve built a dynamic pricing system. When a customer comes along and wants to buy one of our products we need to know something about them in order to give them a price. If they’re a loyal big-spender we’ll do them a better deal than if they’re a first timer. Perhaps we change prices with the season, are running a two-for-one offer, or maybe we give a discount for shopping online as opposed to via telesales. Who cares? We just need a way to identify a customer and a way to identify the product they are interested in. Then it’s time to talk to that pricing engine.

The design-decision process is similar for both refactoring existing code and for new developments, so I’ve tried to look at a continuum of tight to loose coupling options. They’re not exhaustive but should give a flavour of how discussions could go - the objective is, not surprisingly, to coax changes towards the looser options. Also remember that adding a ‘loose’ interface, even one that isn’t initially used by many callers, is a capability for the future. Once it’s in production, it’s a path of least resistance for new developments to take advantage of. Call me a cranky old cynic, but I don’t believe promotion of reuse works. In the real world, developers and project managers are under such pressure to get the job done that they’ll always (understandably) look for short cuts. The trick is not to write long documents on reuse policy, but to make the best option and the easiest option one and the same. Anyway, let’s get back to the aspects of that pricing query.

  • Where

    Our first problem is where to find our engine. We need some way of letting it know we have a requirement for information, so we need to open up a connection to it. The pricing engine is on the network, so has a network card installed in it. The tightest possible coupling would be to maintain a local copy of its MAC address; next would be to know its IP address; next its DNS name. All of these are higher and higher levels of abstraction away from the physical machine itself. The MAC address would tie us to the network card, which would be invalidated if the pricing engine was moved to a new box. Using the IP address would avoid this, as long as the IP address moved with the service. The DNS name, though, can be dynamically assigned to wherever the pricing engine goes, thus making a migration much easier. But it’s still a form of coupling, because we need to hold the DNS name somewhere and that may have to change at some point. Using a central look-up registry to ‘fetch’ the name from would solve that, but of course then we’re coupled to the registry (these options are sketched in code after this list). Technologies like JINI use multicast so that the lookup problem is, in a sense, reversed: the registry is discovered, not explicitly looked up. Real SOA (the sort that isn’t just a bunch of XML web services) uses the same mechanism.

  • Who

    So here we are with a connection to our pricing service but, because we can get commercially-sensitive prices from it, we have to identify who we are, so it knows we’re not a competitor snooping for data. A tightly-coupled method would be to keep a username and password locally and for the pricing engine to have its own user database (I hesitate to even mention approaches like that, but I have yet to see an organisation that didn’t have quite a few key systems deployed in exactly this way). Better would be to have a SecurID (or similar) token generator that could be validated by the pricing engine. That relieves us of the responsibility of storing usernames and passwords, and all the issues that come with that, but now we’re coupled to the token-generating algorithm, which in most implementations still has to have a way to identify the caller, via a prefix number, which we would have to store (there’s a sketch of this after the list). I’m not sure there is a perfect solution here. PKI would work for passport-like identification and encryption, but ultimately the pricing engine needs something to identify us that we either store or have (like our own IP address). This shouldn’t be seen as an issue, though: tight coupling in security can be a good thing (anonymity and security are rarely good bedfellows).

  • How

    We’ve been approved and we have an open connection. Now what? Well, we need to request a price. A tightly-coupled approach would be to combine our data and the semantics of the call into an SQL query. That couples us directly to database names, table names, table relationships and the data types in the tables (but surely nobody’s ever actually done this?). Better would be to institute some kind of API style: now we only need to know the name of the API call, e.g. getPrice() (both approaches are sketched after this list). You could abstract this further by naming business operations and mapping them to one or more API calls, and then only calling the business operation by name, but that’s getting into architectural concepts, like facades, that we said we didn’t have time for.

  • What

    OK, so we’re calling getPrice(), but what arguments are we going to pass it? Let’s say we submit the product ID and a customer ID as integers (the customer ID being null if this is a new customer, as determined by a previous customer look-up). This works acceptably until a subsequent business need arrives, say supporting multibuy discounts. Now getPrice() needs to accept a list of products, quantities and a customer ID. Or what if we move to supporting business customers that have alphanumeric identifiers? We could change getPrice() each time - except that all applications that use it would also have to be updated and regression tested, and because this is a very popular service that’s a lot of work. Or we could add getMultibuyPrice() and getBusinessCustomerPrice(), but now we’re into interface proliferation, which has a maintenance overhead. Alternatively, we could have implemented getPrice() to accept one argument - a Business Document which represents a ‘Price Query Form’.

    Version one of the form would contain only an integer to identify the product, another (or a null) to identify the customer, and the document version number. Future enhancements would then only require changes to the getPrice() implementation to support new, versioned, query forms (there’s a sketch of this after the list). XML is a candidate for the form implementation, but depending on the potential complexity it could easily be something like JSON.

    The point is that where there are many clients and a strong likelihood of change around getPrice() functionality, moving complexity to the centre makes sense to promote looser coupling. If this is starting to sound a bit like SOA then it’s no coincidence, but remember, we don’t have time to do ‘architecture’ - this is just refactoring interfaces to promote loose coupling. But if we were later to get the mandate, support, resource etc. to move towards something a bit more service-oriented, it’s going to be a whole heap easier.

    Off goes our request, and we wait for a price to come back. The issues for the response are generally the same as for the request, except that a business document approach may make things pretty complex unless it’s versioned and backwards compatible (i.e. so each calling application only changes when it needs to, and not because another caller happens to need prices in dollars and pounds in the same response).

    One extra consideration, though, is error conditions. If we sent across a non-existent product identifier, or the pricing engine had a funny five minutes, we’ll be getting something other than a price back, and maybe support staff need to be alerted too. Having an enterprise model for errors is a nice thought, but quite hard to do if you’ve got a mixture of built and bought applications. If the calling application has to deal with many other services (customer lookup, product catalogue, reference data) then there will be quite a few potential errors to handle. The more of these there are, the more worthwhile it may be to abstract them into a standard model (one is sketched after this list).

  • Why

    We’re done with our call. We have our price and can let the customer know. But over our heads a few other things exist that we interoperate with, either directly or indirectly. They are why this call existed in the first place, and the way it relates to the business itself. We might be part of a transaction, or a workflow step. Although this may not affect the call we just made in terms of its syntax or semantics, we are now coupled to other calls via some integration architecture, BPM tool or the like. Depending on the volatility of these, we may want to make different decisions about how the call operates at the lower level - opting to use a more standardised integration approach at greater development cost, for example. These issues are certainly architectural, and for another time.

  • When

    The timing of all this may be relevant too. For web-based applications, the round-trip time will be critical to the customer experience. Operations teams may want to monitor the call timings and take action if responses are too slow or not received. Maybe we just need to report numbers of calls and average response time for SLA purposes. Whatever the reason, we find ourselves again existing in a wider framework, and coupled to it by having to work with things like SNMP traps (a timing sketch follows the list).
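
To make some of these trade-offs concrete, here are a few sketches in Java. Everything in them - class names, hosts, field names - is invented for illustration; they’re minimal sketches of the options discussed, not production code. First, the ‘Where’ question: each step down holds less location knowledge locally.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class WhereSketch {
    // Tightest practical option: an address held as local data, invalidated
    // the moment the pricing engine moves to a new box.
    static final String PRICING_IP = "192.168.10.7";

    // Looser: resolve a DNS name at call time; the name can be repointed
    // at wherever the engine lives, but we still hold the name locally.
    static InetAddress resolvePricingHost() throws UnknownHostException {
        return InetAddress.getByName("pricing.internal.example.com");
    }

    // Looser still: fetch the location from a registry, so only the
    // registry itself is known locally (we are now coupled to the registry).
    interface Registry {
        String lookup(String serviceName);
    }

    static String findPricingEngine(Registry registry) {
        return registry.lookup("pricing-engine");
    }

    public static void main(String[] args) {
        Registry stub = name -> "pricing.internal.example.com"; // stand-in registry
        System.out.println(findPricingEngine(stub));
    }
}
```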
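Next, the ‘Who’ question. The token generator below is a stand-in for a SecurID-style device - an HMAC over a time window, illustrative rather than faithful - showing how we shed the stored password but stay coupled to an algorithm and a caller-identifying prefix.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class WhoSketch {
    // Tight: credentials held locally, checked against the engine's own
    // user database - exactly the approach we'd like to avoid.
    static final String USERNAME = "pricing-client";
    static final String PASSWORD = "s3cret";

    // Looser: a time-based token. No password to store, but we're still
    // coupled to the algorithm and to a caller-identifying prefix.
    static final String CALLER_PREFIX = "4711"; // identifies us to the engine

    static String currentToken(byte[] sharedSeed) throws Exception {
        long window = System.currentTimeMillis() / 60_000; // 60-second windows
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSeed, "HmacSHA1"));
        byte[] digest = mac.doFinal(Long.toString(window).getBytes(StandardCharsets.UTF_8));
        int code = ((digest[0] & 0x7f) << 8 | (digest[1] & 0xff)) % 10_000;
        return CALLER_PREFIX + String.format("%04d", code);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(currentToken("shared-seed".getBytes(StandardCharsets.UTF_8)));
    }
}
```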
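For the ‘How’ question, compare the knowledge embedded in the SQL string with what the API-style caller needs: just an operation name and its arguments.

```java
public class HowSketch {
    // Tight: the caller embeds the pricing engine's schema knowledge -
    // database names, table names, relationships and column types.
    static final String PRICE_QUERY =
        "SELECT p.unit_price FROM products p " +
        "JOIN price_bands b ON b.product_id = p.id " +
        "WHERE p.id = ? AND b.customer_id = ?";

    // Looser: the caller only knows an operation name and its arguments.
    interface PricingClient {
        int getPrice(int productId, Integer customerId); // price in pence
    }

    public static void main(String[] args) {
        System.out.println(PRICE_QUERY); // everything a caller shouldn't need to know
    }
}
```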
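For the ‘What’ question, here’s a sketch of the versioned ‘Price Query Form’ idea, using JSON as suggested above (hand-rolled to keep the sketch dependency-free; the field names and version scheme are assumptions). Note how new query shapes become new document versions handled in one place.

```java
public class PricingService {

    // A version 1 form, as a caller might build it. A null customerId marks
    // a new customer, matching the integer-argument convention above.
    static String v1Form(int productId, Integer customerId) {
        return "{\"version\":1,\"productId\":" + productId
             + ",\"customerId\":" + customerId + "}";
    }

    // One stable entry point. New query shapes (multibuy, business customers)
    // become new document versions handled here, not new interfaces, so
    // existing callers never need to change or be regression tested.
    public String getPrice(String queryJson) {
        int version = extractVersion(queryJson);
        switch (version) {
            case 1:  return "{\"version\":1,\"pricePence\":1099}";
            case 2:  return "{\"version\":2,\"pricePence\":1899}"; // multibuy, added later
            default: return "{\"error\":\"unsupported form version " + version + "\"}";
        }
    }

    // Crude single-digit version sniffing - just enough for the sketch.
    static int extractVersion(String json) {
        int i = json.indexOf("\"version\":") + "\"version\":".length();
        return Integer.parseInt(json.substring(i, i + 1));
    }

    public static void main(String[] args) {
        PricingService engine = new PricingService();
        System.out.println(engine.getPrice(v1Form(42, null)));
    }
}
```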
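For error conditions, one possible shape for a standard model: a single result envelope that every service call can return, so callers handle failure from pricing, customer lookup or the product catalogue uniformly. Again, the names are illustrative.

```java
public class ServiceResult<T> {
    public final T value;              // the price, customer record, etc.
    public final String errorCode;     // null on success, e.g. "UNKNOWN_PRODUCT"
    public final boolean alertSupport; // should operations staff be told?

    private ServiceResult(T value, String errorCode, boolean alertSupport) {
        this.value = value;
        this.errorCode = errorCode;
        this.alertSupport = alertSupport;
    }

    static <T> ServiceResult<T> ok(T value) {
        return new ServiceResult<>(value, null, false);
    }

    static <T> ServiceResult<T> error(String code, boolean alertSupport) {
        return new ServiceResult<>(null, code, alertSupport);
    }

    public static void main(String[] args) {
        ServiceResult<Integer> good = ok(1099);
        ServiceResult<Integer> bad = error("UNKNOWN_PRODUCT", true);
        System.out.println(good.value + " / " + bad.errorCode);
    }
}
```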
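And finally ‘When’: a sketch of timing a call for SLA reporting, with the SNMP or monitoring plumbing stubbed out as a print.

```java
public class TimedCall {
    interface Pricing {
        String getPrice(String queryJson);
    }

    static String timedGetPrice(Pricing engine, String form) {
        long start = System.nanoTime();
        try {
            return engine.getPrice(form);
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // A real deployment might raise an SNMP trap or feed an SLA
            // report here; printing stands in for that plumbing.
            System.out.println("getPrice took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        Pricing stub = json -> "{\"pricePence\":1099}"; // stand-in engine
        System.out.println(timedGetPrice(stub, "{\"version\":1,\"productId\":42}"));
    }
}
```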

There are plenty of techniques that help reduce the burden of what applications need to know about each other to get their job done. Some of these make use of technology abstractions; some require the problem to be turned around so that callers become callees. None remove coupling entirely - instead they move the coupling from between applications to something in the middle (which is less likely to change), and/or from something syntactic like API parameters to something semantic like business documents, in effect also moving complexity to the centre.

All of these concepts are the strategic and architectural decisions you would normally make early in a project to make future change as easy as possible, but applied as part of a refactoring exercise, and enforced through some lightweight governance, they can still have a powerful effect without the weight or mandate needed to pursue full-scale architecture initiatives.

Notes

  • Appropriate apologies to Joel Spolsky for the subtitle
