Part 3 of 3: Patterns from inner space
By Julian Browne on June 27, 2007. Filed under: architecture, development, requirements
This is the third of three articles on the space-based architecture. The first was a general introduction and a description of a commercially deployed example; the second looked at how the SBA supports Agile because of its complementary nature to how businesses think and work; and this final episode looks at some of the subtle technicalities of the SBA.
Before I start, let me make clear my relationship with the SBA - I am not connected in any way with Gigaspaces (the originators of the formalised SBA concept) nor indeed any other grid vendor. Nor would I zealously advocate an SBA approach over all alternatives. For most of my career I have worked in end-user environments buying, developing and integrating software. My customers have therefore usually been the people who pay for the services of my employer, or people who sit in the same building, so if I did go about only deploying SBAs then, sooner or later, I would do it in an inappropriate setting and get into some serious trouble. Anyway, glad that's cleared up. So, as Macbeth said, I go, and it is done; the bell invites me.
In the previous articles I focused on the Space aspects of the SBA, but what about the Architecture aspect? In the context of end-user environments, what is architecture anyway? Well, I think it's three things that conspire together to deliver an overarching fourth:
It's a way to compartmentalise software functionality into logical abstract categories
It's a way to physically deploy these logical structures to make the abstraction real
It's a manner of seeing projects in their wider context, because, as I'm very fond of saying, projects are merely transitions: what you really build are products, and each decision you make in a project setting threatens to come back and haunt you in the longer term.
These three elements: the logical, the physical and the strategic, combine to deliver the primary output of architecture: an ability to meet non-functional requirements - a term which, these days, I try not to use because it isn't conducive to supporting effective analysis, preferring instead to use "systemic requirements" or the slightly more fusty "system qualities and constraints".
But whatever you call them, they are something that businesses rarely like to talk about - hard to define, and devilishly difficult to measure on occasion, but disastrous to get wrong. All the stuff they teach at college about the cost of fixing a bug increasing by factors the later in the life cycle it is found, go double for all those requirements that exist above and beyond the visible functionality. It may be annoying and expensive to change business logic in an application, but try telling a board of directors that the way the entire fabric of a solution operates is wrong, and that's why it won't scale to meet demand, or remain available for long, and that the only way to fix things is a ground up rebuild. Architecture is important. And very tricky to get exactly right.
To determine whether any one architectural approach is appropriate or not, I always look at it against systemic requirements first, illustrating each with a meaningful story so that business sponsors can understand the pros and cons. There are no best practices in architecture, only workable ones that can support business growth after reasonable (proportional) effort, but inappropriate designs can be shown to be unusable quite quickly using this method.
Let's walk through some system qualities and see how an SBA would, or wouldn't, deliver based on its archetype.
Availability
I can't think of a more cost-effective option than a grid to give you high levels of uptime. Outages in single points of failure are the main cause of unplanned downtime, and in a compute-grid your compute-power is as widespread as you need it to be. If one box goes down you can set things up such that another node picks up the load. You can get similar levels of availability with n-tier, but you'll be up to your knees in load balancing and architectural constraints before you know it.
Capacity
I'd say this is a dead heat. I don't know of anything inherent in a grid that gives you more capacity per se. You can buy cheaper kit, so extra local storage can come cheaper, but if you're in the world of NAS and SANs then I guess you'll connect to what you have just the same.
Concurrency
Hands down, the SBA provides the simplest way to achieve high concurrency, as long as your business processes are capable of running this way. If they aren't (e.g. use of the singleton pattern, limited network connections to legacy systems) then the effectiveness of your lovely parallel workers will be severely curtailed due to the constraints imposed by Amdahl's Law.
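Amdahl's Law makes that curtailment concrete: overall speedup is capped by whatever fraction of the process must run serially, no matter how many workers the grid provides. A quick sketch (the 95% figure is just an illustrative assumption):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Maximum speedup when only part of the workload can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# A process that is 95% parallelisable tops out around 20x,
# however many worker nodes you throw at it.
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1))
# → 10 6.9 / 100 16.8 / 1000 19.6
```

That residual 5% (the singletons, the rationed legacy connections) is exactly where the tuning effort pays off.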
Extensibility
A badly designed SBA is, I guess, as limiting as any other approach here. To get the most out of an SBA you need to think in terms of pipelines of activity. Data goes into one end of the pipe, and there are stages of processing as each worker picks up the data, performs its task, and puts data back. Inserting stages (enhancements) is very easy, and extensibility can usually be achieved via new pipelines. In an SBA, logic can be combined with the data in the space (that's partly why performance-related NFRs can be so good), so design for later modification is doubly important, and not to be underestimated.
Integration
My instinct is to say that the SBA is a winner here, and with so many choices for inserting data into the space that's true, but I have to caveat this with some practical experience. SBA is a novel, if not entirely new, approach. It's not legacy, but there's a good chance you'll come up against some legacy systems that need to co-operate with a new SBA.
It can be something of a paradigm mismatch if you're not careful and you definitely do not want to be popping in and out of a space with trips to external systems, or you'll lose nearly all the benefits. If you want a grid that's more legacy friendly for this type of operation then I'd suggest something closer to a data-grid (take the data to the application) rather than a compute grid (take the application to the data), although even in this case your performance will be limited by whatever clunky old kit you have to talk to.
Performance
If you take the advice above and stick to pipelining operations that don't leave the SBA world (data and code together in the space entries, workers appropriately distributed), the solution should fly along. I've written demo applications that work on large sets of numerical data and the results, even on quite poor hardware, are amazing. In an n-tier architecture there will always be a bottleneck somewhere, usually at the tier interfaces. That may not be an issue if your system limits are predictable and easily met.
Maintain/Manage-ability & Monitoring
This isn't a picnic in any architecture, but it's one aspect of the SBA that still has some way to go. In my commercial work I've always been in a position to build what's missing, but it's a shame that a third-party market hasn't yet sprung up to meet the demand. One company in the UK worth keeping an eye on is PSJ Solutions, who have done quite a bit of work around supporting SBAs with real-time event-driven monitoring.
Recoverability, Reliability & Resilience
If there's one reason to adopt an SBA over another approach, except for maybe scalability, it's this. The atomic nature of interacting with a space is a comforting experience. If you've ever had one of those torturous conversations about container vs. code vs. whatever managed transactions then you'll appreciate the simplicity of the verb set: put, read, get and notify.
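To show how small that verb set really is, here's a minimal, thread-safe sketch of a space. The class, its template-matching rule (subset of key/value pairs) and the `order` entry are all hypothetical; real spaces (JavaSpaces, GigaSpaces) add leases, transactions and notify callbacks, which are omitted here for brevity.

```python
import threading

class TinySpace:
    """A minimal sketch of the space verbs: put (write an entry),
    read (non-destructive match), get (destructive, atomic take)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []

    def put(self, entry: dict) -> None:
        with self._lock:
            self._entries.append(entry)

    def read(self, template: dict):
        """Return a matching entry without removing it, or None."""
        with self._lock:
            return next((e for e in self._entries
                         if template.items() <= e.items()), None)

    def get(self, template: dict):
        """Atomically remove and return a matching entry, or None."""
        with self._lock:
            for i, e in enumerate(self._entries):
                if template.items() <= e.items():
                    return self._entries.pop(i)
            return None

space = TinySpace()
space.put({"type": "order", "id": 42})
assert space.read({"type": "order"}) is not None  # still in the space
assert space.get({"type": "order"})["id"] == 42   # taken, exactly once
assert space.read({"type": "order"}) is None
```

The atomicity of `get` is what makes the model comforting: two competing workers can never take the same entry, so "exactly one worker processes this" comes for free.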
That's not to say bad design won't get you into trouble, because it can (it is a distributed system, and as with any distribution you need to think about what's happening in which logical clouds), but I've not come across any cases where a bit of thought around partitioning and federated workloads didn't bring things back under control.
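The partitioning trick is usually content-based routing: each entry carries a routing key and is hashed to one of N partitioned spaces, so related work stays local to one node and load spreads across the grid. A sketch (the `customer_id` key and partition count are illustrative assumptions):

```python
NUM_PARTITIONS = 4
partitions = [[] for _ in range(NUM_PARTITIONS)]  # stand-ins for N spaces

def route(entry: dict, key: str = "customer_id") -> int:
    """Send an entry to the partition owning its routing key."""
    idx = hash(entry[key]) % NUM_PARTITIONS  # use a stable hash in production
    partitions[idx].append(entry)
    return idx

# All entries for one customer land in the same partition, so a worker
# there sees that customer's whole workload without cross-node chatter.
a = route({"customer_id": 7, "order": 1})
b = route({"customer_id": 7, "order": 2})
assert a == b
```

Picking a routing key with enough spread (customers, accounts, instruments) is the "bit of thought" that brings a runaway workload back under control.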
Security
Another area needing some support. Because spaces are fairly lightweight you can pretty much add whatever security you need. Which means you can also get yourself into as much trouble as you like. I'd hate to see any kind of weighty overhead on the security model, though. Beyond some basic authentication I've been lucky enough not to have needed it, but some patterns based on others' experience would be nice.
Scalability
This is hardly worth mentioning. If you're familiar with the technology to any extent you'll know that scaling is close to linear for well-designed systems. Like many people, I 'discovered' spaces because I was, at the time, working on a massive scalability problem in the petrochemical industry. I was all set to go down the data-grid route because I thought all I really needed was fast access to data, when I came across the fringe benefits of combining the computational aspects with the data.
So there you have it. An SBA discussed in the light of a few standard NFRs. And I guess my summary of all this is that if you have a simple application, with predictable growth and reasonably contained business logic, a tiered approach will do fine. If it's a database-driven web site that's not going to see heavy throughput, or generate lots of business transactions, then one of the MVC frameworks will probably cover it.
But once you get into distributing that logical architecture across nodes, and the words scalability and transactional safety start to become important to the business, you really do need to look into better ways to break away from some of the heaviness of JEE. You'll probably have to do a bit more work to get some operational aspects to the same level as for the big application servers, but it's more than likely to be worth it in the long run.