Imagine you are a team manager for NASA and you’ve been given the job of hiring a group of people to build a spaceship that can take humans to Mars. You’re going to need aeronautical engineers, physicists, materials scientists, software experts and all manner of extraordinarily clever people. People who know stuff about stuff you could probably never understand. Even after extensive filtering your waiting room is packed full of candidates (after all, this is a name-in-the-history-books glamorous project to be working on). How on earth can you decide who to hire and who to drop? What you need is a way to measure something about their capabilities.
I was in a job interview some years ago (unfortunately not one to build a ship to go to Mars) and the hiring manager asked me how to measure architecture. I bumbled something about acceptance tests for scalability, intrusion prevention, and the like, and that seemed to satisfy him. But I knew it wasn’t a very good answer, and I’ve thought a lot about it since. How can a business measure whether its architecture function is any good? Come to that, how can a business manager measure whether potential team members know their stuff, when the stuff in question is a mysterious and esoteric world to them? The old adage that you can’t manage what you can’t measure is, for the most part, true.
For me, the measuring begins in deciding what it is you want to do, and why. I have a PowerPoint presentation that I’ve given a number of times over the years, which attempts to explain, even to the most unbridled technophobes, what architecture is - how it came into being, why it’s not just “clever developers”, and why it should be seen as an endeavour in its own right. The fifty-thousand-foot summary of the logic is this:
In enterprise development there’s an awful lot going on. Multiple projects changing things, interacting with things, adding new things, closing down old things. If you just let these projects pursue their own destiny unhindered then, more often than not, the business as a whole would suffer. It would suffer because there’s no point of responsibility for making sure that concurrent projects work harmoniously - sharing work and coordinating their changes - and that expected future changes - organisational, business, technological - are taken into account, as far as they practically can be.
Regulars will know that I’m no fan of architectural ivory towers or consultantspeak, so let’s just say that my argument is that these risks can be mitigated by having a group of technically savvy people pragmatically adding value at this enterprise level and, contentious though the name is to some, these people are generally called architects.
So that’s something of a whistle-stop rationale for having them. Now what about measuring whether they are actually adding that value we talked about?
Well, first the role that they perform must be discrete (in the sense of distinct and separate, not in the sense of discreet - subtle - although some would do well to be a bit more subtle in the way they approach the task). The role must have clear inputs (business relationships, project artefacts, standards, aspirational goals, etc.) and clear outputs (target architecture, assessments, risk analyses, dispensations, etc.). The outputs can certainly be measured - any one project will (hopefully) understand what its tactical mission is, but it should also receive direction on how that mission can be successful strategically. If that direction isn’t clear and timely then the outputs are not up to scratch.
Where there are discrepancies between the parameters of the mission and the strategy, the resolution process should be quick and decisive. The business should know exactly what taking a shortcut means, and shortly afterwards the developers should be allowed to get on with their distinct role without undue interference.
So that covers what you want to do. The primary “why” of all this is hopefully clear, as the alternative is project success having a severely limited shelf-life. But there’s another rationale to this that allows another kind of measurement of the target architecture itself.
A good target architecture is one that looks, feels, smells and behaves like the business itself does - what I often call a Native Architecture. If you drew out the operating model for a giant corporation like McDonalds you wouldn’t be surprised to see that an overwhelming percentage of it is centralised, globalised and standardised. An architecture that was itself centralised, globalised and standardised would seem a better fit than something highly distributed or federated. An energy trading organisation like BP, on the other hand, has some need of standardisation and centralisation to keep costs down and procedures simple, but also needs the ability to adapt to hourly changing market forces in multiple countries, across multiple exchanges, under multiple legal jurisdictions.
This nativity is relevant when looking at things like SOA, for example. SOA should be easier in many ways for McDonalds because it’s easier to define what a service is in a way that means the same to all its consumers. BP, one supposes, would find that a more challenging task and may get more out of harnessing business events in a more dynamic way, with SOA services being more computational as opposed to directly supporting business processes.
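To make that contrast a little more concrete, here’s a minimal sketch (the service names and shapes are hypothetical, invented purely for illustration): a standardised, centrally owned service whose meaning is the same for every consumer, alongside a computational service that individual workflows compose in their own way.

```typescript
// Hypothetical sketch of the two styles of SOA service contract.

// McDonalds-style: the service directly mirrors a standardised business
// process, so "what a price is" means the same thing to every consumer.
interface ProductPricingService {
  priceFor(productId: string, region: string): number;
}

// BP-style: the service is computational; it embodies no business process
// itself, and each trading workflow composes it as conditions demand.
interface CurveInterpolationService {
  interpolate(points: Array<[number, number]>, at: number): number;
}

// A toy implementation of the computational service: linear interpolation
// between the two known points that bracket the requested x value.
class LinearCurve implements CurveInterpolationService {
  interpolate(points: Array<[number, number]>, at: number): number {
    const sorted = [...points].sort((a, b) => a[0] - b[0]);
    for (let i = 0; i < sorted.length - 1; i++) {
      const [x0, y0] = sorted[i];
      const [x1, y1] = sorted[i + 1];
      if (at >= x0 && at <= x1) {
        return y0 + ((at - x0) / (x1 - x0)) * (y1 - y0);
      }
    }
    throw new Error("requested point lies outside the curve");
  }
}

const curve = new LinearCurve();
console.log(curve.interpolate([[0, 10], [10, 20]], 5)); // 15
```

The point of the sketch is only that the first contract can be published once, globally, with one agreed meaning, while the second is a building block whose business significance is supplied by whoever calls it.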
The measurement of such an architecture (although I freely admit it’s far from empirical) is when changes need to be made. A native architecture will generally require an amount of change in proportion to the change required to the business operating model. If McDonalds wanted to start selling cream cakes they could do this globally with just a few changes to a centralised architecture (product line, pricing, supplier management, etc. would already be present for similar products). If BP wanted to start selling cream cakes globally, it would represent a major operational change and thus significant change to the IT landscape would be understandable.
What you tend to find is that conversations between the business and IT departments with a native architecture can be characterised by “how are we going to achieve this new objective, and what are our options?” rather than by IT saying “this is why what you are asking for is very hard for us to do”. The second form of conversation is depressingly common and is a strong indicator that the architecture isn’t quite as it should be. The first is a key part of what should be the main objective of any good IT group - to make the complexities of the technology invisible to those outside IT.
Another way of saying the same thing is that it’s better to think less about applications and more about platforms - the word “application” being an IT naming construct to collectively represent a bunch of code that over time evolves to do a mish-mash of business things that were probably never intended. Another warning sign is where architecture discussions revolve solely around which of the existing hopelessly outdated legacy applications is the best place to plonk new functionality. Amazon have done a good job of not falling into this trap by thinking in terms of business platforms from the outset, as described nicely by Werner Vogels in the presentation on InfoQ posted late last year.
As a parting shot on this, and by way of a segue into my next piece, my original answer to the question of how to measure architecture was that operational features such as scalability and security are themselves measurable and are the products of good architecture. Indeed my primary definition of architecture is that it’s the actions that define these features and then cater for them. The phrase I am avoiding using here is non-functional requirements, for I received an insightful and thought-provoking note from Keith Bines, an architect for Oracle, questioning the use of the term. Keith rightly pointed out that all the features on my standard non-functional requirement list are pretty damn functional in nature.
Without getting overly pedantic (it’s just a name after all) it is something that needs addressing, because names can create mental, political and ultimately operational separation. Thinking that there are functional requirements (what the business want) and non-functional requirements (a distinct, second-fiddle-playing representation of what someone else wants) is plainly wrong. One set is not more important than the other; they’re all requirements and all functional. And I think that by examining what the nature of functionality is, there’s a way to allow subtle categorisation without hefting an axe to split the requirement set so that anything that’s a bit hard or unpalatable gets left to one side.