I consider myself inordinately lucky to have fetched up in a career that would otherwise have been an expensive hobby. Software development can be a frustrating roller-coaster ride, but every now and then you get to be involved with something that genuinely, and rather beautifully, succeeds. Projects get canceled, derailed, or otherwise slip into oblivion for all sorts of reasons, but it’s very satisfying, once in a while, to be able to point at something in production and know you played a part in making it real.
This love of all things software naturally leads me to have something of a preoccupation with determining what, precisely, is meant by “quality”. Quality is a mildly disturbing term that consultants liberally apply to all aspects of the delivery pipeline. But have you ever tried to read a paper on software quality? Boy are they dull. If it were possible to agree on a concrete, objective and measurable definition of software quality, I don’t suppose there would be any disagreement that all serious projects should aim to achieve it. Neither of those things seems likely, though, given the quality of software quality papers. I know - what about a quality movement to improve the quality of software quality papers? We’ll just need a quality committee to oversee the software quality paper quality movement.
One thing we can’t disagree on though is that stuff does, sometimes, emerge from the software engineering process. And most of us that are paid to be involved in that process would like that stuff to work, and be good, and make users happy, and make us proud. This is the first of a two-part exploration of how we might enhance the probability of that being the case.
We’re unlikely ever to have much sway over the wider reasons for our efforts being wasted, but we can at least go all out to make sure that the right thing is being produced.
So what, precisely, is the right thing?
Let’s try one of those contrived thought experiments. You are out for a walk and you come across two buildings that are on fire. One contains nine people and the other contains one person. You have the wherewithal to get into one building and save the residents, but the time it will take you to effect your rescue means the other building will collapse, making a second rescue impossible. Save one person, or save nine - that’s your choice. Of course, the majority of people faced with this dilemma will save the nine. Very quickly your mind evaluates your moral obligation and the guilt you will experience from not complying with it: save one person and there’s the guilt of not having saved nine; save nine and there’s the guilt of not having saved one; do nothing and there’s the guilt of not having saved ten. So, not exactly a tough choice. You have some guilt in all cases, but handily the least-guilt option also gives you the satisfaction that nine people will be saved, free to live out their made-up existences to the full and star in someone else’s thought experiment.
Put plainly, all we’ve done is located the generally accepted right thing to do.
Then re-run the burning building scenario, but this time the one person is your child. Now your choice is save your baby and prevent nine otherwise healthy avatars from appearing in a Julian Baggini paperback, or save the nine and turn your back on your own flesh and blood. It’s a little tougher this time because the option of saving your child also comes with the biggest guilt trip. But, naturally, you’d save your child. This doesn’t mean saving nine people is no longer the logically correct thing to do, it just means that you’re human and you can argue that your moral obligation is now to safeguard your own offspring above others, no matter how many of them there are.
OK. That’s enough pretend for today. The point of the experiment is simply this - you would save nine people in scenario one because it’s the right thing, and you would save one person in scenario two because it’s the right thing for you. Note that’s not the same as saying the right thing is a relative concept. Scenario one highlights something close to an objective choice and scenario two makes it subjective.
So what’s this got to do with software?
Well, if you could find the one person whose subjective whims and desires represented all that needed to be met to make a project successful you’d know exactly where you stood. You might not agree with them, but you’d know exactly what “quality” looked like. Or, if we could all act like robots, with no subjective feelings whatsoever, we might have a shot at creating a glorious objective set of requirements derived from pure mathematics and logic alone.
The reason it’s rarely like this is that there’s this annoying duality of the subjective and the objective existing together. I say rarely because on very small projects you may actually be lucky enough to have just one stakeholder.
You could disagree here, many would, and say that the reality is that there are just lots of subjective views in enterprise software, and this wild-eyed notion of mine that there’s an objective view is bunkum. Nicely put, dear reader, but let me ignore your use of the word bunkum in my essay and try to convince you otherwise.
At three separate times in my career I’ve managed teams of business analysts and functional architects, and all three times we faced the same issues of how to capture, collate and define requirements in such a way as to arbitrate between stakeholders who want different, contradictory, things. Added to this, we also had to ensure that even requirements that seemingly nobody wanted were given due prominence (more on this in a bit...). With all the sensitivity I could muster, I used to say that, at all times, we should assume that all our customers were wrong. Barking, picnic missing a sandwich, six-pack short of a beer, pants on the head, pencils up the nose, singing “paa paa”, mental in the napper, wrong.
My reasoning for this somewhat unorthodox and potentially anarchic approach is thus:
What somebody thinks they want is often not the same as what they actually want. Iterative delivery approaches take the sting out of this a little, by adopting a code-a-bit, show-a-bit, get-feedback, refactor, rinse and repeat cycle.
What somebody actually wants can also imply a lot of things that they don’t know they want.
Individual stakeholders have a pathological tendency to save their own requirements from burning buildings, while nine times as many equally valid requirements perish in the flames.
Once you accept that one stakeholder could, for solid, understandable, logical reasons, be wrong in this particular context, it’s not a big step to see that it is possible for them all to be wrong.
By questioning whether what you hear is correct, and assuming some things that are correct aren’t even being said, you have a better chance of finding what they actually want. I won’t labour this because I covered a lot of it in The Requirements Delusion, but this is where the objective view comes into play.
However, you need to bear in mind that the objective view will often reflect only the best you can get, not perfection.
To use a melodramatic, but telling, example, the lifeboats on the Titanic had room for 1,178 people, plus a few more if they squashed in a bit. An objective success would be if the maximum number survived, say around 1,200. In fact there were 2,228 people to be saved, so a loss of nearly 50% was inevitable. Only 705 made it.
I think it’s almost a given that, when any significant number of people have to collectively achieve anything, the chances for success are severely limited without clear objective leadership. Whether that leadership be from the top, in the form of the Chief Exec’s vision for the future (a subjective view, yes, but distinct at least from regular stakeholders) or from business analysts guiding those inaccurate views towards better ones, it matters little. Ideally you need both.
Given the hard facts, all employees will generally gravitate towards the right thing. Success may be experienced in different ways, and mean different things, but people do know what it looks like. That doesn’t mean they won’t start with views and attitudes that might achieve the opposite.
James Surowiecki wrote an interesting book called The Wisdom of Crowds that shows how large groups of people are collectively quite a bit smarter than individuals. Surowiecki’s examples show how extremes of opinion can be channelled by the community norming effect of a large group (where they’re trying to do things like guess the weight of a cow). When you’re building something for customers, that “smart collective” will judge your results, but it’s the extremes of opinion on the inside that can kill those results.
And that smart, normalised, communal, opinion is the objective view.
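If you fancy seeing the norming effect in action, here’s a minimal sketch in Python. All the numbers are made up: a thousand hypothetical guessers estimate a cow’s weight, each one noisy but unbiased, and the crowd’s average lands far closer to the truth than the typical individual does.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is repeatable

TRUE_WEIGHT = 543  # kg - the quantity everyone is guessing at (made-up figure)

# 1,000 individual guesses: right on average, but widely scattered.
guesses = [random.gauss(TRUE_WEIGHT, 80) for _ in range(1000)]

# The crowd's collective answer is simply the mean of the guesses.
crowd_estimate = statistics.mean(guesses)

crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"crowd error:              {crowd_error:.1f} kg")
print(f"typical individual error: {typical_individual_error:.1f} kg")
```

The catch, of course, is that averaging only rescues you when the errors are independent and unbiased. A dominant extreme opinion inside a project team is a systematic bias, and no amount of averaging washes that out.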
Hopefully that puts us on the same page as far as what we need to get at to improve our chances of success:
To have quality in software you need to strive for an objective view as to what it looks like.
That view is unlikely to exist in the heads of individual stakeholders at the beginning of the process.
Stakeholders will have very subjective opinions which may in fact be harmful if left unchecked.
The objective view exists somewhere and you have to find it.
Perversely, it looks a lot like the view you might get if you had gazillions of stakeholders and it were possible to statistically analyse their thoughts.
Clearly we can’t do that, so in the next installment we’ll look at what we can do to tease out the right thing from the forest of wrong things.