Sometimes in software development, a project can seem to go perfectly well, but still result in disaster. This is the story of my first major taste of such a project, how I dealt with the immediate fallout, and the lessons I learned in the process.
This is a fairly long post, so I’ve tried to keep it entertaining - but if you’d rather skip to the end for the ‘Pro Tips’, feel free (though you will be missing out on a good story and some excellent imagery, if I do say so myself).
Names and some details have been changed to protect people’s anonymity.
One Angry Man
It was an unremarkable morning at the end of a grey British summer when The Call came. Though it wasn’t a daily occurrence, there was nothing particularly unusual about receiving a phone call straight to my desk - I had several clients who had my direct number - but in the world of consultancy, there were really only two possible reasons a client would reach out to me rather than their primary contact, which was usually a salesperson. Either the client had an idea for something new they wanted to add to their system - the good kind of call, which meant I could control what happened next and earn a little commission - or the client had a problem and knew that the fastest way to get it fixed was to start a fire - the bad kind of call. This, unfortunately, was the latter.
“Tom, we’ve got a serious problem here - this doesn’t work at all!” announced the thick, Scottish voice before I’d managed to place the phone to my ear. Terry. I winced. His voice was loud enough that everyone around me could hear.
I liked Terry - a big, burly man approaching retirement, who dominated whatever room he was in. His job was to manage the distribution hubs for a national retailer - and my job, in this case, was to make his life easier and build a system to allow him to digitally track the packages going in and out of the warehouses he managed. We had a good relationship, but he took no prisoners when it came to getting what he wanted - which was more of a problem for management than it was for me. Most of our conversations were based around relatively minor technical details and UX concerns: the overall design of the system we were building had been agreed months before either of us had been introduced to one another.
Though we occasionally had difficult conversations, I appreciated Terry’s blunt assessment of things he didn’t like, and I like to think he respected the fact that I was always ready to hear him out and give him an honest response, even if I had to disappoint him. He had a reputation amongst my colleagues for being difficult, but I kind of enjoyed the fact you could have a bit of a back-and-forth with him. It was more interesting than most of the meetings I had to sit through, at least. We’d deployed the system he was calling about just a day or two earlier.
“Oh - I’m sorry to hear that Terry, what seems to be the problem?” I replied, switching into that sickly-sweet Customer Service mode. Probably, in hindsight, a mistake.
“WELL IT DOESN’T F***N’ DO THE JOB!”, he bellowed, suddenly irate - probably didn’t appreciate my tone. All around me, heads turned. Eyes widened. Faces (well, mine) reddened. Uh-oh.
If you’re in a technical role, there’s a good chance you’ve heard “It doesn’t work” before - and there’s an equally good chance that you’ve dealt with such complaints with a request for more specific information, error codes, logs - something that actually helps you understand the nature of the problem. It’s not unusual - but it’s not pleasant to have such a complaint shouted at you by a 7-foot man who could snap you in half without strain and sounded ready to do it.
“OK - can you tell me what exactly is going wrong? Where in the system are you seeing the problem?” asked I, naively. The project involved several major components - databases, web frontends, mobile applications, and domain-specific hardware. There were many potential points of failure - and something going wrong somewhere in the mix was to be expected on occasion. But not, ideally, days after rollout.
“The problem is the f*****g system, Tom! It’s useless to us! First of all…”
An hour and a half, pages of notes, and several pints of sweat later, we ended the call with an agreement that we’d set up a meeting to discuss what had gone wrong, and who needed to be thrown to the wolves as a result. Obviously, the unspoken assessment was that it was I, in all my early-20s fresh-faced innocence, who was to become lupine luncheon.
The System Is The Problem
“Are you okay?” was the first thing I heard after putting the phone down. My boss looked as though she was ready to cry on my behalf.
“I’m alright, but something’s gone badly wrong here”, I replied, exhausted.
I was shocked at the sheer number of problems Terry had raised. A few were fairly minor issues that could be resolved in a few hours - UX improvement suggestions, and a couple of performance issues that manifested once they put the system under serious strain - but the vast majority of his complaints, and the source of his frustration and anger, were about missing features: things which had never been on my radar, for which I had no documentation, and which Terry and I had never discussed, but all of which suddenly seemed like such basic, obvious requirements - at least the way Terry had complained about their omission.
To my mind, it was as though the fuzzy image in a telescope had gently shifted into sharp focus, revealing a planet-destroying asteroid heading straight toward Earth.
Everything he’d listed felt reasonable, and seemed like the kind of thing which we would, ordinarily, have pitched to the client ourselves. Poring over the design documents and tech specifications for the project, I was aghast at the disparity between what we’d built and what Terry now claimed he expected. How could we have possibly ended up here? And why did Terry - the person I’d had the most contact with out of everybody involved in the project - seem to be under the impression that not only was I aware of these things, but that I’d simply neglected to implement them?
You see, we were meticulous when it came to documentation. Everything we built went through several iterations of design and planning - and nothing was built without accompanying documentation and explicit agreement between ourselves and the client. This served us very well in the industries we serviced: accountability was everything and for most of our clients, the ability to audit the entire system from top to bottom was a must.
Though the initial gut-wrenching reaction to Terry’s phone call subsided as I pieced together what had happened, I felt my own sense of anger and frustration rising. There was one clear difference between this project and virtually all others we’d engaged in before: the involvement of a third-party consultant with whom I’d had little personal contact, but who nevertheless had immediately struck me as the kind of man who’d try to sell you tartan paint. I don’t recall his name, but for the sake of the story let’s call him Bill.
Normally, we would work directly with our clients. After the initial sales pitches and agreements were done and dusted, it was my job to travel to the client’s place(s) of work, spend time analysing their processes, identify areas that could benefit from digitisation, take on board the many, many quirks and peculiarities that every business ends up accruing over time, and, at the end of it all, produce a design for a system which would bring them out of the stone age and into the digital era. I enjoyed this process a lot - I like seeing how things work, and I’d meet all kinds of people at every level in an organisation. It was the most interesting part of the job for me. Though I also enjoyed the actual implementation phase, the systems we designed rarely required anything technically challenging or included anything particularly interesting to build.
In this instance, however, Bill had done most of the preliminary work. He’d been hired by the retailer - Terry’s employers - to analyse their processes and build or buy a system that did what they needed. At some point along the way, Bill had decided that the new system would need to be bespoke and approached us to build it. This was unusual for us - normally clients would approach us, or we’d pitch to them directly. Bill was eager to position himself as the person ‘in charge’ of the project - though he was technically a third-party consultant. He seemed to enjoy brushing off our own suggestions and ideas in favour of his own - much to our frustration (we could earn a commission if clients bought things we suggested ourselves) - but hey, the customer’s always right!
Bill’s notes about what was required had been relatively light, but acceptable for our purposes. He had covered the broad brushstrokes of the system he wanted us to build, and, though we hadn’t done the initial in-depth process of discovery ourselves, we were assured by the client that Bill had done this kind of thing many times before and that as long as we followed his lead, we’d all be fine. They weren’t going to pay us to do something Bill had already done. Fair enough. We tightened up our understanding of Bill’s brief, filled whatever gaps in functionality we could find, and once we all agreed on what needed to be built, the almighty contracts were signed. Design documents were produced and technical specifications were agreed upon and signed off. Bill was happy. We were happy. The client was happy.
Many months passed, and the system was coming along fine. Bill had gradually taken a back seat. He was kept in the loop with important decisions, documentation changes, and technical information, but essentially, his involvement had become something of a box-ticking exercise as time wore on. Occasionally he would appear and ask for a demonstration of the system as a whole, which we dutifully provided, and then he’d disappear again, content and with little comment, to report back to the client. No news is good news, as they say!
Terry, on the other hand, was much more focused on very specific areas of the system. He had a lot of suggestions and feedback on a handful of particular components, and he and I worked closely to make sure his feedback was taken on board and that those areas did exactly what he needed them to do. I drove out to the nearest warehouse several times to meet Terry and the staff who’d be using the system. Everybody seemed positive - things were going as expected, and it all looked like we were on track for a successful deployment.
The deployment date arrived, and everything went smoothly. Servers were up, databases were online, all of our tests and checks passed, and the warehouse staff began using the system. It all seemed to be going great. Everybody who needed to use the system had been given credentials and relevant documentation, and I sat back and watched as live data started to stream through. All systems go, no alarms, a textbook successful deployment.
Until…
Side-Effects And Edge Cases
As software developers, we become accustomed to watching out for edge cases - usage or circumstances outside of the normal expected operation - and building software that handles them gracefully. Sometimes you treat an edge case as an error because it is an error: that thing should never happen, and you want to stop it from becoming an issue. Sometimes you need very specific behaviour to handle a particular situation which rarely occurs, but can’t be ruled out. Sometimes, you need some combination of the two: specific behaviour to handle a situation, but also to raise it to the relevant parties as a problem that should be investigated and avoided in the future. Context is often important when it comes to handling edge cases: we can’t build software that takes into account every potential situation, but we should strive to build software that doesn’t break whenever an edge case occurs.
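For what it’s worth, here’s a tiny, hypothetical sketch (in Python, with invented names and rules) of those approaches - refusing an impossible state outright, and handling a rare-but-valid one while flagging it for someone to investigate:

```python
import logging

logger = logging.getLogger("intake")

# Hypothetical example - the function, names, and rules are invented for illustration.
def process_scan(barcode: str, manifest: set[str]) -> str:
    # 1. Treat it as an error because it *is* an error: an empty barcode means
    #    the scanner itself has misbehaved, so refuse to continue rather than guess.
    if not barcode:
        raise ValueError("Empty barcode received from scanner")

    # 2 & 3. A rare but expected case: handle it with specific behaviour *and*
    #    raise it to the relevant people so the underlying cause gets investigated.
    if barcode not in manifest:
        logger.warning("Package %s is not on the manifest; routed to holding area", barcode)
        return "hold_for_review"

    # The normal, boring path.
    return "dispatch"
```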
Similarly, side effects should be identified and taken into consideration wherever possible. Usually, you want to make sure the code you write has as few side-effects as possible - and that the primary impact of rolling out your software is that the underlying goals are met. Secondary benefits are usually welcome of course, but sometimes, benefits collide with edge cases in unforeseen ways.
In this case, it became apparent that the primary cause of Terry’s angst was not that the system was breaking in any particular way, but that it didn’t cater for particular edge cases which, as a side-effect of the system rollout, had suddenly been elevated from occasional issues to continuous problems.
Just one example - amongst the many Terry alluded to - was how the system handled incorrect packages being sent down to the warehouse. To put it simply: it didn’t.
Though it was possible for users of the new system to identify a package as missing or damaged, we had no integration with the rest of the company’s fulfilment system, and attempting to scan an incorrect package essentially resulted in a ‘Package Not Identified’ warning, and little more. Absent a much closer integration with the client’s other software, we had no way of knowing whether an incorrect package was a mistake in the manifest - and that the package should be sent for delivery - or whether it had simply been sent down the conveyors in error (which was actually the case for the vast majority of such instances). To make things more frustrating, the client had no real process in place to reconcile the problem when it occurred anyway: it wasn’t supposed to happen in the first place, but it did. Prior to the new system being rolled out, throughput was low enough that this scenario could be handled easily enough: yank the package off the conveyor belt, and then send it back upstairs when things quietened down - the assumption being that whoever sent it down in error wouldn't make the same mistake twice, or that the recipient would eventually enquire about their delivery and next time round, the manifests would be correct. It wasn’t an ideal solution, but it did the job.
It turns out that increased efficiency in getting packages sent out for delivery very quickly resulted in an increase in overall throughput, which itself resulted in an increase in the number of entirely incorrect packages making their way down the conveyors and sitting there, uselessly, whilst staff scrambled to identify them and work out what to do with them. Now that everything was moving much faster, it became glaringly obvious that a serious problem existed elsewhere and that incorrect packages accounted for a significant percentage of the total number being sent down to the warehouse. This was clearly either some kind of admin issue elsewhere in the business, or just haphazard work from tired employees - but it wasn’t really something we were responsible for. It was a valuable insight, to be sure, but an extremely unhelpful problem for us to have made considerably worse.
Of course, the possibility of incorrect packages had been known about all along and had been one of the first things we asked about. After we’d enquired about it, Bill had gone away to investigate, and later assured us that this problem wasn’t something that occurred often enough for us to spend time building a process to handle it - and so we all agreed that the reconciliation problem was out of scope for us. He was probably right, at the time - but neither we - nor he - foresaw the increased throughput resulting in a higher number of incorrect packages being sent to the warehouse each day. Now that everything was moving faster, this problem suddenly became very much in scope for our system. Unfortunately, it was too late. To fix the situation, we’d need a few weeks and more money: much too long for the client to wait and allow chaos to pile up in their warehouses - and they didn’t want to part with more cash for a problem they perceived to have been caused by us, even if we’d simply exposed an existing issue that we hadn’t explicitly catered for.
What had previously been considered an edge case and something we didn’t have to worry much about, had suddenly become a glaring issue that - to users of the new system - we obviously should have built a process to deal with. It’s very hard to argue otherwise when the client is roaring at you down the phone.
This was just one of the many ‘obvious’ issues which had been exposed thanks to the new system ‘improving’ things in a variety of different areas. The system we’d built - such as it was - worked fine: things weren’t breaking, servers weren’t going on fire, people weren’t being locked out and apps weren’t crashing. It handled the load well, all of our tests passed, and we had no reports of any serious issues with the software itself - but we simply didn’t provide ways to work around a litany of ‘edge case’ problems which became drastically worse now that everything moved much more quickly. Things that Terry and his staff had previously grudgingly dealt with on a case-by-case basis when they could fit them into their schedule had, overnight, become a tsunami of dysfunction that was preventing them from doing their jobs, and ultimately causing huge frustration and alarm for the client. They weren’t edge cases any more - they were just daily occurrences that they couldn’t afford.
This whole situation was incredibly frustrating to me. We’d had weeks of acceptance testing and improvements, we’d worked closely with the users of the system trying to identify as many potential gaps in functionality as we could. We’d asked for - and been supplied with - hundreds of megabytes of test data so we could put the system through its paces. We found optimisations wherever we could, and had the test users feed back to us whenever we made changes.
Ultimately, it felt like it had all been a huge waste of time. I took some comfort from the fact that the software ‘did what it was supposed to do’, but felt like the ground had suddenly shifted beneath my feet, leaving a client bewildered and angry as I sat there trying to avoid saying that these things weren’t our fault.
I knew that this wasn’t just a case of ‘bad software’ being rolled out - but in the heat of the moment, I couldn’t easily articulate the series of process failures that had led to this nightmare situation.
We booked ‘the meeting’. I was warned to expect hostile territory.
A Pyrrhic Victory
“You’re brave”, scoffed Terry when he met me at the car park outside the main distribution warehouse, rain pooling on the ageing concrete. “I would have thought your boss would have come along to back you up. This isn’t going to be a good meeting for you.”
I knew he was half-joking, but only half.
“Maybe”, I replied. “It’ll be alright”.
Terry didn’t comment, but I could tell he was surprised at my apparent confidence.
I’d spent the days between Terry’s initial call and the day of our meeting investigating, going over paper trails, and building up a case for the defence. Though I felt more confident that I could squirm out of this mess, it had all left a bad taste in my mouth - ultimately I simply wanted the client to be happy, and to be proud of the work I and the rest of my team had done - but I had been thrust into a horrible position: I suddenly needed to protect my own employer from repercussions.
We walked over to the meeting room - a portacabin unassumingly situated directly next to a helicopter landing pad - and went inside. There were three men present already - polite with their greetings but clearly ready for an argument. Two were representatives of the company itself - I don’t recall their positions exactly but it was clear that their job was to work out why on earth such an expensive disaster had happened. We’ll call them Mike, and John. The other was Bill, whom I’d not seen in person for several months but who looked suspiciously well-rested and had the sun-stroked sheen of a man who’d just rolled off a cruise ship.
Introductions were made, cups of tea and coffee offered around the room, and then we got down to business.
“As you’re aware, Tom, we’re not very happy with the rollout of this system”, started Mike. “We’ve spent a lot of money here and we need to know what you’re going to do to fix this.”
“I appreciate it’s not gone well”, I replied. “When Terry called, I was shocked to hear about the problems you’ve had. I don’t understand how we ended up in this position, but we’ll happily do what we can to get any problems sorted.”
“That’s good to hear,” said John, “but you should know right now that we’re not prepared to spend any more money to fix things you said were already in hand. As far as we’ve been told - all we’ve heard for months is that everything’s on track, and now we find out otherwise. It needs sorting ASAP.”
Bill nodded in agreement. Terry sipped his tea and nibbled at a biscuit. I got the impression he turned up largely for entertainment rather than to actively involve himself.
“Of course,” I nodded, as I pulled out copies of all the documentation for the project I’d been able to find, and that I’d sat up all night printing. “I think it’d be helpful for us all to be able to review the system design as we work out our next move.”
I passed extremely thick bundles of documentation around the room and watched carefully for Bill’s reaction. He either knew where I was going, or he knew something I didn’t. He hesitated for a second as I handed him his copy, and that was all I needed. “Sorry, mate.” I thought to myself, as a wave of relief settled over me. I don’t particularly enjoy being underhanded, but I always felt that something was off about Bill - and, if my suspicions were correct, I was setting a trap that he couldn’t get out of.
“As you know, we work very closely to the specifications agreed at the beginning of the project. Any changes or deviations need to be communicated in writing and the documentation amended to take those changes into account. We’ve done this a fair few times during the course of the project - for example, Terry and I have worked very closely together on certain parts of the system and you’ll see those changes and additions documented wherever relevant. That’s right, isn’t it, Terry?”, I invited him to respond.
“Yep”, he affirmed. “I tell Tom what I want changed, and he writes it up and sends it over for confirmation.”
“Okay, that’s fine,” sighed Mike, “but what about the missing functionality? What are you going to do about that? We were told it’d do a bunch of things that it turns out it can’t. That’s just not acceptable.”
Time for the poker face.
“Well that’s the thing - this is what I’m confused about. I appreciate that the system doesn’t do what you need it to, but as I explained - everything we do is agreed up-front and signed off. If we can review the documentation and identify the parts we haven’t implemented then of course we’ll hold our hands up and get it sorted as soon as possible - but since these problems were raised I’ve looked over everything several times and as far as I can tell, we’ve built everything to the agreed spec. The features you’re mentioning now just don’t exist anywhere in the documentation.”
Ah, ‘built-to-spec’. The get-out-of-jail-free card. I’m not proud I had to go there, but it was me or you, Bill - and I can’t afford a cruise, so it isn’t gonna be me.
Mike and John seemed irritated but flipped through the wad of documentation before them. Terry thumbed through a few pages and returned to his tea. Bill hunted through his copy as though it’d just made off with his picnic. He knew what the score was.
“Terry”, said Mike, after what felt like an eternity. “You’re the one who’s been using this. Does all of this look right to you? Everything in here - we’ve got, right?”
“It does everything it says it’ll do, aye - but it’s what it doesn’t say that’s the problem. Everything works, it just doesn’t do everything we need.”
“If it helps”, I offered (It definitely wasn’t helpful, at least not to Bill), “we’ve always had a demo version of the system available throughout the project so any potential problems could be caught and fixed.”
“And who was responsible for reviewing?”, John asked the room. “Did you have a go, Terry? Who else has seen it?”
“Yes - that’s how I got Tom to fix things for me - I’d try it out and if anything wasn’t right, I’d just tell him”, replied Terry.
“But all of this works doesn’t it?” asked John - realisation dawning on his face. “We’ve got everything in here? But all the missing stuff - dealing with bad manifests, all that admin stuff - that’s not really your wheelhouse, is it? Who reviewed the other bits?”
Sorry, Bill.
I took the shot: “Well, other than Terry, Bill’s the lead on this project and helped us nail down the requirements. He signed off the spec and reviewed the system several times.”
What followed was one of the longest few seconds of my life.
“I see”, said Mike, finally.
Bill finally spoke: “Well, that’s right, I reviewed it, and it all seemed good - but we’re talking serious missing features here! We can’t work with this as specced. That’s the problem. It’s no good if things we need were never included in the spec, is it?”
I was amazed. Bill! You signed off the spec! Several times!
Nodding, I responded. “I agree - like I said, when Terry called and explained what was missing I couldn’t believe it. I was sure we’d missed something during development, but as you know - we only build what’s agreed. I really am sorry there are things that aren’t there - but unless it’s documented and specced up, we don’t build it. We’ve given regular progress reports and demonstrations. I can definitely see why these features are important, but we can only build what we’re aware needs to be built - and though we make every effort to capture every requirement - we can’t take responsibility for things that aren’t communicated to us after the spec’s been signed off.”
All heads turned to Bill - who seemed lost for words.
“I think we probably need to review this from our side” sighed John. Mike nodded in agreement. Bill said nothing but looked like he could do with another cruise.
Mike ended the meeting - at least for me. “I think you can go, Tom. Thanks for your time, we’ll be in touch if there’s anything we need to follow up on. Do you mind if we keep these?” He waved his bundle of documents.
“That’s fine. Thanks for your time and let me know if you need anything else.”
Terry walked me back to my car. He didn’t say much, but he seemed to have enjoyed himself. I got the impression he didn’t like Bill very much, but he didn’t say anything to confirm, other than a laugh and a “You handled that well! See you later.”
On the drive back to the office, I felt equal parts relief and regret. I’d done what I had to do to take the spotlight off myself and my colleagues: it was true that we’d done everything ‘by the book’. Bill had been given plenty of time to spot any shortcomings or missing features, and though some of the problems would have been difficult for anybody to predict, there were a host of other issues which I can’t help but feel we would have avoided if we’d done the discovery process ourselves. I also didn’t like the fact that I had an unhappy client. Even if I could reasonably defer to the specifications to bat away their complaints, I don’t like to disappoint and the whole thing left me feeling very deflated for a while. Still, I don’t know what else I could have done. I wasn’t proud, even if I felt like I’d just gone toe-to-toe with four heavyweights and won.
Aftermath And Lessons Learned
In the weeks that followed, I had only sporadic conversations with Terry as we made a few fixes and improved things as best we could. Unless the client was willing to spend more money, which they weren’t, my hands were tied.
I came to understand that Bill had effectively washed his hands of the project at some point after it had been in development for months, and struggled to explain to his employers the massive disparity between what he’d been telling them, and what he was actually reviewing whenever he looked over the demonstration versions. I don’t really understand how he could have set himself up for such a nightmare, but I don’t think he was around for much longer afterwards. It turned out that Terry and Bill had rarely communicated throughout the course of the project - I’m not even sure Terry actually knew who Bill was or what role he played until our portacabin meeting.
The client continued to use the software and - as far as I understand, tried to deal with the problems as best they could. Once the initial frayed nerves had settled, the client had accepted that we had no responsibility to carry out extra work for free - and simply made the best of it for as long as they continued to run the system. Eventually - a couple of years later - they were taken over by a larger company, who presumably put in place their own systems.
Although from our perspective the project had been a ‘success’ on paper - we made a profit, everything we’d built worked and had no issues, and we’d delivered on schedule - it would be foolish to pretend that the real-world impact of the project was anything but a mess.
Failing to ensure solid lines of communication between all parties, and neglecting to adequately assess the likely practical impact of rolling out the new system beyond ‘make <x> go faster’, resulted in a head-on collision with reality, which was painful for everybody involved.
Ultimately, aside from what we rather generously labelled the ‘teething issues’, we all moved on with no lasting animosity or resentment; but I had absolutely no desire to ever go through such an experience again.
Since then, I’ve tried to take the lessons I learned from this escapade forward with me:
1. Everything In Writing
Though it’s incredibly common for software developers to avoid documentation like the plague, to make decisions as and when they become necessary, and to invent features on the fly ‘because it’ll be helpful’ - the simple, unavoidable truth is that if something goes wrong at any point and you end up in conflict (or even simply the threat of conflict) with a client, then making sure you have clear, complete documentation for the design and implementation of your software will help protect you.
Some helpful tips:
Make liberal use of branches in your version-control system, and feature switches throughout your software. Only fold new features into the main trunk (or enable the feature switch) when you have documented the feature, and your client is aware of it and has agreed to it. You don’t have to go overboard, but the goal should be for you to guarantee your ability to control exactly what ends up being deployed - there’s a minimal sketch of this idea just after these tips. The added benefit is that your VCS logs (which, of course, are well-written and relevant!) will help when it comes time to document those features properly.
Sketch things out first. I’ve found it’s easier to write ‘real, human-readable’ documentation if I have a visual plan of the software in diagram form to work from. Writing the code first can still work, but wireframes, diagrams, and notes are almost always the best starting point if you’re not just writing software for yourself.
Slow down. Holding off on new features and changes until the client signs them off is a good thing for everybody. Deploying new things too early or without agreement - even if you’re sure they’re rock solid or a good improvement - can be argued to be little more than a needless introduction of risk. Resist the temptation to show off your latest and greatest until everybody involved knows what to expect.
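To make the feature-switch tip above a little more concrete, here’s a deliberately minimal sketch. The flag names and the ‘signed_off’ field are invented for the example - in a real project you’d probably reach for a proper feature-flag library or configuration service - but the principle is the same: nothing switches on until it’s been documented and agreed.

```python
# Purely illustrative - the flag names and the 'signed_off' field are invented.
FEATURE_FLAGS = {
    "damaged_package_flow": {"enabled": False, "signed_off": False},
    "bulk_label_printing": {"enabled": True, "signed_off": True},
}

def feature_enabled(name: str) -> bool:
    """A feature only runs if it is both switched on and signed off by the client."""
    flag = FEATURE_FLAGS.get(name)
    return bool(flag and flag["enabled"] and flag["signed_off"])

def handle_scan(barcode: str) -> str:
    if feature_enabled("damaged_package_flow"):
        return f"{barcode}: routed via the new (agreed and documented) workflow"
    return f"{barcode}: handled by the existing behaviour"

print(handle_scan("PKG-0001"))  # falls back to existing behaviour until sign-off
```

Whether you rely on branches, flags, or both, the point is identical: the switch only flips once the paperwork exists.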
The exact format of your documentation will rarely be a one-size-fits-all deal. Some clients prefer ‘low-tech’ documentation; others demand extremely detailed, technical information. I’ve found the following questions work well to form the ‘skeleton’ of your documentation, regardless of the specific format:
What does the client want?
- Can be very low-tech.
- Collates everything you know about what the client is after.

What do we plan to build?
- More detailed than the previous set of documentation.
- Includes wireframes, high-level diagrams, flow charts, etc.
- Should aim to include everything you currently know you will be building.

What have we built?
- Similar to the above, but as detailed as possible.
- Annotated screenshots where relevant, plus detailed diagrams and charts.
- Technical information.
- Includes user guides and specific usage instructions for different scenarios.
Whether you treat these as a series of versioned sets of documentation (my preference), updated and signed off whenever necessary, or as living documents which are modified over time, the important thing is that you actually do it, rather than spend all of your time in the programmer comfort-zone.
2. ‘Better’ Is Relative
Context is everything, and though it’s very difficult to predict every possible consequence of a change, you will rarely regret playing devil’s advocate and pushing at the extremities, even if it can begin to feel a little absurd.
Broadly speaking, the purpose of writing software is to improve something in some way. Whatever the project, it’s highly likely that the motivation behind it is to do something faster or better than it is currently being done. Clients often have targets, which we dutifully incorporate into our designs and tests, and build our software to meet those targets. We consider things a success when those targets are met or exceeded. This is, obviously, a good thing - but before we start patting ourselves on the back for making numbers dance in the right way, we should try - as much as we can - to adequately assess the real-world impact of meeting those targets. In some scenarios, this is incredibly difficult to achieve - because it’s not always practical or feasible to carry out a ‘true-to-life’ test.
Often, we get by with test data, test users, staged rollouts, and other tried-and-true practices to make sure we deploy something good and useful, but when it comes down to it - deploying any new system at scale comes with a fair amount of risk. Meeting all of your targets is great, but what did you, or your client, forget? You’ve improved efficiency in one area by 50% - brilliant! But what happens elsewhere as a result? The real world has physical, practical limitations that the digital world does not - and no matter how positive you might feel about how your new system works, there’s a good chance that there are side effects and edge cases you aren’t aware of. A screw coming loose at 5mph is decidedly less of a problem than one coming loose at 50mph.
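As a toy illustration of the kind of back-of-the-envelope check I wish we’d done up front (all numbers below are invented, not taken from the real project):

```python
# Invented numbers, purely for illustration - not from the real project.
error_rate = 0.02                 # 2% of packages arrive that never should have
packages_per_day_before = 1_000   # old, largely manual throughput
packages_per_day_after = 2_500    # the 'improved' throughput

bad_before = error_rate * packages_per_day_before   # ~20 a day: annoying but absorbable
bad_after = error_rate * packages_per_day_after     # ~50 a day: a constant pile-up

print(f"Incorrect packages per day: {bad_before:.0f} -> {bad_after:.0f}")

# The error *rate* never changed - only the volume did - yet the informal
# workaround that coped with the old number collapses at the new one.
```

It’s crude, but even arithmetic this simple, done together with the client, turns ‘it’ll be faster’ into a conversation about what the rest of the operation can actually absorb.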
It’s impossible to make sure you cover every possible scenario, but carrying out risk and impact assessments regularly is a great way to avoid being blindsided later. Get into the habit of asking your client questions about the future, post-deployment:
“What new risks exist when we meet these targets?”
“What are the practical limitations regarding this input / output when efficiency is improved here?”
“What is the impact on this part of the organisation when we optimise this process?”
Aside from giving your clients confidence that you’re planning well ahead, it will also force them to think about things that you may not even be aware of and to suggest changes or improvements that will make everything run much more smoothly later on.
3. Identify And Close Gaps In Communication
When a project has multiple stakeholders, it’s very easy to gain a false sense of confidence in how well everybody understands what’s going on. Attendance at meetings isn’t a guarantee that people are paying attention. CCing people into emails doesn’t mean they’re being read and absorbed. A signature doesn’t guarantee understanding.
Though you can’t guarantee that everybody involved in a project is always on the same page, and it’s often unreasonable to expect everybody to understand (or care about) every aspect of whatever system is being built, it rarely hurts to keep a list of the most relevant people involved in the project and to check in with them directly and regularly.
Unfortunately, organisational politics often plays a huge role here - and if the people you’re supposed to be communicating with aren’t holding up their end of the bargain properly, there’s not a huge amount you can do about it without causing upset somewhere. What you can do, however, is occasionally step outside of the regular meeting / demo / feedback cadence and invite a response. If you send out a fortnightly progress report, for example - try occasionally sending two in the same time period. Occasionally, send an email or call the client to ask about something - anything, really - just to ‘shake the tree’ and see what falls out.
People quickly become accustomed to routine, and attention to detail slips, especially during long-running projects. Occasionally changing things up is a harmless and fairly simple way to snap people back into attention. Overdoing it can quickly become irritating - so avoid becoming unpredictable or a nuisance - but a little low-friction prodding and poking will often serve you well and help to expose any gaps in understanding or communication.
I hope some of the lessons and practices I learned the hard way can be put to use in your own work - and fingers crossed you never have to receive the kind of enraged phone call I did after what seemed like a successful deployment!
Share Your Stories
Do you have any horror stories or tips you’d like to share on how to avoid them? Feel free to share your wisdom in the comments or by email.
If you think this story will be helpful to somebody you know, or even just entertaining, then please do share!
As always your support is much appreciated!