Taking stock of a failed project

Oops?

Some projects just go wrong.

It’s a fact of life. Projects go over budget, blow their schedules, squander their resources. Sometimes they go off the rails so spectacularly that there’s nothing you can do except (literally) pick up the pieces and try to learn whatever lessons you can so you don’t repeat the failure in the future.

Last week I got a phone call from a developer who was looking for some advice about exactly that. He’s being brought in to repair the damage from a disastrous software project. Apparently the project completely failed to deliver. I wasn’t 100% clear on the details—neither was he, since he’s just being brought in now—but it sounded like the final product was so utterly unusable that the company was simply scrapping the whole thing and starting over. This particular developer knows a lot about project management, and even teaches a project management course for other developers in his company. He’d heard me do a talk about project failure, and wanted to know if I had any advice, and maybe a postmortem report template or a lessons learned template.

I definitely had some advice for him, and I wanted to share it with you. Postmortem reports (reports you put together at the end of the project after taking stock of what went right and wrong) are an enormously valuable tool for any software team.

But first, let’s take a minute to talk about a bridge in the Pacific Northwest.

The tragic tale of Galloping Gertie

One of my favorite failed project case studies is Galloping Gertie, which was the nickname that nearby residents gave to the Tacoma Narrows Bridge. Jenny and I talk about it in our “Why Projects Fail” talk because it’s a great project failure example—and not just because it failed so spectacularly. It’s because the root causes for this particular project failure should sound really familiar to a lot of project managers, and especially to people who build software.

The Tacoma Narrows Bridge opened to the public on July 1, 1940. This photo was taken on November 7 of the same year:

Galloping Gertie

While there were no human casualties, the bridge disaster did claim the life of a cocker spaniel named Tubby, despite heroic attempts at a rescue.

Jenny and I showed a video of the bridge collapsing during a presentation of our “Why Projects Fail” talk a while back in Boston. After the talk, a woman came up to us and introduced herself as a civil engineer. She gave us a detailed explanation of the structural problems in the bridge. Apparently it’s one of the classic civil engineering project failure case studies: there were aerodynamic problems, structural problems due to the size of the supports, and other problems that combined to cause a resonance that gave the bridge its distinctive “gallop.”

(We embedded a video of the Tacoma Narrows Bridge collapse here.)

But one of the most important lessons we took away from the bridge collapse isn’t technical. It has to do with the designer.

[A]ccording to Eldridge, “eastern consulting engineers” petitioned the PWA and the Reconstruction Finance Corporation (RFC) to build the bridge for less, by which Eldridge meant the renowned New York bridge engineer Leon Moisseiff, designer and consultant engineer of the Golden Gate Bridge. Moisseiff proposed shallower supports—girders 8 feet (2.4 m) deep. His approach meant a slimmer, more elegant design and reduced construction costs compared to the Highway Department design. Moisseiff’s design won out, inasmuch as the other proposal was considered to be too expensive. On June 23, 1938, the PWA approved nearly $6 million for the Tacoma Narrows Bridge. Another $1.6 million was to be collected from tolls to cover the total $8 million cost.

(Source: Wikipedia)

Think back over your own career for a minute. Have you ever seen someone making a stupid, possibly even disastrous decision? Did you warn people around you about it until you were blue in the face, only to be ignored? Did your warnings turn out to be exactly true?

Well, from what I’ve read, that’s exactly what happened to Galloping Gertie. There was plenty of warning from many people in the civil engineering community who didn’t think this design would work. But these warnings were dismissed. After all, this was designed by the guy who designed the Golden Gate Bridge! With credentials like that, how could he possibly be wrong? And who are you, without those credentials, to question him? The pointy-haired bosses and bean counters won out. Predictably, their victory was temporary.

Incidentally, some people refer to this as one kind of halo effect: a person’s past accomplishments give others undue confidence in his performance at a different job, whether or not he’s actually doing it well. It’s a nasty little problem, and it’s a really common root cause of project failure, especially on software projects. I’ve lost count of the number of times I’ve encountered really terrible source code written by a programmer who’s been referred to by his coworkers as a “superstar.” Every time it happens, I think of the Tacoma Narrows Bridge.

But there’s a bigger lesson to learn from the disaster. When you look at the various root causes—problematic design, cocky designer, improper materials—one thing is pretty clear. The Tacoma Narrows Bridge was a failure before the first yard of concrete was poured. Failure was designed into the blueprints and materials, and even the most perfect construction would fail if it used them.

Learning from project failures

This leads me back to the original question I was asked by that developer: how do you take stock of a failed project? (Or any project, for that matter!)

If you want to gain valuable experience from investigating a project—especially a failed one—it’s really important that you write down the lessons you learned from it. That shouldn’t be a surprise. If you want to do better software project planning tomorrow, you need to document your lessons learned today. You can think of a postmortem report as a kind of “lessons learned report” that helps you document exactly what happened on the project so you can avoid making the same missteps in the future.

So how do we take stock of a project that went wrong? How do we find root causes? How do we come up with ways to prevent this kind of problem in the future?

The first step is talking to your stakeholders… all of them. As many as you can find. You need to find everyone who was affected by the project, anyone who may have an informed opinion, and figure out what they know. This can be a surprisingly difficult thing to do, especially when you’re looking back at your own project. If people were unhappy (and people often are, even when the final product was nearly perfect), they’ll give you an earful.

This makes your life more difficult, because it’s hard to be objective when someone’s leveling criticisms at you (especially if they’re right!). But if you want to get the best information, it’s really important not to get defensive. You never know who will give you really valuable feedback until you ask them, and it often comes from the most unexpected places. As developers, we have a habit of dismissing users and business people because they don’t understand all of the technical details of the work we do. But you might be surprised at how much your users actually understand about what went wrong—and even if they don’t, you’ll often find that listening to them today can help make them more friendly and willing to listen to you in the future.

Talking to people is really important, and having discussions is a great way to get people thinking about what went wrong. But the most effective postmortem project reviews involve some sort of survey or checklist that lets you get written feedback from everyone involved in or affected by the project. Jenny and I have a section on building postmortem reports in our first book, Applied Software Project Management, that has a bunch of possible postmortem survey questions:

  • Were the tasks divided well among the team?
  • Were the right people assigned to each task?
  • Were the reviews effective?
  • Was each work product useful in the later phases of the project?
  • Did the software meet the needs described in the vision and scope document?
  • Did the stakeholders and users have enough input into the software?
  • Were there too many changes?
  • How complete is each feature?
  • How useful is each feature?
  • Have the users received the software?
  • How is the user experience with the software?
  • Are there usability or performance issues?
  • Are there problems installing or configuring the software?
  • Were the initial deadlines set for the project reasonable?
  • How well was the overall project planned?
  • Were there risks that could have been foreseen but were not planned for?
  • Was the software produced in a timely manner?
  • Was the software of sufficient quality?
  • Do you have any suggestions for how we can improve for our next project?

We definitely recommend using a survey where the questions are grouped together and each question is scored, so that you can start your postmortem report with an overview that shows the answers in a chart. (If you’re looking for a kind of “lessons learned template,” this is a really good start.)

Postmortem survey results

The rest of the report delves into each individual section, pulling out specific (anonymous) answers that people wrote down or told you. Here’s an example:

Beta
Was the beta test effective in heading off problems before clients found them?
Score: 2.28 out of 5 (12 Negative [1 to 2], 13 Neutral [3], 9 Positive [4 to 5])
All of the comments we got about the beta were negative, and only 26% (9 of 34) of the survey respondents felt that the beta exceeded their expectations. The general perception was that many obvious defects were not caught in the beta. Suggestions for improvement included lengthening the beta, expanding it to more client sites, and ensuring that the software was used as if it were in production.
Individual comments:

  • I feel like Versions 2.0 and 2.1 could have been in the beta field longer so that we might have discovered the accounting bugs before many of the clients did.
  • We need to have a more in-depth beta test in the future. Had the duration of the beta been longer, we would have caught more problems and headed them off before they became critical situations at the client site.
  • I think that a lot of problems that were encountered were found after the beta, during the actual start of the release. Shortly thereafter, things were ironed out.
  • Overall, the release has gone well. I just feel that we missed something in the beta test, particularly the performance issues we are experiencing in our Denver and Chicago branches. In the future, we can expand the beta to more sites.

(Source: Applied Software Project Management, Stellman & Greene 2005)
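If you want to generate those per-question summaries automatically, the arithmetic is simple: count the negative, neutral, and positive responses and average the scores. Here’s a minimal sketch, assuming a 1-to-5 scale with 1 to 2 counted as negative, 3 as neutral, and 4 to 5 as positive; the sample responses are hypothetical, not the data behind the example above.

```python
# Minimal sketch: summarize 1-to-5 postmortem survey responses into the kind of
# per-question summary shown above. The cutoffs (1-2 negative, 3 neutral,
# 4-5 positive) and the sample responses are illustrative assumptions.

def summarize(question, responses):
    negative = sum(1 for r in responses if r <= 2)
    neutral = sum(1 for r in responses if r == 3)
    positive = sum(1 for r in responses if r >= 4)
    mean = sum(responses) / len(responses)
    pct_positive = 100 * positive / len(responses)
    return (f"{question}\n"
            f"Score: {mean:.2f} out of 5 "
            f"({negative} Negative [1 to 2], {neutral} Neutral [3], "
            f"{positive} Positive [4 to 5]); {pct_positive:.0f}% positive")

# Hypothetical responses from 34 survey respondents.
beta_responses = [1] * 6 + [2] * 6 + [3] * 13 + [4] * 5 + [5] * 4
print(summarize("Was the beta test effective in heading off problems?", beta_responses))
```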

There’s another approach to coming up with postmortem survey results that I think can be really useful. Jenny and I have spent the last few years learning a lot about the PMBOK® Guide, since that’s what the PMP exam is based on. If you’ve studied for the PMP exam, one thing you learned is that you need to document lessons learned throughout the entire project.

The exam takes this really seriously: you’ll actually see a lot of PMP exam questions about lessons learned, and understanding where lessons learned come from is really important for PMP exam preparation.

The PMBOK® Guide categorizes the activities on a project into knowledge areas. Since there are lessons learned in every area of the project, those categories (the knowledge area definitions) give you a useful way to approach them:

  • How well you executed the project and managed changes throughout (what the PMBOK® Guide calls “Integration Management”)
  • The scope, both product scope (the features you built) and project scope (the work the team planned to do)
  • How well you stayed within your schedule, and whether you ran into serious scheduling problems
  • Whether or not the budget was tight, and whether that affected decisions made during the project
  • What steps you took to ensure the quality of the software
  • How you managed the people on the team
  • Whether communication—especially with stakeholders—was effective
  • How well risks were understood and managed throughout the project
  • If you worked with consultants, whether the buyer-seller relationship had an impact on the project

For each of these areas, you should ask a few basic questions:

  1. How well did we plan? (Did we plan for this at all?)
  2. Were there any unexpected changes? How well did we handle them?
  3. Did the scope (or schedule, or staff, or our understanding of risks, etc.) look the same at the end of the project as it did at the beginning? If not, why not?
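One way to make that concrete is to keep a short lessons-learned record per knowledge area, with those three questions as its fields. The sketch below is just one possible way to organize the write-up; the area names loosely follow the bullets above, and the record layout is my own assumption, not anything prescribed by the PMBOK® Guide.

```python
# Illustrative sketch: one lessons-learned record per knowledge area, organized
# around the three basic questions above. The layout is an assumption; only the
# area names loosely follow the PMBOK® Guide categories.

from dataclasses import dataclass

KNOWLEDGE_AREAS = [
    "Integration", "Scope", "Schedule", "Cost", "Quality",
    "Human Resources", "Communications", "Risk", "Procurement",
]

@dataclass
class LessonsLearned:
    area: str
    how_well_planned: str = ""    # 1. How well did we plan?
    unexpected_changes: str = ""  # 2. Were there unexpected changes? How well did we handle them?
    end_vs_start: str = ""        # 3. Did it look the same at the end as at the beginning? If not, why not?

lessons = {area: LessonsLearned(area) for area in KNOWLEDGE_AREAS}
lessons["Scope"].unexpected_changes = (
    "Two major features were added mid-project without revisiting the schedule."
)
```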

If you can get that information from your stakeholders and write it down in a way that’s meaningful and that you can come back to in the future, you’ll be in really good shape to learn the lessons you need to learn from any project. Even a failed one.

Iterative development is not unplanned development

What the...

I got a great question from a software developer who also happens to be a fellow CMU alum.

I have a question related to managing scope creep with respect to “on-going”/iterative development processes.

I’m currently managing a project where we’re redesigning my application’s primary workflow. Simply put, the app is currently designed to have users sign off all items, and we’re redesigning it to be exception-based (only requiring certain items to be signed off).

As we’ve progressed down the path of planned iterative development, a lot of good/new ideas for future enhancements/requirements spring up. I find myself regularly working with my users and “working group” to prioritize and analyze if any of these new ideas need to be considered to build sooner rather than later (and thus triggering plan adjustments or delays).

I often feel like I’ll end up delivering a product that does deliver the initial vision, but still doesn’t make my users happy, as they’ve already shifted their expectations to wanting the “next” thing (aka phase 2).

Do you have any other tips about how to manage this process?

I’ve used things like taking a timeline-style roadmap and adjusting it by overlaying the new requirements and shifting the timeline out. Do you have any recommendations of ways to present this type of information?

Curious your thoughts. Thanks.

— Seth L.

Iterative software development can be a really useful—and highly effective—tool for building software, but it’s also one of the most abused tools I’ve seen. Even as recently as a few days ago, someone in charge of a software team that’s critical to his company came to me with the (incorrect) assertion that iteration just means diving into a prototype without talking to anyone or doing any investigation. Iteration done well can lead to very high quality software. But as Seth saw, iteration done poorly can lead to scope creep and serious planning issues.

Making your users “happy” means managing their expectations. They need to see exactly what you’re working on, and what’s coming next. If all people see is deadlines and they don’t have a sense of what’s going on, then they naturally start looking to the next deadline, because that’s all they see. The more visibility you can give them into the way you build software, the more understanding they’ll have – that’s just human nature.

There are a few things that work well for improving visibility into your project’s iterations. One of them is a task board, which typically involves sticking index cards with your user stories or scenarios on a whiteboard or wall. This means that you actually need to have user stories or scenarios: scope creep is often an indication of a requirements problem, and getting consensus, at least at the scenario level, on exactly what’s going into the iteration helps head it off. Having the index cards arrayed on a task board, with each card showing its status (“planned”, “in development”, “completed”, “out of scope”), gives everyone a lot more visibility into exactly what you’re building and how you’re building it. In a lot of ways, this is a sort of iterative development project plan.
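If it helps to make that concrete, here’s a minimal sketch of a task board as data: each index card is a user story with a status, and “moving” a card is just changing its status. The story names are invented for illustration; a physical whiteboard (or any tracking tool) works just as well.

```python
# Minimal sketch of a task board as data: each index card is a user story with
# a status. The story names are invented; the statuses mirror the ones above.

STATUSES = ("planned", "in development", "completed", "out of scope")

board = {
    "Sign off only exception items": "in development",
    "Flag items that need manual review": "planned",
    "Audit report for sign-offs": "out of scope",
}

def move(board, story, status):
    """Move a card to a new column on the board."""
    assert status in STATUSES, f"unknown status: {status}"
    board[story] = status

move(board, "Sign off only exception items", "completed")

# Read the board column by column, like scanning the whiteboard left to right.
for status in STATUSES:
    stories = [name for name, s in board.items() if s == status]
    print(f"{status}: {stories}")
```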

Another way to prevent iteration-related scope problems is committing to delivering releasable code at the end of each iteration. Test-driven development (or, at least, developing complete unit tests) and continuous integration are effective ways to help make that happen. If your stakeholders are comfortable that you’ll deliver high-quality code at each iteration, they’ll feel less pressure to get the new features in immediately, and will be more willing to wait until the next iteration.

If this sounds like something that might help you, I definitely recommend reading the interview Jenny and I did for Beautiful Teams with Mike Cohn, who knows more about agile and iterative planning than pretty much anyone. He has a lot to say about effective iteration planning, and the interview includes pictures of task boards he’s used in the past.

That gets me back to the basic idea—one which I give Mike a whole lot of credit for helping people understand—that iterative development, especially in an Agile project, works best when we take the time to plan each iteration. It still faces the same problems: requirements need to be discovered, scope needs to be controlled, and progress needs to be communicated to everyone who cares about it… especially to anyone who can potentially make the developers’ lives more difficult. That’s why the most successful Agile development projects collect requirements and document them using user stories (or other techniques for writing down what the software needs to do). They plan their progress using task boards, forecast it using calculations and charts like project velocity and burndown rates, and constantly keep business owners and other stakeholders up to date on their progress.

It’s a great way to develop software, and it’s been really effective for a lot of teams. And I think it shows that iterative development does not necessarily mean unplanned development.
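If the forecasting part sounds abstract, the arithmetic behind velocity and burn-down charts is simple: average the work the team actually finished in recent iterations, then divide what’s left by that average. Here’s a rough sketch; the story-point numbers are invented for illustration.

```python
# Rough sketch of velocity and burn-down forecasting. The story-point numbers
# are invented; the calculation is just an average and a division.

import math

points_remaining = 120                   # story points left in the backlog
completed_per_iteration = [18, 22, 20]   # points actually finished recently

velocity = sum(completed_per_iteration) / len(completed_per_iteration)
iterations_left = math.ceil(points_remaining / velocity)

print(f"velocity: {velocity:.1f} points per iteration")
print(f"forecast: about {iterations_left} more iterations to burn down the backlog")
```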

When I sent this to the developer who sent me the question, he had an interesting follow-up, which I thought deserved a response:

So I’ve been digesting this a bit and I am curious to get your thoughts about adapting project management fundamentals into the often fluid process of app management. I manage a few virtual projects within my company and at times have struggled to keep things focused as business demands and/or interests shift over time. Similarly, the “iterative” approach has helped to clarify requirements while building out new flows/apps, but as you pointed out can be very tricky to get “right”.

It’s funny how often I hear people say, “Well, this project management stuff works in theory, but my project is fluid” (or “changing,” or “under too much pressure from the business,” or “critical,” etc.). It turns out that pretty much every project is challenging, and project management is set up specifically to deal with that kind of challenging project.

Here are two thoughts I had relating to this idea.

First, the iterative development model works very well, as long as you’re committed to delivering a high-quality product at the end of each iteration. Whether or not you develop using an iterative approach, you need to manage change: prevent unnecessary changes, and make sure you understand the impact of any change that you make. It also means that you need some sort of scope baseline, so that you know what is and is not a change. It’s faster and easier to update software on paper, before it’s written into code, so the more changes you can move to the “write it down and review it” phase of your project, the better.

And second, if your business is overly demanding, it often means that you could manage your stakeholders’ expectations better. Make sure you identify them – and write down their names and needs! – from the very beginning of the project. Talk to them… a lot. Make sure they’re in the loop. If possible, see them so often that they’re sick of seeing you. If they’re always aware of what you’re doing, there won’t be nearly as many surprises. Also, the more your stakeholders understand the details of the work that you’re doing, the more slack they’ll cut you when they ask you for changes. Often, when someone puts a lot of pressure on you to do the impossible, it’s because they don’t realize that’s what they’re asking.

It’ll take about three weeks…

If you’ve been reading our posts here, you probably noticed that we like to give our “Why Projects Fail” talk. (If you’re curious, here’s a link to the slides [PDF].) One reason we really like it is that it seems to go over well with a lot of different audiences, from hard-core architects and programmers to grey-haired project managers. It’s a pretty informal talk — we interrupt each other and go off on the occasional tangent, which keeps the mood pretty light. And that’s always a good thing, especially when you’re doing a talk to people at a PMI meeting or a user group who just spent the day at work and don’t need to sit through yet another boring slide presentation.

I was thinking about that presentation yesterday, after getting off the phone with a manager at a company that wants to hire us to do software estimation training for their programmers. One problem that they’re having is a pretty common one. Their programmers, testers, and even project managers seem reluctant to give estimates. That reminded me of this slide from the talk:

Things the team should’ve done slide

When we get to this slide in the talk, I usually say something like this:

There’s something we call the “about three weeks” problem. Have you ever noticed that when you ask a programmer how long it’ll take to do something, it’s always going to take “about three weeks”? I’ve done it myself many times over the last fifteen years or so. How long to build a simple website to do a few calculations? About three weeks. What about a utility that will automate some file operations and generate a few reports? About three weeks. A new feature for our core product? Sure, it’ll take about three weeks. See, the problem is that three weeks seems like a long enough time to get something significant done. And if you think for thirty seconds about pretty much any programming project, you can do enough hand-waving and ignore enough details so that it’ll seem to fit into about three weeks.

What does this have to do with why programmers are so reluctant to give estimates?

There are many reasons for this, more than I’ll go into here. But a big one might just be because we’ve all quoted “about three weeks” for a programming project that ends up taking a whole lot longer than that, and we never want to be stuck in that situation again. So after we’ve been burned enough, we just stop giving estimates. I was at a job a few years ago, sitting at a full conference table with a dozen developers. The CTO — an abrasive guy who clearly went home every evening to lift weights, and spent most of his day yelling at the people who reported to him — growled an “order” at the team, demanding an estimate. Everyone at the table knew that they’d be yelled at individually, threatened with dismissal, and generally made miserable if they didn’t come up with an estimate. Yet nobody looked up and volunteered anything. Eventually a junior guy in the back cleared his throat, and in almost a whisper he said, “I’m not sure about the rest of it, but I think my piece will take about three weeks.”

And there it is. Nobody wants to go on the record and say how long they think it’ll take to do a job. We all know it’ll take as long as it takes. If the estimate is right, then there’s no great reward or recognition. But if the estimate is wrong, then we’re on the hook for it, and we get to look forward to status meetings where we get to take the blame for whatever terrible consequence happened because the project was late.

So what do we do about it?

Jenny and I put a lot of thought into this problem when we were working on our first book, Applied Software Project Management. It turns out that there’s a really effective way to get a good idea of how long the software will take without putting any one person on the hook, and it’s our favorite way of generating estimates. It’s called Wideband Delphi, and we talk a lot about it in the Estimation chapter in the book (which you can download for free [PDF]). It’s a straightforward technique — it just takes two two-hour meetings to nail down estimates for even a large project. It’s very iterative and highly interactive, which helps the team all come to a consensus and agree on the final result. It’s collaborative, so that no one person is solely responsible — usually, everyone ends up buying into the final numbers. And best of all, it doesn’t require any special expertise beyond what you need to actually do the project.

My favorite part about Wideband Delphi is that it’s really focused on assumptions. That’s another thing I like to talk about in the “Why Projects Fail” talk. If you think that building a particular program is going to take nine weeks, but I think it’s going to take four weeks, we usually aren’t disagreeing on how long it’ll take to do the task. Usually, we’re disagreeing on some basic assumption about the task. For example, you might think that we’ll be building a big GUI with printing support, while I might think that it’s just going to be a console application. That means that we can assume for the sake of the estimate that we’ll either build a GUI or build a console application, and we’ll write down that assumption. That way, if it turns out to be incorrect, we’ll know exactly why the estimate was wrong… and if someone in a meeting later wants to blame us, we can point to that assumption and give a good reason for the delay. (That’s why Delphi has two meetings: the first meeting is mostly taken up with a discussion of those assumptions.)
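To give a feel for the “iterative and highly interactive” part, here’s a rough sketch of how the numbers might move across Delphi rounds: each estimator revises after the assumptions are discussed, and the moderator watches the spread shrink until the team converges. The names, numbers, and the 20% convergence threshold are all invented for illustration; the real value of the technique is the structured discussion, not this arithmetic.

```python
# Rough sketch of estimates converging across Wideband Delphi rounds. The
# estimators, numbers, and 20% spread threshold are invented for illustration;
# the real technique is the structured meetings and the assumptions list.

rounds = {
    1: {"Ana": 4, "Ben": 9, "Raj": 6},  # weeks, before assumptions are discussed
    2: {"Ana": 6, "Ben": 8, "Raj": 7},  # after agreeing on "GUI with printing support"
    3: {"Ana": 7, "Ben": 8, "Raj": 7},
}

for n, estimates in rounds.items():
    values = list(estimates.values())
    low, high = min(values), max(values)
    mean = sum(values) / len(values)
    spread = (high - low) / mean
    converged = " -> converged" if spread <= 0.20 else ""
    print(f"round {n}: {low}-{high} weeks, mean {mean:.1f}, spread {spread:.0%}{converged}")
```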

One nice thing about Delphi is that it’s not some esoteric, theoretical thing. Both Jenny and I have done this in the real world, many times, with all sorts of software teams. Delphi really does work, and it actually does a good job of helping a team come up with numbers. And those numbers are pretty accurate, at least on the projects I’ve worked on. If you’re having trouble convincing your reluctant team to come up with an estimate, I definitely recommend giving Delphi a shot.