It's incredibly hard to find accurate information about the Chaos Report on the Internet. For example, InfoQ quotes a 29% success rate for 2011, whereas Wikipedia quotes 34%, whereas a paper I found directly at the Standish Group quotes 39%. A YouTube video quotes that period at a success rate of 32%. Another source writes that "33% of projects are successful, but only 21% deliver a benefit."
Since I didn't want to spend a couple thousand bucks on original reports, I went with "most likely accurate source" in collecting data. If anyone has the reports, I'd be happy to correct my data where it is wrong.
That said, here's the chart based on the data I found:
Is "Agile" really the cause of success?
What does the data really say?
That it's not nearly as clear-cut as we might want it to be. It doesn't send an irrefutable message that "Agile changed the world, software development is now much more likely to succeed."
There's no proof for anything - only an absence thereof. The levels of variation indicate we still have too few data points to make any claim with certainty. The only statements we can make confidently are:
- The data does not prove that "Agile" made the difference.
- It also does not prove that potential improvements could be attributed to a framework like Scrum or SAFe.
- It also does not prove that "Agile" benefits are sustainable.
Factors commonly ignored
Looking back to the beginnings of the Standish Group Chaos Report in the 1990s: things other than Agile have changed as well. Here are just some of the major changes that have happened since then:
Software was still "new."
Many people had never worked with computers back then. They didn't know how to use them, much less how to formulate their needs in a way that made sense in the digital age. Nowadays, everyone knows what a computer is.
The Internet.
I can't speak for everyone, but in the 90's, I didn't have Internet access. My only reference was printed books, usually published years earlier. There was a massive delay between encountering a problem and finding an information source that could help solve it. Even as late as 2008, I was still developing for clients that restricted or disallowed using the Internet at work.
IT advanced
Back in the 90's, it was just much harder to process large volumes of data reliably. Back then, we struggled with challenges modern developers are hardly aware of. That included true nightmare material, such as a CPU incorrectly executing a correctly compiled piece of code. Many sources of inexplicable failure have since been eliminated.
IT infrastructure advanced
Some of my early projects were extremely challenging because "production-like environments" were so expensive that it was impossible to afford one for each project member. Indeed, there was often only a single test environment for everyone - and even that cost millions. Developers often didn't know what their code would actually do until it was live, simply because of environment constraints.
Transaction costs plummeted
Back in the 90's, we were often shipping literal CDs at project end; there was a final "golden build." Especially for consumer software, that "golden build" could account for over 95% of the project's cost. I'm sure Atari's E.T. could have had a very different outcome if they could've just shipped a few post-release updates at near-zero cost.
IT Project Management advanced
IT project managers learned how to use these advances to become more successful, and the discipline itself adopted new and better ways of making projects succeed.
What does all of that mean, then?
Restating the obvious: many factors were at play, each of them contributing to the rise in success rates from about 15% in 1994 to the roughly 30% we see today. But there's no one single factor that can be isolated to say, "This factor brought us to 30%, and if we just do more of it, it'll bring us to 50% or higher!" If anything, the data shows that no single factor has been identified with the potential to boost success rates beyond what we already had in the early days of the Agile Manifesto.
"Agile" most certainly hasn't proven to be the Silver Bullet that will singlehandedly fix the industry.
Closing remarks
We don't have indisputable evidence based on publicly available, reliable sources to argue for or against Agile based on the Standish Group's research. In statistical lingo, "we can't reject the null hypothesis." That is: there isn't enough evidence to confirm or refute claims in either direction.
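To illustrate what "can't reject the null hypothesis" means here, consider a minimal sketch with a two-proportion z-test, comparing the conflicting 29% and 39% figures quoted earlier. The sample sizes are invented for illustration, since the underlying report sample sizes aren't publicly available - and that's exactly the problem: without them, significance can't be judged.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for H0: the two underlying proportions are equal
    (pooled-variance two-proportion z-test)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error under H0
    return (p2 - p1) / se

# Hypothetical sample sizes: whether a 29%-vs-39% gap is "significant"
# (|z| > 1.96 at the 5% level) depends entirely on n, which we don't know.
print(two_proportion_z(0.29, 50, 0.39, 50))    # z ~ 1.06: cannot reject H0
print(two_proportion_z(0.29, 500, 0.39, 500))  # z ~ 3.34: would reject H0
```

With 50 projects per period the ten-point gap looks like noise; with 500 it would be a real effect. As long as the sample sizes stay behind a paywall, both readings remain on the table.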
There's a fact-finding problem that amazed me: I would have expected accurate data from the Standish Group's research to be more widespread. But it's not. It's a rabbit hole. That's why I need this huge red disclaimer. I don't even know who's telling the truth and who's misunderstanding the presented information (that could also include me - mind you!). Who's stating facts, who's simply pulling numbers out of thin air, and who's deliberately lying to drive an agenda?
Maybe if I had all of the data, indisputably, straight from its source, the picture would change?
But for now: I can't scientifically state that yes, the Standish Group has provided irrefutable evidence that Agile made a difference.
So - I'd say we need to drop the Standish Group's Chaos Report from the list of "Reasons for Agile." There may be other reasons, but this one doesn't make the cut.
Sources
The following sources were used to produce this article:
- https://de.wikipedia.org/wiki/Chaos-Studie
- https://hennyportman.wordpress.com/2020/01/03/review-chaos-report-2018/
- https://www.infoq.com/articles/standish-chaos-2015/
- https://www.standishgroup.com/sample_research_files/CHAOSReport2015-Final.pdf
- https://hennyportman.wordpress.com/2021/01/06/review-standish-group-chaos-2020-beyond-infinity/
Comments

I did a similar exercise and my conclusions are similar. I managed to find the CHAOS reports, and looking at the numbers, there is no significant change in the overall success rate. If we believe the Agile trainers and coaches that Agile is used in more than 50% of projects (not true, in my opinion, even for software development projects) and that Agile doesn't fail, then that should be reflected in the later editions. I found the State of Agile reports more useful, especially the demographics of the respondents (who is still interested in Agile). PMs have been a constant 20% since the first report, but it is concerning (to me) that over 50% of the respondents are now Agile Coaches and Scrum Masters, and fewer than 5% are developers. Food for thought :)
There is only one valid measure of success: delivery of organizational or project objectives, and consequent success at a possibly higher organizational level. Agile doesn't even have a mechanism for quantifying its objectives, so you are never going to analyze or prove anything. Tom Gilb, SUCCESS: Super Secrets & Strategies for Efficient Delivery in Projects, Programs, and Plans, Book Folder, tinyurl.com/SUCCESSGilb. October 2021
Nah, not really. The most important metric in software development is customer satisfaction: if the customer is satisfied, regardless of the reason, they'll come back to spend more money. That, from a business point of view - which is the only one that counts - is success. In software development, customer satisfaction rarely relates to things that were written down one or two years earlier, when the project started, and budget overruns are no issue at all if the customer is happy with what they got for the money.
This article was curated as part of the #73rd issue of the Software Testing Notes newsletter.
https://softwaretestingnotes.substack.com/p/issue-73-software-testing-notes
There's no silver bullet - Brooks said it as early as 1986. Consequently, agile can't be one. But it surely helps.