
Wednesday, November 18, 2020

16 misconceptions about Waterfall

Ok, Agilists. It's 2021, and people are still using Waterfall in corporate environments. With this article, I would like to dismantle the baloney strawman "Waterfall" that's always proclaimed as the archenemy of all that is good, and to encourage you to think about how exactly your suggested "Agile" would do better than the real-world, professional Waterfall projects I draw my examples from.

Here are some things that many Agilists may never have experienced in Waterfall projects. I have.


What you think Waterfall is, but isn't

There are numerous standard claims about what's wrong with Waterfall, which I would generously call "statements made from ignorance," although there could be more nefarious reasons why people make them. The point is: many of the common claims are not generally true.


Big Bang vs. Incremental

Waterfall doesn't mean that there will be nothing to show until the determined end date of the project. I remember that when I mentioned I had worked on a 5-year Waterfall project, people from the Agile community called that insane. It's not. We had a release every 3 months. That means the project had a total of 20(!) increments, each with its own scope and objectives: yes - Waterfall can be used to build products incrementally! In corporations, that's actually normal.


Upfront Design vs. Iterative Design

With each delivery, project managers, analysts and business people sit together and discuss the roadmap: which requirements to add or remove, and which priorities to shift. I once worked on a product that was built in pure Waterfall for almost 20 years, and nobody could have anticipated the use cases delivered in 2010 when the product's first version hit the market back in 1992. Even Waterfall projects can iterate - especially enterprise systems.


Death March vs. Adaptivity

When you think that someone sits in a closet and produces the Master Plan, which must be slavishly adhered to by the delivery teams, you're not thinking of a properly managed Waterfall project. Yes, of course there is a general plan, but a Waterfall plan gets adapted on the fly as new information arises. Timelines, staffing, scope, requirements, objectives - all are subject to change, potentially even on a weekly basis if your project manager is worth their salt.


Fixed Scope vs. Backlog

If you've ever done Project Management, you know pretty well that scope is very malleable in a project. When an organization determines that meeting a fixed timeline is paramount, fixed-time Waterfall projects can manage scope in much the same way as Sprints do. Of course you get problems if you don't manage the Critical Path properly, but that's not a Waterfall problem - it's carelessness.


Fixed Time vs. Quality

Probably one of the main complaints about Waterfall is that a team delivering on a fixed schedule will push garbage downstream to meet the timeline. Again, that's not a Waterfall issue - it's a "fixed time" issue. If you flex the time, and fix the work package, there's nothing inherent to Waterfall that implies a willful sacrifice of quality.

(And, as a witty side note - if you believe that fixed time is the root cause for low quality: how exactly would Scrum's Sprint timebox solve that problem?)


Assumptions vs. Feedback Learning

Complex systems serving a multitude of stakeholders are incredibly hard to optimize, especially when these stakeholders have conflicting interests. The complexity in Waterfall requirements analysis usually lies less in getting a requirement right than in identifying and resolving conflicting or wrong demands. The time spent upfront to clarify the non-developmental interferences pays off in "doing the right thing." Good analysts won't make wild assumptions about things that could potentially happen years down the line. And when a release is launched, good Waterfall projects use real user feedback to validate and update the current assumptions.


Handovers vs. Collaboration

Yes, there's something like stage-gates in most Waterfall projects. I myself have helped Waterfall organizations implement Quality Gates long before Scrum was a thing. But gates aren't inherent to Waterfall - otherwise introducing them wouldn't have been a thing in the early 2000s. Also: don't misunderstand gates. They don't mean that an Unknown Stranger hands you a Work Package which you will hand over to another Unknown Stranger at the next Gate. What typically happens: as soon as analysts have a workable design document, they'll share it with developers and testers, who take a look, make comments and then meet to discuss intent and changes. Good Waterfall organizations have collaboration between the different specialists whenever they need it.


Documentation vs. Value Creation

A huge misconception is that "Waterfall relies on heavy documentation" - it doesn't have to, depending on how you operate. Heavy documents are oftentimes the result of misfired governance rather than of the Waterfall approach itself. It's entirely feasible to operate Waterfall with lightweight documentation that clarifies purpose and intent rather than implementation details, if that's what your organization is comfortable with. Problems start when development is done by people who are separated from those who use, need, specify or test the product - especially when there's money and reputation at stake.


Process vs. Relationships

As organizations grow large, you may no longer have the right people to talk with, so you rely on proxies who play a kind of Telephone Game. This has nothing to do with Waterfall. A good Waterfall Business Analyst will always try to reach out to actual users, preferably power users who really know what's going on, and build personal relationships. As mutual understanding grows, process and formality become less and less important, both towards requesters and within the development organization - even in a Waterfall environment.


Resource Efficiency vs. Stable Teams

There's a wild claim that Waterfall doesn't operate with stable teams. Many Waterfall organizations have teams that are stable for many years, in some cases even decades. Some of the better ones will even "bring work to the team" rather than assigning work to individuals or re-allocating people when something else is urgent. The "resource efficiency mindset" is a separate issue, unrelated to Waterfall.


Big Batch vs. Flow

Kanban and Waterfall can coexist quite well. Indeed, I used Kanban in a Waterfall setting long before I first heard of Scrum: requirements flowed through three specialist functions, and we had an average cycle time of less than one week from demand intake to delivery. Waterfall with small batches is possible, and it can perform exceptionally well.


Top-Down vs. Self-Organized

I've worked with corporations and medium-sized companies using Waterfall, and have met a lot of Project Managers and Team Leads who worked in a fashion very similar to a Product Owner: taking a request, discussing it with the team, letting the team figure out what to do, how, and when, and only then feeding the outcome of this discussion back into the Project Plan. Waterfall can have properly self-organized teams.


Push vs. Pull

Whereas in theory Waterfall is a pure "push"-based process, the field reality is different. If you have a decent Waterfall team lead, it will basically go like this: we see what work is coming in, we take what we can, and we escalate the rest as "not realistic in time" to get it (de-)prioritized or the timeline adjusted. De facto, many Waterfall teams work pull-based.


Overburden vs. Sustainable Pace

Yes, we've had busy weekends and all-nighters in Waterfall, but they were never a surprise. We could anticipate them weeks in advance, and after them there always came a relaxation phase. Many people working in a well-built, long-term Waterfall project call the approach quite sustainable. They feel significantly more comfortable than they would under the pressure to produce measurable outcomes on a fortnightly basis! Well-managed Waterfall is significantly more sustainable for a developer than ill-managed Scrum, so: caveat emptor!


Resources vs. Respect

Treating developers as interchangeable and disposable "resources" is an endemic disease in many large organizations, but it has nothing to do with Waterfall. It's a management mindset, very often combined with the cost accounting paradigm. The "human workplace" doesn't coexist well with such a mindset. And still, the more human Waterfall organizations treat people as people. It entirely depends on leadership.


Last Minute Boom vs. Transparency

Imagine, for a second, that you did proper Behaviour Driven Development and Test Driven Development in a Waterfall setting. I did this in one major program, delivering Working Software that would have been ready for deployment every single week. If you do this, and properly respond to feedback, Waterfall doesn't need to produce any nasty surprises. The Last Minute Boom happens when your development methodology is inappropriate and your work packages are too big, not because of Waterfall.
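
To make that tangible, here is a minimal, hypothetical sketch of what test-first development can look like; nothing about it depends on whether the surrounding process is Waterfall or Scrum. The requirement, function name and figures are invented for illustration, not taken from the program mentioned above.

# Hypothetical, minimal example of test-first development (names and rules invented).
# The acceptance tests are written against the analysis document before the code exists,
# and they run on every build, so defects surface weekly instead of at final integration.
import pytest

def net_price(gross: float, discount_rate: float) -> float:
    """Apply a percentage discount to a gross price, rounded to cents."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return round(gross * (1.0 - discount_rate), 2)

def test_standard_discount():
    assert net_price(100.00, 0.20) == 80.00

def test_no_discount():
    assert net_price(59.99, 0.00) == 59.99

def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        net_price(100.00, 1.50)

A suite like this, run on every commit, is what keeps a weekly "ready for deployment" claim honest.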


All said - what, then, is "Waterfall"?

"Waterfall" is nothing more and nothing less than an organized, sequential product development workflow where each activity depends on the output of the previous activity.

There are really good uses for Waterfall development, and cases where it brilliantly succeeds. It's incorrect to paint a black-and-white picture in which "Waterfall is bad and Agile is good", especially when equating "Agile" with a certain framework.

Proper Waterfall

A proper Waterfall would operate under the following conditions:
  1. A clear, compelling and relatable purpose.
  2. A human workplace.
  3. A united team of teams.
  4. People who know the ropes.
  5. A "facts are friendly" attitude.
  6. Focus on Outcomes.
  7. Continuous learning and adaptation.
  8. Reasonable boundaries for work packages.
  9. Managing the system instead of the people.

All these given, a Waterfall project has a pretty decent chance of generating useful, valuable results.

And when all the above points are given, I would like to see how or why your particular flavor of "Agile" would do better.


My claim


I challenge you to disprove my claim: "Fixing the deeper mindset and organizational issues while keeping the Waterfall is significantly more likely to yield a positive outcome than adopting an Agile Framework which inherits the underlying issues."





Tuesday, November 17, 2020

Is all development work innovation? No.

In the Enterprise world, a huge portion of development work isn't all that innovative. A lot of it is merely putting existing knowledge into code. So what does that mean for our approach?

In my Six Sigma days, we used a method called "ICRA" to design high quality solutions.


Technically, this process was a funnel, reducing degrees of freedom as time progressed. We can argue at length about whether such a funnel is (always) appropriate in software development, or whether it's a better mental model to consider that all of these activities run in parallel to varying degrees (but that's a red herring). I would like to salvage the acronym to discriminate between four different types of development activity:

  • Innovate: Fundamental changes or the creation of new knowledge to determine which problem to solve in what way, potentially generating a range of feasible possibilities. Example: creating a new capability, such as "automated user profiling", to learn about target audiences.
  • Configure: Choosing solutions to well-defined problems from a range of known options; this could include cherry-picking and combining known solutions. Example: using a cookie-cutter template to design the new company website.
  • Realize: Both problem and solution are known; the rest is "just work", potentially lots of it. Example: integrating a 3rd-party payment API into an online shop.
  • Attenuate: Minor tweaks and adjustments to optimize a known solution or process; the key paradigm is "reduce and simplify". Example: adding a validation rule or removing redundant information.

Why this is important

Think about how you're developing: as you move through the four activities, the probability of failure, and hence the predictable amount of scrap and rework, decreases. And as such, the way you manage each activity differs. A predictable, strict, regulated, failsafe procedure would be problematic during innovation and highly useful during attenuation - you don't want everything to explode when you add a single line of code to an otherwise stable system, although that might actually be a desirable outcome of innovation: destabilizing the status quo to create a new, better future.

I am not writing this to tell you "This is how you must work in this or that activity." Instead, I would invite you to ponder which patterns are helpful and efficient - and which are misleading or wasteful in context. 

By reflecting on which of the four activities you're engaged in, and which patterns are most appropriate for each of them, you may find significant change potential both for your team and for your organization - and "discover better ways of working by doing it and helping others do it."


Thursday, November 12, 2020

PI-Planning: Factors of the Confidence Vote

 The "Confidence Vote" is a SAFe mechanism that is intended to ensure both that the created PI Plan is feasible, and also to ensure that people understand the intent behind creating the common plan - what it means, and what it doesn't. Implied in SAFe are two different kinds of confidence vote with slightly different focus.







Train Confidence Vote

The "Train Confidence Vote" is taken on the Program Board - i.e. the aligned, integrated PI plan across all teams. All participants of the PI-Planning are asked to simultaneously vote on the entire plan. Here are the key considerations, all of which should be taken into account:

Objectives: Feasibility and Viability

First, we should ask: are the ART's PI objectives realistic, and does it make sense to pursue them? Do we have our priorities straight, and are we focused on delivering maximum value to our customer?

High Confidence on PI objectives would imply that these objectives are SMART (Specific, Measurable, Ambitious, Realistic, Timebound) within the duration of the PI.

Features: Content and Scope

Do we have the right features? Do all of them provide significant progress towards our objectives? Did we pick a feasible amount, arrange them in a plausible order, and put the right people on them? Is the critical path clearly laid out, and is the load on the bottleneck manageable?

High Confidence on Features would imply that everyone is behind the planned feature arrangement.

Dependencies: Amount and Complexity

If we have too many dependencies, the amount of alignment effort throughout the PI will be staggering, and productivity is going to be abysmal. You also need to manage external dependencies, where the Train needs something from people who aren't part of the Train, and you need to pay extra attention when these people didn't even attend the PI-Planning.

High Confidence on Dependencies would imply that active efforts were made to eliminate as many dependencies as possible, and that teams have already aligned on how they will deal with the inevitable ones. When people either mark a high number of dependencies without talking about them, or you feel that some weren't mentioned at all, that should reduce your confidence drastically.


Risks: Quantity, Probability and Impact

Risks are a normal part of life, but knowingly running into disaster isn't smart. Were all the relevant risks brought up? Have they been ROAM'ed properly? How likely are you to be thrown off track, and how far?

When you consider risks well under control, that can give you high confidence in this area - when you feel like you're facing an army of gremlins, vote low.


Big Picture: Outcomes and approach

After looking at all the detailed aspects, take one step back: Are we doing lots of piecemeal work, or do we produce an integrated, valuable product increment? Do we have many solitary teams running in individual directions, or do we move in the same direction? Do you have the impression that others know what they're doing?

When you see everyone pulling on the same string and in the same direction, in a feasible way, that could give you high confidence. When you see even one team moving in a different direction, that should raise concerns.


Team Confidence Vote



During your breakout sessions, the Scrum Master should frequently check the pulse on team confidence. The key guiding question should be: "What do you need so that you can vote at least a 4, preferably a 5, on our team's plan?"

Your team plan is only successful when every single member of the team votes at least a 3 on it, so do what it takes to get there. It's entirely unacceptable for a team member to lean back comfortably, wait for the team confidence vote, and then vote a 2 - they should speak up immediately when they have concerns. Likewise, it's essential that teams clarify all the issues that would lead them to vote low on their team's plan before going into the PI confidence vote.

When your team cannot reach confidence, do not hesitate - involve Product Management and the RTE immediately to find a solution!

Here are the factors you should consider in your team confidence vote:

Objectives

Does your team have meaningful objectives, are you generating significant value?

Understanding

Do you really understand what's expected from you, how you're contributing to the whole, what makes or breaks success for you - what the features mean, what your stories are, what they mean, and what's required to achieve them?

Capacity and Load

Do you understand how much capacity your team has, including predictable and probable absences? How likely is it that you can manage the workload? Have you accounted for Scrum and SAFe events? Would unplanned work break your plan?
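
As a back-of-the-envelope illustration of that check (all numbers and percentages below are invented examples, not a SAFe formula):

# Hypothetical capacity vs. load check for one team in one PI; all figures are examples.
ITERATIONS = 5              # iterations in the PI, including the IP iteration
DAYS_PER_ITERATION = 10     # working days per two-week iteration
TEAM_SIZE = 7

planned_absence_days = 12   # vacations, training, public holidays (summed over the team)
event_overhead = 0.10       # ~10% for planning, reviews, retros and other SAFe events
unplanned_buffer = 0.15     # reserve for support, incidents and other surprises

gross_days = ITERATIONS * DAYS_PER_ITERATION * TEAM_SIZE
net_days = (gross_days - planned_absence_days) * (1 - event_overhead - unplanned_buffer)

planned_load_days = 240     # estimated effort of the stories in the team's PI plan

print(f"Net capacity: {net_days:.0f} person-days, planned load: {planned_load_days}")
if planned_load_days > net_days:
    print("The plan exceeds capacity - raise this before the confidence vote.")

The exact numbers matter less than making the comparison explicit before you vote.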

Dependency Schedule

Can you manage all inbound dependencies appropriately, do you trust the outbound dependencies to be managed in a robust way? What's your contingency plan on fragile dependencies?

Risks

Are you comfortable with the known risks? Do you know your Bus Count, and have you planned accordingly? Do you trust that larger-scaled risks will be resolved or mitigated in time?

Readiness

Right after the PI-Planning, you will jump into execution. Do you have everything to get on the road?



Closing remarks

This list isn't intended as a checklist to tick off each factor individually, and it isn't intended to be comprehensive, either. It is merely intended to give you some guidance on what to ponder. If you have considered all these, you probably haven't overlooked anything significant. If you still feel, for any reason, that you can't be confident in your plan, by all means cast the vote you feel is appropriate, and start the conversation that you feel is required.
It's better to spend a few minutes extra and clarify the concerns than to find out too late that the entire PI plan is garbage.

Monday, November 2, 2020

Delivered, Deployed, Done?

While an agile organization should avoid over-engineering a formal status model, it's necessary to provide standards for what "Done" means so that people communicate on an even level. The highest level of confusion arises in large organizations where teams deliver piecemeal components into a larger architecture, because teams might define a "Done" that implies both future work and business risk until the delivery is actually in use.

In such a case, discriminating between "Deployed" and "Done" may be useful.


What's Done?

At the risk of sounding like a broken record, "Done means Done," "It's not Done when it's not done" and "You're not Done when you're not done."

That is, when you're sitting on a pile of future work, regardless of whether that pile is big or small, you're not done. This is actually quite important: While softening your DoD gives you a cozy feeling of accomplishment, it reduces your transparency and will eventually result in negative feelings when the undone work comes back to bite you.

As such, your enterprise DoD should encompass all the work that's waiting. Unfortunately, in an Enterprise setting, especially when integrating with Waterfall projects or external Vendors, you may have work waiting for you a year or more down the line. The compromise is that teams put work items into an intermediate, "Deployed" status when the feature is live in Production, and set it to "Done" at the appropriate time in the future.

What's Deployed?

In situations where a team has to move on before taking care of all the work necessary to reach "Done", because of factors outside their own control, it may be appropriate to introduce an intermediate status, "Deployed." This allows teams to move on rather than idly waiting or wasting their energy getting nowhere.

In large enterprise situations, teams often deliver their increments while the following haven't been taken care of yet:
  • Some related open tickets
  • User training
  • Incidents
  • Feature Toggles
  • Business Configuration
  • E2E Integration
  • Tracking of Business Metrics
  • Evidence of Business Value
This status is not an excuse - it creates transparency on where throughput in the value stream is actually blocked, so that appropriate management action can be taken.
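
As a sketch of how such a status model might look in a team's own tooling (the states and the residual-work items below are illustrative assumptions, not a prescribed workflow):

# Illustrative sketch of a work-item status model with an explicit "Deployed" state.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Status(Enum):
    IN_PROGRESS = "In Progress"
    DEPLOYED = "Deployed"   # live in Production, but undone work remains
    DONE = "Done"           # no residual work, no open business risk

@dataclass
class WorkItem:
    title: str
    status: Status = Status.IN_PROGRESS
    residual_work: List[str] = field(default_factory=list)

    def deploy(self, residual_work: List[str]) -> None:
        # Going live makes the remaining work explicit instead of hiding it.
        self.status = Status.DEPLOYED
        self.residual_work = list(residual_work)

    def close(self) -> None:
        if self.residual_work:
            raise ValueError(f"Not Done - still open: {self.residual_work}")
        self.status = Status.DONE

item = WorkItem("Payment feature")
item.deploy(["user training", "E2E integration", "evidence of business value"])
# item.close() would raise here: the item stays visibly "Deployed", not "Done".

The point is not the tooling, but that residual work stays visible until it is actually gone.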


Interpreting "Done" vs. "Deployed."

Let's take a look at this simple illustration:


Team 1

If they softened their DoD to equate "Deployed" with "Done", the business risk would be hidden, and it would become impossible to identify why the team isn't generating value even though they're delivering. They lose transparency with such a conflation.
A strict discrimination between "Deployed" and "Done" surfaces the organizational impediment in this team and makes the problem easy to pinpoint.

Team 2

It wouldn't make sense to discriminate between "Done" and "Deployed", because the process is under control and there is no growing business risk. This team could just leave items "in Progress" until "Done" is reached and doesn't benefit from "micromanaging" the status.