Tuesday, February 23, 2016

Quantify problems before solving them

I was recently asked: "How many automated tests do you think we need?" The underlying premise was that I would provide an expert opinion on whether the team had the right number of tests.
I could only reply: "I don't know. If you have a problem with quality, you might need more. If you have a problem with the maintenance of your tests, maybe you need fewer. However, if there is no problem, you're probably doing the right thing."

This question can be abstracted away from the specific issue of test automation.

If there is no problem, there is nothing to fix. But how do you know whether you have a problem in the first place?

Here is a simple suggestion: Collect data. Then, decide based on facts.

Here is an example of how you can track:
Each time your team encounters an issue and spends time on it or because of it, make a mark. Depending on which activities you are tracking and how you track them, your result might look like a simple tally sheet: one line per activity, one mark per occurrence.
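If you want something slightly more durable than marks on a flipchart, the same idea fits into a few lines of code. Here is a minimal sketch in Python; the activity names and counts are hypothetical, purely for illustration:

# Minimal tally tracking: one mark per occurrence, grouped by activity.
from collections import Counter

tally = Counter()

def mark(activity: str) -> None:
    """Make a mark each time the team spends time on or because of an issue."""
    tally[activity] += 1

# During the iteration (hypothetical examples):
mark("flaky test investigation")
mark("waiting for test environment")
mark("flaky test investigation")

# In the retrospective:
for activity, count in tally.most_common():
    print(f"{activity}: {count}")

A flipchart and a pen achieve exactly the same thing; the point is the counting, not the tooling.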

In your retrospectives, you can simply look at the numbers and ask the following questions:
  • Does this make sense?
  • If not: What should we do about this?
  • Will we gain more insight if we get more data in the next iteration?

Summary

"Data Driven Decision Making" is not rocket science. Collecting data can be simple and easy - and require very little preparation or technology. While you may want to refine your collection process depending on your understanding of the problem, the simplest possible solution will often suffice to rule out wasting time on dealing with corner cases or imaginary issues.


Thursday, February 18, 2016

Stakeholder Management 101

In large enterprises, Product Owners often get bombarded with all sorts of stakeholder requests, all of which they need to manage. Soon, the poor PO will find themselves juggling requirements rather than advancing the product in the best interest of the company. Because every one of these requirements is important to the person raising it, the PO will quickly become the victim not only of enormous pressure, but also of constant nagging: "Why isn't this done yet?"

Here is a simplified request appraisal process for Product Owners to stay ahead of the game. Its default stance towards any incoming request is blunt:

We won't do it unless the world burns!


Friday, February 12, 2016

Evolving Kanban Boards

Regardless of whether your board is electronic or physical, it has the huge advantage of making workflows visible and problems transparent without much overhead.

Here is a real-world example of an electronic Kanban board. Let us discuss what this board actually means. As a disclaimer: this is not intended as a pattern for "how your Kanban board should look", but for "how you could work with your Kanban board". Feel free to ignore the detailed contents; they are irrelevant for the purpose of this article.

The first thing you may notice is that the board is a bit more complex than the usual "To Do, In Progress, Done" board.
This board has evolved over time and is owned by the team. As such, it does not reflect "how the team should be working", but "how the team is actually working".

A little background

This board belongs to a cross-functional feature team applying a rigorous Test-First, Zero-Defects approach which mandates "no code change without test coverage" and "no deployment with known defects".
Consequently, there is no "In Test" or "Fix" column: devising the right tests (e.g., "analysis") is as much a part of software development as fixing any defects that may be discovered.
When the team moves a backlog item into "Done", it is unconditionally ready to go live.

An evolution journey

As the team discovered that the original three columns "To Do, In Progress, Done" did not provide a sufficient level of transparency to understand at a glance what was actually going on and how they could collaborate, additional columns came in.

The first column to come in was the "to Review" column.
With it, a developer indicated: "My code is ready - from my perspective. I need a code review." Along with the Review column came the Working Agreement to always accept a review before starting a new Work Item. This was intended to minimize inventory.

When a work item is technically complete, the Product Owner inspects the delivered software and determines whether a go-live is a good idea. This may happen by discussing with real users or relevant stakeholders, or simply by looking at the software on screen. In some cases, the item is so simple that trusting the developers to have met the Acceptance Criteria is enough.
As the responsibility for this decision rests with the Product Owner, developers move tickets to "Resolved" when they have built Working Software that is ready for a demo.

The review process became a bit more complicated because this team is not fully autonomous: they are part of a large scaled Scrum organization. Occasionally, the input of members from other teams became essential to maintain Collective Code Ownership. Because such reviews from other teams could take a while, the team created an "In Review" column.
The Source Code Management System (in their case, Stash) takes care of managing team-external Pull Request reviewers and their feedback - but the ticket stays "In Review" as long as essential feedback is still missing.

An organizational problem sometimes made it impossible to merge fully tested, reviewed code: the Red Master. Nobody is allowed to merge into a Red Master for any reason other than fixing the broken build - so the team introduced a "Pull Request Open" column to indicate work items that are just waiting to be merged into a Green Master.
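As a side note, the two gates guarding this column are easy to write down precisely. Here is a minimal sketch in Python; the data model and the "green"/"red" status convention are assumptions for illustration, not the team's actual tooling (which was Stash plus their CI server):

# Sketch of the two merge gates: essential review feedback is in,
# and the master build is green. Everything here is an illustrative model.
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    approved: bool

@dataclass
class PullRequest:
    title: str
    reviews: list[Review] = field(default_factory=list)

def can_merge(pr: PullRequest, master_build: str) -> bool:
    """A ticket leaves 'Pull Request Open' only when both gates pass."""
    reviews_done = bool(pr.reviews) and all(r.approved for r in pr.reviews)
    return reviews_done and master_build == "green"  # assumed status convention

pr = PullRequest("interface adaptation", [Review("teammate", approved=True)])
print(can_merge(pr, master_build="red"))    # False: nobody merges into a Red Master
print(can_merge(pr, master_build="green"))  # True: ready to merge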

Not only because of the scaled Scrum organization, but also because of B2B dependencies, the team quickly discovered that they would sometimes be blocked by external dependencies. They introduced an "On Hold" column to park any work item on which they could not feasibly make progress because they had to wait for an external party to complete its work. An example of "On Hold" would be an interface adaptation in external third-party software.

After working with Kanban for a long time, the team encountered a new problem: they could pull work as fast as it arrived. However, not every request brought to the team made sense. Feature and information requests were therefore parked in the "Backlog" column for the Product Owner to inspect. The Working Agreement became that the Product Owner was responsible for grooming and refining stories in the Backlog before developers would pull them.

Once the PO decided that an item would actually be an improvement towards the product vision - and it met the team's Definition of Ready - the item was moved to "Selected for Development". The team's Working Agreement became that developers completely ignore the "Backlog" column, because it could contain worthless, incomprehensible or conflicting requests - or simply requests that were misrouted in the organization.
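Putting the journey together, the evolved board can be summarized as plain data. The column names and their meaning are taken from this article; encoding them in Python is just one illustrative option:

# The evolved board as described above, with each column's meaning.
BOARD = {
    "Backlog": "raw requests; developers ignore this column entirely",
    "Selected for Development": "PO-approved items meeting the Definition of Ready",
    "In Progress": "work the team has pulled",
    "to Review": "done from the developer's perspective; needs a code review",
    "In Review": "waiting for essential feedback from team-external reviewers",
    "Pull Request Open": "reviewed and tested; waiting for a Green Master",
    "Resolved": "Working Software, ready for the Product Owner's inspection",
    "Done": "unconditionally ready to go live",
}
ON_HOLD = "On Hold"  # parking lot for items blocked on external parties

Note that "On Hold" is deliberately kept apart: it is a parking lot, not a step in the flow.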

Conclusion

The specific status model of a team's Kanban board must suit that specific team.
This article is not intended to provide a "better model", but to provide an understanding of the thought patterns that can lead teams to evolve their board.

It is generally best to start with a simple board. During Kaizen Events, you can then add columns when they become inevitable. Working with a complex board from the beginning, without knowing how (or why) to use it, is not a good idea.

This article purposefully omits the steps where workflow columns were added and later discarded because they were found unnecessary. That happens - and it is actually a good thing, because it means that people care and Continuous Improvement works.