
Sunday, July 17, 2016

How SAFe deals with cost accounting

I was asked the question "How does SAFe handle money?" - well, in a complex organization, this is a complex topic. Here is a non-comprehensive list of answers:

  • SAFe discourages "project accounting", but acknowledges that an ART may be forced to do it because, especially in a cross-company value stream, contracts may be hard to change on short notice. We spent a good half hour discussing the detrimental effect of cost centers (i.e. resource utilization) on flow (i.e. value delivery). An SPC would accept the current cost structure when launching an ART and then collaborate with Executives at the Portfolio level to move to a value-stream-centered budget structure.
  • At the Portfolio level, SAFe suggests that budgeting be done per Value Stream, at a granularity no finer than one Program Increment, so the Enterprise assigns each Value Stream (= each ART) a budget based on the expected outcome of that Value Stream.
  • Cost transparency is provided at the highest level of abstraction the organization has, i.e. you'd measure the "cost per epic" at the Portfolio level [e.g. "the new iPhone"]. While it would theoretically be possible to drill this down to Value Stream and even team level, SAFe favours TCO (Total Cost of Ownership) with a systemic "Optimize the Whole" view.
  • SAFe standardizes Story Points across teams and then uses Story Points as a "hard currency", in the sense that stakeholders and finance understand where the money goes ("expensive" and "cheap" feature requests). Dean indicates that a workable PBI (i.e. something you'd put in a Sprint) should be no bigger than 8 Story Points, while one Program-level Epic might be 5k SPs and another 1k (also providing a dimension for "fairly cheap" and "fairly expensive"), then counting on the Law of Large Numbers to make this "sufficiently accurate" for value-based prioritization.
  • Dean gave examples of a single SP costing between 800 and 1,800€ depending on the value stream: this gives a fairly accurate price tag for a portfolio topic (see the first sketch after this list).
    Again, this relies on the Law of Large Numbers, and it will (depending on the distribution) be off more often than not when drilled down to the User Story level. Again, it's about optimizing a system, not a component of the work.
  • In PI Planning (i.e. the big, quarterly planning event), each overarching PI Objective ("Sprint Goal") is assigned a "business value" ranging from 1 to 10, where 10 is the most valuable and 1 the least valuable. Teams then collaborate during the PI to deliver according to WSJF (weighted shortest job first), i.e. to deliver maximum measurable business value from a customer perspective, keeping in mind that the focus is the delivery of a valuable product increment as an ART, not the completion of individual team objectives.
  • ROI tracking is then done in a simple yet sophisticated process: Business Owners ("real POs") rate the value delivered per PI objective (i.e. Iteration Goal) with anything between 0 (done, but no value) and 10 points (done, EXTREMELY valuable) to give feedback on how well the plan (i.e. the PI investment) worked out.
  • Teams receive transparency on whether they delivered their own team objectives, and also on whether the value stream delivered. Inaccuracies in the value forecast become the subject of retrospectives within the team(s).
  • From a management perspective, the goal of PI Planning is to move delivery value forecasting accuracy (i.e. how much the ART will deliver in the PI) as close to 100% as possible, which translates into a corresponding reliability in cost estimates (i.e. SAFe comes with a promise of catching potential budget overruns early - a HUGE thing for large enterprises!). A second sketch after this list illustrates this planned-vs-actual calculation.
  • One of Dean's personal pet peeves is MBO (Management by Objectives) with monetary incentives, because it not only doesn't work in knowledge work, but also causes dysfunction. As a current work in progress, he is creating educational material for middle management on why and how to move away from MBO. SAFe encourages a salary structure without any performance bonuses beyond enterprise-wide profit shares.
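
To make the "Story Points as hard currency" arithmetic concrete, here is a minimal sketch in Python, assuming a purely illustrative 1,200€ per Story Point (somewhere within the 800-1,800€ range quoted above) and made-up Epic sizes; none of these figures are prescribed by SAFe.

```python
# Minimal sketch: Story Points as a "hard currency" for portfolio cost transparency.
# The cost-per-SP figure and the Epic sizes are illustrative assumptions, not SAFe defaults.

COST_PER_SP_EUR = 1200  # assumed value within the 800-1800 EUR range mentioned above

# Hypothetical Program-level Epics with rough Story Point estimates
epics = {
    "Epic A (large)": 5000,  # ~5k SPs, "fairly expensive"
    "Epic B (small)": 1000,  # ~1k SPs, "fairly cheap"
}

for name, story_points in epics.items():
    cost = story_points * COST_PER_SP_EUR
    print(f"{name}: {story_points} SP -> ~{cost / 1e6:.1f}M EUR")
```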
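
The planned-vs-actual business value feedback loop can be sketched just as simply. Assuming the 1-10 planned values from PI Planning and the 0-10 actual values assigned by Business Owners afterwards (the objectives and numbers below are invented for illustration), the ratio of actual to planned value is the kind of forecasting accuracy management wants to push toward 100%:

```python
# Minimal sketch of the planned-vs-actual business value feedback loop.
# Objectives and scores are invented for illustration; the 1-10 planned /
# 0-10 actual scales follow the description above.

pi_objectives = [
    # (objective, planned business value 1-10, actual business value 0-10)
    ("Ship checkout redesign",      8, 8),
    ("Integrate payment provider", 10, 7),  # partially met
    ("Reduce build time",           3, 4),  # worth more than expected
]

planned = sum(p for _, p, _ in pi_objectives)
actual = sum(a for _, _, a in pi_objectives)

# "Forecast accuracy": how much of the planned value the ART actually delivered.
print(f"Planned: {planned}, Actual: {actual}, Accuracy: {actual / planned:.0%}")
```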

Summary

SAFe discourages utilization-based accounting and favours value-delivery-based accounting. Executive management retains full transparency and control over cost.

Fiduciary accountability is realized through the various feedback mechanisms built into SAFe, which ensure the ART operates to continuously maximize ROI based on its assigned funding. Failures to deliver the presumed value are revealed early and continuously.

SAFe encourages optimizing the system as a whole, which requires moving (the impact of) CapEx and OpEx decisions to the Portfolio level and away from components of the ART. This prevents local cost optimization, which could be very expensive for the organization as a whole.

2 comments:

  1. Michael, nice article! Thanks for writing it. I appreciate the explanation of the portfolio logic in SAFe, the emphasis on optimizing the whole and the focus on value flow vs. cost for short(er) increments. I would still think that measuring flow every Sprint, as we do in LeSS, is better than doing it only once per PI, just as being able to change the organisation's direction more often than quarterly is very important and could be impeded by the hierarchy and infrequent planning events... I am interested in whether Dean, SAFe instructors and experienced practitioners actually achieve greater agility in practice than what is suggested in the framework. I hope so.

    I'm interested in more detail on how the business value (a 1-10 scale - for me, not adequately differentiated if this is a linear scale) is estimated, when and by whom. I think I understand that it is estimated both before and after delivery, with variance between the before and after estimates being a subject for team retro reflection? If so, it sounds like the team is who estimates business value? Or are the retrospectives inclusive of stakeholders and business representatives, possibly clients in a B2B environment?

    There is an item which appears to need clarification, and may be an error in bullet points 4 & 5: if an item for a Sprint typically should not exceed 8 SP (maybe an average of 5?) and a program-level Epic is ~5k SP, you are saying that a program-level Epic is 1000x the size of a PBI. You then mention an SP costing 800-1,800 Euros; let's use 1,000 Euros for an example. SP = 1k Euros, PBI = 5k Euros, Epic = 5 million Euros? Must be talking about some REALLY big projects here! (Or the math is wrong, which I suspect.) I would guess that there is an order or two of magnitude too much in the suggested Epic sizing, as it would be useless for either planning or budgeting at this size...

    Please keep up the great blog posts and thanks again for treating this important topic!

  2. Greg,

    SAFe does indeed advocate Sprint Reviews and Retrospectives - but considering the nature of SAFe, the big "PI Review" is to be understood more as a compromise than as an imperative: when you're forced to work with 100+ people from across the globe, potentially from multiple subsidiaries, having one big Review with everyone present is simply a massive cost factor.
    As SAFe plans "batches" of 5 Sprints as one Program Increment at once, there are small and big synchronization points.

    Regarding the business value assignment: it's pretty much an arbitrary number intended to prioritize the teams' backlogs from the business side. This estimation is done by Business Owners (the people we'd call "THE Product Owner" in LeSS) according to the impact of an objective on the relevant business outcome, so a "10" would be make-or-break for the business while a "2" might be "nice to have".

    The reason the numbers are revised after each Increment is that new information may actually have increased the value of an objective (e.g., we expected to sell 100k units due to the new feature, but sold 150k thanks to early delivery => a 150% result), or because objectives can be "partially met" (e.g., the teams did only a handful of Stories of a Feature Objective).

    Now, I myself questioned whether there is "Partially Done" on Features in SAFe. Dean stated that "all-or-nothing credit is unfair when an objective is so big that it takes months. It creates a bad mood and takes away the sense of achievement, so partial credit is a motivational factor. We expect that the teams still deliver Stories based on the DoD and don't deliver partially done Stories - but an objective can be a big bunch of Stories."
    Note that a Feature is a level of abstraction in SAFe that is (based on Flight Level 2 Kanban) subject to a WIP limit, so we won't have the problem of teams on an ART cherry-picking the easy Stories from multiple objectives without getting the Feature done.

    In regard to your last point - yes, that's the point! SAFe isn't intended for small work. Enterprise development can be HUGE at the portfolio level. After all, we're not talking about strategic capabilities or features, but about programs(!).
    Now, a Program is an abstraction level above(!) value streams, so you would typically set up an entire organization for a couple of years to deliver a Program, such as the iPhone, LTE - or the Tesla Model X.

    Having worked in enterprise programs before, I can attest that a 5m budget is still fairly small for a solution - the biggest program I've been involved with had a solution that would have been around 200k Story Points. And yeah, it was too big. Had we done this level of estimation, every Kanban practitioner would have shouted: "Stop this madness right here. Break it down into multiple smaller solutions, then deliver increments!"

    Note that when you have 4-layer SAFe, you're never talking about stuff you could do with LeSS - this is only intended for organizations with thousands of developers working on the same stuff for years!

    And no, I'm not getting into the question of whether a smaller number of better organized developers could do it quicker ... each program is based on the corporate reality and tons of assumptions.
