Rebooting ALM, Part IV: Fantasy
This is the final part of the Rebooting ALM series. You should also read:
Part I: Evolution
Part II: Power
Part III: Weakness
We’ve covered where ALM tools excel, and where they falter. Now, allow me to fantasize about how we can take a leap forward and start solving actual problems for real ALM users (that’s us).
Most of our work starts with a backlog of requirements. We should acknowledge that we don’t use the word “requirement” properly. “Requirements” are really assumptions that we think will make the product successful. If we can convince ourselves to make that change, we can turn how we manage “projects” into what we really should be doing: continuously experimenting and learning, incrementally building the right product.
That means that tools should help us manage experiments, not requirements. Experiments that are short and feed back into the system. Experiments with concrete pass/fail criteria, tested not in QA, but with real customers.
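To make that a bit more tangible, here is a minimal sketch of what such an experiment record might look like. The names and fields are my own invention, not any existing tool’s API:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Outcome(Enum):
    PENDING = "pending"
    PASSED = "passed"
    FAILED = "failed"


@dataclass
class Experiment:
    """A short, time-boxed experiment with a concrete pass/fail criterion."""
    hypothesis: str        # what we believe will make the product successful
    metric: str            # what we measure with real customers, not in QA
    pass_threshold: float  # the concrete criterion that decides pass/fail
    deadline: date         # experiments are short; this forces an end date
    outcome: Outcome = Outcome.PENDING

    def evaluate(self, measured_value: float) -> Outcome:
        """Feed real customer data back into the system and record the result."""
        self.outcome = (Outcome.PASSED if measured_value >= self.pass_threshold
                        else Outcome.FAILED)
        return self.outcome
```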
Yes, our tools need to cover the whole circle of product life.
Options, please
Experiments are the building blocks of our product. But how do we choose which experiment to run? If requirements are “what we decided to build”, there are obviously other options we left behind. Why did we decide that way? Nothing is ever documented in our one-stop shop of knowledge.
Our futuristic ALM tool can support managing options. It will give us prioritization mechanisms to compare options, like Cost of Delay. We can at least have a documented discussion on why we chose one set of options over another.
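One hedged sketch of how such a tool might rank options is CD3 (Cost of Delay Divided by Duration), a common way to apply Cost of Delay. The option names and numbers below are made up:

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    cost_of_delay: float   # value lost per week if we postpone this option
    duration_weeks: float  # estimated time to deliver it

    @property
    def cd3(self) -> float:
        """CD3: Cost of Delay divided by Duration -- higher means do it sooner."""
        return self.cost_of_delay / self.duration_weeks


# Hypothetical options; in the fantasy tool, these would come from the backlog.
options = [
    Option("Self-service signup", cost_of_delay=8000, duration_weeks=2),
    Option("Reporting dashboard", cost_of_delay=12000, duration_weeks=6),
    Option("SSO integration", cost_of_delay=5000, duration_weeks=1),
]

for opt in sorted(options, key=lambda o: o.cd3, reverse=True):
    print(f"{opt.name}: CD3 = {opt.cd3:.0f}")
```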
Unfortunately, the way we naturally make decisions is flawed. Our heads are full of biases, and most of the time we’re not even aware of them. If we collaborate and discuss options, we will eventually discuss the assumptions behind them. Imagine if our tool helped us capture these assumptions. It would take collaboration to a whole new level.
Assumptions have a life cycle of their own. They are first created in our minds, then later categorized as either an assumption or a certainty. If we actually discuss them, they turn into hypotheses. Finally, they are either shot down (by data, or by someone more powerful) or converted into experiments that confirm or disprove them. You want to manage a life cycle? This is far better value.
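If a tool were to track that life cycle, it could be as simple as a small state machine. The states and transitions below are inferred from the paragraph above, not taken from any existing product:

```python
from enum import Enum, auto


class AssumptionState(Enum):
    CAPTURED = auto()    # first written down, straight from someone's head
    CERTAINTY = auto()   # the team decides it needs no testing
    HYPOTHESIS = auto()  # discussed and phrased so it can be tested
    REJECTED = auto()    # shot down by data (or by someone more powerful)
    EXPERIMENT = auto()  # converted into an experiment that confirms or disproves it


# Allowed transitions in the assumption life cycle described above.
TRANSITIONS = {
    AssumptionState.CAPTURED: {AssumptionState.CERTAINTY, AssumptionState.HYPOTHESIS},
    AssumptionState.HYPOTHESIS: {AssumptionState.REJECTED, AssumptionState.EXPERIMENT},
}


def advance(state: AssumptionState, to: AssumptionState) -> AssumptionState:
    if to not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Cannot move an assumption from {state.name} to {to.name}")
    return to
```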
What do managers want?
Predictability. After all these years of chaotic development, they would like to know the future. They no longer believe those development burn-down charts (local optima, remember?). While I would like my super ALM tool to predict the future, I’m afraid we’ve already passed “Back to the Future” day. But all is not lost.
We’ve discussed Cost of Delay as a way to prioritize options. Quantifying CoD is one way to project value. The problem is that we’re obsessed with cost, and usually forget to talk about value. When we look at ROI to make decisions, the Return part is completely fabricated (based on gut feelings or HiPPOs), so gaining some real insight there is obviously valuable.
The I part of ROI is Investment. We don’t need to guess it anymore, because today’s tools already collect masses of data about actual work. Automation and reporting FTW!
So imagine our hyper ALM tool helping us create a value and cost model. Presto! We now have an ROI we can actually trust (or at least be more confident in).
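As a toy illustration of that model (all numbers invented), the tool could derive the Investment side from work it already tracks, leaving only the value side as an explicit estimate:

```python
def roi(projected_value: float, hours_logged: float, hourly_rate: float) -> float:
    """ROI = (Return - Investment) / Investment.

    Investment comes from data the tool already collects (actual hours worked);
    projected_value is the modeled Return -- still an estimate, but an explicit one.
    """
    investment = hours_logged * hourly_rate
    return (projected_value - investment) / investment


# Hypothetical feature: 320 hours logged at $90/hour, $60,000 projected value.
print(f"ROI: {roi(60_000, 320, 90):.0%}")  # ROI: 108%
```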
So far we’ve looked at how tools can help us make better decisions. But do they help in creating a successful product? We should check at the end. Not the end of development or testing, and not even deployment. Today we define quality where we can see it (in the bug system and, sometimes but often not, in the support system).
I want to know what happened to the feature when it got to the customers. What happened to that feature we worked so hard on, but the customer never noticed and never used? Or that clunky feature that always seems to crash at application start, so the customer doesn’t enjoy the rest of our fab app?
The super-duper ALM tool measures and monitors what happens when customers use the product. It gives us actual data on usage and satisfaction. In doing that, it pushes us to redefine what success is. It can compare usage across different versions, including historic ones, and clue us in on how real people actually use the product.
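A minimal sketch of that kind of version-to-version comparison, assuming the tool already collects per-feature usage events (the feature names and counts below are invented):

```python
# Hypothetical usage counts per feature, keyed by product version.
usage = {
    "2.0": {"export": 1200, "search": 9800, "wizard": 40},
    "2.1": {"export": 1350, "search": 9500, "wizard": 25},
}


def adoption_change(old: str, new: str) -> dict[str, float]:
    """Relative change in feature usage between two versions (including historic ones)."""
    return {
        feature: (usage[new].get(feature, 0) - count) / count
        for feature, count in usage[old].items()
    }


for feature, change in adoption_change("2.0", "2.1").items():
    print(f"{feature}: {change:+.0%}")  # e.g. wizard: -38% -- nobody uses it; why?
```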
Sounds complex? Our magic ALM tool brings us back to the old, simple days. It doesn’t let you build everything you think is needed. It’s not a bottomless storage pit for every possible bug out there. It guides us to write less code, which is tested more. It guides us toward managing quality in real time, rather than saving it for later. It suggests options that save money, because complexity is expensive. It guides us toward simplicity.
And you know what?
Many of these things already exist. Not in the expensive tools, not yet. But everything I wish for can be managed with whiteboards, spreadsheets, and maybe a couple of sticky notes. Some un-common sense may also be required.
Do you want to reboot ALM? Press the Start button now.
Reference: Rebooting ALM, Part IV: Fantasy from our NCG partner Gil Zilberfeld at the Geek Out of Water blog.