Two of the pillars of adaptive approaches are inspection and adaptation.
These can be applied to the product, service or result being produced by the team as well as to the way in which those are produced. For the former, Minimum Viable Products, Spikes, Minimum Marketable Features and Minimum Business Increments are used to reduce the impact of building the wrong thing.
But what about changes to the team’s way of working (WoW)?
Whether a team reviews its WoW on a scheduled cadence, such as Scrum's retrospectives, or uses a just-in-time approach, it will come up with improvement ideas.
Some of those ideas will be all or nothing.
For example, if the team has frequently encountered delays in getting support from an external subject matter expert to complete some work items, they could lobby their sponsor or project manager to have someone with those skills assigned to the team.
But many times, there might be more than one way to implement the improvement.
Let’s say a software development team recognizes that they need to improve their code quality, and there are many options available for doing so. Coding standards, code reviews, non-solo coding, test-first development, and automated code quality tools are just a few choices. The team might eliminate some of these based on their context. For example, they might not have sufficient budget left to purchase a new tool. But once they’ve eliminated those, they still have to decide how they will implement one or more of the remaining options.
This is where a Minimum Viable Improvement (MVI) can be utilized. Similar to an MVP which is used to maximize validated learning about a product with the least investment, an MVI can be used to validate the team’s belief that a given improvement idea will work.
In our example, let’s assume the team is interested in trying a non-solo development approach.
They could fully commit by having the whole team adopt pair or mob programming, but implementing one of these practices wholesale might be too disruptive, and if it doesn’t work, the impact would be significant. On the other hand, an MVI might be to find two volunteers within the team who are willing to try pair programming for their upcoming work items.
To be a valid experiment, certain variables would need to be controlled. The volunteers should be representative of the average capability within the team (i.e. you wouldn’t want to run the experiment with either your strongest or weakest developers), and the work items they pull should be similar to work completed in the past so that comparisons using quality metrics are possible.
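If the team already tracks a simple quality metric such as defects reported per work item, the before-and-after comparison can be as lightweight as the sketch below. All of the numbers, and the choice of defects per work item as the metric, are hypothetical; a team would substitute whatever quality data they actually collect.

```python
# Sketch of an MVI comparison: defects per work item for the pair-programmed
# items vs. a baseline of similar, previously completed solo items.
# All figures below are illustrative, not real project data.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

baseline_defects = [4, 3, 5, 4, 2, 3]  # comparable items completed solo
mvi_defects = [2, 1, 2, 3]             # items completed by the volunteer pair

baseline_avg = mean(baseline_defects)
mvi_avg = mean(mvi_defects)

# Relative improvement of the experiment over the baseline
improvement = (baseline_avg - mvi_avg) / baseline_avg

print(f"Baseline avg defects/item: {baseline_avg:.2f}")
print(f"MVI avg defects/item:      {mvi_avg:.2f}")
print(f"Relative improvement:      {improvement:.0%}")
```

With only a handful of work items, a difference like this is suggestive rather than conclusive, which is exactly why the next step is to decide whether a larger experiment is warranted.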
Once the work items have been completed, the team can regroup to review the results and decide whether to proceed with a larger experiment, pivot or tweak the pairing approach, fully productize the practice by making it part of their WoW, or punt it if they recognize it won’t work for them.
Taking an MVI approach limits the cost of learning and change to just the volunteers directly involved, and the risk of work being completed at a lower quality level is restricted to just the work items they pulled.
So the next time your team comes up with an improvement idea, propose that they frame it as a hypothesis and design an MVI to prove or disprove that hypothesis.
(If you liked this article, why not pick up my book Easy in Theory, Difficult in Practice which contains 100 other lessons on project leadership? It’s available on Amazon.com and on Amazon.ca as well as a number of other online book stores)