Regular readers of my blog will know how much I loathe the term “lessons learned”. I’ve written and spoken frequently about implementation issues such as waiting too long to identify lessons, storing them in ways that make them hard to access, and failing to respond to them appropriately. One of the more common problems I have encountered when reviewing lessons is a lack of contextual information to enable a reader to understand whether a given lesson is going to help or hinder them.
This is not a concern when the lessons are going to be applied to the same project in which they were identified. For example, if a team identifies a learning from a retrospective and decides to apply it afterwards, the likelihood of context drift is low, hence the lesson is still apropos. It is also not as big a deal if lessons are shared verbally, for example, through a Community of Practice meetup. During such events, if a participant shares a learning, contextual information will usually be shared through the normal back and forth discussion about the practice.
But when practitioners are reviewing lessons captured in the past without the benefit of access to the originator, if context is absent there is a much greater likelihood of practices being applied in situations where they won’t be helpful (i.e. a false positive) or practices being discarded based on the incorrect assumption that they weren’t applicable (i.e. a false negative). In both cases, an opportunity to benefit from organizational knowledge assets is lost.
So what context might we capture?
At a bare minimum, we should record when the lesson was identified, on which project, and by whom. Doing so takes the least effort on the part of the lesson identifier and the lesson curator (the person responsible for distilling “raw” lessons into published knowledge). With this, an interested reader can follow up with the person who identified the lesson to get the missing context.
Unfortunately, memories fade with time, and people move into new roles or leave the company, so such minimalist context may be insufficient. Time and cost permitting, the following additional types of contextual information could be captured:
- Quantitative metrics about the project such as its duration, cost, and peak staffing level
- The stage or phase of the project to which the lesson applies
- The type of life cycle used to deliver the project (e.g. deterministic, adaptive, iterative)
- A categorization of the project’s scope (e.g. process engineering, launching a new product, building a dam)
- Specific background details that were stripped from the main description when the lesson was scrubbed for general use, but which shouldn’t be discarded entirely
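To make the idea concrete, here is one minimal sketch of what a lesson record carrying this context might look like, written as a Python dataclass. All of the field and class names are illustrative assumptions on my part, not a standard schema; adapt them to whatever your knowledge repository actually supports.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical record pairing a distilled lesson with the context
# fields discussed above. The bare-minimum fields are required;
# the richer context is optional, captured time and cost permitting.
@dataclass
class LessonRecord:
    summary: str                           # the distilled, published lesson
    identified_on: date                    # when the lesson was identified
    project: str                           # on which project
    identified_by: str                     # by whom
    # Optional quantitative metrics about the project
    duration_months: Optional[int] = None
    cost: Optional[float] = None
    peak_staffing: Optional[int] = None
    # Optional qualitative context
    phase: Optional[str] = None            # e.g. "planning", "closeout"
    life_cycle: Optional[str] = None       # e.g. "deterministic", "adaptive"
    scope_category: Optional[str] = None   # e.g. "new product launch"
    background_notes: list = field(default_factory=list)

# Example (fictional data): a minimally contextualized lesson
lesson = LessonRecord(
    summary="Schedule integration testing earlier in the release cycle.",
    identified_on=date(2023, 6, 1),
    project="ERP Rollout",
    identified_by="J. Smith",
    life_cycle="adaptive",
)
```

A structure like this also makes the trade-off visible: only four fields are mandatory, so the minimalist capture stays cheap, while the optional fields give curators a place to put richer context when it is worth the effort.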
This seems like a lot to capture, and it will take effort. But if we remember that lessons are an investment in improved future project outcomes, the returns will justify the costs.