Open LinkedIn on any given day and you’ll see someone talking about how L&D needs to get better at proving impact. Peruse any awards criteria and “impact” will be in there.
We all know it’s something we should do. But it’s easy to get bogged down with the production side of things and fall into a “build it and ship it” frame of mind.
But by not proving impact, are you running the risk of your L&D function being viewed as a drain on the budget? How can you change that and start proving the impact of L&D on the business?
What does “impact” actually mean?
To paraphrase my fellow learning designer, Ross Dickie, it’s answering this question: “Has this thing we’ve invested so much time, money and effort into actually made a difference?”
Let’s break that down. To establish whether “the thing” made a difference, we need to know what the starting point, or benchmark, was. L&D programs usually emerge from a need of some kind. Whether the business recognizes that need or the L&D function uncovers it doesn’t matter: the need is there. And it’s usually tied to a business priority or goal.
So, there’s your starting point. You know what you’re trying to achieve, so now you can measure it.
What should you measure?
The metrics you measure will depend on the program. If it’s a career development program, you could measure internal promotions, job moves and attrition. If it’s onboarding, you could measure early attrition and speed to competence. And if it’s leadership development, you could measure changes in leadership capability before and after the program.
Measure the before and after
That’s exactly what our client South Western Rail did on their award-winning LEAP leadership development program. Our Insights and Analytics team worked with them on audience research and evaluation to provide direct before-and-after comparisons. The result? An average 12% improvement in leadership capability among participants, with 95% confidence that the gains came from the program rather than chance. The program also had ripple effects that are harder to quantify, like better collaboration and increased confidence.
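If you can capture matched scores for the same people before and after a program, even a simple paired analysis tells you whether the improvement is bigger than chance alone would explain. Here’s a minimal sketch in Python; the scores, sample size and tooling (NumPy and SciPy) are illustrative assumptions, not real client data.

```python
# A sketch of a paired before/after comparison (illustrative data only)
import numpy as np
from scipy import stats

# Hypothetical capability scores (0-100) for the same eight participants
before = np.array([62, 55, 70, 48, 66, 59, 61, 53])
after = np.array([71, 60, 78, 57, 70, 68, 66, 62])

gains = after - before
mean_gain = gains.mean()

# Paired t-test: is the average gain bigger than chance would explain?
t_stat, p_value = stats.ttest_rel(after, before)

# 95% confidence interval for the average gain
ci_low, ci_high = stats.t.interval(
    0.95, df=len(gains) - 1, loc=mean_gain, scale=stats.sem(gains)
)

print(f"Average gain: {mean_gain:.1f} points (p = {p_value:.4f})")
print(f"95% confidence interval: {ci_low:.1f} to {ci_high:.1f} points")
```

If the whole interval sits above zero, you can say with reasonable confidence that capability genuinely improved, which is the kind of statement behind the LEAP figures above.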
Look for correlation if you can’t prove causation
Correlation and causation are not the same thing, and you’ll often see “correlation does not imply causation” quoted by the data-savvy.
Correlation is a statistical association between two things: when thing A changes, thing B tends to change too. It doesn’t tell you why.
Causation is a direct cause-and-effect relationship: thing A makes thing B change.
It’s easy to assume that to prove impact, you have to prove causation. But that’s not always practical. For example, it might be too risky, performance-wise, to hold people back in a control group, or the business might need a fast intervention. In these cases, correlation is your friend.
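As an illustration, here’s a minimal correlation check in Python. It assumes you can export a per-learner engagement measure (say, modules completed) alongside a business metric (say, a performance score); the names and numbers are invented for the example.

```python
# A sketch of a correlation check (illustrative data only)
from scipy import stats

modules_completed = [2, 5, 8, 3, 10, 7, 6, 9, 4, 1]
performance_score = [58, 64, 75, 60, 82, 71, 69, 80, 63, 55]

# Pearson's r measures how strongly the two move together (-1 to +1)
r, p_value = stats.pearsonr(modules_completed, performance_score)
print(f"Correlation: r = {r:.2f} (p = {p_value:.4f})")

# A strong positive r says the program and performance move together.
# It doesn't prove the program caused the improvement - treat it as
# supporting evidence, not a verdict.
```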
Proving direct causation can be tricky anyway. Learning doesn’t happen in a vacuum, so it’s not always easy to say whether an improvement came from the L&D intervention, operational changes, coaching, or a combination of all of the above. Dr Ina Weinbauer-Heidel’s 12 levers of transfer effectiveness show just how much transfer depends on factors outside the L&D function: five of the 12 levers sit with the organization and three with the individual learner, leaving just four within the L&D function’s control.
In our South Western Rail example, we used a confidence level to show how likely it was that the gains came from the program rather than chance. But what if you don’t have data scientists in your L&D function? Or the budget to partner with a behavioral insights team?
Look for evidence of learning transfer
In the Custom Learning team, we’re all huge fans of Will Thalheimer’s Learning-Transfer Evaluation Model (LTEM). Tiers 2 (learner activity), 5 (decision-making competence), 6 (task competence) and 7 (transfer to work performance) are good indicators of whether your program contributed to an improvement in the business.
Don’t forget the impact on learners
Engagement metrics might not be perfect, but they have their place in the overall data story of your program. Completion rates on their own get a bad rep, but pair them with a before-and-after performance comparison and you’re on the right track.
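Here’s a minimal sketch of that pairing in Python, assuming you can join completion records from your LMS to before-and-after performance scores for each learner. The column names and figures are illustrative assumptions.

```python
# A sketch of pairing completion data with performance gains (illustrative data)
import pandas as pd

df = pd.DataFrame({
    "completed_program": [True, True, False, True, False, True, False, True],
    "score_before": [61, 55, 63, 48, 59, 66, 52, 57],
    "score_after": [70, 62, 64, 58, 60, 74, 53, 66],
})

df["gain"] = df["score_after"] - df["score_before"]

# Average improvement for completers vs. non-completers
print(df.groupby("completed_program")["gain"].mean())
```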
Learner perception scores also get a bad rep. But measure perceptions of the right things and that bad rep seems a bit undeserved. Instead of asking learners whether they “liked” the facilitator, the slide deck or the animations, ask whether they found the activities useful. Ask them to identify what they’ll be able to do differently or better as a result of the program. Likes are cheap; look for the value.
How Mindtools can help
You don’t have to prove impact on your own. This is the kind of thing we love doing at Mindtools! Here are just some of the ways we can help:
- Use our Building Better Managers report as a starting point to benchmark your managers against 12 essential skills
- Partner with our Custom Learning team on impactful learning that addresses real business problems
- Partner with our Insights and Analytics team to research, track, and evaluate the success of your learning programs
- Use our Manager Skills Assessment to discover skills gaps and get a data-driven roadmap for leadership growth
- Use our adaptive Manager Skills Builder to develop your managers and deliver measurable improvements in leadership effectiveness
Let’s prove the impact of your L&D function to your business. Book a chat with our experts now.