As training functions become more strategically linked to corporate objectives and organizations increase their training expenditure, trainers are now under more pressure than ever to account for the money they spend and to make a strong business case in order to secure their budget. As a result, the evaluation debate is becoming increasingly pertinent, with many arguing that training needs to be evaluated at a higher level, to demonstrate its ‘return on investment’, which is notoriously difficult to do. This article outlines some of the key issues relating to evaluation and explores how best to establish the value of training.
CIPD surveys show year after year that organizations consider evaluation to be an important activity. At the same time, however, many admit that they struggle to get it right. According to the CIPD Training and Development Survey 2004, most trainers have a ‘gut feeling’ that training is valuable: 79% of respondents believed that the training delivered in their organization was ‘of great benefit’ and 15% believed that it was ‘of some benefit’. However, it is interesting to note that 6% – a significant figure – weren’t sure if training brought about any benefit. This highlights the difficulties in conducting evaluation.
It is also striking that the benefits of training linked directly to job performance (such as improved technical skills and higher competence) were ranked higher than the more general organizational benefits and ‘soft’ issues relating to training (such as motivation, job satisfaction and commitment). The former were generally considered to be ‘of great benefit’ while the latter were seen as being ‘of some benefit’, which again demonstrates how hard it is to measure less tangible benefits of training.[1]
Levels of Evaluation
Given these inherent difficulties, it is perhaps not surprising that many organizations tend not to evaluate any further than the first two of Kirkpatrick’s four levels (reactions, learning, behavior and results).[2] The American Society for Training and Development (ASTD) survey of 2003 found that, while 75% of organizations in 2002 evaluated at level one and 41% at level two, only 21% measured at level three and a mere 11% took it as far as level four.[3] This supports the notion that the higher the level of evaluation, the more difficult it is to pin down the data.
Quantitative Benefits
Yet some people argue that trainers don’t take the measurement of the business impact of training far enough, with some even proposing a fifth level, that of ‘return on investment’ (ROI). This goes beyond Kirkpatrick’s fourth level (which demonstrates the results linked to training in terms of reduced absenteeism or fewer accidents, for example), and attempts to quantify the monetary benefits of the training compared to the cost of its implementation.
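The arithmetic behind this fifth level is simple; it is the inputs that are hard to establish. As a sketch only, the calculation usually cited compares the net monetary benefit attributed to the training with its total cost. The figures below are entirely hypothetical:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return on investment as a percentage:
    net benefit (benefits minus costs) relative to total program cost."""
    return (benefits - costs) / costs * 100

# Hypothetical example: a program costing 20,000 that is estimated
# (however uncertainly) to have produced 50,000 in benefits.
roi = training_roi(benefits=50_000, costs=20_000)
print(f"{roi:.0f}%")  # 150%
```

The formula itself is trivial; as the next section argues, the real difficulty lies in isolating the `benefits` figure from everything else that affects organizational performance.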
The problem with this, however, is that the cause and effect chain of performance at the organizational level is so complex that it would be almost impossible to isolate with absolute certainty the exact benefits resulting from the training. There are so many other factors that contribute to performance at the organizational level, including the performance of other employees and departments, and external influences such as market conditions and the economic climate, that determining the financial impact of the training would take a great deal of laborious and time-consuming investigation. The evaluation process could then easily become an end in itself.
Qualitative Benefits
The qualitative benefits of training can be equally elusive, as the survey results above demonstrate. The learning that results from training is difficult to quantify in terms of hard data. Evaluating results in this way could prove to be an oversimplification of the real and wide-ranging effects. Learning is continuous and often unconscious, and the results may take many weeks or even months to filter through, especially when soft skills rather than directly job-related, technical skills are involved. As organizations now offer a wide range of ‘blended’ learning methods, from traditional training courses to coaching and e-learning, there is no clear start and end point for evaluating the benefits of learning.
To complicate things further, learning is not the only outcome of training. There are also many other intangible benefits – notably motivation, culture change, commitment and job satisfaction, some of which may be intentional, and others which may be an additional and beneficial ‘side-effect’ of the training. As Andrew Mayo points out in the September 2003 issue of Training Journal: ‘There may be good business reasons for doing something – for example, creating cultural change – where the link to resulting profitability is a long way off and very tenuous. In practice, relatively few interventions and programs have objectives aimed clearly at financial gain.’[4] Why, then, spend a great deal of time and effort trying to pinpoint the bottom line when this was not the goal in the first place? It may indeed be sufficient to demonstrate simply that there has been a change in attitudes and behavior, which would mean evaluating at levels one, two and three only. In fact, obtaining information directly from the learner can be just as valuable in many circumstances as bottom-line figures.
The key point here is that financial benefit is not the only type of value that can be gained from training: there are also many less tangible, non-financial benefits that are equally valuable. These can often be determined by obtaining information from the learners and their line managers. So, the humble ‘happy sheet’ can be a valuable tool after all.
Conclusion
The key to evaluation is to keep it simple: it need only be as complicated as you make it. You must gain some kind of understanding of the value of the training, whether it is quantitative or qualitative (or indeed both): sometimes it will be necessary to attempt to demonstrate a tangible financial benefit and other times it will suffice to establish that the learners benefited from it and that changes in attitude and behavior occurred. In either case, it is essential to ensure that the evaluation process isn’t so cumbersome that it becomes an end in itself, otherwise you may end up chasing something so elusive that you’ll never be able to find it.
When planning a training program or intervention, consider carefully at the outset whether it is worth investing the time and effort in trying to pin down evidence of the benefits and outcomes at every level. Your decision should be based on the type of training to be delivered, its intended objectives and outcomes, and the key stakeholders involved.
Finally, however you decide to approach evaluation, getting managers on board is crucial. Lack of management support is one of the key barriers to evaluation. As managers are the people who have the most contact with learners both before and after the training, they are the ones who are most likely to be able to identify the benefits resulting from it back in the workplace. They also play an important role in ensuring that the training is followed through and implemented. Perhaps one of the most effective ways of ensuring the success of your training is to encourage both line managers and learners to become more actively involved in evaluating their own learning. You’ll find some useful guidance on getting management buy-in to training and development processes in the methodology ‘How to Develop Managers as Facilitators of Learning’.
References

[1] Chartered Institute of Personnel and Development (CIPD), Training and Development Survey 2004, p 11.
[2] Donald Kirkpatrick, Evaluating Training Programs (Berrett-Koehler, 1994). For more information, see ‘Four Levels of Evaluation’ in the models and strategies element.
[3] American Society for Training and Development (ASTD) State of the Industry Review (ASTD, 2003), p 19.
[4] Andrew Mayo, ‘A Problem Always With Us?’, Training Journal, September 2003, p 40.