You Can’t Fail if You Don’t Test
Identifying and isolating improvements, and attributing them exclusively to training, is a great challenge. Phillips and Phillips have found that failure to isolate the effects of training is reason #9 that most training efforts fail. In this economic climate, both providers and consumers of training must be sure that training is measurable and valuable. In attempting to measure effectiveness, many professionals still use the Kirkpatrick model.
Kirkpatrick Model for Evaluating Effectiveness of Training Programs
Level 1: Reactions
At this level, we use participant feedback to measure learner satisfaction. Analysis at this level enables the facilitator and training administrator to make decisions about program delivery, content, methodology, and so on.
Level 2: Participant learning
This level evaluates knowledge, skills, and attitudes through pre-test and post-test measures. It is important to note, however, that learning at this level does not necessarily translate into application on the job. Results here both provide the data needed to fine-tune program design and serve as a major indicator for measuring transfer of learning.
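As an illustrative sketch (not part of the Kirkpatrick model itself), the pre-test/post-test comparison above can be quantified in a few lines. The scores below are hypothetical, and the normalized-gain formula is one common option for expressing improvement as a share of the improvement that was possible:

```python
# Hypothetical pre-test and post-test scores (percent correct) for one cohort.
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 85, 66, 88, 80]

def average_gain(pre, post):
    """Mean raw improvement from pre-test to post-test, in percentage points."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

def normalized_gain(pre, post):
    """Mean improvement as a fraction of each learner's possible improvement.

    Learners who already scored 100 on the pre-test are excluded,
    since they have no room to improve.
    """
    gains = [(b - a) / (100 - a) for a, b in zip(pre, post) if a < 100]
    return sum(gains) / len(gains)

print(round(average_gain(pre_scores, post_scores), 1))   # → 20.4
print(round(normalized_gain(pre_scores, post_scores), 2))  # → 0.51
```

A raw gain can flatter programs whose participants started low; the normalized figure makes cohorts with different starting points easier to compare.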
Level 3: Transfer of learning
At this level, we measure the application of the learning in the work context, which is not an easy task. Inputs can come from participants and their supervisors. At this point assessors must ask, “What changes can be attributed to the training?” Isolating these effects is challenging and must be carefully customized for each training and development program.
Level 4: Results
Most organizations would like to measure training effectiveness at this level, but doing so is impractical because direct causation is difficult to show. It is nonetheless worthwhile to attempt to measure the program's effectiveness in terms of business objectives such as increased productivity, decreased defects, and reduced cycle time.
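To make the causation problem concrete, here is a minimal sketch (with invented numbers) of one way to partially isolate a training effect on a business metric: compare the change in a trained group against the change in an untrained comparison group, in the spirit of a difference-in-differences check:

```python
# Hypothetical monthly defect counts, before and after the training period.
# The untrained comparison group helps separate the training effect
# from the background trend affecting everyone.
trained_before, trained_after = 120, 84
control_before, control_after = 118, 110

trained_change = trained_after - trained_before    # -36
control_change = control_after - control_before    # -8

# Effect attributable to training, net of the background trend.
estimated_training_effect = trained_change - control_change

print(estimated_training_effect)  # → -28 (negative means fewer defects)
```

This does not prove causation, but it answers the cruder objection that defects might have fallen anyway: only the reduction beyond the comparison group's trend is credited to the program.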
The good news is that at least one of the techniques identified in the chart will work in every setting, so the isolation issue can be addressed in every impact study. To show training's real value, designers, developers, and evaluators must accept the challenge of tackling it.
While Levels 1 and 2 can be built into the design of almost any training program, Levels 3 and 4 are far more nuanced. As program developers, we must strive to measure Levels 3 and 4. As organizations determined to get high value for our training dollars, we must partner with highly respected and trusted training professionals willing to customize their training and assessments.