Assessment Drives Instruction
Let’s face it: all trainers and facilitators work hard to deliver a compelling and successful program. During the actual program delivery, the best facilitators know the material well and strive to make the learning process both beneficial and entertaining. While we all enjoy the congratulatory “This was a great session, you rock!” comments, these are not the kind that truly inform our assessment of program delivery and content. “But wait!” you cry, “I do give feedback, and I receive it, too!” Yes, we as facilitators do receive feedback from our participants, both in real time and through participant evaluations. But more must be done to avoid the 11th and final reason training programs fail:
Failure To Provide Feedback & Use Information About Results
All stakeholders in the program design and delivery require feedback. Employees need to see progress, developers and designers need feedback on program design, facilitators need feedback to see if adjustments should be made to delivery, and clients need feedback on a program’s success. Without such feedback, a program may not reach expectations.
Sharing a constant flow of feedback and evaluation data is key to evaluating and improving training. Let’s look again at the chart of Kirkpatrick’s four levels of training evaluation:
| Level | Evaluation Type (What Is Measured) | Evaluation Description and Characteristics | Examples of Evaluation Tools and Methods | Relevance and Practicability |
|---|---|---|---|---|
| 1 | Reaction | Reaction evaluation is how the delegates felt about the training or learning experience. | “Happy sheets,” feedback forms; verbal reaction; post-training surveys or questionnaires. | Quick and very easy to obtain; not expensive to gather or analyze. |
| 2 | Learning | Learning evaluation is the measurement of the increase in knowledge, before and after. | Typically assessments or tests before and after the training; interview or observation can also be used. | Relatively simple to set up; clear-cut for quantifiable skills; less easy for complex learning. |
| 3 | Behavior | Behavior evaluation is the extent of applied learning back on the job (implementation). | Observation and interview over time are required to assess change, relevance of change, and sustainability of change. | Measurement of behavior change typically requires the cooperation and skill of line managers. |
| 4 | Results | Results evaluation is the effect on the business or environment produced by the trainee. | Measures are already in place via normal management systems and reporting; the challenge is to relate them to the trainee. | Not difficult for an individual, unlike for a whole organization; the process must attribute clear accountabilities. |
The data gathered from the Kirkpatrick guidelines above can
- improve learning design and facilitation,
- allow individuals designing and implementing the programs to make adjustments, and
- illustrate the value of the program to all stakeholders.
Most importantly, the results may be used to make adjustments in the design, development, and delivery of the program. Routinely communicating this data creates a process of continuous improvement, making a successful program even more successful.
In our current economic climate, training must be results-driven. Every dollar spent on training and development must be evaluated, and programs adjusted, at every opportunity. The 11 failures of training need not happen to you or your organization. Assessment truly does drive instruction, and it informs where training dollars will be spent in the future.