I always knew evaluations were important. I'm sure we are all familiar with the typical emailed evaluation survey, sent after a conference ends or a training concludes, that asks us how we liked the program. After we addressed evaluation strategies in class a few weeks ago, I can now finally use the "training lingo," so to speak, regarding Kirkpatrick's evaluation model and better understand the importance of each level in the model.
Those typical emailed surveys or questionnaires are Level I evaluations gauging learners' reactions to the program: how much did participants like the space, the flow, the food, the workshops, and so on? While there is nothing wrong with Level I evaluations, I believe there are sometimes opportunities to dig a little deeper and truly evaluate learning change. I have the least experience receiving Level II and III evaluations because I think they are rarely done, perhaps due to time and resource constraints. We discussed in class that most trainers or organizations don't have the time it takes to evaluate learning change and simply want to check a box saying they did an evaluation. Whether it is worth devoting time and energy to Level II, III, and especially Level IV evaluations depends on what organizations do with the data collected and how that data affects the bottom line.
In my particular circumstance, I need to know whether a new advisor acquired the knowledge, skills, and commitment needed to be successful in the job. Therefore, I feel my department has to take the time to conduct a Level II evaluation. I will do this by collecting results on certain learning activities during the new hire training. I'll then conduct observations a month after the training ends to see whether the new advisor is accurately applying what was learned (Level III). Yes, this will take precious time out of my day, but employee retention is on the line. Since these new hires fall under my supervision, I'm somewhat responsible for their success. I think the trap trainers fall into is believing they are not directly responsible for a participant's success on the job; they punt it to the manager who directly oversees that employee. Most organizations should include managers and company leaders in evaluation strategies as much as possible if they truly want to measure and evaluate learning change. The time and energy it takes to complete Level II, III, and IV evaluations is worth it, but I think the commitment needs to come from the top.