As a sales consulting and training company, we are often asked by clients and prospects whether we can measure the impact of our training, and occasionally whether we can guarantee increased sales. While we have confidence in our consulting, instructional design, and facilitation abilities, and understand the importance of these questions from both management and training perspectives, quantifying training's impact can be a slippery slope for any training company.
Isolating training from the many factors that may positively and negatively impact its success is a major pitfall in the evaluation process. Most training professionals recognize that the best training in the world may not be able to overcome the impact of poor management, low morale, unsatisfactory compensation structures, unreasonable sales goals, and a variety of other factors. Conversely, all too often mediocre training appears superb when supported with a marketing blitz, product price reductions, or economic upswings.
Despite these inherent challenges, we agree that evaluating training is a wise and necessary process. However, the process doesn't need to become a science project with empirical evidence to "prove" that the training intervention was solely responsible for the results.
Many organizations want to measure results, but are unable to provide the resources in time and money to support the process. To help those organizations, we suggest a lighter (tastes great, less filling) approach: “Metrics Lite.” “Metrics Lite” employs Donald L. Kirkpatrick’s 4 Levels of Evaluating Training Programs process as simply stated below:
Level 1: Reaction – Participant reaction to the training
Level 2: Learning – Change or increase in knowledge or skills
Level 3: Behavior – Extent of the application of learning
Level 4: Results – Effect on the business resulting from the training
To illustrate the "Metrics Lite" approach, we offer a real-life case study and our self-evaluation: what worked well, what we learned, and what might be a slippery slope. We were contacted by a leading mortgage lender's Call Center training department, which was interested in help transforming its agents from reactive "order takers" into proactive, needs-based solution providers. It was also important to them that the training's effectiveness be measured.
We proposed Kirkpatrick's model, with us designing and executing Levels 1, 2, and 3; because of monetary, time, and resource constraints, we would support only the design of Level 4.
Level 1
Action: Immediately following the training workshops, participants rated the relevance and effectiveness of the training, and the training methods and techniques employed.
Result: Participants viewed the training as highly effective and relevant to their jobs, with many offering specific positive comments.
Self-evaluation: Participants rated all questions highly, with the highest ratings for content relevance and the ability to transfer skills to the job. We learned that participants wanted more coverage of resolving objections and closing (definitely linked concepts!).
Level 2
Action: Participants completed written Pre- and Post-Tests (20 questions) assessing their knowledge and their application of the skills taught in the workshops. We used realistic customer situations to test participants' ability to apply what they had learned.
Result: Participants' Post-Test scores were higher than their Pre-Test scores, indicating increased knowledge and skill application.
Self-evaluation: The questionnaire was designed to be challenging, but it may have been too easy: relatively high Pre-Test grades left little room for a significant increase in scores. Our training philosophy is that people learn by doing ("I hear and I forget. I see and I remember. I do and I understand." – Confucius). Accordingly, we believe passing a written test has some limited merit in assessing skills learning, but it is certainly not a true indication of mastery of the training objectives.
Level 3
Action: Working with the training department, we monitored two to four recorded calls from a random sample of training participants and non-participants. Using a customized checklist of 22 sales behaviors, we rated the degree to which participants and non-participants applied the skills. We also conducted an online "perception" survey two months after the training, asking for feedback on the impact and usefulness of the training and materials.
Result: Participants applied the learned skills during calls more often following training. They also continued to refer to the training materials frequently.
Self-evaluation: We believe that assessing a participant's actual performance is the most important assessment. We were thrilled to see distinct improvement in participants' skills. It was also valuable to learn that the materials were still relevant two months after the training.
We do have two comments, though. First, we are concerned that we and training department personnel conducted the monitoring. We know that we were as objective as possible, but monitoring should be done impartially by a party who is not a stakeholder in the training's success. Second, the success of the training is predicated not just on the training workshop, but also on ongoing factors such as coaching by Sales Managers and the support of management.
Level 4
Action: The Call Center’s training department compared participant and non-participant conversion rates (the ability to convert a qualified caller’s interest in mortgage information into a closed mortgage) and cross-sales levels for two months following training to prior periods.
Result: Sales Agents who went through training had increased conversion rates and cross sales following training compared to prior periods, and higher overall levels than those who did not go through training.
Self-evaluation: This is obviously the bottom line for any client. However, as stated earlier, many other factors can make the numbers better or worse, regardless of the quality of the training. Given the training department's need to spend considerable hours on call monitoring, it's questionable whether the cost in time and manpower is worth the effort. Second, while the assessment appears very objective (it is purely statistical), the results were compiled by those with a stake in the training's success: the client's training department. We believe that if a Level 4 evaluation is to be done, it should be done by a third party, either a different area of the company or an outside expert vendor.
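As a rough illustration of the arithmetic behind this kind of Level 4 comparison, here is a minimal sketch in Python. All figures are invented for illustration (the client's actual data is not shown); the point is the structure of the comparison, which nets the trained group's gain against the change in a comparable untrained group over the same period — precisely the kind of control that guards against the external factors discussed above.

```python
# Hypothetical Level 4 comparison: conversion rates before vs. after training,
# for trained vs. untrained agents. All numbers are invented for illustration.

def conversion_rate(closed_mortgages, qualified_calls):
    """Share of qualified callers converted into closed mortgages."""
    return closed_mortgages / qualified_calls

# Trained group: two months before vs. two months after the workshops
trained_before = conversion_rate(48, 400)   # 12.0%
trained_after  = conversion_rate(66, 400)   # 16.5%

# Untrained (comparison) group over the same two periods
control_before = conversion_rate(50, 400)   # 12.5%
control_after  = conversion_rate(52, 400)   # 13.0%

# A simple difference-in-differences view: how much of the trained group's
# gain exceeds the change seen in the comparison group over the same period.
lift = (trained_after - trained_before) - (control_after - control_before)
print(f"Trained-group gain net of comparison group: {lift:.1%}")  # 4.0%
```

Netting out the comparison group's change is what lets a Level 4 evaluation attribute at least part of the improvement to the training rather than to a marketing blitz, rate changes, or an economic upswing.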
The American Society for Training and Development (ASTD) found that 45 percent of surveyed organizations gauged only trainees' reactions to courses (Bassi & van Buren, 1999). Overall, 93 percent of training courses are evaluated at Level 1, 52 percent at Level 2, 31 percent at Level 3, and 28 percent at Level 4. These figures illustrate a preference for conducting simple evaluations, which we believe is due partly to the difficulty of conducting objective, in-depth evaluations and partly to monetary, time, and resource constraints.
Despite the inherent challenges and pitfalls of the evaluation process, we strongly urge organizations to attempt it, even though it’s not foolproof. After all, despite the “Metrics Lite” approach being less filling, it does taste great.