10 February 2020

How to Evaluate Learning Effectively

Our Operations Director, Milla Clynes, provides her insights into how to evaluate learning effectively using the New World Kirkpatrick Model of learning evaluation.

In my work around L&D strategy development, I am often asked about training evaluation and the measurement of learning transfer – essentially, how to prove the value of L&D to the business. There are many models and tools that can be used for this, and with new technologies the availability of L&D analytics keeps improving. Yet this is still an area of L&D that organisations find hard to crack. Evaluation of learning is time-consuming, can be expensive, and often produces meaningful outputs only retrospectively. Six months after a learning intervention the business has moved on, and even the most meaningful evaluation reports produced at that stage can lose their impact. After working in this area for the last 20 years, I have a couple of views on this subject that I’d like to share.

Firstly, prioritise your efforts when evaluating learning. There is no need to evaluate everything you deliver to Level 2 (Learning), Level 3 (Behaviour) or Level 4 (Results) on Kirkpatrick’s evaluation scale. If you have the technology and resources, make sure you assess Level 1 (Reaction) for all your training delivery – this includes the engagement analytics available for your digital learning resources. It can be worthwhile doing an overall assessment of learning transfer and behaviour change with each of your business stakeholders on, say, a quarterly basis (i.e. how is what L&D delivers impacting your department’s ability to meet its business goals?) – but there is no need to measure each individual course beyond Level 1.

Secondly, make sure to use your Level 1 assessment results to improve the experience and the learning transfer. A lot of organisations amass Level 1 assessment information through their Learning Management System or Learning Experience Platform, but don’t look at the data with a view to deciding which classroom courses or digital resources are and are not working. Understanding learning analytics and using the information to improve the experience is a crucial skill for any modern learning specialist.
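To make this concrete, here is a minimal sketch (not from any particular LMS – the export format, course names and the 3.5 cut-off are all illustrative assumptions) of turning raw Level 1 reaction scores into a simple decision aid that flags which courses may need attention:

```python
# Illustrative sketch: aggregate Level 1 (Reaction) scores per course from
# a hypothetical LMS export of (course, score) rows, scores on a 1-5 scale.
from statistics import mean

responses = [
    ("Presentation Skills", 4.5), ("Presentation Skills", 4.0),
    ("Project Management", 2.5), ("Project Management", 3.0),
    ("Cross-Selling Basics", 4.8), ("Cross-Selling Basics", 4.6),
]

THRESHOLD = 3.5  # assumed cut-off below which a course warrants review

# Group scores by course
by_course = {}
for course, score in responses:
    by_course.setdefault(course, []).append(score)

# Report the average reaction score and a simple status per course
for course, scores in sorted(by_course.items()):
    avg = mean(scores)
    status = "OK" if avg >= THRESHOLD else "needs review"
    print(f"{course}: {avg:.2f} ({status})")
```

The point is not the code itself but the habit it represents: routinely turning the Level 1 data your platform already collects into decisions about what to improve, retire or scale.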

Thirdly, evaluate your strategic learning interventions and programmes thoroughly through all levels of the Kirkpatrick model. Where a blended learning programme is designed to deliver a strategic capability to the business, it’s imperative to provide evaluation data to ensure it is achieving its goals. However, evaluation is only possible if you have set a benchmark to evaluate against. Many L&D teams are tasked with retrospectively evaluating a programme they have already delivered, and struggle to find suitable KPIs to measure the programme’s impact on the business. The key to effective evaluation is having a conversation about evaluating learning at the very earliest stages of any learning project.

I recently came across a model of learning evaluation that has really helped me with some of these conversations. Kirkpatrick & Kirkpatrick (2016) proposed the New World Model. It is based on the same four levels as the original Kirkpatrick Model but has been adapted to deal with some of the challenges presented by the more hierarchical original. The New World Model accepts that there isn’t always a connection between someone learning a new skill and applying it in their job. Learning evaluation therefore does not have to happen sequentially; it can run non-sequentially, concurrently, or in reverse. According to Kirkpatrick and Kirkpatrick (2016), it’s best to collaborate with your business stakeholders to agree which levels, frequencies and evaluation methods are most appropriate. There is also a greater focus on the actions and processes L&D can put in place to help embed the learning outside of the formal programme.

Here is a visual explaining the New World Model:

When engaging in a discussion about learning evaluation, I like to use this model in reverse.

I start with Level 4 with questions such as:

  • What is the strategic need for this programme?
  • What strategic goal or objective are you attempting to influence?
  • If this programme was successfully implemented, what Key Results would be impacted in the business?
  • What would be the leading indicators that would tell you that Key Results would be positively impacted?

The purpose of the conversation is to identify and measure the leading indicators and desired outcomes (business impacts) of the programme (e.g. improved sales/market share or customer experience, greater efficiency and effectiveness in project and programme management, and so on). Once measures have been agreed, make sure you take a benchmark reading of them as they stand before any L&D intervention has taken place.

When I have clarity on Level 4, I move to Level 3, which translates the desired business impacts and leading indicators into desired behaviour changes. I use questions such as:

  • In order to achieve the leading indicators and business impacts, what are the behaviours that would need to change?
  • What would people be doing/saying/thinking differently in order to achieve the desired results?

The aim is to identify 2-3 key behaviours that are crucial to achieving the desired outcomes (e.g. more sales appointments with potential new clients, greater cross-selling activity, better management of project timelines and costs, etc.).

Once you have identified the key behaviours you are attempting to influence, you can have a conversation about the activities and processes that need to be put in place to monitor, reinforce, encourage and reward those behaviours, and the opportunities people will need for on-the-job learning. The chances are that this will include a list of things that are not just L&D’s responsibility, but also bring in the business stakeholders, line managers, the performance management system and rewards. Before you have even started to design your learning intervention, you already know what support systems you need to build around it for it to succeed. And because the business stakeholders have been part of the discussion, they are bought into the process and understand the importance of learning transfer and behaviour change outside of the formal learning intervention.

With all the outside influences on business results, it will always be a challenge for L&D to be fully responsible for Level 4 impacts. However, we can do a lot more at Level 3, and L&D should be held accountable for results there on key strategic learning projects.

Once you have had a good conversation about Levels 4 and 3 with your stakeholders, you can use the outputs to design your learning intervention. It will then be easy to define the Level 1 and Level 2 metrics to collect in support of the changes you’re hoping for at Level 3. For Level 1 (Reaction), Kirkpatrick and Kirkpatrick (2016) added that evaluations should focus on the degree to which learners: 1. are engaged in and contributing to the learning experience (Engagement); and 2. expect to have opportunities to use or apply what they learned (Relevance).

In summary, prioritise your efforts and don’t spend time evaluating your interventions beyond Level 1 unless they are developing strategic business capabilities. When you do decide on a full evaluation, start with the end in mind: have a robust conversation with the business about expected results, and tie them into the learning transfer process from the very beginning. Having more of these conversations with the business, demonstrating your business acumen and understanding of the links between L&D and business strategy, and engaging the business in the learning journey will all help to prove the value of L&D to the business.


Want to keep up to date with all the trends & insights from the world of work and learning? Follow us on LinkedIn: https://www.linkedin.com/company/harvest-resources_2/