PARTICIPANTS' METRICS

Attendance: participated vs registered

The number of participants who attended the learning program compared with the number who registered.

For example, there are 250 participants assigned to or registered in the program, but only 180 started it.
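As a minimal sketch in Python (the function name is ours, and the numbers come from the example above), the attendance rate can be computed like this:

```python
def attendance_rate(attended: int, registered: int) -> float:
    """Share (%) of registered participants who actually started the program."""
    if registered <= 0:
        raise ValueError("registered must be positive")
    return attended * 100 / registered

# Numbers from the example above: 180 of 250 registered participants attended.
rate = attendance_rate(180, 250)
print(f"{rate:.0f}% attended, {250 - 180} absent")  # prints "72% attended, 70 absent"
```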

Hypotheses for the 70 absent participants are:
  • lack of understanding or motivation to learn, as there is no clear view of how the program will benefit their work performance
  • the learning program schedule is inconvenient (e.g., it coincides with important business dates such as quarterly or annual reporting)

What affects the metric:
  • learning program schedule
  • participants' motivation to learn
  • leader's/functional leader's/manager's engagement

Services and platforms where this metric is available:
  • Zoom
  • AdobeConnect
  • WebEx
  • Lanes

Number of participants who have completed the program

The number of participants who completed the learning program (i.e., completed the tasks and exercises, attended synchronous training events, worked with all the materials, etc.).

For example, 95 participants out of 115 completed the program with an 85-90% result, completed all the tasks, and were present at each synchronous learning event (any format).

Hypotheses for why the other participants did not complete the program are:
  • not all asynchronous tasks were completed
  • they missed some synchronous learning events
  • the trainer did not check the tasks or did not mark them as completed

What affects the metric:
  • learning program schedule
  • participants' motivation to learn
  • program design
  • supporting environment: trainers, curators, mentors, coaches
  • trainer's competencies

Services and platforms where this metric is available:
  • AdobeConnect
  • Lanes

Net Promoter Score (NPS) in the feedback form

Participants answer the question in the feedback form: "Would you recommend this program to your colleagues?" It is an assessment of the learning program's value and applicability to work practice. The scale is from 0 to 10, where 0 means full disagreement (the learner will not recommend the program) and 10 stands for willingness to give a full recommendation. Learners who selected 10 can be considered ambassadors of the learning program.

The indicator is calculated as the difference between positive reviews (9-10 rating) and negative reviews (0-6 rating): NPS = % positive reviews minus % negative reviews.

For example, if 100% of learners selected 9 and 10 on the scale, the NPS will be +100. If all the participants selected values from 0 to 6, the NPS will be -100.
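The calculation above can be sketched in Python (the `nps` helper is ours; the score lists are illustrative):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    if not scores:
        raise ValueError("at least one score is required")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) * 100 / len(scores)

print(nps([9, 10, 10, 9]))    # prints "100.0": everyone is a promoter
print(nps([0, 3, 5, 6]))      # prints "-100.0": everyone is a detractor
print(nps([10, 9, 7, 8, 4]))  # prints "20.0": 2 promoters, 1 detractor, 5 answers
```

Note that 7-8 ratings are passive: they reduce the share of promoters and detractors but do not enter the subtraction directly.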

What affects the metric:
  • practical application potential of the learning
  • learning program design
  • trainer's competences
  • learning program schedule
  • platforms and services convenience for learning

Services and platforms where this metric is available:
  • AdobeConnect
  • Lanes

Quantity of feedback forms completed

How many feedback forms are completed compared to the number of participants? This metric is significant for the NPS indicator.

For example, 80 people attended the webinar, but only 20 participants completed the feedback forms. Feedback from one-fourth of the group is not enough for a valid learning assessment.

Hypotheses for the absence of the other 60 feedback forms are:

  • lack of motivation to complete the forms
  • too many questions in the form, too time-consuming
  • if the feedback form is offered in asynchronous learning, participants may not receive a notification, etc.

What affects the metric:
  • convenience of gathering feedback forms
  • reasonable number of questions on the feedback form
  • setting where feedback forms are offered (e.g., in synchronous learning, there are traditionally more completed feedback forms)
  • incentives for completing the feedback form in the synchronous learning scenario (e.g., program completion bonuses or other benefits offered to those who complete the form)
  • notification frequency in asynchronous learning

Services and platforms where this metric is available:
  • AdobeConnect
  • Lanes

Time spent in a training session by the participant

How much time was the participant present in the synchronous learning session? It is useful to compare the participant's actual presence and engagement with the full session duration. If there are significant gaps, we recommend examining the case in detail to determine which activities were skipped.

For example, the training session lasted 2 hours and 20 minutes, and the metric shows that the participant was present for 1 hour and 13 minutes.
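The gap in the example above can be turned into a presence percentage with a small Python sketch (the helper name is ours):

```python
def presence_share(present_minutes: int, session_minutes: int) -> float:
    """Share (%) of the session the participant was actually present for."""
    if session_minutes <= 0:
        raise ValueError("session_minutes must be positive")
    return present_minutes * 100 / session_minutes

# Example above: present for 1 h 13 min of a 2 h 20 min session.
present = 1 * 60 + 13   # 73 minutes
session = 2 * 60 + 20   # 140 minutes
print(f"present for {presence_share(present, session):.0f}% of the session")
# prints "present for 52% of the session"
```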

Hypotheses for the absence are:
  • the participant was learning and working at the same time
  • the participant felt bored and was disengaged
  • there were some technical internet issues

What affects the metric:
  • duration of the training session
  • place and order of the activity in the scenario
  • correlation of activity to group dynamics
  • task execution: breakout rooms, full group, individual
  • relevance and practical value of exercises/cases/content/practice
  • trainer's competencies: drive, ability to ask questions, facilitate the activity, communicate, and immerse participants in the content
  • learning program schedule
  • convenience of platforms and services
  • number of services used in the training session

Services and platforms where this metric is available:
  • AdobeConnect
  • Zoom
  • Teams
  • WebEx
  • Lanes

Time participants spent in the training activity

Time the participant stayed present in the synchronous session activity.

For example, the mini-lecture duration was 20 minutes and the participant spent 13 minutes in that activity. The metric shows that for 7 minutes, the participant was absent from the session during the mini-lecture.

Hypotheses are:
  • the activity was poorly timed within the session
  • the learning format and its implementation did not match
  • the participant was bored and disengaged
  • there were technical internet issues

What affects the metric:
  • training session design
  • trainer's competences
  • learning program schedule
  • convenience of platforms and services
  • number of services used in the training session

Services and platforms where this metric is available:
  • Lanes

Training activity completion progress

The number of participants in the full group who completed the task.

For example, at one virtual training session, work on the case in breakout rooms with the whiteboard was completed by 4 participants out of 10. In comparison, at another session, the same activity was completed by 9 participants out of 10.

Hypotheses for the uncompleted activity are:
  • participants did not have enough time to complete the activity
  • participants were intermittently absent during the training activity
  • there were some technical internet issues
  • participants completed the activity but did not submit their answers by clicking the button

What affects the metric:
  • training session design
  • trainer's competences
  • duration of the training session
  • convenience of platforms and services
  • the number of services used in the training session
  • internet connection quality
  • participants' motivation to learn
  • availability of an interface manual for the learning service or platform

Services and platforms where this metric is available:
  • Lanes

Time for studying learning materials

The time participants spend studying pre- and post-session materials.

For example, the service shows that the participant spent 7 minutes on the 20-minute pre-work video material.

Hypotheses are:
  • the participant started watching the video and then did not continue because they could not find this material on the platform or service
  • the video script is irrelevant
  • the video player has no playback speed control
  • the participant scrolled through the video

What affects the metric:
  • learning program schedule
  • visual presentation of the content
  • content structure
  • errors in logic or grammar
  • practical application of the material
  • participants' motivation to learn
  • UX/UI service or platform

Services and platforms where this metric is available:
  • Lanes

Number of tasks completed by the participant

The number of tasks completed by the participant.

For example, after the webinar, the participant had to read a longread, watch a video, and solve a case. The metric shows that the longread and video material were completed, but the case was not solved. Out of the 3 tasks, the participant completed only 2.

Hypotheses are:
  • the participant did not have enough time to study all the materials
  • there was no clear deadline
  • the participant could not find/understand the need for and practical use of the content
  • the task was too easy or too difficult for the participant

What affects the metric:
  • learning program schedule
  • clear deadlines for completing the tasks
  • participant's motivation to learn

Services and platforms where this metric is available:
  • AdobeConnect
  • Lanes

Time of task completion by the participant

The time spent completing the task. We recommend comparing task completion times across different participants in the same target audience.

For example, you planned 20 minutes for the case (sales department performance indicators analysis), but the participant needed only 10 minutes.

Hypotheses are:
  • target audience experience with the type of tasks (e.g., an experienced manager solves it in 10 minutes, and the newly promoted one will need 20 minutes)
  • a lack of competencies needed to complete the task
  • participants were too distracted to complete the task

What affects the metric:
  • time allocated for the task completion
  • availability and clarity of task instructions
  • target audience fit
  • duration of the training session

Services and platforms where this metric is available:
  • Lanes

Trainer's compliance with the learning session scenario

How closely did the trainer follow the developed scenario? This covers the activities implemented, their actual timing, and the activities skipped, expressed as the percentage of completed versus planned activities.

For example, the instructional designer set the duration of the role-playing game to 30 minutes, but the actual duration was 50 minutes.

Hypotheses of the deviations are:
  • scenario did not include time for sharing instructions with the participants
  • lack of time for sharing after the activity
  • both

What affects the metric:
  • training session design
  • session timing
  • activities timing
  • trainer's competences
  • quality of training of trainers
  • optional activities (if the task does not work well in the group)

Services and platforms where this metric is available:
  • Lanes

Percentage passing the knowledge test

The final percentage score on the knowledge test.

What affects the metric:
  • quality of knowledge delivery in the learning program
  • allocated time for test completion
  • test tasks and target audience fit
  • learning program design
  • trainer's competences

Services and platforms where this metric is available:
  • AdobeConnect
  • Lanes

Participants' rating with % of passing the knowledge test

Participants' rating is defined by the percentage of correct test answers. We recommend exploring in detail how many participants and what percentage of them completed the test with a passing grade, and how many need improvement.

For example, out of 150 program participants, 80 people completed the test with 90% of correct answers, 50 people fell within the range of 85%, and 20 participants had the lowest scores.
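A minimal Python sketch of this grouping (the `score_bands` helper, the 85% pass mark, and the 60% score assigned to the lowest group are our illustrative assumptions, not values from the text):

```python
def score_bands(scores: list[int], pass_mark: int = 85) -> tuple[list[int], list[int]]:
    """Split test scores into passed and needs-improvement groups.
    pass_mark is an assumed threshold for illustration only."""
    passed = [s for s in scores if s >= pass_mark]
    needs_improvement = [s for s in scores if s < pass_mark]
    return passed, needs_improvement

# Illustrative distribution based on the example: 80 scores of 90%,
# 50 scores of 85%, and 20 low scores (assumed here to be 60%).
scores = [90] * 80 + [85] * 50 + [60] * 20
passed, weak = score_bands(scores)
print(len(passed), len(weak))  # prints "130 20"
```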

Hypotheses for the 20 participants' low test scores are:
  • the learning topic was too difficult for those 20 participants (pay attention to the target audience criteria)
  • low learning motivation, low work-related motivation
  • technical internet issues combined with only one opportunity to take the test

What affects the metric:
  • quality of knowledge delivery in the learning program
  • allotted time for the test
  • target audience fit
  • learning program design
  • trainer's competences

Services and platforms where this metric is available:
  • Lanes

Participants' responses to the tasks

The quality of the responses in the tasks.

For example, participants completed the task where they had to record their answers on price objections. Look through the answers to assess if they learned the techniques to address this type of objection.

Hypotheses:
  • if the answers are relevant to the information delivered (rules, arguments), it means the participants have learned the material
  • if the answers lack the training context information, one needs to find the reasons, which may include:
      • information delivery methods
      • learning session design
      • examples
      • the practical application potential of the theory

What affects the metric:
  • learning session design
  • quality of theory delivery
  • time for the task
  • participants' motivation to learn
  • trainer's competence

Services and platforms where this metric is available:
  • Lanes