
Best practice for spacing tests within a course composed of multiple videos

  • 4 February 2022
  • 11 replies
  • 144 views

Userlevel 7
Badge +3

Hi all,

I was hoping somebody could offer some advice regarding how to space tests within a course composed of multiple videos. Our team is considering the following three methods.

  1. Short test after each individual video.
  2. Medium length test after several videos.
  3. Long test at the end of all the videos.

Each of these methods has pros and cons, so it would be really helpful if you could share how you space your tests and explain why you favor one particular method over another. Our videos are generally quite short, averaging about 3 to 5 minutes each.

Thanks,

Daniel

 


Best answer by alekwo 7 February 2022, 10:21


11 replies

Userlevel 7
Badge +6

@Daniel - hi - yes, there are pros and cons to each approach, and some of this really depends on the complexity of your materials, your target audience, and any parameters your SMEs may want as part of the experience.
Getting beyond that generalization - I think it comes down to what you really want to achieve with your audience:

  1. A short test immediately after each video will increase most learners’ chances of retaining the knowledge and passing the test. This knowledge-check approach is widely favored today because it is a great way of exercising the learner on the material while it is still in context. You can also enforce a gated experience - the learner must pass the test before moving on - and that works very well because short-term memory has to be exercised to succeed.
  2. A medium-length test after a few short videos will decrease learners’ chances of retaining the knowledge unless you give fair warning that the test is coming, especially if it is all done in one sitting. That said, if your learners have some “skin” in the material (they have a strong reason to be consuming it, are incentivized, are interested, and it is in context for them), the difference between 1 and 2 is not that great, depending on your target audience.
  3. A long test at the end is the more traditional route, and most learners are used to it. Keep your questions simpler, or you will receive feedback about their complexity.

Can you push your team outside the box, though? With HTML5 now predominantly well supported, there are tools that let you embed questions directly in your videos. Those tools can loop a person back to the material as necessary and even enforce hard stops where a hard stop is important. Make it engaging like that and you may be considered a learning hero.
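To make that concrete, here is a rough sketch of the core mechanism those tools use, built on nothing more than the standard HTML5 video API. The cue times, question text, and showQuestion callback are made-up placeholders - a real authoring tool handles all of this (plus looping and grading) for you:

```ts
// Minimal sketch: pause an HTML5 video at predefined cue points and
// require an answer before playback resumes (a "hard stop").
// The cues and the showQuestion() callback are hypothetical examples.

interface QuestionCue {
  time: number;      // seconds into the video
  question: string;  // prompt to display
  asked?: boolean;   // set once this cue has fired
}

const cues: QuestionCue[] = [
  { time: 90, question: "Which switch powers down the robot arm?" },
  { time: 210, question: "How often should the joints be lubricated?" },
];

function attachQuestionCues(
  video: HTMLVideoElement,
  showQuestion: (prompt: string, onAnswered: () => void) => void,
): void {
  video.addEventListener("timeupdate", () => {
    for (const cue of cues) {
      if (!cue.asked && video.currentTime >= cue.time) {
        cue.asked = true;
        video.pause(); // hard stop until the learner answers
        showQuestion(cue.question, () => void video.play());
        break;
      }
    }
  });
}
```

A wrong answer could simply rewind (say, `video.currentTime = cue.time - 30`) to loop the learner back through the material - which is exactly the behavior the commercial tools package up.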

Great name btw.

-Dan

Userlevel 3

Interesting question @Daniel and thoughtful response @dklinger!

Before deciding, consider looking at the list of key learning/talking points across the timeline of all videos in the course, and highlight which of those are the Top 3/Top 5 things a learner should take away from completing the course.

Is most of the important stuff concentrated after one particular video, or is it evenly distributed?

Maybe don’t think of the “video count” as the unit of measure for when to hit a knowledge check; rather, reinforce each key takeaway at the natural stopping point after it is presented to the learner.

Just another way to look at planning the timing,

Jason

Userlevel 7
Badge +3

Many thanks @dklinger and @jason.moore for your thoughtful responses. To add some more context to my question, I’ve outlined our use case in a bit more detail below. Any additional thoughts/advice would be greatly appreciated!

Use case

Our courses teach learners how to operate and maintain industrial robots, so for each robot type we have an operation course and a maintenance course. Each course is divided into folders that can contain anything from 3 to 25 videos. In general, most of the important knowledge is evenly distributed, and our content creators normally create one short knowledge check per video, each consisting of about 3 questions on average. As such, it probably makes sense to have a short knowledge check after each video.

That being said, one disadvantage of this approach is that the course can look quite long and potentially more intimidating with a test after each video. Some of our team members think the courses will look better and more streamlined with one long knowledge check at the end of each section, or with several intermediate knowledge checks in between. They also think the learner may tire of having to complete a knowledge check after each video.

Personally, I think I would prefer completing one short knowledge check after each video while my memory is still fresh, but I also understand that people learn differently. It would be really helpful to hear how other admins would approach our case. For example, do you think our learners will tire of having to complete a short knowledge check after each video, and do you think our courses will look less appealing? I know there isn’t only one “correct” answer to my questions, but it would be great to hear your views.

Userlevel 7
Badge +1

@Daniel we use short knowledge checks after most topics, basically as @jason.moore suggested - when we want to emphasize specific points that we want people to memorize, like “if there is one thing you remember from this presentation, it should be ...”

 

An additional effect of this approach is that when a learner gets distracted, they only need to re-watch the last video to find the answer and continue with the course.

With knowledge checks combining questions from several modules, the “penalty” for losing focus is higher, as learners may need to re-watch a bigger set of materials to be able to answer the questions and progress.

Of course, one can argue that the latter would encourage more attention, but I prefer the former: nowadays people get distracted easily and often, and we want to make the learning as seamless as possible (we create product training for our customers, so they don’t have to complete any of it, and if it were hard they probably wouldn’t bother).

For those who want to go really deep and have proof of their skills, we have a certification that is separate from the courses, with questions covering all aspects of using the product (plus a hands-on assignment).

 

And as for your team’s concerns: in the feedback we’ve received from our users, we’ve actually had requests to add even more knowledge checks to our courses :) Of course, your audience may have different preferences.

Userlevel 7
Badge +5

Personally, I like to use short, 1- or 2-question quizzes after a video - or any content, really.

This approach seems to do a nice job of “encouraging” the learner to actually watch and pay attention to the videos or to actually read, listen, or interact with something as designed.

We have a lot of learners who are registered for a course by their employer as new hires. It was pretty easy to tell which students took the training seriously and which treated it like a EULA (click, click, click, click).

Having more and smaller quiz questions throughout helped to minimize the EULA students.

Userlevel 3

@Daniel Glad you are getting some good traction and conversation around this topic! From the use case described, I’m imagining these robot repairs are hands-on (vs. fixes to the software running them). Once you described more about the learning scenario, I got interested in how you might support the techs by making these videos available at the time and place of need - on the production floor, 4 months after taking the course, when a unit needs work. It sounds like a good plan is shaping up for the initial training - do you also have a plan for making those same materials searchable and available on demand, so techs can get a quick refresher on a specific repair task later on?

Userlevel 7
Badge +5

@Daniel - A couple additional comments to your post.

...one disadvantage of this approach is that the course can look quite long and potentially more intimidating…

Of course… the flip side to that is, for those paying customers, it can also make it look like they are getting their money’s worth - ha ha.

Seriously, though, I have always been a fan of more and smaller. This stems from my experience as a classroom teacher and gymnastics coach. Part of my design goal is to craft the learning in a way that helps the student feel they are making progress, by creating lots of small activities that move them toward the goal.

A 60-minute video, for example, may seem daunting or might be ignored until the learner has an hour to give in one sitting - leaving the progress bar at 0% for a long time. Even though a dozen 5-min videos is still an hour and the list of items on your “syllabus” is really long, the learner can chip away at those smaller videos more easily and feel as though they are making progress. This can motivate them to return knowing they will move the meter forward.

One of my favorite sayings is, “Inch by inch it’s a cinch; mile by mile it takes a while.”

...the learner may tire of having to complete a knowledge check after each video.

The other part of me is a bit more draconian…

“...but do you want a job?”

These two sides of me struggle at times with one another.

Userlevel 7
Badge +3

@alekwo and @gstager - Many thanks for the additional comments. Hearing your views, and especially the reasoning behind them, has helped us reach a consensus that brief knowledge checks after each video are the way to go!

@dklinger - That’s a great point about embedding questions directly in the videos. The video editor we use has this capability, so it’s definitely something we need to look into!

@jason.moore - Yes, the operation and maintenance are very hands-on, and there is often a delay between the initial training and the technicians handling the machines. Is there anything specific we should be doing to make individual videos more searchable within Docebo? We’ve found the global search function to be quite good, but would you recommend manually tagging the videos to make the search results even more granular?

Userlevel 3

Hello @Daniel - great thought on adding additional meaningful tags to the videos that the AI didn’t catch, or that may be specific to your organization’s terminology/language. You may also consider creating a number of Channels of how-to tutorial videos - say, per robot family, repair task, or tech role - to make browsing for video content an option, in addition to searching with the tags mentioned above.

Userlevel 7
Badge +3

Hi @alekwo 

As discussed in this thread, we’ve decided to have a short test after each video rather than a long one at the end of the course. However, we’re not sure what to do about the old test, which has already been completed by several learners. Would it be okay to delete the old test, or would it be better to retain it and just hide it from the course? Our tests are similar to yours in that they are fairly informal and not mandatory unless learners want to fully complete the course.

Thanks,

Daniel

Userlevel 3

Great question! Did you know most training & development peeps haven’t had training in learning evaluation? Call them quizzes, tests, KCs, whatever. Trained IDs should know the purpose and methods of effective evaluation, but non-IDs can learn them too!

I caution against focusing on what learners “like,” or picking methods that are easy for them to get through (KCs after each video, longer ones after so much material, etc.). Learning evaluation is a whole training topic in and of itself, but try my fab 5 for eval success!

  1. The goal of evaluation is determining whether learning has taken place.
    1. So, what’s the purpose of the content (explainer videos, whatever)? If it’s for learning (learners can perform things from memory), evaluate!
    2. If the content is more performance support (think job aid) than learning, that’s not training, right? So go easy… Performance support is basically to help people perform in the moment of need. Dual purpose? I’d evaluate.
  2. Be sure your evaluation (questions, performance, whatever) aligns with your objectives/content.
    1. Common no-nos: not assessing all objectives (how important was that objective, then?), asking questions that aren’t in the content, and wondering whether you have too many or too few questions.
    2. Some designers create the evaluation as soon as the objectives are done (wish I did this more) to help stay aligned. Try an assessment/objectives matrix - see the sketch after this list!
  3. Evaluation, whenever it’s done, is complete only if it determines whether the objectives were met. OK fine, you prolly better have at least one question aligned with each objective.
  4. Aim for the highest performance expectation of the objective - don’t lowball with low-challenge recall of facts, right? People may be all happy that everyone got 100%, but what can they really do now?
  5. Evaluation doesn’t always need to be that typical post-test with X multiple-choice Qs at the end. To your initial question, kinda, lol… Mix it up: basic KCs along the way, great. If applicable, a final exercise where they complete something end-to-end related to the learning goals - yes! It doesn’t need to be an over-produced eval; whatever enables them to demonstrate what good looks like. If the content is self-paced, the evaluation can be too.
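To make the matrix idea from point 2 concrete, here is a minimal sketch - the objectives and item numbers are made-up examples (loosely borrowing from Daniel’s robot courses), not a prescribed format. Each row maps an objective to the items that assess it, so unassessed objectives (and orphan questions) jump out immediately:

Objective                            | Assessed by
-------------------------------------|------------------------------
Power down the robot arm safely      | KC 2, Q1-Q2
Identify controller fault codes      | KC 5, Q1; final exercise
Perform scheduled joint lubrication  | (nothing yet - a gap to fix!)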
