
I’m looking to edit our end-of-course survey. We currently have a VERY LONG survey that I am ready to get rid of. We use the survey tool at the end of each course and set it as the end object marker, so users are required to take it to get their certificates, but they get frustrated with the length. I would too. 😀 Would anyone be willing to share your surveys so I can get an idea of how we can effectively evaluate our courses with fewer questions?

Looking for around 10 strong questions.

Thanks for your help!

Hi @GingerG, we mostly use a very short survey with only a couple of questions, like:

  • Would you recommend this course to others?
  • Was the content relevant to you?
  • Will you be able to apply what you learned to your role/job?

In some cases, we follow up with a more detailed survey a few months after they complete the course, similar to the Kirkpatrick model.


10 questions still seems like quite a lot for the end of every course, especially if it’s mandatory. Our standard questions are quite similar to the ones posted by @lrnlab, and we also ask our users to rate the course on a scale from 0 to 10 or from 1 to 5 (depending on whether we need to measure the NPS).
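For anyone new to NPS, the calculation behind that 0 to 10 question is simply the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). Here is a minimal sketch in Python, just to illustrate the maths; the function name is made up for this example, it is not anything Docebo provides:

```python
# Standard NPS: % promoters (9-10) minus % detractors (0-6), on a 0-10 scale.
def net_promoter_score(ratings):
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: ten "would you recommend" responses
print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 10, 3]))  # -> 10.0
```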

I know it’s not really the solution you’re looking for right now, but perhaps it can still be helpful: have you considered making the survey optional and marking the previous training material as the End Object Marker? If you have implemented Gamification, you could award your users additional badges (and therefore points) for completing the surveys, to encourage them. That way, maybe you could even keep your current super-long survey, keep your users happy, and get what you want and need 🙂


Hi @GingerG, great question! I’ll echo what you stated and what’s already been shared: shorter is better when it comes to post-course surveys.

 

Here’s an example of the survey we designed when we launched Docebo Learning Impact in Docebo University. We’ve only seen an 8% response rate so far, so we’re trying to figure out what we can do to improve that. We hope to do a deep dive on our implementation of DLI later this year, so stay tuned! Laurent hasn’t joined the community yet, but I’m sure he’d love to chat more about survey design 🙂 Let me see if I can twist his arm to join.

 

There are also some great resources in DU on Learning Impact that might help you think through these best practices further.

 

Excited to learn more from others on this thread!


Thank you all for your input. We are a research-based institution, so the survey is a very important piece for us. I think I can get some really good data from a shorter survey; I just need to do some more research. If anyone else sees this post and wants to share their current surveys, I’d love to see more examples!


Hi @GingerG,

Based on our Docebo Learning Impact experience, we usually recommend that Level 1 surveys include:

  • 3 to 7 questions for e-learning courses
  • 5 to 12 questions for ILT courses or Learning Plan evaluations; for ILT you usually need additional questions on the instructor and the logistics. For multi-day ILT courses, the questionnaires can be longer, and that usually will not affect the response rate.

For e-learning evaluations, we also recommend not focusing too much on the materials, the ease of connection, the overall look and feel of the course, and so on. For all of these questions, feedback from a small group of learners is enough, and it is unlikely to vary much between learners.

Finally, for strategic courses, we recommend extending the questionnaire during the pilot phase, so that you can collect more feedback while there is still time to adjust the course.


Thank you for sharing this valuable info @laurent.balagué!

 


@GingerG 

We have settled on three closed ratings that are still pretty actionable, plus one open text question.

In the closed questions, we ask users to rate the following on a scale from 1 to 5:

  • whether they are satisfied with the scope of the learning content (CSAT), to know if we covered everything that was expected (or set the expectations well).
  • whether the content was well structured and easy to follow (CES), to know if we may need to improve the delivery.
  • whether the content will be useful for their work, to get a sense of the value they feel they gained from the course.

We’ve been using this for over two years, and together with the text feedback it provides lots of valuable insights that we can act on quickly.
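If it helps to see the numbers side of that setup, here is a small Python sketch of one common way to summarise 1-to-5 ratings: "top-two-box" CSAT (the share of 4s and 5s) plus a plain average. The threshold and the function name are my own assumptions for illustration, not necessarily how this poster scores their surveys:

```python
# One common way to summarise 1-5 ratings: "top-two-box" CSAT plus the average.
def summarise_ratings(ratings):
    if not ratings:
        return 0.0, 0.0
    satisfied = sum(1 for r in ratings if r >= 4)  # 4s and 5s count as satisfied
    return 100.0 * satisfied / len(ratings), sum(ratings) / len(ratings)

csat, avg = summarise_ratings([5, 4, 4, 3, 5, 2, 4])
print(f"CSAT: {csat:.0f}%  average: {avg:.1f}")  # CSAT: 71%  average: 3.9
```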

 

On a side note, I’m not a fan of using the NPS “would you recommend” question for courses: while it may measure overall sentiment, the result is not very actionable.

