I am looking for some best practices on uploading a test from the CLOR to multiple courses. Some details:
The test was authored in Docebo
The test serves as an End of Course Survey
Currently, when we go to pull the data for the survey responses from within the individual course, the data does not align. We’ve noticed the same responses populate across multiple courses, rather than having answers specific to the course we uploaded it to. Is this because we’re uploading the same test to multiple courses? Do you have any best practices here so that we can ensure our data aligns with the correct course?
Thanks!
Hi @cfriedman, I haven’t used the CLOR for this purpose, but since it carries the same object across all the courses you attach it to, what you describe does not surprise me. You might want to report this to Support to get clarity.
@cfriedman You do need to have one test per course. One of the drawbacks of the CLOR: when a CLOR object has been completed once, it is completed for all.
The test can have the same content; however, it needs to have a different name.
Thank you, @lrnlab and @KMallette ! Sounds like I have some test-making to do.
It sounds like the best practice here would be to create a new test for each course titled “End of (Course Name) Survey” to ensure proper reporting.
Do either of you know whether, if I add the questions individually to the question bank and then upload them to each respective end-of-course “test”, this will create the same reporting issue, or whether it will only track the data for that specific test?
@cfriedman The idea of a ‘pool’ is that the question/answer set can be reused multiple times. This leads me to surmise that you’d get course-specific data in the report. I have not used the pool yet.
Agree @cfriedman, it’s easy enough to use the import/export options to move them onto other courses, as long as you are not using extended text and short answer question types; these don’t export, so you’ll need to create them manually each time.
Thank you both!
Hello @cfriedman, we do this in our system! We have a “Course Evaluation” at the end of each course. We created it as a survey instead of a test, as it’s not a graded or scored test. We have it in the CLOR, and it’s in almost every course we make. We don’t run into answers populating from other courses, and when you go into reporting on the course itself, the responses are for that course only. A quick and simple set of questions about the course. I suppose if you’re looking for it to be scored, then you’d have to make it a test.
The only downside is when you go to export the survey results as a spreadsheet, it does not state the course name so you’ll want to name the file right away.
We also have a course template that’s a blank course with the survey in it and we just duplicate it when we’re making a new course to bring the survey over. This hasn’t caused any issues with reporting on the survey either. Let me know if you have any questions!
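Since the exported spreadsheet doesn’t carry the course name, one workaround is to rename the file right after download. A minimal sketch in Python, assuming a hypothetical download path and course name (the filename pattern and paths here are made up for illustration, not anything Docebo produces):

```python
import re
from datetime import date
from pathlib import Path

def survey_export_name(course_name: str, when: date, ext: str = ".csv") -> str:
    """Build a filename like 'end-of-course-survey_intro-to-safety_2024-05-01.csv'."""
    # Sanitize the course name so it is safe on common filesystems.
    slug = re.sub(r"[^a-z0-9]+", "-", course_name.lower()).strip("-")
    return f"end-of-course-survey_{slug}_{when.isoformat()}{ext}"

def rename_export(downloaded: Path, course_name: str) -> Path:
    """Rename a freshly downloaded survey export to include the course name."""
    target = downloaded.with_name(survey_export_name(course_name, date.today()))
    downloaded.rename(target)
    return target

# Hypothetical usage:
# rename_export(Path("~/Downloads/report.csv").expanduser(), "Intro to Safety")
```

This only handles the local file after export; which course the data belongs to still has to come from you at download time.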
Thank you, @ChrisPrice, that’s super helpful! The reason our team had initially opted to use a test as opposed to a survey was not about scoring, but about reporting. For tests, user responses are tied to user IDs, so if someone writes in specific questions or valuable feedback we’d like to follow up on, we like to have the option to reach out. Do you happen to know if you can get this kind of reporting on a course survey?
@cfriedman I will check and see when I get home! I’m not sure if you can.
@cfriedman I’m not seeing any user ID tied to the survey results. You could enter a text field in the survey that asks them to put their name though! Might make a little more work reporting but that hopefully would clear up any issues with answers populating from other courses!
Thanks so much for the suggestions @ChrisPrice
Yet again, a drawback of the lack of an option to set a learning object’s scoring as local or shared. For every object in the CLOR, you should be able to elect whether the learner inherits previous attempts or begins with a clean slate each time. Tests, and any other training material you want the learner to retake when they are presented with the content again, could then be set to local versus shared scoring, similar to how surveys behave today.
Best to be clear on the purpose of surveys vs. tests…
Surveys are basically “Level 1” evaluation for the Kirkpatrick model fans: they measure learner reaction to training. Sure, it’s helpful, but only so valuable, which is why they’ve been called “smile sheets” forever. Also, for integrity and to encourage candid feedback, surveys in Docebo are (I’m pretty sure) anonymous, which is the whole point; it lets you solicit anonymous feedback and state as much in the survey info. You’re likely to get higher response rates and more candid feedback that way.
But you can enable learners to identify themselves if they wish, by adding, for example, a text-entry question asking for it. Ex: “If you’d like to be contacted about any of your responses, please provide...”
If you want to gather learner reaction over time and find broad themes/trends (and/or just keep it simple), you can create one survey for eLearnings and one for ILTs in the central repository, which tracks across all completions. You can break the data down (at least I thought?) depending on the type of course, and at least for ILTs you can usually isolate based on various fields like session dates, instructors, etc. I have to brush up on this again.
Tests are “L2” instruments, that is, they determine whether learning has taken place. I’m unsure of a use case, and it seems hard to conceive how to use one effectively in the CLOR unless, similar to the above, the questions apply to all the courses or LPs you add it to, and accumulated results over time are valuable across all completions.