
For folks who went to Inspire 2023, there was a session about SCORM, xAPI, and content practices in general.

It was pretty good - we were walked through best practices for generating content and how to work with it.

At one point I raised my hand and asked how you get that fancy question-level CMI interaction data into the system in a human-readable format (essentially a question/answer-level report on a training material). And @KMallette shouted out from the back of the room: SCORM 2004, v3. She stopped over and we met for the first time in person 🙂. I explained that it had been eluding me since we started working with Docebo. We chatted about what the problem could be - and the conversation did not leave me.

I am here to say that @KMallette is right. But there are some things to take into consideration to trigger that report properly.

First and foremost, where was I going wrong? Do not set a trigger to complete your course. You want the completion criterion to be the learner passing a quiz.

Now in practice I began doing that maneuver years ago, but I wish someone had walked me through it, or that I had looked into it a little more, before adopting the approach. The short version is that Articulate Storyline and Rise both had to live through the transition out of Flash and fully into HTML5. There was a time (and I am sorry I can only speak to it anecdotally, as I was troubleshooting less and leading more) when it was a bit rocky to count on those CMI interactions - let alone to get completions successfully recorded in your LMS.

To help? I believe authoring tools like Articulate Storyline and others took a step back and said, “hey, if they want a single step to trigger a completion? Let's give it to them.”

Here is the thing - using that trigger transmits no deeper CMI interactions to the system. In essence? You have shortcut all of the data your course could be collecting and told the LMS the person is done with that click… and that is about it. In fact, the CMI interaction “payload” that gets submitted looks like SCORM 1.2 gibberish.
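
To make that concrete, here is a rough sketch (my illustration, not Storyline's actual internals) of the difference at the SCORM runtime level. A completion-only trigger boils down to a single status write, while a quiz results slide records one cmi.interactions entry per question - and those entries are what feed the question/answer-level reports:

  // Illustrative SCORM 2004 runtime calls; the API lookup is simplified
  // and the interaction id/values are invented for the example.
  var api = window.parent.API_1484_11;   // SCORM 2004 runtime API object

  // Completion-only trigger: one status write, no question-level data.
  api.SetValue("cmi.completion_status", "completed");

  // Quiz-driven completion: one record per question answered.
  api.SetValue("cmi.interactions.0.id", "Q01_intro_quiz");
  api.SetValue("cmi.interactions.0.type", "choice");
  api.SetValue("cmi.interactions.0.learner_response", "b");
  api.SetValue("cmi.interactions.0.result", "correct");
  api.Commit("");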

So - hopefully someone will read this and gain from it. If you want those interactions - question- and answer-level data - then the moral of the story is: Single Trigger = Bad. Completion when the learner passes a quiz = Good.

 

Learn from my blunders….

Are there caveats to the approach? Well - there are a few:

  1. SCORM can be flaky about how it suspends and later picks a score back up when a quiz is suspended mid-quiz.
  2. When you transmit more data via SCORM, the communication is more verbose. You can use that to your advantage (especially in Captivate, where there is a publishing option to send interaction communications, or something along those lines). The catch is that SCORM counts on steady communication between the SCO and the LMS - or the course can be left in a “bad state”.
  3. Depending on the user's connectivity (and working from home/remotely can severely challenge this), the user could be attempting to take the course under conditions where that solid LMS-to-SCO communication doesn't stand a chance.
  4. For long-duration SCOs, you can find the LMS session times out before the course triggers the CMI interactions for its quiz (see the sketch after this list).
  5. Editing a SCO that is “in flight” (live in production) can be hazardous to the health of the SCO and your learning campaign. Never mess with the structure of a published SCO without asking yourself: why is it getting hot in here? Structural changes are the ones that negatively impact the XML manifest - deleting expected slides, changing question/answer order or answer counts, etc. In Docebo, you can adopt hiding the older SCO and importing a newer training material...but that approach has some caveats too (another article, another story).
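
On caveats 2 through 4, one mitigation worth considering (a hedged sketch on my part, not something the authoring tools or Docebo officially prescribe) is a periodic commit, so pending CMI data gets flushed and the LMS session stays warm during long SCOs:

  // Hypothetical keep-alive for a SCORM 2004 SCO. The API lookup is
  // simplified; real discovery walks up window.parent/window.opener.
  setInterval(function () {
    var api = window.parent.API_1484_11;   // SCORM 2004 runtime API object
    if (api) {
      api.Commit("");                      // flush pending CMI data to the LMS
    }
  }, 5 * 60 * 1000);                       // every 5 minutes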

So good luck with this - I hope someone gains from this write-up and the conversation with another SCO enthusiast in the community.

@dklinger AWESOME!!!! Thank you for the follow-up on the details. And it was so lovely to get to meet you in person in Nashville.


I’m not an expert, but it seems to be possible to achieve this outcome via Storyline 360 and SCORM 1.2 as well – at least judging from something I’ve found with one of our courses.

Is this the type of report you were describing, @dklinger?

I’ve been asked to try to consolidate all our technical specs for the e-learning partners we use to author our trainings, and having read this post a while ago, I was somewhat surprised to find that this level of tracking had been achieved with a package that uses SCORM 1.2 (the excerpt below is from the imsmanifest.xml file):

<?xml version="1.0" encoding="utf-8"?>
<manifest t...]>
<metadata>
<schema>ADL SCORM</schema>
<schemaversion>1.2</schemaversion>

Furthermore, it was created using Storyline 360 (excerpt from the story.html file):

<!doctype html>
<html lang="en-US">
<head>
<meta charset="utf-8">
<!-- Created using Storyline 360 - http://www.articulate.com -->
<!-- version: 3.79.30834.0 -->

Personally, I’ve never used Storyline before, so I can’t really comment on how this was achieved in that software, beyond what I can see in “Advanced stats”:

Tagging @KMallette here as well:

  • Does this output align with what you’d expect to find from a SCORM 2004 v3 package, built as described in Storyline 360?
  • If so, is the advantage of exporting from Storyline as SCORM 2004 v3 that it makes this outcome easier or quicker to achieve?

Any other thoughts are certainly welcome! I’m technical enough to go poking around inside a SCORM package while understanding some of what I find inside, but a little knowledge is a dangerous thing… There’s a lot I have to learn about SCORM, so any insights are much appreciated.


@Ian I’m probably about as techy as you are on this, but one thing I think you’ll notice in your first screenshot: the actual question information doesn’t appear. We tracked this down to Storyline, which means that this format is a no-go for us. Courses built in Rise do present the actual question, and we can put a Storyline block into the Rise course ‘shell’, so that gets us the best of both worlds.

In the SCORM 1.2 courses that I’ve looked at, I’ve never seen responses come through. I’d recommend that you check the specs of the two protocols, as I’m nearly certain 1.2 doesn’t even offer that support. Meaning, somebody did something in that course, if the headers are to be believed.

I can’t provide any screenshots of my 1.2 courses at the moment given the outage/service interruption we’re experiencing, but I’ll try to add a couple of examples once the platform is back up.


@Ian UPDATE to my response earlier today

You pushed me to go look at this again, and I’m seeing the same thing you are. SCORM 1.2 is providing information into the Additional Stats tab for a Storyline 360 course. I’m not sure if that tab was always there, or if it’s newish.

I know that when my team looked at this issue, we were focused on what could be exported to Excel/CSV, as our goal is some KPI dashboards with exam info as a baseline (how well they perform their jobs vs. how well they did on the course exam). The issue of the question not appearing in the exports seems to be somewhat resolved when using a Rise quiz. When I export the training material information I only get part of the question, so it’s not a solid solution.

My team and I are working on this issue today, so if we discover more info, I’ll post again.


Thanks so much for the follow-up, @KMallette! This is really helpful. Good spot that while the answers came through, the question itself didn’t.

I will have to bring this up with our e-learning partners, some of whom have licences for multiple authoring tools (Storyline, Captivate and iSpring, if I recall correctly). We’re looking to establish a narrower spec that works best for us, and while we had been focusing primarily on questions of SCORM 1.2 vs 2004 v3 vs xAPI/TinCan, etc., I’m now wondering if we should go so far as to insist on a specific authoring tool being used as well.

I might also poke around Articulate’s “E-Learning Heroes” community to see if there are any insights there on this topic. If I find anything interesting or new, I’ll share that as well.


Ah, OK… So I’ve managed to pinpoint one significant difference between SCORM 1.2 and SCORM 2004 v3:

cmi.interactions.n.description (localized_string_type (SPM: 250), RW) Brief informative description of the interaction

This seems good for storing additional context about the question, if we don’t want to use the ID for some reason. It’s part of the 2004 spec, but it’s not included in 1.2. Both specs include cmi.interactions.n.id but with slightly different definitions:

SCORM 1.2:

cmi.interactions.n.id (CMIIdentifier, WO) Unique label for the interaction

SCORM 2004 v3:

cmi.interactions.n.id (long_identifier_type (SPM: 4000), RW) Unique label for the interaction

 

As far as I can tell, “SPM” stands for “Smallest Permitted Maximum”, i.e. I guess one can assume the character limit to be at least the SPM. “RW” is “read/write” and “WO” is “write-only”.

Certainly, this is making me think that SCORM 2004 v3 may be preferable to 1.2, regardless of the authoring tool. That, plus the much larger limit for cmi.suspend_data, of course… Albeit with the caveats in @dklinger’s original post.
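
To make the practical difference concrete, here is a rough sketch of how a 2004 v3 package can record the question text itself, something 1.2 has no dedicated field for. The element names come straight from the spec excerpts above; the API lookup and the id/description values are my assumptions for illustration:

  // Illustrative SCORM 2004 runtime calls; values are invented.
  var api = window.parent.API_1484_11;   // SCORM 2004 runtime API object
  api.SetValue("cmi.interactions.0.id", "Q01_safety_basics");
  api.SetValue("cmi.interactions.0.type", "choice");
  api.SetValue("cmi.interactions.0.description",
               "Which PPE is required before entering the lab?");   // no 1.2 equivalent
  api.SetValue("cmi.interactions.0.learner_response", "gloves");
  api.SetValue("cmi.interactions.0.result", "correct");
  api.Commit("");

Under 1.2, any question context would have to be crammed into cmi.interactions.n.id, which is write-only (per the definition above) and typically far more constrained in length.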

 


@Ian I neglected to mention the Description field.

We found that if we are creating the quiz in Storyline, we need to change to the FORM view (of the question) and add the question there. If we do this, then the question comes out in the cmi.interactions.n.description field, as you described.

The Heroes community is where I got a lot of my understanding pre-Inspire 2023...several really good articles there. Scorm.com is another resource, run by Rustici Software.

As I mentioned on Friday, my team did meet. We ran through our tests again (publishing both Storyline and Rise with SCORM 1.2 and SCORM 2004 v3) so that we could compare the results of both the Course Management > {coursename} > Enrollments > User Stats > Reports and the Training Materials > Answer Responses reports.

We got different results from our earlier tests, which has led to a change in our requirements. Both Storyline and Rise produced exports that contained the question and responses (assuming that the Storyline version used the FORM view to create the question).

 


@Ian - my apologies for not responding a little while back. The first report image you pulled up is what I was thinking about. Great to see it can work with SCORM 1.2.

Thanks for continuing the chat with @KMallette. I think between this article and the best practices one posted by @John - we are getting to a sweet spot. 

 


Appreciate the collaboration everyone! @dklinger @KMallette @Ian

Really insightful info, and happy to see the topic of SCORM best practices being brought to light. Building and maintaining SCORM content is a very “to each their own” type of thing; however, I'm hoping that with your help and combined knowledge we can walk away with some common tenets to live by when creating, updating, and managing this content in Docebo.


This original post is a bit old, but I wonder if someone has a workaround for this:

We have a Storyline 360 course with a pretest that allows people to test out of chapters in the course. If they do not correctly answer all questions for a chapter in the pre-test, they have to complete that chapter AND then pass a knowledge check for the chapter. 

As the tracking options state, the first tracking option reached is submitted to the LMS.

So, in our case, if they fail the pretest and need to review and pass the knowledge check for the first chapter of the course, the LMS marks the entire course complete because that’s the first item reached by the learner. 

Now, to complicate things, I’m trying to see the interaction data for the questions. If I only go with “Course Completion Trigger,” the quiz results are not stored. If I enable tracking of the results slides (even if I set them to pre-test), the interaction data is sent but the course marks complete whenever the learner passes any of the knowledge checks they had to complete.

SCORM package is 2004 V3.



Course Completion Trigger won't work for ya. It is a hard “the user is done” trigger in terms of the detail it sends along.

But you can/should be able to “see” what the pre-test is doing for scoring (I would think you would want only your quiz to trigger scoring) and still collect interactions.

Take a look at the Reporting and Tracking Options:

It should be achievable with a combination of the right results slides and adjustments on the screen above....

Personally, we do not use pre-tests - but I can only think there has to be a way to send along the interactions for both (pre-test and quiz) while only one controls the completion….


Yes, it’s complicated. I can see it working if we had one final quiz that changes based on their pre-test results. Perhaps that’s something to explore. Right now, we have multiple chapter “quizzes” that someone must take and pass to complete the course.

So, if I enable tracking on any of those quizzes, the course marks complete if they close the course after passing one (they can pass 4.18, close the course, and get a completion).

We’re using variables to gate people until they get to the end slide and the trigger you hate. :)

 


@rloverin - I can see the issue better now. But I don’t think there is a simple way to tackle this one, because you are running up against a piece of the base logic of how Articulate authors courses. I swear this was easier in other authoring tools like Adobe Captivate (sorry - that is the product where I started in this business years ago).

Here is a thought - and it is by far not perfect. Can you break out your pre-checks into other SCOs, and use that to drive your logic toward a required section of the course that acts as an end marker for the course?

By breaking it up - you should be able to “listen” to your interactions.

Then use the virtue of a course container to help drive your logic with possible end markers?

@John - have you ever had a customer trying to tackle and resolve something similar?

The only other thought is to engage the team over at Rustici Software. They have literally heard it all about SCORM and SCOs. They do need to know you are working with Docebo, because from my lens it has a limitation with branching multi-SCOs (which would probably work perfectly for you here) that is called out when you load SCORM 2004 files.


Thanks @dklinger. We had a lot of strict requirements for how the course had to work, but this might force us to simplify things. I’ll keep digging and reach out to a few other people.


@rloverin - let us know where you land please - it is a topic that is very interesting across industries.


Hey @dklinger, thanks for the tag!

I may sound like a newb (apologies in advance), since I’m not publishing in Storyline or Captivate daily/weekly…

Could you not organize the course package in a way that continues to use variables (allowed in SCORM 2004) to support a forced navigation path, or to allow free navigation based on a (pre-test) score? Your final course completion can always be tied to the final test.

Keep me honest on the summary below and the assumptions made about the steps to take.
I think you could get close, but maybe you’re already doing this today...

1. Pre-test Outcome: If the learner passes the pre-test, a variable (e.g., Pretest_Passed) is set to True (a small sketch follows this list).

2. Conditional Branching: Based on the outcome of the pre-test, the course can be set to skip over the mandatory chapters and go directly to the final test.

3. Free Navigation: Once the pre-test is passed, triggers will allow the learner to bypass the content, fast-tracking them to the final test.
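
For step 1, here is a minimal sketch of the idea as an “Execute JavaScript” trigger on the pre-test results slide. The variable names and the 80% threshold are assumptions on my part, and a plain Storyline trigger could set the variable just as well:

  // Runs from a Storyline "Execute JavaScript" trigger on the
  // pre-test results slide; names and threshold are assumptions.
  var player = GetPlayer();                            // Storyline player object
  var score = player.GetVar("Results.ScorePercent");   // built-in results-slide variable
  player.SetVar("Pretest_Passed", score >= 80);        // drives the branching triggers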

Happy to discuss the options further. The testing-out concept is an interesting case, but it is often left to the creator and to tool limitations during publication (unless you are using native Docebo Training Material test objects in courses).


Hi @John. Thanks for weighing in.

Your first two assumptions are basically what we’re doing now. Another wrinkle (another!) is that someone can pass the pre-test questions, which sets certain variables to True and marks off those chapters. But they then have to complete an additional two chapters: one that refers to content for their specific geo location (based on a prior selection by the user) and a final chapter that everyone must review. Neither is a test-out opportunity.

So, at the moment, it sounds like we’d need a final quiz that adapts to the user’s performance. That wouldn’t capture the pre-test results, though, since we can’t have someone pass that and receive a completion before reviewing those last two chapters. And I don’t think we can pass variables across separate SCORM packages inside a Docebo course. Or can we? Could SCORM material one communicate its results to SCORM material two?

 


@John You can definitely pull it off with a multi-SCO approach. But I get worried about Docebo’s support of a branching multi-SCO and the conditional logic being ignored.

 


What’s odd, though, @dklinger, is that our current Rise course (which has chapter knowledge checks but uses the “View 100% of the course” completion criterion) does send the question interaction data. The Rise course does not have a pre-test, and all learners must review all blocks. Quiz results are not required.

I wonder what’s going on behind the scenes in Rise that isn’t in Storyline. 


Right - using “View 100%” you are definitely using a different completion criterion.

It is nice to know that it is sending the interaction data. And that does sound right.

BUT - that will fail with the concept of using a Pre-Test > Conditional Logic > Content > Knowledge Check. The conditional logic would decide whether to visit more slides or not. There isn’t a way to pull that one off, as far as I understand.

You may be able to mix the two criteria (viewed and score) to get an effect, but I would suggest mapping it out very tightly to avoid getting backed into a corner.


@dklinger @rloverin I’ve read this through 3 times now, and I’m still a bit confused about what you’re after. Are you WANTING to capture multiple test responses for the same course, or... are you ABLE to capture multiple test responses, but the LMS is just marking the course complete too soon?

If the second, how are you doing that? From your screenshot it looks like you’ve got all of your Results pages in one .story file.

When I’ve worked with Storyline, 2004 v3, and Docebo, we’ve only been able to capture a single set of responses (just a normal knowledge check, not a complex pre/post test). If they reset and try again, the first set of results is overwritten. I’ve had a couple of IDDs want to capture all results, and I’ve told them sorry, can’t do. So now I’m wondering if I’ve misdirected them.



I may need to walk back my comments… as with SCORM 2004, Docebo does not support sequencing at the moment. I’m not 100% certain on it, but I am leaning towards a “yes” - the steps I described in my last comment are, in essence, sequencing… right?

Since we are trying to adjust the experience on the fly, I am leaning towards sequencing being the best solution, in order to preserve the activity info and then place the user in the corresponding location in the navigation.

Trying to think outside the box, and going back to Storyline’s capabilities with triggers and variables (also knowing we can manage multi-SCOs), maybe there is a path there (albeit complex).

 

Let’s keep noodling on some alternatives… 🍜

What’s the potential to manage this in multiple SCOs, since that is supported today?

 

1. Create Multiple SCOs (Modules/Chapters broken out):

  • Structure the course with multiple SCOs, where each chapter or module is a separate SCO.

2. Pre-test Logic:

  • Use triggers and variables in Storyline to capture the learner’s pre-test score and determine if they have “tested out.”

3. JavaScript for Completion Handling:

  • Use custom JavaScript that interacts with the SCORM runtime API to mark specific SCOs or the entire course as complete based on pre-test performance, for example:
    // Runs from a Storyline "Execute JavaScript" trigger; assumes
    // 'Pretest_Passed' is the variable set when a learner passes.
    var player = GetPlayer();                 // Storyline player object
    if (player.GetVar("Pretest_Passed") === true) {
      var lmsAPI = parent;                    // locate the SCORM runtime API (frameset-dependent)
      // SCORM 2004 element; in 1.2 this would be cmi.core.lesson_status via LMSSetValue.
      lmsAPI.SetValue("cmi.completion_status", "completed");
      lmsAPI.Commit("");
    }

One downside of the multi-SCO content approach is that the file size can grow quite large if you have enough SCOs nested inside.

I’ve seen (successful) attempts at building language translations inside packaged multi-SCO SCORM files. Although, depending on the content’s length, you may only be able to fit a handful of languages before reaching the upload cap on the ZIP file.

