
"Tests" dashboard


Hi Everyone,


As agreed, we've started working on the Tests dashboard. We have listed some KPIs that we believe are useful in analyzing this specific topic. We would like to get your feedback and collect suggestions on what else might be needed.
Here’s the list of KPIs:

  1. Learners Pass Rate: Percentage of participants who passed the test out of the total number of participants.
  2. Accuracy Rate/Average Score: Percentage of correct answers, taking into account the scores assigned to the answers.
  3. Average Number of Test Attempts: Average number of attempts made by users to pass the tests.
  4. Most Difficult Tests: Top 10 tests with the highest number of attempts, regardless of whether they were passed or not.
  5. Top/Worst 10 Questions (by pass rate): List of the 10 easiest and hardest questions, considering the pass rate of the answers.
  6. Detailed Report: A report providing detailed information down to the level of individual answers.

Filters: time, test, course, branch, group, user.
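
To make the definitions above concrete, here is a minimal sketch of how KPIs 1-3 could be computed from raw attempt records. The Attempt schema and its field names are assumptions for illustration only, not the platform's actual data model.

```python
from dataclasses import dataclass

# Hypothetical attempt record; field names are assumptions for
# illustration, not the platform's actual schema.
@dataclass
class Attempt:
    user_id: str
    test_id: str
    score: float   # fraction of points earned, 0.0-1.0
    passed: bool

def pass_rate(attempts: list[Attempt]) -> float:
    """KPI 1: share of distinct participants with at least one passing attempt."""
    participants = {a.user_id for a in attempts}
    passers = {a.user_id for a in attempts if a.passed}
    return len(passers) / len(participants) if participants else 0.0

def accuracy_rate(attempts: list[Attempt]) -> float:
    """KPI 2: mean score across all attempts (score-weighted correctness)."""
    return sum(a.score for a in attempts) / len(attempts) if attempts else 0.0

def avg_attempts(attempts: list[Attempt]) -> float:
    """KPI 3 (simplified): average number of recorded attempts per participant."""
    counts: dict[str, int] = {}
    for a in attempts:
        counts[a.user_id] = counts.get(a.user_id, 0) + 1
    return sum(counts.values()) / len(counts) if counts else 0.0

data = [Attempt("u1", "t1", 0.9, True),
        Attempt("u2", "t1", 0.4, False),
        Attempt("u2", "t1", 0.8, True)]
print(pass_rate(data), accuracy_rate(data), avg_attempts(data))
# -> 1.0, ~0.7, 1.5
```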


Is anything missing from the list? Could you rank the KPIs from the most useful to the least useful (using the corresponding numbers)?
What do you think about the filters?
Thank you!


4 replies

JeanetteMcVeigh
Hero II

Hi. Oh, I like this idea...

  • Ranking (since you asked, but all are valuable bits of information) - 1, 5, 6, 2, 3, 4
  • Not sure that #4 is named correctly - just because there are multiple attempts, we cannot assume it is because the test was difficult. I mean, maybe, but not definitively. I would suggest renaming it to Most Attempted Tests, perhaps…
  • Would any course with the training material of 'test' automatically be on this dashboard? Or is there some other way to distinguish which courses appear on this list - or better yet, which do NOT? I ask because sometimes we use 'tests' for things that might not really be considered a test... like maybe just one question that we need the person to say Yes to in order to continue... I wouldn't want that course/test on this list or included in the metrics at all. Will there be a way to 'tag' (a check box or something) a course that should be excluded from this dashboard? Just a thought
  • Filters are good - maybe add one in reference to my bullet about tagging/marking a course to not be included if that is something we can do :-)

Thanks, jeanette


lrnlab
Hero III
  • 4887 replies
  • August 1, 2024

I think the top points for me are #1 and #6, the latter being most important, so we can evaluate the pass/fail rate for a specific question. While the number of attempts is good to capture, I'm not sure the average is useful or needed... the number of attempts to pass a test is quite subjective and depends on the content, etc.; many of our tests only allow a single attempt.

We also need to know whether the overall test was passed or failed, so that would be a great filter to add, along with the ability to filter by test name (since the same test could be used in more than one course at a time).

Next, when we update our tests, we keep the old version unpublished... will that data also be included, and will those unpublished tests also be available in the filters?

We also desperately need access to the archived tests, since that data disappeared when we archived the user's enrolment. This is a super important point to cover. The same logic, filters, metrics, etc. all need to be applicable to archived records.

Also, I was testing the "waiting period" between attempts, and to get this to work, the user needs to first exhaust all test attempts; only then will the waiting period kick in before they can try again. Not sure how this counts toward tracking attempts, or whether the data from those first attempts is actually saved and reportable. This should be listed as a test case to make sure no data is lost when using this process/these settings.

Showing or reporting on the test settings could also be very useful... things like total number of questions, totals by question type, etc.

What about the questions themselves... will we have access to view/report on the questions down to the feedback for each answer choice? While this may be needed as an analytical datapoint, it would be great to be able to see whether there is answer-choice feedback by question in some kind of list format.

Hope you find some interesting points to leverage…

thank you.



Ranking depends on what you are trying to accomplish. When reporting at a high level, 1, 3, and 2 are important. When deep-diving into test performance, it is 1, 6, 5. I don't see #4 as useful because it isn't really an indicator of difficulty; the tests likely to top this list are all-employee training tests, due to the large target-audience size. The difficult and easy tests are better determined by #1 and #3.

When I think about how we analyze tests, we tend to do analysis by topic or related courses. I think adding filters for category, catalog, and maybe skills would be useful. Also, @lrnlab's ideas of filtering by hidden/not hidden and passed/failed.

For filters, it is more flexible when we can filter for inclusion and/or exclusion.
E.g., if I want all categories except "all employee training", I don't want to click 180 categories for inclusion when it would be easier to exclude one or a few categories, courses, catalogs, tests, branches, groups, etc.
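
A rough sketch of what those include/exclude semantics could look like; the apply_filter helper and the dictionary field names here are hypothetical, purely to illustrate the behavior being requested:

```python
def apply_filter(items, field, include=None, exclude=None):
    """Keep items whose `field` value passes the include/exclude sets.

    Hypothetical helper: include=None means "include everything", so
    excluding one value never requires selecting all the others.
    """
    kept = []
    for item in items:
        value = item[field]
        if include is not None and value not in include:
            continue
        if exclude is not None and value in exclude:
            continue
        kept.append(item)
    return kept

courses = [
    {"name": "Safety 101", "category": "all employee training"},
    {"name": "SQL Basics", "category": "data"},
]

# Exclude one category instead of clicking 180 categories for inclusion:
print(apply_filter(courses, "category", exclude={"all employee training"}))
# -> [{'name': 'SQL Basics', 'category': 'data'}]
```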


  • Contributor I
  • 10 replies
  • August 1, 2024

Another metric that would be useful:

  • Break down the scores by question category (a small sketch of what this could look like follows below)
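
A quick sketch of that breakdown, assuming each answer record carries a question category and a score (both names are hypothetical):

```python
from collections import defaultdict

def scores_by_category(answers):
    """Average score per question category.

    `answers` is a hypothetical list of (category, score) pairs; real
    data would presumably come from the detailed report (KPI 6).
    """
    buckets = defaultdict(list)
    for category, score in answers:
        buckets[category].append(score)
    return {cat: sum(s) / len(s) for cat, s in buckets.items()}

print(scores_by_category([("safety", 0.9), ("safety", 0.7), ("policy", 0.5)]))
# -> {'safety': ~0.8, 'policy': 0.5}
```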

For ranking the most useful KPIs, I think I would have it as: 1, 2, 3, 6, 4, 5



