
Harmony Search: New enhancements and a quick survey to shape the road ahead

  • October 21, 2025
  • 4 replies
  • 423 views

This post was published in the public PDG for Harmony Search. If you aren’t already a member of this PDG, we invite you to join!

 

Whether you are in the PDG or not, we’d love your feedback.

Click here to complete the short survey.

 

Hello everyone,

Thank you so much for all the feedback you’ve been sharing; it’s truly invaluable. Your insights help us prioritize and shape our work so that it reflects the real needs of those using the feature every day.

We’re also thrilled to see the growing adoption of Harmony Search and the enthusiasm you’ve shown toward this new experience. Your engagement is what drives us to keep improving and innovating.

I’d like to share a few important updates with all PDG participants:


Generic Knowledge preference - Now live

Following strong demand, the option to disable the model’s generic knowledge (the general information the LLM learned during its training) is now available. You can decide whether to include or exclude the model’s general knowledge in its answers directly from the Harmony Search configuration within the Artificial Intelligence panel.
This means you can restrict responses to content that exists only within your platform and is visible to the user, for a fully secure, context-based experience.
 

 

SCORM Reprocessing - In progress

After improving our tool, the Distiller, which now accurately parses SCORM packages, we are reprocessing SCORM content, a process that will take a few weeks. Once complete, SCORM files created with Articulate Rise and Articulate Storyline will be included in Harmony Search’s answer dataset. This enhancement also delivers noticeable improvements to the standard search experience.

 

Conversational experience improvements - Q4

By the end of the month, we’ll release several upgrades to the conversational experience, including faster response generation and the ability for Harmony Search to continue generating answers even when the LMS browser tab isn’t in focus.

 

Tin Can/xAPI parsing - Underway

We’re also working on parsing and transcript generation for Tin Can/xAPI files, similar to the process already implemented for SCORM content, starting with those created using Articulate Rise and Storyline. Already today, xAPI content generated with Creator is correctly transcribed and fully functional with Harmony Search. Stay tuned for updates as soon as we have more information to share on this topic.

 

Expand Harmony Search answers with catalog content - Sandbox Preview in November

Currently, Harmony Search generates answers based only on content from courses in which the user has an active enrollment. Soon, we’ll introduce an option to expand the dataset used by Harmony Search to generate answers, by including content from free courses within the catalogs visible to the user asking the question.

This means that Harmony Search will leverage catalog content as part of its answer generation, surfacing valuable training materials that were previously hard to find, and dramatically increasing the reach and usefulness of the system. You’ll be able to manage this directly from the configuration panel, deciding whether or not to include catalog content in the dataset used to generate answers.
 


 

Help Shape the Future of Harmony Search

As the adoption of Harmony Search continues to grow, we want to ensure we’re moving in the right direction, aligned with your needs and vision.

We’ve prepared a very short survey (just 4 questions!) to gather your feedback on the future evolution of the feature.

 

The survey explores two key possibilities for federated search: the first scenario involves allowing external systems to find answers within your LMS, and the second involves transforming the LMS into a unified hub that can search external repositories.

 

These two scenarios are functionally defined as follows:

 

Scenario 1: Treating the LMS as a data source for an external enterprise search engine.


Scenario 2: Configuring the LMS as the unified hub to query external repositories like SharePoint, Drive or Confluence.

 

Your input will directly influence our next steps, helping us make Harmony Search even more powerful, relevant, and connected to your learning ecosystem.
 

Thank you again for your continued trust and support. Together, we’re building the future of learning search.

 

4 replies

spotratz
Helper III
  • Helper III
  • October 21, 2025

Expanding the dataset to free courses, even those not enrolled, is the droid we were looking for. That and turning off general knowledge. Looking forward to seeing this in action!


Dominik
Contributor II
  • Contributor II
  • October 22, 2025

Thank you for this update. Great improvements, looking forward to testing them.


  • Influencer I
  • November 3, 2025

Good improvements!

I tested the new version, and I’m very glad the generic answers can now be limited to in-platform content only. This was the main pain point.

However, I noticed that it still makes up information or creates wrong associations, which in turn translate into wrong answers. I’ve pasted two examples below. We sell technical training, so the answers should be quite strict, with little to no variability/nuance.

I don’t know exactly how to solve this problem, but I think it could be related to the degree of “creativity” allowed (or temperature, as it is called in other generative AI bots). If the temperature/creativity slider could be set to a minimum, or very close to it, maybe these cases would diminish.

I think the idea of Harmony is great, and I expect AI and learning will never be separated again. However, creating wrong information by interpretation is definitely not the right direction; it’s like drifting from astronomy into astrology.

 

Example 1: the user asks for information about a product version that doesn’t appear in the available course material, yet Harmony decides it is the same as the one present in the course material, which it is not at all. While reading the example below, you can replace “RAU X” and “RAU N” with “Tesla Model X” and “Tesla Model Y”; maybe that makes more sense to someone unfamiliar with these products. You will see that the bot quickly jumps to associating the two as the same, which they are not. It’s not a matter of nuance at all; the answer should be very clear: either provide the correct answer or say “I don’t know”.

Example 1

Example 2: the bot makes a clear mistake, and I’m not sure how or why. Maybe it doesn’t read the slides correctly (humans easily associate information placed together on a slide, even if the sentence doesn’t explicitly say “this product has this attribute”)? In the document referenced in the conversation below, there is one slide with product 6622 and the weight 7.5 kg, another slide with several other products weighing 8 kg, and one slide with product 6322 weighing 11 kg. But for some reason, the bot not only correlates product 6322 with the wrong weight, but then also defends its answer and concludes that the weight can vary up to the value specified by the user (11 kg) in specific cases. There is another problem in this conversation: the bot cites “document 4036”, although I couldn’t find that document index even as a superadmin, but this is less important.

Example 2

 


  • Novice II
  • December 18, 2025

My organization’s technology leaders would say Scenario 1
I want Scenario 2
Can it be both? 😃