At my organization, our learning team has done a bunch of testing with the AI Coach, and I was curious if anyone else's experience differs from ours.
We are not a sales organization, so the primary scenarios we’ve been testing are direct report/direct supervisor situations, such as giving feedback on meeting goals, conducting interviews properly, etc.
The way the bot responds during the interaction is pretty good. It follows along well with the personality that we give it, and it’s fairly realistic.
Where we have some concerns is the feedback it gives after the interaction. The feedback focuses heavily on brevity, clarity, and correctness, which isn’t necessarily bad, but it can miss the bigger picture.
For example, if you build a scenario where you’re giving someone negative feedback, the coach encourages you to be more concise, when that can actually be detrimental to the relationship. When delivering negative feedback, sometimes more words are better: they soften the blow and preserve the relationship so the person can hear the feedback without taking it so personally.
Another example is how heavily it prioritizes grammar and spelling. If a learner is a poor typist or not a great speaker, the coach tends to focus on their grammar mistakes instead of the content of what they are saying. Helping the learner with the content is much more valuable than fixing their grammar.
I’ve also done some testing with LinkedIn Learning’s similar coaching chatbot, and I feel like these issues are present there as well, just not as overt as what we’re experiencing with Docebo’s product.
Would love to hear others’ experiences as well. L&D is a small team at my organization, so any experience y’all are having is helpful for us to hear.
Also, if anyone at Docebo has comments on this, I would love to hear those as well.
