
A huge drawback (for me) of using APIs is that 95% of the endpoints want some ID that isn't easily visible. Usually you have to go to a different API to retrieve it. There are some endpoints in the API browser, like "Returns list of mappable fields for the data importer" under Manage > Users, that just 'dump' info. This is very helpful.

This leaves me with two questions:
1. Somewhat rhetorically: why has Docebo hamstrung us like this? Why not have a series of standard endpoints, organized together instead of hidden all over the place, that provide the basic IDs for all catalogs, all learning plans, all courses, all training materials, all sessions, all events, and so on? The API browser holds so many endpoints, many of which I can't imagine ever using, yet the simple, low-hanging tasks are almost impossible to do.

2. Assuming I do happen upon a list of IDs that I want to use, how do I keep that information so that I don't need to figure out how to find it again? For example, say I find a list of catalog IDs. What's the best practice? Scrape the JSON and then convert it to CSV?
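For the JSON-to-CSV part, here is a minimal sketch in Python. The payload shape and field names (`data.items`, `id_catalogue`, `catalogue_name`) are assumptions modeled on a typical Docebo-style response; check the actual JSON your endpoint returns for the real keys.

```python
import csv
import json

# Hypothetical example payload -- the structure and key names here are
# assumptions; inspect your own API response to find the real ones.
raw = json.loads("""
{
  "data": {
    "items": [
      {"id_catalogue": 12, "catalogue_name": "Onboarding"},
      {"id_catalogue": 15, "catalogue_name": "Compliance"}
    ]
  }
}
""")

# Pull out just the (id, name) pairs you want to keep as a lookup table
rows = [(item["id_catalogue"], item["catalogue_name"])
        for item in raw["data"]["items"]]

# Write the lookup table to a CSV you can reuse later
with open("catalog_ids.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["catalog_id", "catalog_name"])
    writer.writerows(rows)
```

Once the CSV exists you can open it in Excel or keep it next to your other reference files, so you only have to hunt the IDs down once.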

I did find this article, although it is VERY cumbersome:

Unique IDs in Docebo Learn via the User Interface

Still interested in best practices for collecting this information.


Hehe, well, I feel partially responsible for this, so I might as well chime in.

The thing to keep in mind is that APIs are really designed for programs talking to programs, and that's why those IDs are usually so unique and special (and not human-friendly). It is also why you typically use an API to get them, so a program can go find them as well. What you are running into is a side effect of using the APIs to collect info and carry out tasks as a human, more as a stop-gap for feature gaps. That said, I have used platforms where such IDs are kept in a data dictionary or directory that you can easily reference when needed. It would be great to have that here; I swear I put in an idea for it.

For your part 2, my answer would be, it depends!

If it is something fairly static, I do often make a lookup table for myself, whether in CSV, Excel, SharePoint, etc. 

If it is something that changes regularly, I would not bother with the scraping part, as you will just spend so much time updating your lists. This is where I often make a collection or folder in Postman for the task I am doing. So, maybe my task is 'Updating things in catalogs', and the first API call in that folder is 'Find the Catalog IDs', so I don't need to remember which one it was or what order to run things in. It's just sitting there waiting for me to use, based on the task.
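If you want that same "find the catalog IDs" step outside of Postman, a small helper that builds the request is enough to script it. This is only a sketch: the endpoint path (`/learn/v1/catalog`), the paging parameters, and bearer-token auth are assumptions based on common Docebo REST conventions; verify them against the API browser on your own platform.

```python
from urllib.parse import urlencode

def build_catalog_request(domain, token, page=1, page_size=200):
    """Return (url, headers) for a hypothetical catalog-listing call.

    The path and query parameters are assumptions; confirm the real
    endpoint in your platform's API browser before using this.
    """
    query = urlencode({"page": page, "page_size": page_size})
    url = f"https://{domain}/learn/v1/catalog?{query}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

# Example: placeholder domain and token, not real credentials
url, headers = build_catalog_request("example.docebosaas.com", "YOUR_TOKEN")
# You would then GET this URL (with requests, curl, or Postman) and pull
# the IDs out of the JSON response instead of maintaining a static list.
```

Because the IDs are fetched fresh each time, there is no stale lookup file to keep in sync.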

 


Oh, and it turns out I did make an idea… but it's not happening :)
 

 


Yea, I saw that … 😣

Thanks for your input. I recognize that APIs are meant to go machine-to-machine, but until AI takes over the world, it still has to start with a human. I also understand that a 'data dictionary' only makes sense for some things.

 

You actually clarified another point: why use the API when a batch method exists and you already have a process that fits? Many of our conversations around APIs have treated them as a stop-gap for missing features. This is key to remember. Thanks.



Yeah, so at next week's DU Live session we will be talking a bit about that topic, and I can't emphasize it enough: while it's fun to know and have all the shiny tools, it doesn't mean we always need to use them 🙂 I literally have an analysis step when building a custom solution called "Pause and recheck: are we overcomplicating?", which is designed as a "can we do it natively or more simply?" check.

I also often end up diverging from a native CSV feature or similar if there are limitations to it, like scheduling ILTs and such, where I can then justify building something out.

