DISCUSS: Docebo Connect - Best Practices for Docebo API Pagination Loops

  • October 22, 2025
  • 9 replies
  • 219 views

hailey.gebhart
Helper III

Hello Community!

I wanted to start a conversation about best practices for handling pagination in Connect when making Docebo API calls. As many of you know, retrieving everything from a large data set via the API requires pagination: the page number and page size parameters in your call's search query control it. I am curious, though, how you handle your triggers for incrementing the page and your loop exit conditions. Here are some common approaches I see.

  • Page Number Handling
    • Outside of the while loop, create a page number variable starting at 1. Use this variable as the page number inside of your API call. After you complete your operations in your loop, add one to the variable.
    • Use the index + 1 of the while loop to set the page number in the API call
    • If the recipe logic allows it, use Current Page + 1 of the returned data in the API call as the page number / make a variable starting at 1 and base the next page numbers off of the page number field in the API call
    • Somehow creating a for loop based on the total number of pages?
    • Something else…?
  • Loop break condition
    • repeat while page number is less than or equal to the total page count
    • repeat while has_more_data
    • repeat while returned data[].length > 0
    • Something else…?
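As a minimal sketch of the first pattern above (Python for illustration; `fetch_page` is a stand-in for your API call step, and `total_page_count` matches the metadata Docebo's list endpoints typically return in their `data` object):

```python
def paginate_by_page_count(fetch_page, page_size=200):
    """Counter-variable pagination: stop once the page number passes
    the total_page_count reported in the response metadata.

    fetch_page(page, page_size) stands in for the API call step and
    should return the Docebo-style "data" object, e.g.
    {"items": [...], "total_page_count": N}.
    """
    all_items = []
    page = 1         # page number variable defined outside the loop
    total_pages = 1  # placeholder until the first response reports it
    while page <= total_pages:
        data = fetch_page(page, page_size)
        all_items.extend(data["items"])
        total_pages = data["total_page_count"]
        page += 1    # increment after the loop body completes
    return all_items
```

The index + 1 variant has the same shape; the loop index simply replaces the explicit counter.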

I have seen these in examples, and folks at Docebo have suggested differing approaches during PS hours. I am curious to hear your thoughts! What are the pros and cons of each, what do you like to do, and which situations call for different conditions?

For the page numbers, I personally like to define a variable, as this makes it easier to visualize what is happening when I am digging into the details of a call and customizing things during testing.

In addition, I like to repeat while has_more_data because you don’t risk ending your pagination with an empty call that you can’t debug. Not all calls return extra metadata, either (such as the analytics service in the API browser for new reports). However, I have been using data[].length > 0 for quite some time, and I am also curious about the reliability of the metadata field versus basing your condition on the actual output of the data.
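The has_more_data condition, with a fallback for endpoints that omit the flag, might look like this (again a sketch; `fetch_page` is a stand-in for the API call step):

```python
def paginate_by_has_more(fetch_page, page_size=200):
    """Loop on has_more_data, falling back to a payload check for
    endpoints (like some analytics reports) that omit the flag."""
    all_items = []
    page = 1
    while True:
        data = fetch_page(page, page_size)
        items = data.get("items", [])
        all_items.extend(items)
        # Prefer the metadata flag; if it is absent, assume another
        # page exists only while full pages keep coming back.
        if not data.get("has_more_data", len(items) == page_size):
            break
        page += 1
    return all_items
```

Note the fallback can cost one extra empty call when the total is an exact multiple of the page size, which is the trade-off mentioned above.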

I would love to hear what you think if you use Connect often, or what you do with other API structures beyond Docebo’s format.

Thank you!

Hailey

9 replies

Ian
Guide II
  • October 23, 2025

Hey Hailey,

These days I pretty much always default to while loops using index+1 for the page number, and has_more_data=true for the condition. And since you briefly mentioned page size, I’ll add that I always set it to 200 to minimize the total number of API calls I need to make.

I’ve found this approach to be:

  • reliable (both in performance and availability)
  • quick to implement, and
  • elegant.

When I’m building a recipe, I don’t really want to be spending any more time on pagination than I have to. So those advantages above are a big selling point for me.

Thanks for raising the topic! I’ll be interested to hear what others are doing here.


  • October 23, 2025

Hi Hailey,

Depending on the overall need, I typically use the repeat-while method too: index + 1 for the page, page size set to the max of 200, plus any additional query parameters to return only the data I need.

If you’re getting information from these calls to store in a list, it’s also important to use batch action steps within the loop to be more efficient (i.e., batch add to list).

There are a number of other scenarios that would require a different approach. For example, maybe the data you’re pulling is very large (total course enrollments for active users) and you are doing other things after retrieving it, making other calls to and from Docebo. This would require a for-each approach (for each enrollment/user, do this; then, if it meets this condition, update this...etc.). You need to be mindful of total calls per hour to Docebo, as there are recommended limits. You may not want to paginate through everything, and instead process just the first page.

Think of the above example this way: let’s say you return just the first page with a max of 200 results, then have to make other calls back to Docebo, or even get more data for each of those (depending on what you’re doing). If you run 2 additional Docebo API call action steps for each entry returned in the list of 200, that’s a total of 401 API calls within the recipe run time. If you have other recipes also running in the same hour, you need to plan accordingly to keep under the 1,000 “soft” limit for total calls per hour.
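That arithmetic generalizes to a quick back-of-the-envelope check worth running during ideation (a sketch in Python, not a Workato feature):

```python
def recipe_call_budget(list_calls, items_per_page, calls_per_item):
    """Paginated list calls plus follow-up calls per returned item."""
    return list_calls + list_calls * items_per_page * calls_per_item

# One page of 200 entries, 2 extra Docebo calls per entry:
total = recipe_call_budget(list_calls=1, items_per_page=200, calls_per_item=2)
# total is 401, measured against a 1,000-calls-per-hour soft limit
```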

So I guess what I’m trying to illustrate here is that during ideation and development you need to evaluate not only the current recipe you are creating, but also its potential impact on, or worse yet, conflict with, other recipes and system limits at the same time. This may alter how and why you use the while or for-each steps.

Hope that helps.

~Mark


dwilburn
Guide III
  • October 23, 2025

Hi, I’m in a new organization and now have access to Docebo Connect. I have worked with it a bit. We have the Outlook connector to get RSVPs for ILTs.

Using the basic Docebo Connect, can I build and use recipes as needed?

I am very familiar with Postman for API calls and feeding variables via CSV, but for things I do often I could see how a DC recipe could be handy.


Ian
Guide II
  • October 23, 2025

To clarify, ​@dwilburn, you’re all set up with the Outlook part already, and you're asking if you can use Docebo Connect for other purposes using just the Docebo connection + Workato utilities (e.g. CSV parsing)?

Basically, yeah, and that’s actually the bulk of where I spend my time in Docebo Connect.


dwilburn
Guide III
  • October 23, 2025

Thanks ​@Ian - I thought that was the case. I have some courses in DU I need to get back to, and then I will get in there and play with it. This would get me around the Postman limitations on the free account for collection runs. I have gotten set up with Newman, but it is cumbersome.


hailey.gebhart
Helper III
  • Author
  • October 23, 2025

@mstrom You bring up another great point of discussion: what are the best practices for saving paginated data and doing operations on it?

Usually this depends on my use case. In the past I have truncated a lookup table, added everything to it in batches as I make calls, and then done whatever operations I need. I use this when I know my data will stay below the lookup table limits (10 columns by 10,000 rows) and I want to make it easy to filter through my data without using Workato formulas.

I also like adding all my data to a list, then iterating through that list.

However, there are situations where you want a nested loop. I avoid this when I can, since it hurts the readability of my recipe and (in theory) increases processing time. That said, it is a good option when my operation needs to stop at a certain condition and I do not want to pull everything before doing work on it.

Does anyone have other good practices for working with a lot of data while minimizing API work? I tend to lean heavily on the Workato lookup tables and the list and hash formulas to get my data where I want it, then use batch actions where I can. I would also love to hear how you might batch a batch action. For instance, if I have 500 course enrollments I would like to archive, but the API only accepts 200 at a time, what is the cleanest way to batch the 500 enrollments so everyone gets archived?
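For the 500-enrollments example, the slicing itself can be sketched like this (Python for illustration; the IDs are placeholders, and the archive call is only a comment):

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items
    (the same idea as Ruby's each_slice)."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

enrollment_ids = list(range(500))  # placeholder IDs
batches = list(chunked(enrollment_ids, 200))
# Three batches of 200, 200, and 100: call the batch-archive
# endpoint once per batch instead of once per enrollment.
```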

I am also curious how you track API limits within the recipes you create. Is anyone adding waits to their recipes to “cool down” so they don’t make too many calls?

In addition, from my understanding, Docebo is more forgiving with the API call limit for Connect, as their charging model is based on the number of connectors enabled. However, for runtime and complexity, I still like to minimize the number of calls I am making.

I would love to hear more thoughts!

@Ian ​@dwilburn 


dwilburn
Guide III
  • October 23, 2025

Hi ​@hailey.gebhart - as mentioned, I am still getting up to speed on Docebo Connect, but on my large Postman / API collection runs I do split things up so that I stay below 1,000 calls per hour. I was recently updating a setting on about 1,500 courses: I split the run, set a timer on my watch, and worked on other stuff, then came back and ran the next collection.

Based on Docebo’s last communication regarding exceeding API call limits, going over this limit impacts not only your instance, but other instances on the Docebo data lake.

I’m glad you mentioned the 10 column by 10,000 row limit, that is good to know.


Ian
Guide II
  • October 24, 2025

I’ve only recently started using Lookup tables for such a purpose, ​@hailey.gebhart, and that was because I wanted to add to the table each time a webhook fired, while separately only processing the list every 8 hours. So in other words, using the same list across different recipes. Within a single recipe, I just use the Create List + Add to List (Batch) actions through the Variables by Workato utility. I confess I’m not sure what the limit is here, but if the Parse CSV action can accept up to 50,000 items, I suspect a List can do the same.

For your archived enrollments example, a for loop’s default repeat mode is “one item at a time”, but you can set that to “batch of items” instead and then specify the batch size. From there, you may find that the endpoint requires you to do something a bit fancy to get your batch data into the format it expects. For example, it may ask you to provide an array of IDs, as opposed to an array of objects. In such cases I might use e.g. .pluck() to extract only the data I need from my batch.
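The pluck step described above amounts to this (hypothetical field names; in Python a list comprehension plays the role of Workato's `.pluck()`):

```python
# A hypothetical batch of enrollment objects from a list endpoint
batch = [
    {"user_id": 101, "course_id": 7, "status": "completed"},
    {"user_id": 102, "course_id": 7, "status": "in_progress"},
]

# The pluck step: an array of IDs rather than an array of objects,
# which is the shape some batch endpoints expect
user_ids = [row["user_id"] for row in batch]
```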

Which leads me to another point: sometimes I find the best way to filter or transform my data is to use a Ruby snippet. You could use Python or JavaScript snippets as well, but I tend to favor Ruby because of the shared syntax with Workato formulas. Whichever you use, it can be really powerful (and very cheap with respect to tasks used). I have sometimes used it for the express purpose of preparing a list of optimized API calls to cycle through. And along the way, this can also reduce the need for nesting loops within each other.
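As an illustration of that kind of pre-transform (hypothetical data; in Connect this would live in a Ruby, Python, or JavaScript snippet), grouping raw rows into one payload per course turns three row-level calls into two prepared calls:

```python
# Hypothetical raw rows, one per enrollment
raw_rows = [
    {"course_id": 7, "user_id": 101},
    {"course_id": 7, "user_id": 102},
    {"course_id": 9, "user_id": 101},
]

# Group user IDs by course so each course needs only one call
grouped = {}
for row in raw_rows:
    grouped.setdefault(row["course_id"], []).append(row["user_id"])

# Two prepared payloads instead of three row-level calls
payloads = [{"course_id": cid, "user_ids": uids} for cid, uids in grouped.items()]
```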

It is of course harder to visualize what’s happening exactly. And for a long time I found this approach very difficult, but ChatGPT or Claude are pretty good at taking the Ruby snippet sample code as a reference, and then writing some logic that will (a) be accepted by Workato, and (b) do what I need it to. The usual caveats re: AI apply here, of course. You definitely want to test the logic before you actually use it, especially if it’s doing anything irreversible.

Lastly, on the topic of API rate limits and cooldowns, maybe the first time I succeeded with a Ruby snippet on my own (without AI) was implementing e.g. a 2- or 3-second pause for that exact purpose. I honestly don’t recall whether I did this out of an abundance of caution or if I was hitting rate limits before. And I don’t think I’ve used it in quite some time. In general, if one takes the time to plan an approach that minimizes calls in the first place, using batch endpoints wherever available, I agree that you’re unlikely to run into problems.
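A cooldown of that kind is essentially the following (a sketch in Python; `call`, `pages`, and the delay are all placeholders):

```python
import time

def throttled(call, pages, delay_seconds=2):
    """Run one call per page, pausing between calls as a simple
    cooldown against hourly rate limits."""
    results = []
    for page in pages:
        results.append(call(page))
        time.sleep(delay_seconds)
    return results
```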


dwilburn
Guide III
  • October 24, 2025

Docebo’s post from June 17th, 2025, regarding the rollout of rate limiters for API calls.