
So I have a tricky API question. I am trying to make a POST call to the batch user management endpoint through a Power Automate flow. I have everything wired up and the flow sends the following call:

{
    "host": {
        "connectionReferenceName": "shared_docebo-5f878a7b3d70fe89ad-5ff90adea333524d7b",
        "operationId": "RosterFile"
    },
    "parameters": {
        "body/items":
            {
                "Username": "mary.smith",
                "Firstname": "Mary",
                "Lastname": "Smith",
                "Email": "mary.smith@example.com",
                "Password": "fi5Zd90S",
                "Language": "english",
                "Branch Name": "Demo Branch",
                "Level": "poweruser",
                "Timezone": "America/Chicago"
            },
            {
                "Username": "patricia.johnson",
                "Firstname": "Patricia",
                "Lastname": "Johnson",
                "Email": "patricia.johnson@example.com",
                "Password": "ww2Do42U",
                "Language": "english",
                "Branch Name": "Demo Branch",
                "Level": "poweruser",
                "Timezone": "America/Mexico_City"
            },
            {
                "Username": "linda.williams",
                "Firstname": "Linda",
                "Lastname": "Williams",
                "Email": "linda.williams@example.com",
                "Password": "cm7Gv20L",
                "Language": "english",
                "Branch Name": "Demo Branch",
                "Level": "poweruser",
                "Timezone": "America/Winnipeg"
            },
            {
                "Username": "barbara.jones",
                "Firstname": "Barbara",
                "Lastname": "Jones",
                "Email": "barbara.jones@example.com",
                "Password": "ek5Sy95J",
                "Language": "english",
                "Branch Name": "Demo Branch",
                "Level": "user",
                "Timezone": "America/Winnipeg"
            },
            {
                "Username": "sandra.martin",
                "Firstname": "Sandra",
                "Lastname": "Martin",
                "Email": "sandra.martin@example.com",
                "Password": "lv6Xv62T",
                "Language": "english",
                "Branch Name": "Demo Branch",
                "Level": "user",
                "Timezone": "Asia/Seoul"
            }
        ],
        "body/options/change_user_password": true,
        "body/options/update_user_info": true,
        "body/options/ignore_password_change_for_existing_users": true,
        "body/options/force_create_branches": false,
        "body/options/send_notification_email": false
    }
}


 The only problem is that I keep getting the following response:

{
    "statusCode": 200,
    "headers": {
        "Transfer-Encoding": "chunked",
        "Vary": "Accept-Encoding",
        "Request-Context": "appId=cid-v1:0c37065f-4d7c-47cf-af06-96cd9a75949d",
        "x-ms-function-status": "OK",
        "Date": "Mon, 10 Apr 2023 15:19:58 GMT",
        "Content-Type": "application/json; charset=utf-8",
        "Content-Length": "26"
    },
    "body": {
        "message": "Hello World"
    }
}


And no users get imported. Why? What am I doing wrong?

How do you have the call built in PA? Manual HTTP request? Custom connector? Something else?



I figured this out late yesterday afternoon. I had to run the input through a Parse JSON step and then use an Apply to each on the parsed output, mapping each field to the corresponding field built into the Docebo custom connector. The test roster file has gone in with no problem twice now. The next step is adding our actual user additional fields to the Docebo connector in this flow. I also need to add steps to the earlier sequential flow to split Excel files into two (or more) CSVs so each can be passed through this individually, since some of our user management comes in that way.
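
For anyone following along, a Parse JSON schema for a roster like the example above would look roughly like this (adjust the field names to whatever your roster columns and connector fields actually are):

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "Username": { "type": "string" },
            "Firstname": { "type": "string" },
            "Lastname": { "type": "string" },
            "Email": { "type": "string" },
            "Password": { "type": "string" },
            "Language": { "type": "string" },
            "Branch Name": { "type": "string" },
            "Level": { "type": "string" },
            "Timezone": { "type": "string" }
        }
    }
}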


Ah, yeah, building the arrays can become a bit of an art in both systems. 

Are you parsing the CSVs into JSON directly? If so, I usually do the splitting during that step, either by filtering for the values I want into different sets, or by just breaking the JSON array apart by line numbers.

Also, if you are parsing the CSV, what process are you using? One of the premium, paid-for connectors or something else? I’ve come up with a good process that takes a small setup for each product but behaves well in bulk (no looping required) and uses all built-in components, with no extra cost and no reliance on more services that can fail.
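
For example (just a sketch; ‘Parse_JSON’ is a placeholder action name), a parsed array can be split in half without any loop using take/skip expressions:

    take(body('Parse_JSON'), div(length(body('Parse_JSON')), 2))
    skip(body('Parse_JSON'), div(length(body('Parse_JSON')), 2))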


I’m using a free version of one of the CSV-to-JSON converters, but I might do something custom if we ever hit 100 requests in a month (very unlikely, as we typically get about 10 files like this per month). I would be interested to hear your process though, in case we do go over.


Yeah, I started there and quickly ran into paywalls on quantity, plus service issues, so I moved to ways of processing it myself, which is much quicker too. The other thing to keep in mind is that you are sending your data to another third-party service, and you aren’t paying for that service… that’s usually a recipe for data privacy issues, so just be careful.


So to do it, I keep a templated ‘starter flow’, but I have also made one dedicated to this process, so I can send files to it and get the output back from different flows (basically what you do with the connector services, except your data stays with you the entire time). There are pros and cons to each approach.

The main setup looks like this:

[screenshot of the overall flow layout]

The trigger in this case is a SharePoint file being created. Usually I have someone dropping their CSV into a folder on a SharePoint site. This can also be based on an email attachment, an FTP site, triggering the flow manually off a SharePoint/OneDrive file, etc.

The ‘Scope Run’ just keeps all the actions together and, if anything in the block fails, passes control to the failure block for easy notification/retry/triage, etc.
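
If you peek at the exported flow definition, the failure block is just a second scope whose ‘Configure run after’ is set to fire when ‘Scope Run’ fails; roughly like the fragment below (names are placeholders):

    "Scope_On_Failure": {
        "type": "Scope",
        "runAfter": {
            "Scope_Run": [ "Failed", "TimedOut" ]
        },
        "actions": {}
    }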

  1. The first block is a ‘Compose’ element. This is a trick: I essentially need a line break, but the traditional methods for getting one don’t work in PA, so I use a Compose block and just hit the Enter key once inside it (mine is named ‘Newline’).
  2. The next block is a ‘Filter Array’ element. In the ‘From’ field, put the following formula:
    split(replace(replace(triggerOutputs()?['body'],'"',''), ',', ';'),outputs('Newline'))

    This does a few replacements on the body of the file. Note that in this case that means the body of the trigger; depending on your trigger, you might need to adjust the triggerOutputs()?['body'] part. Also, if you named the Compose element from the previous step something else, put that name in outputs('Newline') instead of Newline.

    In the left-hand box, put the following expression:

    item()

    Set the middle dropdown to ‘is not equal to’ and leave the right box blank. This will filter out blank rows.
    Your filter block should look like this:

    [screenshot of the Filter Array block]
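
    To make that concrete, here is a rough sketch (with made-up sample data) of what the expression produces. A file body like:

        "Username","Firstname","Email"
        "mary.smith","Mary","mary.smith@example.com"
        "patricia.johnson","Patricia","patricia.johnson@example.com"

    becomes an array of semicolon-delimited lines (quotes stripped, commas swapped for semicolons, split on the newline from the Compose step), and the filter then drops any blank entries, such as a trailing empty line:

        [
            "Username;Firstname;Email",
            "mary.smith;Mary;mary.smith@example.com",
            "patricia.johnson;Patricia;patricia.johnson@example.com"
        ]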

  3. Add a ‘Select’ element next to do the JSON conversion. In the ‘From’ field, put:
skip(body('Filter_array_-_Convert_to_Usable_Form'), 1)

This takes the output of the previous step (so make sure the action name matches) and skips the first row, assuming your file has a header row; if it does not, you can drop the skip entirely and just use the body of the previous step.
Now build your JSON table. On the left side, enter the headings you want to use; they do not need to match the CSV headers (which is great for avoiding inaccuracies in user submissions). On the right side, put the following in each row:

trim(split(item(),';')[0])

The ‘0’ there represents the first column of your CSV; change it to correspond to whichever column you need, but remember it’s 0-indexed, so column A is 0, B is 1, C is 2, D is 3, etc. You do not have to follow the order of the columns in your CSV; build the table in whatever order works for you. You should end up with something similar to the below:
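
A rough sketch of the mapping, using the roster fields from earlier in the thread and assuming that column order in the CSV:

    Username   ->  trim(split(item(),';')[0])
    Firstname  ->  trim(split(item(),';')[1])
    Lastname   ->  trim(split(item(),';')[2])
    Email      ->  trim(split(item(),';')[3])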

  4. Insert a ‘Parse JSON’ element. In the ‘Content’ box, put the output from the previous step. For the schema, you can either run the flow once and use an example of the output to generate it, or build one quickly using the following format:
    {
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "Example1": {
                    "type": "string"
                },
                "Example2": {
                    "type": "string"
                }
            },
            "required": [
                "Example1",
                "Example2"
            ]
        }
    }
    I just add the different fields and types from the table to the properties list. Adding them to the required list is up to you; you can get away without it if you want. Example1 and Example2 correspond to the names from the Select table in the previous step, so make sure yours match too.

  5. Use the data however you want. The names you put in the mapping will now be available in the dynamic content area. Use them to filter your data, perform actions, anything.
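
For example (the loop name here is a placeholder), inside an ‘Apply to each’ running over the Parse JSON output, an individual field can be referenced with an expression like:

    items('Apply_to_each')?['Username']

or simply picked from the dynamic content list, which is essentially the mapping into the Docebo connector described earlier in the thread.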


Once you have built it once, it is very easy to copy into other flows, or to build it as a single flow that accepts and returns data for all your flows. Lots of advantages:

  • No loops, so no performance issues/concerns with volume of data in the file
  • No sending data to other services.
  • No reliance on other services that can break or have downtime that you would also need to track.
  • No premium elements.
  • Full tracking and control of the process.

I exported the example flow I was using and added it to the post below as well.

