In one of my other blog articles I talked about queue times and how they can be a good measure to detect whether you have too many or too few build agents.
I recently got a question from one of you asking how these are actually measured.

The API calls below are for TFS 2017 and newer, but they work in the same way for Azure DevOps. Simply remove the server and collection reference.
The calls also work for TFS 2015, but in 2015 pools sit at the server level while queues sit at the collection level. In TFS 2017 and newer, both concepts moved one level down, to the collection and project level respectively.
These API calls do not work for the XAML build system.

All requests are GET requests.

You will need to get a list of pools for a given collection first.
https://myTFS:8080/tfs/MyCollection/_apis/distributedtask/pools

This returns a JSON response in the following format.
{"count":NUMBER_OF_POOLS,"value":[{POOL_1},{POOL_2}]}

The pool objects contain information about when the pool was created, under what account, etc. They also contain an id number that you will need for the next step.
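If you prefer to script these calls, here is a minimal Python sketch using the requests library. It assumes you authenticate with a personal access token; the base URL, the token and the printed properties are placeholders you should adapt to your own server.

import requests

BASE = "https://myTFS:8080/tfs/MyCollection"   # your server and collection
PAT = "your-personal-access-token"             # a PAT is assumed; NTLM also works with the right auth handler

session = requests.Session()
session.auth = ("", PAT)   # with basic auth the PAT goes in the password field, the user name stays empty

# List the pools in the collection and print the id you need for the next calls.
# "id" is the documented field above; "name" is an assumption on my side.
pools = session.get(f"{BASE}/_apis/distributedtask/pools").json()
for pool in pools["value"]:
    print(pool["id"], pool["name"])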

If you want to get a view of the agents currently joined to the pool, you would run the following command next:
https://myTFS:8080/tfs/MyCollection/_apis/distributedtask/pools/{PoolNumber}

This returns another JSON array, this time of the agents contained in the pool.
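Continuing the same sketch, you can fetch that agent view with the session from above. The pool id comes from the previous call, and I am assuming the response uses the same count/value envelope as the pools call.

pool_id = 1   # substitute one of the pool ids returned by the pools call
agents = session.get(f"{BASE}/_apis/distributedtask/pools/{pool_id}").json()
# Assuming the same count/value envelope as the pools call; print each agent entry.
for agent in agents.get("value", []):
    print(agent)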

For each agent, you can get its recent requests by running this command.
https://myTFS:8080/tfs/MyCollection/_apis/distributedtask/pools/{PoolNumber}/jobrequests?agentId={AgentId}

But if you are just interested in seeing all job requests for a particular pool, you can run this as soon as you have the pool number.
https://myTFS:8080/tfs/MyCollection/_apis/distributedtask/pools/{PoolNumber}/jobrequests
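In the same sketch, pulling the job requests for a pool looks like this; the agentId query string parameter from the previous call is optional. This is a rough illustration, not a complete client with paging or error handling.

# All job requests for the pool; append "?agentId=<id>" to restrict to one agent.
response = session.get(f"{BASE}/_apis/distributedtask/pools/{pool_id}/jobrequests").json()
# Assuming the same count/value envelope as the earlier calls.
job_requests = response.get("value", [])
print(len(job_requests), "job requests returned")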

Whether you ran your request against a particular agent or the entire pool, the resulting JSON will be an array of requests.
Each of them contains a few very useful metrics.

{
  "requestId": UNIQUE_ID_OF_THE_REQUEST,
  "queueTime": TIME_QUEUED,
  "assignTime": TIME_ASSIGNED_TO_AGENT,
  "receiveTime": TIME_RECEIVED_BY_AGENT,
  "finishTime": TIME_FINISHED,
  "result": END_RESULT,
  {MORE_INFO_ABOUT_THE_JOB}
}

Comparing the different timestamps allows you to calculate the average queue time for a particular pool.
If you like, you can even hook the API requests up to Power BI and create graphs for each pool to compare both the average and top queue times.
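If you would rather calculate it yourself than in Power BI, the sketch below computes the average and maximum wait between queueTime and assignTime for the job requests fetched earlier. It assumes the timestamps are the usual ISO 8601 strings; adjust the parsing if your server returns something else.

from datetime import datetime

def parse_time(value):
    # Timestamps typically look like "2019-03-01T12:34:56.789Z"; trim to whole seconds.
    return datetime.strptime(value[:19], "%Y-%m-%dT%H:%M:%S")

waits = []
for request in job_requests:
    if "queueTime" in request and "assignTime" in request:
        queued = parse_time(request["queueTime"])
        assigned = parse_time(request["assignTime"])
        waits.append((assigned - queued).total_seconds())

if waits:
    print("average queue time (s):", sum(waits) / len(waits))
    print("maximum queue time (s):", max(waits))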

The additional information in the job request JSON enables you to differentiate between builds and releases or get the average queue times per person/process.
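As an example, you could bucket the requests by type of work. The planType property used below is an assumption on my side; inspect one of the job request objects coming back from your own server to confirm which properties it carries.

from collections import Counter

# Count job requests per plan type (for example Build versus Release).
# "planType" is an assumed property name, hence the .get() with a fallback.
by_type = Counter(request.get("planType", "unknown") for request in job_requests)
print(by_type)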

Returning to the initial thought of capacity planning, I personally like to look at the relation between “average queue time” and “maximum queue time”.
If both are high, then that means that almost all builds/releases are waiting before they get processed. In this case you may want to consider adding more agents.
If both are low, then you may have too many agents.
If only the maximum or average is high, then there could be a particular build or release that makes everyone else wait.
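Written out as code, that rule of thumb could look roughly like this; the thresholds are made-up numbers that you would tune for your own environment.

AVG_THRESHOLD = 120   # seconds, illustrative only
MAX_THRESHOLD = 600   # seconds, illustrative only

if waits:
    avg_wait = sum(waits) / len(waits)
    max_wait = max(waits)
    if avg_wait > AVG_THRESHOLD and max_wait > MAX_THRESHOLD:
        print("Most requests are waiting - consider adding agents.")
    elif avg_wait <= AVG_THRESHOLD and max_wait <= MAX_THRESHOLD:
        print("Queue times are low - you may have more agents than you need.")
    else:
        print("A few long waits - look for a specific build or release that blocks the rest.")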

The information in the requests lets you compare different pipelines and see which products in your organisation/collection use builds and releases the most and which use them the least.