How Test Data Allocation Works as Part of Your Automation


Integrating test data allocation into your automation framework enables you to specify the test data you need for your automated tests. This has two core benefits:

  1. Test data specifications can be defined for a vast array of data sources and can then be easily imported into your automation framework to facilitate the creation and/or resolution of data just-in-time, before the automation executes.

  2. Test data can be allocated exclusively to specific test cases. This avoids interlock, where test cases interfere with the test data being used by other tests. Unique test data allocation ensures a data row is consumed only by the test case it is defined for. This makes automation far more robust: the test data you require is always available, avoiding brittle tests.

How it Works

Test Data Allocation works in three main phases:

Phase 1 – The test data required must be specified and exposed as a test data criterion within the data catalogue (covered in the previous sections).

Phase 2 – The automation framework must call the data catalogue API to execute the data allocations. This phase finds and makes the right data, then assigns it to each test instance.

Phase 3 – Each test retrieves its allocated data values, which can then be used within the automation framework to perform the required actions across user interfaces, APIs, mainframes, ETL environments, and much more. This is built to plug directly into an automation framework, independent of the language or type of automation being executed.
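The three phases can be illustrated with a minimal in-memory sketch. This is purely hypothetical stand-in code (the catalogue entries, helper names, and `CUST-000n` values are invented for illustration); the real flow calls the data catalogue API described in the following sections.

```python
# Hypothetical sketch of the three allocation phases; not the real API.

CATALOGUE = {  # Phase 1: data criteria exposed in the data catalogue
    "CreateCustomer": "a customer with no existing orders",
}

def run_allocations(test_names):
    # Phase 2: the framework triggers allocations; each test instance
    # is assigned its own unique data row (stubbed here as CUST-000n).
    return {name: {"CustomerId": f"CUST-{i:04d}"}
            for i, name in enumerate(test_names, start=1)}

def get_allocated_row(results, test_name):
    # Phase 3: each test retrieves only the values allocated to it.
    return results[test_name]

results = run_allocations(["CreateCustomer"])
row = get_allocated_row(results, "CreateCustomer")
print(row)  # {'CustomerId': 'CUST-0001'}
```

Because each row is tied to a single test name, no two tests can consume the same data, which is what makes the allocated runs repeatable.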

The Test Data Allocation API

The test data allocation API can be integrated with any tool or framework. The test data catalogue API exposes endpoints for all the Test Data Manager capabilities within the data catalogue user interface. For test data allocation there are two core endpoints you will require:

  1. Specifying the tests to run the associated allocations (finds and makes) across.

  2. Retrieving the results of an allocation.

Authentication within the test data catalogue API is achieved using an API Key. To get started, you will therefore first need to create an API Key within the Test Modeller workspace you want to connect to and interoperate with.

To do this, navigate to the profile tab in the left side menu and view the details. This will show the API Key and API URL; take note of both. The API Key is unique to your account, and the API URL is the endpoint you need to connect to.

The API URL combined with the API Key gives you the ability to connect to and consume the data catalogues without any further authentication. If the key ever becomes compromised, you can revoke and refresh the key associated with your account from the page you used to create it.

You can review the swagger API documentation by adding the following URL to the API endpoint:


For the cloud portal this can be accessed using the following URL:

We advise you review the API documentation available on the instance you will be connecting to, since this contains the documentation appropriate to your API's version and capabilities.

There are two steps to follow in order to perform allocations.

Execute Test Allocations

Firstly, you will need to create an allocation job on your automation server. This will call the appropriate finds and makes, and allocate the results within the specified allocation pools.

You can view our interactive documentation for this endpoint within our API documentation specified above.

The allocation endpoint takes three parameters:

  • {apiKey} – The API Key for connecting to your selected workspace.

  • {poolname} – The data pool name to perform the allocation in.

  • {servername} – The server to use for performing the data allocation.

POST – /api/apikey/{apiKey}/allocation-pool/{poolname}/resolve/server/{servername}/execute

The endpoint takes the JSON body below, which specifies the executions to perform resolutions against. This is a list of the allocation test names, pool names, and suite names to use for the resolution:

[
  {
    "allocationTestName": "string",
    "poolName": "string",
    "suiteName": "string"
  }
]
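As a sketch of how a framework might call this endpoint, the snippet below builds the POST request using only the Python standard library. The base URL, API key, pool, server, and test names are hypothetical placeholders; substitute the values from your own workspace.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own API URL, Key, and names.
API_URL = "https://your-portal.example.com"
API_KEY = "your-api-key"

def build_allocation_request(api_url, api_key, pool_name, server_name, allocations):
    """Build the POST request that triggers an allocation job."""
    endpoint = (f"{api_url}/api/apikey/{api_key}/allocation-pool/"
                f"{pool_name}/resolve/server/{server_name}/execute")
    body = json.dumps(allocations).encode("utf-8")
    return urllib.request.Request(endpoint, data=body,
                                  headers={"Content-Type": "application/json"},
                                  method="POST")

# One entry per allocation test to resolve in this job.
allocations = [{
    "allocationTestName": "CreateCustomer",
    "poolName": "MyDataPool",
    "suiteName": "RegressionSuite",
}]
req = build_allocation_request(API_URL, API_KEY, "MyDataPool",
                               "MyServer", allocations)
# urllib.request.urlopen(req) would submit the job to the server.
```

Note that the body is a list, so several allocation tests can be bundled into a single job, which is the recommended pattern discussed at the end of this section.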

Retrieve Allocation Results

Once the allocation has completed execution successfully the allocation tests within each data pool will have been assigned the appropriate test data. Now, you can query the API to retrieve these values and use them within your own framework or toolset.

You can view our interactive documentation for this endpoint within our API documentation specified above.

The results endpoint takes four parameters:

  • {apiKey} – The API Key for connecting to your selected workspace.

  • {pool_name} – The data pool to retrieve results from.

  • {suite_name} – The test suite to retrieve the results for.

  • {test_name} – The test name to retrieve the results for.

GET – /api/apikey/{apiKey}/allocation-pool/{pool_name}/suite/{suite_name}/allocated-test/{test_name}/result/value

This endpoint returns the following body of allocated results for the test case. This is a hash-based list of the allocation values, where 'additionalProp1', 'additionalProp2', and 'additionalProp3' correspond to the names of the output columns for the allocations which have been executed:

{
  "additionalProp1": "string",
  "additionalProp2": "string",
  "additionalProp3": "string"
}
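A retrieval call can be sketched the same way. The snippet below builds the GET URL and shows how the returned JSON object might be consumed; the base URL, API key, pool, suite, test, and column names are hypothetical placeholders.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own API URL and Key.
API_URL = "https://your-portal.example.com"
API_KEY = "your-api-key"

def build_result_url(api_url, api_key, pool_name, suite_name, test_name):
    """Build the GET URL for one test's allocated result values."""
    return (f"{api_url}/api/apikey/{api_key}/allocation-pool/{pool_name}"
            f"/suite/{suite_name}/allocated-test/{test_name}/result/value")

url = build_result_url(API_URL, API_KEY, "MyDataPool",
                       "RegressionSuite", "CreateCustomer")
# data = json.load(urllib.request.urlopen(url)) would fetch the values.

# The response maps output column names to allocated values, e.g.:
data = {"FirstName": "Ada", "CustomerId": "CUST-0001"}  # illustrative shape
first_name = data["FirstName"]  # feed into a UI, API, or ETL step
```

Each test in the automation run would call this once with its own test name, so it only ever reads the row allocated to it.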

When using the data allocation API, we advise bundling all the allocations to be executed into one job that runs as a pre-processing activity. This is (a) far more efficient, since the appropriate engines only need to be spun up once, and (b) necessary because the allocation instance only ensures unique allocated values are valid within the execution session. It is also worth noting that once an allocation has been executed, the results persist as cached values within the data allocation API. You may therefore choose to perform allocation only once (within the portal's user interface) and then retrieve the same results thereafter.