QuickStart API Automation Tutorials:
Watch the 3 tutorials to guide you in using QuickStart API Automation.

Objectives: By the end you’ll be able to:

  • Create and/or edit an Automation model

  • Navigate the Test Modeller UI

  • Understand the directory structure in Test Modeller

  • Name and Configure a Project

  • Test an API Request prior to using it as part of a model

  • Modify an API Automation Login action or function type through the Parameters tab

  • Ensure an API Token is returned, verifying the API Request is working as expected

  • Return a specific Status Code, e.g. 200 or 400, added under the Assertions tab

  • Validate an API Request before building it into a model

  • Add and execute the credentials from the example open API website

  • Pull out not only successful test cases but also edge cases whilst modelling

  • Update specific Test Data

  • Run and see test results

We also have QuickStart Automation for Mobile, Web and Mainframe.

1 Intro and Create a Project (1min 26secs)

2 View and Create Models (2min 26secs)

3 Create a Model & Run Tests (3min 16secs)


1 Intro and Create a Project

This clip, 1 of 3 on API Automation, introduces the basics of Test Modeller’s default API Request modules: how to test them, then build them into a model ahead of generating and running tests through Test Modeller’s API Automation QuickStart framework. Specifically, this clip focuses on how to create a project through the Test Modeller interface.

Watch the tutorial (1min 26secs) | QuickStart API Automation | Get Started

2 View and Create Models

Now, with a project set up, this clip, 2 of 3 on API Automation, shows how to test an API Request before using it as part of a model. This is demonstrated using an example API Automation Login action or function type, which you’ll modify through the Parameters tab as part of a broader Module Collection.

Watch the tutorial (2min 26secs) | QuickStart API Automation | View and Create Models
Read more

For the Login function type, Parameters are taken from an example open API website: email, password and responseCode. Additionally, the site’s URL is input as a POST request under the API Request tab, through which Values can be further parameterised under the Body tab. Finally, to return a specific Status Code, e.g. 200 or 400, this is added under the Assertions tab.

The next step is to ensure an API Token is returned, verifying the API Request is working as expected. To do this, click Run in the Edit Function popup; then, in the Name and Value popup, add the credentials described in the example open API website and click Execute.
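Outside Test Modeller, the request and assertion described above amount to a POST of the credentials followed by a status-code check. A minimal Python sketch, assuming hypothetical field names and a hypothetical /login path (substitute the example site’s actual URL and values):

```python
# A minimal sketch (not Test Modeller's own code) of the Login API Request:
# the Body tab values become a JSON POST body, and the Assertions tab becomes
# a status-code comparison. The URL and field names here are assumptions.
import json
import urllib.request

def build_login_request(base_url, email, password):
    """Build the POST request that the Body tab values describe."""
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        f"{base_url}/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def status_assertion(actual_status, expected_status):
    """Mirror the Assertions tab: compare the returned Status Code."""
    return actual_status == expected_status

req = build_login_request("https://example.test", "user@example.test", "secret")
```

Sending `req` (e.g. via `urllib.request.urlopen`) against the live site would return the API Token on success, which is what clicking Run and Execute verifies in the UI.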

This API Request is now validated and can be built into models for the purpose of testing the API Request more rigorously.

Generally, API Request functions can be created in Test Modeller in a number of ways: manually, by importing Swagger specifications, or via tools like Postman and Fortress, among others.


3 Create a Model & Run Tests

Having verified that the API Request returns as expected, this clip, 3 of 3 on API Automation, shows how to build the API Request into a model to pull out not only the successful test cases but also the edge case, update specific Test Data, and then run and view the test results.

Watch the tutorial (3min 16secs) | QuickStart API Automation | Create a Model and Run Tests
Read more

To set up a model, open the folder called Scenarios in the Explorer and click New Model. Once the model is named, the canvas presents a start and end block; click the start node, locate the Login API Request, and it expands onto the canvas.

There are two scenarios to test: a 200 success response and a 400 error response. You’ll see how these are set up using the end points and connected to the Positive Login and Negative Login Waypoints accordingly.

With the blocks connected to Email and Password, you can now pass in specific Test Data from the original website you are testing against. To do so, click on each block, then copy the data from the example open API site into the Variables field in the Test Data pane in Test Modeller.

Before running the automation, set up and generate the tests, which will appear in the Scenarios pane. In this example you’ll see 3 paths, each an individual test case used to test the API Request. Now click Run and select Automation Code from the wizard; you’ll see that 3 tests are executed.
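The generated paths behave like parameterised test cases: each pairs its Test Data with the Status Code asserted in the model. As a rough sketch, assuming the 200 and 400 codes above, the two Login Waypoints could be expressed as:

```python
# Hypothetical stand-in for the generated automation code -- the scenario
# names, credentials and stub transport are illustrative assumptions, not
# Test Modeller output.
LOGIN_SCENARIOS = [
    # (waypoint, email, password, expected_status)
    ("Positive Login", "user@example.test", "correct-password", 200),
    ("Negative Login", "user@example.test", "wrong-password", 400),
]

def run_scenario(send_login, email, password, expected_status):
    """Send credentials via the supplied callable and check the Status Code."""
    return send_login(email, password) == expected_status

def stub_send(email, password):
    """Stub transport standing in for the real POST to the login endpoint."""
    return 200 if password == "correct-password" else 400

results = [run_scenario(stub_send, e, p, s) for _, e, p, s in LOGIN_SCENARIOS]
```

Swapping `stub_send` for a real HTTP call would give each path the same pass/fail behaviour you see when the 3 generated tests execute.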

Finally, to view the automation results in isolation, click the cog icon next to the path concerned in the Scenarios pane and choose Results from the context menu.


You’ve been following
our QuickStart API Automation Tutorials.

Browse more Learning Portal content