QuickStart Mainframe Automation Tutorials:
Watch the overview and four tutorials below to guide you in using QuickStart Mainframe Automation.
Objectives: By the end you’ll be able to get started, model out combinations, generate and run tests, and publish a test suite.
Mainframe Automation Overview (4min 01secs)
1 Get Started (3min 11secs)
2 Model Out Combinations (3min 34secs)
3 Generate & Run Tests (5min 40secs)
4 Publish a Test Suite (5min 32secs)
Mainframe Automation Overview
1 Get Started in Mainframe Automation
Our Accelerators make setting up for Mainframe Automation incredibly quick. Initially we need a Project, and for simplicity we’ll connect to our mainframe through the Project Configuration, which sets up Assets that we can use a little later on. For the Connector it’s simply Add New Connection.
Use the Mainframe Connector type: in the example we’re using Astrom with the specific Port. Once set up, we get a folder of models, Components and Actions to quickly Run Mainframe Automation without delay. The Components show Actions and Commands that we can Run against the Mainframe, capturing screenshots and making different Assertions, all ready for immediate use.
As this is my first Mainframe Automation, it's a fairly linear model: open up a connection, where those host details set up earlier appear. I've added automated screenshot capture, and we're good to go.
To get going, let’s Generate just one Test Case, one path through this straightforward model, and Run it. Choose Automation Code and Execute. Essentially this picks up Code Templates that sit in the background of Test Modeller to automatically Generate and Run Tests, and as expected with such a straightforward Test it has zero failures.
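To make the idea of generated automation code concrete, here is a minimal sketch of what one path through a linear model conceptually does: connect, capture a screenshot, assert on screen text. The `TerminalSession` class, host name and screen text are stand-ins invented for illustration; the real generated code depends on your configured Code Templates and terminal-emulation library.

```python
class TerminalSession:
    """Minimal stub simulating a mainframe terminal session (illustration only)."""

    def __init__(self, host, port):
        self.host, self.port = host, port
        self.screen = "WELCOME TO THE MAINFRAME"  # canned first screen for the sketch
        self.screenshots = []

    def connect(self):
        # A real session would open a TN3270 connection to host:port here.
        return True

    def capture_screenshot(self):
        # Record the current screen contents, as the model's screenshot step does.
        self.screenshots.append(self.screen)

    def assert_screen_contains(self, text):
        assert text in self.screen, f"'{text}' not found on screen"


def run_linear_test():
    """One path through the straightforward model: connect, capture, assert."""
    session = TerminalSession("mainframe.example.com", 23)  # placeholder host/port
    session.connect()
    session.capture_screenshot()
    session.assert_screen_contains("WELCOME")
    return "passed"


print(run_linear_test())
```

With a single path and a passing assertion, this mirrors the "one Test Case, zero failures" result described above.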
The results can be downloaded as a zip, or synced into a Git repository. Back in Test Modeller there’s a Test Summary on which clicking a Run ID reveals the Steps and additional Log information. From this straightforward start, next we’ll consider combinations to see how Test Modeller can really accelerate Automated Testing.
2 Model Out Combinations
From a straightforward linear model, let’s now use different Components in Test Modeller to vary some details. On the left-hand side of the screen we've got various Tasks and Conditions, ready for applying different Logic to this particular model. Use a Condition node to add a User Login, again capturing a screenshot, and also a Password, before updating the Parameters.
Previously we had just one path, or one Test Case; now with two we'll Run this Automation Code, again picking up the Code Template in the background, which looks at specific Data Variables added to the demo.
In our Run Results, looking at our demo application, we'll see that we've actually logged into it as hoped. I'd recommend you go through and make some modifications to the Logic to play with it. Next we’ll test that Login process in more detail to quickly make the testing even better, and also further explore the UI.
3 Generate & Run Tests
Previously we executed some simple Mainframe Automation, navigating through the UI; now let's add further Logic. At the end of our process we aim to achieve a Successful Login, though between the two states of pass and fail we use Conditions to test, for instance, for a Valid User, Valid Password or similar combinations, just as a start.
A point to note here: the same results can always be modelled differently, along with different Scenarios or testing combinations, for instance Invalid User and Valid Password, or Valid User and Invalid Password, and so on.
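The credential combinations above can be enumerated exhaustively; the sketch below does this with `itertools.product`, using illustrative class names (`valid_user`, `invalid_password`, etc.) rather than anything defined in the tool. Only the all-valid pair should reach a Successful Login; every other path is a negative test.

```python
from itertools import product

# Equivalence classes for each parameter of the Login process.
users = ["valid_user", "invalid_user"]
passwords = ["valid_password", "invalid_password"]

# All combinations of the two parameters: 2 x 2 = 4 paths.
combinations = list(product(users, passwords))

for user, pwd in combinations:
    expected = "pass" if (user, pwd) == ("valid_user", "valid_password") else "fail"
    print(f"{user} + {pwd} -> expect {expected}")

# Everything except the all-valid pair is a negative test.
negative = [c for c in combinations if c != ("valid_user", "valid_password")]
```

This is why adding invalid classes multiplies paths quickly: two parameters with two classes each already yield four scenarios, three of them negative.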
In this example we’ll add in a fake user, but also note that Valid Users potentially may have Invalid Passwords. So let’s put this in as a fake class, knowing that the end point here is not necessarily going to be a failure. To highlight this possibility I'm just going to leave the default node name of End. Additionally we can set the Data Type for this as Invalid, and also for the Username, to essentially say that this is not correct.
Let’s Re-Generate the Tests, giving us five paths including three that are negative. Now go and Run this particular Automation Code and preview the Logs as they process. Again we could commit to a Git repository.
Coming to the Test section, it shows the tests have just run; let's locate the one for the fake user, i.e. a fake user with a Valid Password. When we try to enter our fake credentials, this is actually as far as the path goes before hitting a fail. So we're happy that that's been inserted, and this test has passed correctly because we haven't been able to actually log in.
Where in this tutorial we’ve looked at the various Mainframe screens, next we’ll overlay some Logic onto one particular process. We'll also highlight different ways to build out models using different screen clicks, tabs, enters and screenshots that are slightly more complex.
4 Publish a Test Suite
The focus here is on Regression packs, and on changing some of these tests related to a Login process that allows us to test Valid and Invalid combinations. Here we’ve built out a few simple models to demo functionality within our Mainframe, including a messaging system that allows the review of internal mails, emails and the like. This requires a profile screen, and in here we’re adding text Assertions to check access to the screen and see what results get returned.
We've also got the ability to just test our Mainframe Connection and make sure that it opens every time. So again, we're just checking some text that's present and its intersection. But rather than coming in and running these all individually, we're going to build a model which I'm going to call My Mainframe end-to-end.
Then create that model using Subflows to essentially chain all of our different tests together, in whichever sequence. Browsing to the Examples folder, I’ll search for Full Login, then Profile (which opens to check our connection), and then the final piece, which is going to be our messaging system. These tie up, and require the Login to have already happened before the Successful Login result can be executed.
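Chaining Subflows like this is essentially running dependent steps in sequence against shared session state. The sketch below illustrates the idea with three plain functions standing in for the Full Login, Profile and messaging Subflows; the names and the `state` dictionary are illustrative, not part of Test Modeller's generated code.

```python
# Each function stands in for one Subflow in the end-to-end model.

def full_login(state):
    # The login Subflow establishes the session the later steps depend on.
    state["logged_in"] = True
    return "Full Login: ok"

def profile_check(state):
    # Opening the profile screen also verifies the connection is alive.
    assert state.get("logged_in"), "Profile requires a prior login"
    return "Profile: ok"

def messaging_system(state):
    # The messaging Subflow likewise requires the login to have happened first.
    assert state.get("logged_in"), "Messaging requires a prior login"
    return "Messaging: ok"

def run_end_to_end():
    """Run the three Subflows in order, sharing session state between them."""
    state = {}
    return [step(state) for step in (full_login, profile_check, messaging_system)]

print(run_end_to_end())
```

Reordering the steps so that a dependent Subflow runs before the login would fail its precondition, which is exactly why the sequence of Subflows in the model matters.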
The MMS (see tutorial) is the only Subflow which has further functionality in my Mainframe that I might want to model out, so I can go and view different screenshots. But for the moment we're going to Generate our tests, of which in this case there should be just three, running through separate Scenarios.
What I actually want, when I come in and click Run, is for my Automation Code to Publish a Regression Suite. So, name this Regression, then Save and Execute to Run the Automation and have it test as pass or fail.
We can also store the Regression pack so that we can Run it any time from within Test Modeller, through the Dashboard itself. So rather than having to come into models and Execute them individually, you can save them as part of a broader Regression strategy, or simply trigger individual models that you want to go and Execute at any time.
Under Test Modeller’s Test section we can see what we are running, for instance our Mainframe end-to-end, showing that we're into different screens and checking the intersection, and we see all passed. There’s also a Test Plan for my Mainframe Regression; clicking on that lists the details of the tests we're running, including Results that we've already Executed. But of course we can just come in and Run these directly.
So I'll kick off a Job and Run those particular tests. That's just an API call that can be embedded into a CI/CD Pipeline in whatever Framework you want. In practice, though, it's really just about simplifying things: a Model-based solution is able to wrap up lots of Mainframe tests quickly and efficiently.
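As a rough illustration of embedding that API call in a pipeline, here is a generic CI step that triggers the saved Regression job over HTTP. The URL path, job ID and token variable below are placeholders invented for this sketch, not the real Test Modeller endpoint; consult your instance's API documentation for the actual call.

```yaml
# Illustrative CI job (GitLab-CI-style syntax); endpoint and token are placeholders.
run-mainframe-regression:
  stage: test
  script:
    - >
      curl -X POST
      "https://your-test-modeller-instance/api/jobs/<JOB_ID>/run"
      -H "Authorization: Bearer $MODELLER_API_TOKEN"
```

Because the trigger is just an HTTP request, the same step translates directly to Jenkins, GitHub Actions or any other pipeline tool.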
If you want to find out more about generating test cases, or how to implement different test case coverage techniques, see the guide How to Generate Test Cases in Test Modeller.