June 2024
Release Notes Summary | Release Notes |
---|---|
List now supports the function AddRandomIfEmpty, which allows random objects to be added to a list | List now supports the function AddRandomIfEmpty, which allows randomly generated objects to be added to a list. It can be used like below: lstExistingModelName.AddRandomIfEmpty<>(10, New Namespace.ModelName()) |
Added the ability in Datagen to convert a data generation to clone data across environments and within databases | A Datagen can now be converted to clone data across environments and within databases. Cloning data allows sets of interesting, rare, or bug-causing data to be easily moved from one database/schema/environment to another. The data can also be manipulated to mask any sensitive data, or amended to make it more useful in the target schema - for example, ageing the data to be current, or adding in additional transactional data required for specific tests. Simply create the rule set with the tables you need and then convert the Datagen to a clone. You can also create multiple copies of the same data, allowing rare data to be easily multiplied. All internal keys and business relationship values are created with new values. |
In data generation and Subset rule sets, the entity diagram button now invokes the diagram for active tables | Click on the diagram button on the rule set version and the diagram for the selected tables will be displayed. |
When using ResolveCSV in the masterfile controller, there is now the ability to invoke a find and a make together | ResolveCSV runs through a CSV created by prior steps (for example, a message created using a template) and resolves any VIP functions or Excel functions starting with an =. ResolveCSV can also make a call out to test criteria, pass in parameters, and perform any test criteria - for example, make some data OR look up data and allocate it. For example: {{parTestID=395,parPoolID=116,col=3,parWhere=SEX = '{SEX}'}} will issue a call to the test data allocation for test ID 395, using pool ID 116, with a where clause that includes another column in the CSV containing SEX. It will return the third column in the OUT variables. If you include the expression ,parTestName={ModelTestName} and include a column ModelTestName in the CSV with a unique resolvable expression, for example {="TEST" & SEQNEXTVALUE("ORDER_ID", 105)}, you will get a unique find if you wish to create multiple rows of data, for example create 10 messages. You can link the find to a Make when no data is found by including a parMakeTestID and pointing it to a Test Criteria that makes data, for example: {{parTestID=395,parMakeTestID=397,parPoolID=116,col=3,parWhere=SEX = '{SEX}',parListCol1={ADDNINO},parListCol2={ADDNINO},parListCol3={ADDFIRSTNAME},parListCol4={ADDLASTNAME},parListCol5={SEX},parTestName={ModelTestName}}}. Any variables with the same name - in this case SEX - will be used for the make as well as the find, so if a female could not be found, one will be made in this case (a conceptual sketch of this find-then-make fallback appears after this table). The shipped flow C:\VIPTDM\ImportCSVIntoAList\ImportCSVIntoAList.vip now has a new parMode DIRECTADD which will add a new row to a list. You can add columns using parListCol for each column in the list. |
Added the ability to handle nulls in search and reserve | If you wish to search for data with null values, simply use the reserved word 'null' - the query will switch to using 'is null' SQL constructs. The same is true for 'not null'. Note that the query will change to 'is null' or 'is not null' only; any other data mapping (for example greater than) will be replaced (see the query-building sketch after this table). |
Added the ability to eliminate duplicates from find and reserve search queries | You can now eliminate duplicates from find and reserve search queries - duplicates can occur when the columns you are returning are not all unique. Simply set the new option on the form to return only distinct values (see the query-building sketch after this table). |
Added the ability to append the column searched on to the results tab | When you create the search form you can include a new parameter that returns the columns you used in the search to the summary screen and allocateddata.xlsx. For example, finding accounts where number_of_sales >= 100 would return acct_no=743, number_of_sales=150. |
Changed the data generation to explicitly extract to CSV | The data generation has been changed to explicitly extract to CSV - this can operate in parallel - and then load the data into a schema directly. Two new drop downs have been added. The repeater is the default number of times to repeat the publish; this is the value that goes into the created submit form. All the normal data generation parameters exist, plus the repeater. If you repeat 10 times you will have 10 parallel processes running, so a repeater of 10 with 20 rows per run = 200 products (a rough illustration of this fan-out appears after this table). Once complete, copy the outputted CSV files to a folder or make a note of the working directory, and put the folder containing the data into the required folder. Note that you will need to use Curiosity sequences to create the data. |
Added the ability to display any compile errors from the generated VIP flow in the rule set version editor | There is now the ability to verify - compile and resolve - a rule set version from within the data activity. There is a new option: Run Validate and Preview. This will in effect compile and resolve the rule set version. Each of the cells has a status: not checked, error, or valid. When you run the preview, any errors will display in the job status and can also be seen in the rule set version editor. If there are no compile errors, the preview will display a sample of data in the job status and create CSVs for each table, and the rule set version cell will display its status. |
Added a code completion service to help build functions | There is a new service installed on the server called Code Completion; this provides a much more sophisticated way of building and validating functions, with micro help and examples provided. You can see which servers have code completion available - they are marked with {}. Each rule set will need a preview server. When you are in the editor there is now a much richer set of functions, with the output type displayed. The editor will display micro help including the description, types, and examples for each type of function. If a function is invalid it will display a red circle; click on the circle and it will describe the error. |
New Datagen Accelerators | When selecting Sequential in the list selector there are now 3 options, including Start from top (first row). |
Added a code completion service to help build functions | When installing the server you can now include a "Code Completion Server". This allows the data generation editor to use the mirrorsharp frontend to connect to the CodeCompletion server using websockets and offer context-based IntelliSense. |
Added a Random / Sequence choice for disabled foreign keys in the Rule Set Version Accelerator | On the foreign key panel of a rule set, buttons have been added to select either a random or a sequential value from the generated parent items. |
Added the ability to display and select the groups to be included in the pipeline in the Datagen pipeline accelerator | For large database models that have been organised into groups, you can now choose which groups you want to include in the pipeline. Often you will want to work on three or four groups that logically fit together. |
Added new ability to create a Subset Rule Set | As an alternative to creating a "Process Model" list to drive subset processing, you can now create a Subset Rule Set. Attach a Definition Version to your Activity, select "Create New Rule Set" from the Actions dropdown, and click on the execute button. Once a Rule Set has been created you can modify it to select the tables included in the subset and define them as "Driving", "Subset" or "Reference" tables. If you set a table as a Driving table you can then select it and add a driving condition. The Foreign Key Rules section of the Rule Set allows you to activate or de-activate foreign key relationships from a parent to a child or a child to a parent. Once the Subset Rule Set has been created and edited you can create a submit form and run the subset processing from that. |
Added the ability to create a new rule set version without having to clone or copy another version | You can now create a new rule set version without having to clone or copy another version. Click on New Version and select the tables etc. to create the new version. Once you have created the version you can edit the name of the version and add a description. |
New data types and more details in the data painter or wherever column names are displayed | Throughout the data generation, wherever the data painter or column names are displayed, the data type and more details about the column are shown, making it easier to create the data you need. If a deep analysis has been performed, it can also be easily viewed. |
Added support for Databricks CRUD operations | Full support has been added for Databricks data generation. |
Added the ability to copy defaults | The rule defaults allow you to easily make mass edits for data generation and masking. A number of standard defaults are shipped as standard. You can now copy any of the standard or other groups of default edits, allowing you to build on prior work and customize it for your own purposes. |
Enhanced compile error handling when creating VIP flows | You can now easily track any compile errors before you create the VIP flow on the server. First, edit your function in a row/column. When you make a change you will see a triangle to show it needs to be checked. From the data activity version, use the action "Run Validate and Preview". This will validate any expressions and show you any errors on the submission screen. You can also go back to the rule set version editor and see a red circle with an x; if you click on the x you can see the exact compile error. If you re-run the Validate and Preview it will display an initial pass at what the generated data would look like. |
Added new search operators on the default search | The search and reserve now has 4 new operators, including Not Contains. |
Added the ability to display descriptor columns in submit form drop downs | You can display descriptor columns in submit form drop downs - the drop down can refer to another column to be passed down into the submit process. For example, Currency Name is displayed but Currency Code is passed into the process. |
Added a Range UI Type to the find submit form | If you edit the find submit form, change the data type to String, and switch the UI Type to Range, the screen will prompt you for a minimum and maximum range. This will convert the SQL query to use >= and <=, for example: ("order_count" >= 1 and "order_count" <= 3) (see the query-building sketch after this table). |
Improvements to the Rule Set Version Table Editor | |
Added a new action to order the tables in load order | When generating or subsetting data, the load order is important to ensure that parent tables are loaded before child tables; this avoids having to drop or disable foreign keys. The rule set version will now be created with the load order automatically calculated (a conceptual sketch of deriving a load order from foreign keys appears after this table). You can also disable and enable foreign keys and recalculate the load order from the actions menu in the rule set version. |
Added the ability to show the tables being masked in a masking rule set when the plus button on a Pipeline is clicked | When you click on the plus button any tables being masked and their relationships will be displayed. |
Added the ability to apply defaults to multiple rule set versions in one pass | You can apply defaults to multiple rule set versions in one pass - either for all of the rule set versions in a rule set, or for all rule set versions being used in a pipeline. |
Flows can now be generated from the Curiosity Modeller web frontend | Flows can now be generated from the TestModeller web frontend. |
A starter set of defaults is now shipped as standard | A starter set of defaults is now shipped as standard; these contain a series of useful example defaults that can be applied to the generation and masking rule sets. They are displayed in blue - they cannot be edited, but they can be copied. |
Functionality to bulk edit nodes in the modeller screen | Within the Quality Modeller screen an icon now appears that gives the user the ability to edit node properties for multiple nodes displayed in a table. |
Extensive new defaults processing capabilities | The defaults - the ability to mass find and apply changes to the generation, masking, and definition - have been significantly improved. You can now have multiple sets of default rules, and a number of standard default rules are applied. You can have multiple rule set finds and changes. The search capability has been significantly improved, as have the types of attributes and objects that can be targeted. There is also the ability to look at the current value and select where that value came from. This allows very sophisticated mass changes to be made across multiple test data activities. Once the find has been run, the old value and the potential new change will be displayed. You can selectively filter these results and choose which cells you wish to apply any changes to. |
New functions for data using different distribution algorithms | Data generation can leverage the various distribution algorithms available via RandomHelper to generate data, e.g. NormalDistribution (a conceptual sketch appears after this table). |
Added the ability to declare a variable for each Parameter Row Scope | The ability to declare a variable for each Parameter Row Scope has been added. |
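
The find-then-make behaviour described for ResolveCSV above can be summarised as: attempt to allocate matching data first, and only call the "make" test criteria when nothing is found, passing the same variables to both. The C# sketch below illustrates that control flow only; the method names, parameters, and result type are illustrative assumptions, not the VIP API.

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch of the find-then-make fallback used by ResolveCSV.
// FindData / MakeData and their parameters are illustrative assumptions,
// not the actual VIP API.
class FindThenMakeSketch
{
    record FindResult(bool Found, string Value);

    static void Main()
    {
        var rowVariables = new Dictionary<string, string> { ["SEX"] = "F" };

        // parTestID=395 performs the find; parMakeTestID=397 creates data
        // when the find returns nothing. Shared variables (here SEX) are
        // passed to both, so if a female cannot be found, one is made.
        var result = FindData(testId: 395, poolId: 116, where: "SEX = 'F'", column: 3);
        if (!result.Found)
            result = MakeData(makeTestId: 397, rowVariables);

        Console.WriteLine(result.Value);
    }

    static FindResult FindData(int testId, int poolId, string where, int column)
    {
        // Placeholder: a real implementation would call the test data
        // allocation service and return the requested OUT column.
        return new FindResult(Found: false, Value: null);
    }

    static FindResult MakeData(int makeTestId, Dictionary<string, string> variables)
    {
        // Placeholder: a real implementation would run the "make" test
        // criteria with the same variables and return the created value.
        return new FindResult(Found: true, Value: "NEWLY-CREATED-ROW");
    }
}
```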
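
Several of the search and reserve changes above (the 'null' / 'not null' keywords, the distinct option, and the Range UI Type) amount to changes in how the generated SQL is built. The C# sketch below shows how those three options might translate into SQL; the column names, table name, and helper methods are invented for illustration and are not the product's query builder.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of how the null keywords, the distinct option and the
// Range UI Type could translate into SQL. Not the product's query builder.
class FindQuerySketch
{
    static void Main()
    {
        var predicates = new List<string>
        {
            Predicate("MIDDLE_NAME", "null"),     // -> "MIDDLE_NAME" is null
            Predicate("NINO", "not null"),        // -> "NINO" is not null
            RangePredicate("order_count", 1, 3),  // -> ("order_count" >= 1 and "order_count" <= 3)
        };

        bool distinct = true;                     // the new option to return only distinct values
        string sql = $"select {(distinct ? "distinct " : "")}\"acct_no\", \"order_count\" " +
                     $"from ACCOUNTS where {string.Join(" and ", predicates)}";

        Console.WriteLine(sql);
    }

    static string Predicate(string column, string value) =>
        value.Equals("null", StringComparison.OrdinalIgnoreCase)     ? $"\"{column}\" is null"
      : value.Equals("not null", StringComparison.OrdinalIgnoreCase) ? $"\"{column}\" is not null"
      : $"\"{column}\" = '{value}'";              // normal comparison (simplified, unparameterised)

    static string RangePredicate(string column, int min, int max) =>
        $"(\"{column}\" >= {min} and \"{column}\" <= {max})";
}
```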
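
The parallel extract-to-CSV change fans a single publish out across several runs and then loads the resulting files. As a rough illustration of the arithmetic (a repeater of 10 with 20 rows per run producing 200 rows) and the fan-out, here is a C# sketch; the output folder, file layout, and row content are invented for the example, and in practice a shared Curiosity sequence would supply the unique values.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

// Rough illustration of the parallel extract-to-CSV idea: the repeater runs
// several generations in parallel, each writing its own CSV, and the files
// are then collected from one folder for loading. Paths and row content are
// invented for the example.
class ParallelExtractSketch
{
    static async Task Main()
    {
        const int repeater = 10;    // number of parallel publishes
        const int rowsPerRun = 20;  // rows each publish creates -> 200 rows in total
        string outputFolder = Path.Combine(Path.GetTempPath(), "datagen_csv");
        Directory.CreateDirectory(outputFolder);

        await Task.WhenAll(Enumerable.Range(0, repeater).Select(run => Task.Run(() =>
        {
            var lines = Enumerable.Range(0, rowsPerRun)
                // A shared sequence (e.g. a Curiosity sequence) would be used here
                // in practice so that IDs never collide across parallel runs.
                .Select(i => $"PRODUCT_{run}_{i},{100 + i}");
            File.WriteAllLines(Path.Combine(outputFolder, $"products_{run}.csv"),
                               new[] { "product_code,price" }.Concat(lines));
        })));

        Console.WriteLine($"Wrote {repeater} CSV files to {outputFolder}");
    }
}
```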
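
The load order calculation mentioned for rule set versions is, conceptually, an ordering of tables so that every parent appears before the child tables that reference it. The product calculates this automatically; the C# sketch below only illustrates how such an order can be derived from foreign key relationships, using invented table names.

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch only: derive a load order so that parent tables appear
// before the child tables that reference them via foreign keys. The table
// names and dependency map are illustrative, not the product's API.
class LoadOrderSketch
{
    static void Main()
    {
        // child table -> parent tables it references
        var foreignKeys = new Dictionary<string, List<string>>
        {
            ["ORDER_LINE"] = new() { "ORDERS", "PRODUCT" },
            ["ORDERS"]     = new() { "CUSTOMER" },
            ["PRODUCT"]    = new() { },
            ["CUSTOMER"]   = new() { },
        };

        // e.g. CUSTOMER -> ORDERS -> PRODUCT -> ORDER_LINE (parents load first)
        Console.WriteLine(string.Join(" -> ", LoadOrder(foreignKeys)));
    }

    static List<string> LoadOrder(Dictionary<string, List<string>> fks)
    {
        var ordered = new List<string>();
        var visiting = new HashSet<string>();

        void Visit(string table)
        {
            if (ordered.Contains(table)) return;
            if (!visiting.Add(table))
                throw new InvalidOperationException($"Cyclic foreign key involving {table}");
            foreach (var parent in fks.GetValueOrDefault(table, new List<string>()))
                Visit(parent);  // place parents before this table
            visiting.Remove(table);
            ordered.Add(table);
        }

        foreach (var table in fks.Keys) Visit(table);
        return ordered;
    }
}
```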
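
The release notes name RandomHelper and NormalDistribution but do not show their signatures, so the C# sketch below only illustrates the underlying idea of normally distributed data generation, using the Box-Muller transform rather than the product's helper.

```csharp
using System;

// Conceptual sketch of normally distributed data generation using the
// Box-Muller transform. It stands in for the idea behind
// RandomHelper / NormalDistribution, whose actual signatures are not shown here.
class NormalDistributionSketch
{
    static readonly Random Rng = new Random();

    static void Main()
    {
        // e.g. order amounts centred on 250 with a standard deviation of 40
        for (int i = 0; i < 10; i++)
            Console.WriteLine(Math.Round(NextGaussian(mean: 250, stdDev: 40), 2));
    }

    static double NextGaussian(double mean, double stdDev)
    {
        // Box-Muller: turn two uniform samples into one standard normal sample
        double u1 = 1.0 - Rng.NextDouble();  // avoid log(0)
        double u2 = Rng.NextDouble();
        double standardNormal = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
        return mean + stdDev * standardNormal;
    }
}
```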