
ChatGPT for Bespoke Test Data Generation

I’m seeing a lot of brilliant posts, which I’ve learned a tonne from, all about ChatGPT and what it can do to help us Quality Engineers. I’ve also posted about what a contentious topic it is to even acknowledge you are using ChatGPT – my earlier musings on that topic seem to have been borne out in my experience so far.

However, the one thing I haven’t yet heard anyone go into is using ChatGPT for test data generation.

TL;DR: Watch this video to see how I did this.

There are a few advantages to this:-

  • Realistic data sets – We know GPT-3 isn’t trained on the latest data, but it is at least based on a heck of a lot of data. So maybe you want to know what the most popular products are for your company, or what the highest-grossing films were in 2010 in Morocco, or the names of the best-known technical trailblazers. If it’s something where you care about what others think, as opposed to having a linear set of something (e.g. 1-200), then being able to tap into those data sets could help you be more realistic
  • Bespoke Test Data – there are already brilliant libraries you can use (such as faker.js) which auto-generate test data for you, but what if you need something more specific? One example could be a field that requires first names, but where you only want female first names. That isn’t a sub-selection you can currently do out of the box.
  • Something fun for demos – want to spice up a customer demonstration, or an end-of-sprint show and tell? Plug in unique test data and ask it for something wild!

I spent an evening a few weeks back solving this puzzle.

I created:-

  • an open-source, free, publicly available workspace in Postman
  • Using Postman Flows (the low-code workflow builder feature), you simply modify a query that you feed into ChatGPT using the template provided, and Flows formats the response that comes back to allow you to create an array of comma-separated test data
  • This test data is then immediately plugged into an API request – showing the end-to-end process of test data generation and looping through a request for each and every bit of test data ChatGPT provides – so you get to select how many times this runs by asking for that number of items in your query (see the sketch below)
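Outside of Flows, that formatting step is easy to picture in plain JavaScript. Here’s a minimal sketch – the response shape follows the OpenAI completions API, and the names themselves are made up for illustration:

```javascript
// The response shape assumes the OpenAI completions API: { choices: [{ text }] }.
// The names are illustrative only.
const response = {
  choices: [{ text: "\n\nAmelia, Olivia, Isla, Ava, Ivy" }]
};

// Trim the leading newlines ChatGPT tends to add, split on commas,
// and tidy the whitespace around each item.
const testData = response.choices[0].text
  .trim()
  .split(",")
  .map(item => item.trim());

// Each item can then drive one iteration of the API request.
testData.forEach(item => console.log(`Would send a request using: ${item}`));
```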

The results are in this video:-

YouTube Video Walking Through OpenAI Test Data Generator

Caveats

  • ChatGPT will not be free forever, so this may have a limited shelf life unless you’re willing to pay for access to the API
  • Asking for large datasets may use your free tokens pretty quickly
  • If you ask the same query, ChatGPT will come back with the same answer. So it’s really important you know what to ask it if you need randomised data every time the query runs (see the sketch below)
  • You don’t know how accurate the data is – so be careful what you are asking it for and how much you rely on it as a source of truth
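If you’re calling the API directly rather than using the chat UI, one knob worth knowing about here is the temperature parameter, which trades repeatability for randomness. A rough sketch of a request body (the model name is an example from the completions API of the time):

```javascript
// Illustrative request body for the OpenAI completions API.
// temperature is the randomness knob: 0 gives repeatable answers,
// higher values vary the output on every run.
const body = {
  model: "text-davinci-003",
  prompt: "Give me 10 popular female first names as a comma separated list",
  max_tokens: 100,
  temperature: 0.9
};

console.log(JSON.stringify(body, null, 2));
```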

I hope this helps you in some way, as this is quite a novel way to use ChatGPT from what I’ve read (although I’m sure someone will create a more user-friendly tool version that does something similar soon if they haven’t already). As with all of my posts, videos and community work, I didn’t get a penny for creating this or putting it out there, so if you do find it useful please remember to say thank you and quote your sources – it gives me the impetus to keep going!

T’ra for now 😜


OpenAI: Three ways it can help software testers

Opportunity knocks

I’ve been playing around with the suite of OpenAI tools that have recently appeared in public beta. I doubt they will be free to use forever, so now is a good time to have a knock about with them if you want to see what AI can do.

Hit the links to go straight to each area:

Dall-e for image generation using Postman API 📷
ChatGPT for inspiration and content creation 💡
Codex for checking test script syntax and first drafts

I am coming at this from a very novice level of understanding of algorithms, AI, machine learning and all that jazz. The buzz on LinkedIn and Twitter is growing and it would appear that battle lines are starting to be drawn, but if you want to check out this technology for yourself, here are a few places to explore.

Where to start

First things first, sign up for an account

Go to https://beta.openai.com/signup/

You’ll need to provide an email address which is verified and a phone number which is verified by SMS. Call me paranoid, but I worry about my records being leaked by a service such as this, plus as a tester I like to be able to use non-personal records, so I used Mailinator, whose enterprise tier has both unlimited private email addresses and an SMS service. It’s mine, but it doesn’t use my personal email and phone number. You might need to sign up for the separate tools (Dall-e, ChatGPT etc.) separately too.

What can I do?

Dall-e Image Generation for Test Data (or fun!)

There are lots of ways to do this, but for me the quickest way of knocking around any API is via Postman. If you want to use Dall-e via API calls to generate images on the fly, you can do that in OpenAI and Postman currently by following these steps:-
1. Generate an API Key
2. Create an OpenAI environment in Postman. Add in the URL and the API key

3. Fork the OpenAI endpoints from the Fun APIs Only workspace, created by Postmanaut Ruby Sutter. Save as a new collection in your workspace of choice.

4. Ensure the authentication token points to the API key which was saved in your environment variable

5. Select the environment created earlier. Open the image generation endpoint, click Body, enter something funky into the prompt and click Send

6. Finally, select the Visualize option in the response body to view the images – awesome, huh! (See the sketch below for the kind of script that powers this view.)
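For the curious, here’s a rough sketch of the kind of Tests-tab script that drives that Visualize view – it assumes the standard OpenAI image response shape ({ data: [{ url }] }) and uses Postman’s built-in Visualizer:

```javascript
// Handlebars template that loops over each generated image and renders it.
// Assumes the OpenAI image response shape: { data: [{ url: "..." }] }.
const template = `
  {{#each response.data}}
    <img src="{{url}}" width="256" />
  {{/each}}
`;

// Hand the template and the parsed response to Postman's Visualizer.
pm.visualizer.set(template, { response: pm.response.json() });
```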

Images can be used as test data, generated on the fly and fed into your tests.

Use ChatGPT to research and even write content

YouTube Video showing OpenAI’s ChatGPT tool answering the question “how openai can help software testers”

Once you have registered and verified your account, you can put in any text you wish and see what ChatGPT has to say about it. Here is the output in full, when I asked the question “Write a short post on how open ai can help software testers”:-

Open AI is a leading provider of artificial intelligence technologies and has the potential to help software testers in a number of ways. For example, Open AI’s GPT-3 language model can be used to automatically generate test cases, making it easier and faster for testers to cover a wide range of scenarios.

In addition, Open AI’s machine learning algorithms can be used to automate certain aspects of software testing, such as regression testing and performance testing. This can help testers save time and reduce the amount of manual effort required for these tasks.

Furthermore, Open AI’s technology can be used to analyze large amounts of data, such as test results and performance metrics, to identify trends and patterns that may not be immediately apparent. This can help testers better understand the behavior of the software they are testing and make more informed decisions about how to test it.

Overall, Open AI has the potential to significantly enhance the capabilities of software testers and help them to more effectively and efficiently test complex software systems.

OpenAI

But how can you use this as a tester right now? Research! Ask it stuff, even create blog posts or draft other content you need, as well as generating, say, a fixed number of words of filler text without relying on lorem ipsum.

Use Codex to cross-reference or draft test automation scripts

YouTube video using Codex to generate test scripts (code extract not shown on mobile view)

Like the others, this still has a way to go, and I don’t see it ever replacing skilled test automators, but as a neat cross-referencing tool that can easily give some inspiration on correct formatting, syntax etc., this is a good thing to know about. There was a very interesting LinkedIn post of someone generating a lot of automation code for tests.

Other areas with promise are the content moderation and text completion endpoints, which seem very cool as well.

I’m still very much exploring this developing tool set, and look forward to learning from others how they are utilising it (if at all) to add another tool to their testing toolkit.

Bye for now!


6 Essential Test Scenarios Using Postman Flows

TL;DR: Use the links below to see a video on each Flow:-

📃1. Schema Testing
🔐2. Security Testing
🧑‍🏭3. Workflow Testing
🚫4. Negative Testing
🏃5. Performance Testing
✅6. Positive Testing

I recently volunteered at the awesome Ministry of Testing’s TestBash UK 2022. A huge draw for me was meeting the wonderfully talented Julia Pottinger who, with the backstage support of her QA partner Orane Findley, wowed the attendees with her characteristically clear and simple explanation of “Next Level API Automation”.

Julia’s talk included the following slide, which detailed 6 important high-level scenarios to consider when performing API tests. I thought it might be good to show examples of how you can execute tests in all 6 areas using Postman Flows.

Julia Pottinger’s API Checklist. Numbers added by me relate to this blog post.

Scenario #1 Schema Testing

As JSON parsing seems to be done on a field-by-field basis in Flows, it doesn’t currently seem possible to extract the entire schema from the response body in order to compare it with a stored value (correct me if I’m wrong, folks!). However, if you want a visualisation that the schema test stored within the Tests tab of your request has worked, then here is a good way to do that.
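For reference, here’s a minimal sketch of the kind of schema check that lives in the request’s Tests tab (Flows then just visualises the pass/fail result). The schema itself is illustrative, loosely based on Restful Booker’s create-booking response:

```javascript
// Illustrative JSON schema - trim or extend the fields to match your own API.
const schema = {
  type: "object",
  required: ["bookingid", "booking"],
  properties: {
    bookingid: { type: "integer" }
  }
};

pm.test("Response matches the expected schema", () => {
  pm.response.to.have.jsonSchema(schema);
});
```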

Scenario #2 Security Testing

You can use the create data block to add different data sets to run against a request. I walk through a very aesthetically pleasing example here. 😻

Scenario #3: Workflow Testing

As highlighted by Julia Pottinger at the Ministry of Testing’s amazing TestBash UK 2022, testing an end-to-end workflow through API calls is often essential. Think about it logically: how do you know a delete call has worked if you don’t then check the data is no longer available?
Using Flows to test workflows is a major use case for this Postman feature, because you can see exactly what’s happening. Here is an example workflow test using Postman Flows.

YouTube video showing how to perform a typical workflow test using Postman Flows

Scenario #4: Negative Testing

Here I cover 2 different but connected scenarios:-
1. You want to verify the status code and message (e.g. 400 Bad Request)
2. You want to verify a string within the error response (e.g. “the length must be 3-18 characters”)

An added bonus is generating test data within Flows using both lists and records, which, although it might not be ideal for large test data sets, would be very handy for smaller ones, such as testing what happens if 3 different invalid inputs are entered for the same request.
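As a sketch, the two checks above would look something like this in the request’s Tests tab (the error message is illustrative, not taken from a real API):

```javascript
// Check 1: the status line.
pm.test("Status code is 400 Bad Request", () => {
  pm.response.to.have.status(400);
});

// Check 2: a string within the error body.
pm.test("Error message includes the length rule", () => {
  pm.expect(pm.response.text()).to.include("the length must be 3-18 characters");
});
```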

YouTube Video showing negative testing using Postman Flows (response codes, status and response body message check)

Scenario #5: Performance Testing

So we know that scaled and in-depth performance testing is one of the few things that Postman isn’t really built for. Although Flows is no exception to that, if it’s cheap and cheerful performance testing you’re after (say, executing the same scenario 10 times and checking the response time is under a certain threshold) then that is absolutely doable.
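The threshold check itself is a one-liner in the Tests tab; Flows then handles repeating the request. A minimal sketch (500ms is an illustrative number):

```javascript
// Fail any run that breaches the response time threshold.
pm.test("Response time is under 500ms", () => {
  pm.expect(pm.response.responseTime).to.be.below(500);
});
```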

Scenario #6: Positive Testing

Last but certainly not least – time for the happy path! 🙌

In this example, I show how it is possible to execute 6 endpoints at the same time, and apply the same conditional logic to all of them to check:-
  • status code is 200
  • response body value is correct (“status: UP”)

This gives a really clear indicator when running a health check against several different APIs at the same time, which is one of the huge benefits of using Flows.
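A sketch of the shared happy-path check, assuming each health endpoint returns a body like { "status": "UP" }:

```javascript
// The same two assertions applied to every health endpoint.
pm.test("Service is up", () => {
  pm.response.to.have.status(200);
  pm.expect(pm.response.json().status).to.eql("UP");
});
```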

YouTube video showing happy path testing using Postman Flows

And that’s it! I hope that structuring this post around Julia’s essential test scenarios made sense to you, and that you were inspired to try Postman Flows next time you are looking to test your API.


Til next time! 👋


Postman Flows: 5 Example Flows

In my earlier blog posts, I’ve taken a high level look at Postman’s beta feature Flows, and delved a bit deeper into what each of the blocks that make up a Flow actually does.

As of March 22nd, 2023, Flows is now in General Availability (GA), which means everyone can play around and use Flows to help them get stuff done – hurrah!

However, I think when we’re learning, it’s good to have examples we can refer to – that’s always the shortcut I use to get inspired anyway! Postman have put a lot of time into creating some amazing Flows content for their Learning Center, so I’d recommend starting there. That said, there isn’t yet much QA/testing-specific stuff, so here are some working examples I created to help get those creative juices flowing. Note these are all basic Flows that may form part of a larger Flow should you wish to do something more advanced.

All of my Flows use the Restful Booker API, written and maintained by the amazing Mark Winteringham. Click on the links below to access a video and short tutorial on each Flow.

Flow #1: Passing Variables

There are 4 different ways to pass a variable from one place to another:-

  1. Define the variable explicitly in a SET/GET kinda stylee
  2. Leverage what you already have defined in your Environment variables
  3. Define a one-off variable within a Send Request block (e.g. pass in this string “xyz”)
  4. Pass some response body data into a new request

Image shows a Postman Flow where Create / Get Variable blocks and environment variables have created the variable data required

NOTE – The block may not recognise the response body data unless it has been defined in an example first. You won’t always see this, but particularly for POST requests, not having examples could stop your Flow from working completely.

Flow #2: Looping data

*Updated March 2023* If you want to loop through a response and apply a rule to it, then you can find an example picture below. Here I define a list of names and, using a For loop, take each name, do something with it in the Evaluate block (combine a string + the name), then output that to a Send Request block. I have asked OpenAI to generate a poem about a tester, and output this to the log, and also to an Output block.

For loop iterating through a list

There are also Repeat blocks, which are handy if you want to re-run something a set number of times, and a Collect block, if you want to catch all of the output data as one homogenous chunk before you pass it through the Flow. A bit of decision logic can be added using the If block.

Flow #3: Take Inputs From Multiple APIs

This example shows how you can take requests running against two different environments from two different collections and perform a check on them both to output to a terminal (in the GA version of Flows you could use the Evaluate and Log blocks). Another cool feature of Flows is the ability to send multiple API requests in parallel, so if you want to fire off multiple requests at the same time, just have the Start block connect to as many Send Request blocks as you need.

Video showing outputs from two API requests flow into the same check block in Postman Flows

Flow #4: Passing Data Between Requests (descoped)

From version 9.5.0, you could use the Create Durables block to complete the task of passing data between requests – as long as you had created an example API request first! By adding the token as a durable type, the generated value can be persisted throughout the Flow. Note – this has now been replaced by the Create and Get Variable blocks (see Flow #1).

Video showing data being passed through a Postman Flow using Data Durables – Remember to create an example first!

Flow #5: Generating a visual Output block

I’m hopeful that one day the test summary block will make a reappearance. However, the brand new Output block is a brilliant way to visualise the data that’s hiding in those API responses. Images, charts, even YouTube videos can all be shown (I’m kinda thinking of it like a Datadog dashboard, but at API level only).

https://learning.postman.com/docs/postman-flows/gs/flows-overview/

Flow showing a picture generated by an OpenAI API response as an output

I hope these will help you to see how Flows might be useful to you in your testing endeavours. You can see a video of a live stream I did with the Postman team, where I walk through Flows in more detail, here.

Screenshot of YouTube livestream showing Postman DevRels Arlemi and Vikram and myself

Bye for now 👋

Postman Flows: 1 Flow To Smoke Test Your API Estate

One of the things that I love most about airing a new talk is the new ideas it sparks. On Friday, I delivered my talk “Go with the Flow” for the first time, to the lovely internal community at Dunelm.

LinkedIn Post from Dunelm, who were the first to give feedback on my new talk around Postman Flows

The inspiration for this post goes to Adam Pike of Dunelm, who during the talk mused that Flows would make it possible to see the full suite of APIs interacting with each other in Postman.

And it got me thinking. We know Flows has this superpower of being able to call any other collections in a workspace. If you have different collections for all your different APIs, wouldn’t it be handy to be able to visually see the key connections and check they’re all talking to each other?

Here are a couple of ways I think it could work.

Simple Ping Smoke Test

Example smoke test Flow showing several requests being checked and errors logged out to Slack

In this example, I’m using 3 requests which can belong to different APIs. I’m then checking for the status indicator – however, you can just as easily check for status code != 200, or a range of allowed values, or similar. This then outputs to the console log if all is well or, if there is a problem, a Slack message is sent to the team with the Flow’s ID to alert them to an issue.
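Outside Flows, the same logic is a small sketch in plain JavaScript – the Slack webhook URL and health endpoint below are placeholders:

```javascript
// Placeholder Slack incoming-webhook URL - swap in your own.
const slackWebhook = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

async function checkAndAlert(url) {
  const res = await fetch(url);
  if (res.status !== 200) {
    // Something's wrong: tell the team, including which check failed.
    await fetch(slackWebhook, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: `Smoke test failed for ${url}: ${res.status}` })
    });
  } else {
    console.log(`${url} is up`);
  }
}

checkAndAlert("https://api.example.com/health"); // hypothetical endpoint
```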

Advanced Integration Smoke Test

But what if you want to keep an eye on whether APIs are working when strung together in a real-world customer journey? Will the Auth API talk to the Orders API? Will the Location API talk to the 3rd party Google API? Perhaps you have mocks for third party services, but want to check that, when they hold up their end of the bargain and give you what you expect, your internal APIs respond in the right way?

Flows can allow you to create that visual map of your API estate.

I see it developing like a spiderweb, with a start button in the middle branching off into different workflows, all executing at the same time. 🕸️

Here’s a very rough idea of what that could look like:-

Flow containing 2 integration tests running concurrently

The yellow flow shows API 1 authenticating, posting and deleting. The blue flow shows a different API flow entirely – you can of course use multiple APIs in a single flow if you wish. Using the colour feature, we can colour-code each flow to make it easy to see if they are running as expected, tagging a different API in a different colour (maybe mocked APIs in grey, for example). If not, we evaluate the status code and, if we don’t like what comes back, we send a message to Slack.

Once you are confident you aren’t generating heaps of information that will just be ignored, and no false negatives (remember to test the tests!), you might even want to get this up on a monitor and schedule a run every 5 minutes or so, just to give you the confidence everything is up and running in the environments you need things to work in.

Thanks again to Adam for the inspiration – I think this might be a useful feature to have.

Hope this helps, til next time!

Test Automation Portfolio

Speaking about my Test Portfolio for an internal QE Community, November 2022

Welcome 👋

I’m Beth, and this is my test automation portfolio! This page is intended to act as both a reference point for my own portfolio and, if you would like, a template for your own.

Click the links below to see more information about each of the portfolio entries, including an overview post, code repos and video walkthroughs.

  1. Welcome 👋
  2. Beth’s Portfolio
    1. Setting A Goal
    2. Robotic Process Automation UiPath
    3. Browser UI Taiko
    4. API Postman
    5. UI Cypress
    6. UI NUnit
    7. RestAssured
    8. Postman Flows
  3. Spreading The Word! Articles and Appearances

Beth’s Portfolio

Setting A Goal

My Story starts here! What are Test Automation Portfolios? Why are they useful, and how do you decide what to put in one?

Robotic Process Automation UiPath
  • Code Repo
  • Video Walkthrough
  • Language: VB.NET / UiPath proprietary language
  • One Cool Thing: Integrated with SQL Server Express Database
  • Type of Tests: UiPath Workflow Automation
  • Website Tested: My Local Gym
Browser UI Taiko
  • Code Repo
  • Video Walkthrough
  • Language: JavaScript
  • One Cool Thing: Atomic UI tests using APIs for non-assertive test steps
  • Type of Tests: Taiko, Gauge UI and API level automation of browser
  • Website Tested: Various including The-Internet, Gauge.org
API Postman
  • Code Repo
  • Video Walkthrough
  • Language: JavaScript
  • One Cool Thing: Mocks, Randomised Variables
  • Type of Tests: API
  • Website Tested: Restful Booker
UI Cypress
  • Code Repo
  • Video Walkthrough
  • Language: JavaScript – see also the TypeScript repo from the TAU course here.
  • One Cool Thing: Multiple page login tests
  • Type of Tests: UI Browser
  • Website Tested: OpenCart and todo.mvc
UI NUnit
  • Code Repo
  • Language: C#
  • One Cool Thing: Pairwise testing
  • Type of Tests: Unit Testing
  • Website Tested: Restful Booker
RestAssured
  • Code Repo
  • Language: Java
  • One Cool Thing: Angie Jones Course
  • Type of Tests: API
Postman Flows
  • Public Postman Workspace
  • Language: Low-code, FQL (Flows Query Language)
  • One Cool Thing: Integrating with OpenAI to generate bespoke test data which is then automatically used in an API workflow.
  • Type of Tests: API Workflow Testing
  • Website Tested: Restful Booker

Spreading The Word! Articles and Appearances

I intend to add to this page as I continue to learn and grow my test automation skills. I have spoken and written about this topic in a number of places; click the links for details:-

OpenAI : The-Tool-That-Must-Not-Be-Named ⚡

“No, I don’t want to hear what ChatGPT thinks, I want to hear what you think”

So said a software testing friend recently when, in answer to a question, I replied that I’d researched that very topic on OpenAI for an upcoming interview and written down the salient bits in my notepad. It stung a bit, because I thought I’d been clever to get a second opinion, but it was a hard no on their part. The agency of thought had to come from me; what a computer had to say wasn’t what they were interested in.

Maybe in times to come we’ll look back on those kinds of exchanges wistfully, and Peter Kay-esque comedy sketches will be written (by computers, of course) saying “remember when folk used to actually care what a human thought about the best way to do something!”.

It’s been fascinating to see both the level of interest and the level of distaste emerging around this new tool within the QA community. People are happy to take a look at it, and then very quickly seem to come to an irrevocable, marmite-esque conclusion that is either:-

“This is the future! It’s incredible what this thing can do – look, it helped me do X, Y and Z, and with far less pass agg than Stack Overflow. Love it”

Or

“This tool is dangerous. We should avoid it at all costs, and if we do use it, treat it with extreme care. Relying on the information it produces (which is often incorrect) without the ability to critically evaluate it will lead to some terrible results. Look – I asked it X,Y and Z and it came up with some absolute mansplaining tosh that sounded great but there was absolutely no factually correct substance to it. Hate it.”

The tool-that-must-not-be-named

For those of you unfamiliar with Harry Potter, the evil Lord Voldemort was considered so powerful, terrible and omnipresent that to even utter his name was something shocking. Everyone thought about him, and knew about him, but only those with incredible skills as wizards would dare to mention his name.

OpenAI and other AI tools such as Lensa may quickly become tools-that-must-not-be-named within professional tech circles. In other words, tools that a lot of people actually use, but don’t openly acknowledge for fear of retribution. A bit like a company advertising for a “manual tester”, or the quarter-final of the FIFA World Cup being the most watched TV event of the year (in the UK). Just what the testing world needs, another thing to argue about interminably – hurrah!

As a tester, I love that people are using their noggins to evaluate a new tool. And it has genuinely educated me to learn some of the more negative sides of the AI world – and there are plenty, so I am glad there are people out there who are talking about that stuff!

I also believe in confirmation bias: we will look for information to justify our inherent beliefs, and place less importance on things that cause us cognitive dissonance by diminishing or trivialising them. We, at least, are still human after all.

So what can we acceptably use Open AI for?

However, I believe in shades of grey. I believe there is a middle ground, and a set of acceptable use cases for this suite of models in particular, which will evolve – many of which I am already finding myself forming the habit of using:-

AI as an explainer

Tech is full of acronyms, weird expressions with several meanings (hello, Lambda!) and differences of thought. So much so that it is baffling to outsiders, or people trying to enter the industry, or even those of us who have several years under our belts, in all honesty!

These smoke-and-mirror linguistics can feel gatekeepy, and it’s exclusionary to say that people have to learn everything through experience only, or somehow magically know all the same things you do. I recently met up with a group of new software testing recruiters who were overwhelmed with the amount of buzzwords, dos and don’ts and terminology they had to get their heads around.

Example

Example ChatGPT extract asking to explain the difference between C# and .NET in a straightforward way

Beware

I think for basic definitions such as this one, the AI is probably good enough to be reasonably accurate – at least as accurate as a Google or Stack Overflow search. However, I’d be careful using it for more detailed information, or information about recent events, as the data is only as good as the data set – which in OpenAI’s case currently ends in 2021. At the very least, when asking a tricky question you should try to cross-reference the key facts the engine is giving you elsewhere – we’ve all had that icky moment when we’ve realised a Wikipedia entry has been modified and we’ve already relied on the definition!

AI for experimentation (and fun!)

I think as testers, we love learning new stuff. We are as magpies to the shiny, free, glimmery tech gold. And, it’s got to be said, there is joy to be had from asking a computer to write a poem about something tech-related in the style of a gruff Yorkshireman/robot/sarcastic salesperson. There just is.

Example

I used Postman Flows (anyone who reads this blog knows what a huge fan I am of that feature) to automate a workflow that checked an OpenAI auto-complete phrase and then output the results to a Slack channel using their API. In my case, “give me the top 3 headlines this week on Postman” – this could easily be leveraged into a scheduled run each week using the new scheduler on the collection runner.

somewhat fuzzy picture shows a Postman Flow sending a request to an OpenAI API and piping the response to Slack
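Outside Flows, a rough JavaScript sketch of the same workflow might look like this. The model name reflects the completions API of the time; the Slack webhook URL is a placeholder, and OPENAI_API_KEY is assumed to be set in your environment:

```javascript
// Placeholder Slack incoming-webhook URL - swap in your own.
const slackWebhook = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

async function askAndPost(prompt) {
  // Ask the OpenAI completions API the question.
  const aiRes = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: "text-davinci-003", prompt, max_tokens: 200 })
  });
  const { choices } = await aiRes.json();

  // Pipe the completion text straight into the Slack channel.
  await fetch(slackWebhook, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: choices[0].text.trim() })
  });
}

askAndPost("Give me the top 3 headlines this week on Postman");
```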

Or even: computer-generated pictures of a cat eating a pizza, anyone?

Beware

There is so much to evolve and iron out here, with images in particular. For example, I’d caution against uploading any pictures of yourself, as the rights to what AI does with that data stop being yours (no one needs nude deepfakes). If you have a strong conviction against AI-generated art (or art which has been moderated by AI without acknowledging or compensating its originator) then probably steer clear altogether. Moving forward, we will have to ask ourselves of pretty much everything we see: could this be fake?

Summary

I remain just one of a multitude of opinions on this subject – mine more uninformed than many. So read this blog and take its advice with the same critical evaluation that you apply to the rest of your testing life. I will still be interested, curious and open to hearing and learning about the complex and evolving opinions on this topic.

2023 Gifts For Software Testers

Wait – you’re leaving/you’ve been promoted/you’ve done something awesome for us, and we want to say thanks?

Need some thoughtful inspiration on what to buy for a loved one or colleague who works in Quality Engineering (QE)/Quality Assurance (QA)? Something that will put a smile on their face?

Here are my top tips:-

Books

There have been several stellar software testing books released recently, available digitally (hello, lastminute.com) or in good old-fashioned paper form.

images of front covers of the software testing books mentioned

Memberships

Often, us testers aren’t fortunate enough to work for an organisation with a huge personal training budget, so we miss out on anything that requires a subscription or paywall, which can make finding decent content a bit more tricky. Want to help?

screenshot from MoT website with reasons to go Pro

Other bits

Of course, most testers just want regular nice things as presents. But if you want to get something more generic, or even give them the choice of something that’s still thoughtful, maybe a voucher could do the trick?

  • Home Office Stuff – stuff to make that home environment a bit nicer – think plants, posh stationery, maybe a small whiteboard or a cool picture. Designworks do some great bits.
  • Gift Card – did you know you can get a Ministry of Testing gift card? Now you do! Lots of cool SWAG on there including Testsphere cards, would Heu-Risk-it and all the hoodies and caps a tester could wish for.
sample images of Designworks stationery

Please note – I am not sponsored by any of these people or companies; they are just genuinely things I think are good.

Hope it helps put a smile on the face of a QA who deserves it.

T’ra for now!

Postman Flows Early Access: Feature Guide

I’ve been wrapping my head around the latest Postman Flows early access release, and this one is a biggie. Of course it is still subject to further change, but as outlined by the team in the linked post, the major changes to the current beta version of the low-code API workflow feature called Flows are:-

Start at the Start

  • Back by popular demand, the return of the Start button – don’t call it a comeback!

Simple Block List

A massively simplified list of blocks – probably only around 50% are left (those that didn’t make the cut include the Test Summary, Create Durables and Conditions blocks, to name a few).

YouTube video showing the simplified block list in Postman Flows

Hooking You In With Webhooks

Webhooks can now be added in order to trigger Flows from the cloud. This makes CI implementation of a Flow possible, as the webhook URL that is generated when a Flow is created can be saved and called like any other Postman request, as well as allowing a Flow to be triggered automatically by an event, say, a Slack or Discord message. I’ve tried calling this from the CLI and holy smokes it is fast!
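A minimal sketch of what that CI step might look like – the webhook URL and payload below are placeholders, since Flows generates the real URL for you:

```javascript
// Placeholder webhook URL - Flows generates the real one when you create it.
const webhookUrl = "https://flows-webhook.example.com/my-flow";

fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ triggeredBy: "ci-pipeline" }) // optional payload
})
  .then(res => console.log(`Flow triggered, status ${res.status}`))
  .catch(err => console.error("Webhook call failed", err));
```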

Terminals Terminated

It’s the end of Terminals – Log blocks can now be added instead, which pump data to the console log.

YouTube video showing console log entries appearing for tests following execution of a Flow

Flows Query Language (FQL)

The arrival of Flows Query Language (FQL). FQL aims to low-code-ify the data that is used in our API requests and responses, allowing that data to be easily queried, accessed, reused and changed in a much simpler way than by writing complex JavaScript pre-request and test scripts against the APIs themselves.

An example Evaluate block, using FQL to add variable strings together to create a query for OpenAI

I suspect FQL will be subject to tweaks, but everyone acknowledges that the click-and-hope method used prior to this was pretty painful. To my mind it looks a lot more technical (and therefore a bit more scary for a new user to pick up) this way, so I’m hopeful that with plenty of feedback the team can continue to make the experience simpler.

But in order to use Flows for more advanced operations, there needs to be a trade-off between ease of use and functionality. FQL tries to bridge that gap by allowing us to:-

  • Generate standalone data (e.g. current date/timestamp) to use
  • Pull data from a Flow (e.g. a response body value) to use
  • Create our own variables (e.g. the number of times to iterate a test) to use
  • Manipulate data (trim it, combine it etc.) to turn it into something more useful

Here is Postman’s own FQL guide, which I’ve used extensively for reference.

YouTube video shows an early attempt to work out the Evaluate block using FQL

Summary

This feature is finally coming close to General Availability, and it feels it too – Flows is much more polished and functioning well. It is worth getting the early access version just to play around with some of these features, especially for folks who are either completely new to Postman or regular users who want to keep up to date with what it can do.

Postman Flows How To: Generate Test Data

In this series of blog posts, I give short tutorials on how to accomplish something using Postman’s no/low-code API feature, Flows.

This time it’s generating test data. If you need to run a set of steps repeatedly to create data before you can execute your tests, then here’s an easy way to do it, no code required.

Steps:
1. Create a flow of the steps needed to generate data e.g. Send Request then x,y,z.
2. Add a Loop N Times block, with an inbound connector from a Number block. Select how many iterations you wish to run in the Number block (e.g. I want to create 5 bookings, so I enter 5)
3. After the Loop N Times block, add a Create Data block. Inside this block, create a list pointing to /data (in other words, the numeric input to the block that you created above)
4. Add a For Each block after the Create Data block. This will connect to your steps created in step 1 above, and tells Flows that you want to iterate through the looped data. Ensure the For Each block is set to pick up /for
5. Run the Flow and check the data has been correctly generated, hurrah!
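For contrast, and purely as a sketch, here’s roughly what the same “create 5 bookings” loop looks like in plain JavaScript against an API like Restful Booker (the field names follow its documented booking payload; the values are illustrative):

```javascript
const baseUrl = "https://restful-booker.herokuapp.com";

async function createBookings(count) {
  for (let i = 0; i < count; i++) {
    // POST one booking per iteration, mirroring the Flow's For Each loop.
    const res = await fetch(`${baseUrl}/booking`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        firstname: `Tester${i}`,
        lastname: "Flow",
        totalprice: 100 + i,
        depositpaid: true,
        bookingdates: { checkin: "2023-01-01", checkout: "2023-01-05" }
      })
    });
    console.log(`Booking ${i + 1} created:`, (await res.json()).bookingid);
  }
}

createBookings(5);
```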

Here is a short video running through these steps in more detail.

YouTube video on how to generate test data using Postman Flows

Postman Flows How To: Override Order Of Execution

This is a quick post to explain the 2 ways to link blocks together using the current version (version 10) of Postman’s low/no-code feature, Flows.

TL;DR: check out the video to see the flow in action – apologies for my somewhat noisy cat! 🐈‍⬛

What are connectors?

As discussed by Postman, Postman Flows has two different types of connector for linking one block to another:-

  • Connection – the solid line
  • Signal – the dotted line

We use connections as standard. But it is useful to know how and when to use signals. Signals change the default order of execution, so if you want to make sure that block B waits for block A to complete before kicking off, then use a signal to do this.

How To Add A Signal Connector

  • Click the grey dot in the bottom left-hand corner of a block, e.g. block A
  • Drag the resulting line to the “On” box of the block you wish to pause, e.g. block B
  • You should see a dotted line connect the two, and the On change to Off in block B

The Off indicator tells us that block B will be considered switched off until block A has executed. During execution of the Flow, as block B executes, its status indicator will change to On.

Tutorial Video

YouTube video demonstrating the two kinds of connectors

I hope you’ve enjoyed this tutorial, and weren’t too distracted by my pesky moggy. 😺

Til next time!