What is happy path testing and why is it bad for you?
November 27, 2017
If you’ve been following this blog, I’m sure you’ve stumbled across one or two articles that mention happy path testing of the UI. If so, you probably know that we, the folks at Screenster, aren’t too excited about happy paths in UI testing automation.
So what’s the big deal with UI tests that revolve around happy paths? I see two major problems:
- When writing test cases for the GUI, QA automation teams rarely stray from the happy path.
- In most cases, automated UI tests only cover a fraction of the happy path they target.
Since UI testing automation is something that we all take seriously, exploring these two issues seems like a useful thing to do. But before we can proceed to that, let’s make sure we’re on the same page regarding what happy path testing is and how it should be automated.
Okay, so what is happy path testing?
The term “happy path” denotes a user scenario where nothing ever goes wrong. Coming from UX and software modelling (and an alternative universe where Murphy’s Law doesn’t exist), happy paths exclude exceptions, human error, and corner cases. If you need an example, a happy path scenario covering user sign-in will ignore invalid input, connection problems, and possible server errors.
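For illustration, here’s a minimal sketch of what a happy path sign-in test might look like in Python with Selenium. The URL, element IDs, and the success check are all hypothetical placeholders:

```python
# A minimal happy path sign-in test: valid input, nothing expected to go wrong.
# The URL and element IDs (login, password, submit) are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/sign-in")  # hypothetical URL
    driver.find_element(By.ID, "login").send_keys("jane.doe")
    driver.find_element(By.ID, "password").send_keys("correct-password")
    driver.find_element(By.ID, "submit").click()

    # The happy path assumes success and checks nothing else:
    # no invalid input, no timeouts, no server errors.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```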
For most product teams, happy path testing reflects the product’s most essential user stories. Testing these is paramount at the MVP stage, when you need a clear vision of what must be covered with whatever budget and schedule you have.
Starting with well-defined test cases helps you focus on the core UX scenarios where bugs are unacceptable. This doesn’t mean, however, that only testing the happy path is a viable strategy…
Venture off the happy path with Alternative and Exception paths
Much like the proverbial comfort zone, the happy path gives you an illusion of security. This illusion gets broken fast if you’re only testing well-defined UX scenarios.
Getting back to our sign-in example, you will want to expand your test suite with alternative sequences of user actions. Say, the user accidentally hits Enter after typing in the login, triggering a warning about the missing password. Seeing the warning, the user provides the missing input and hits Enter to submit.
This UX scenario involves an Alternative Path where the user performs the same actions and reaches the same goal, yet under different conditions and with extra steps. In addition, there are Exception Paths that result in failed UX scenarios (one of these is sketched in code after the list):
- invalid login, password, or both,
- the server is overloaded or there is a gateway timeout,
- the user inputs invalid credentials, hits Submit, then repeats the same procedure 100 times, each time with a new invalid login and password.
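To make this concrete, here’s a sketch of how the first exception path (invalid credentials) might be automated with Selenium in Python. The element IDs, the error CSS class, and the error text are hypothetical:

```python
# One exception path: invalid credentials should produce a visible error.
# Element IDs, the .error-message class, and the error text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/sign-in")
    driver.find_element(By.ID, "login").send_keys("jane.doe")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.ID, "submit").click()

    # The UI must surface the failure instead of silently doing nothing.
    error = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".error-message"))
    )
    assert "Invalid login or password" in error.text
finally:
    driver.quit()
```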
Ideally, your UI regression testing suite needs to handle all of these cases because each one introduces new UI states. What if the password warning isn’t there, or the 500 and 504 error pages have broken layouts? Or worse, what if your sign-in page doesn’t display a CAPTCHA to rule out a bot attack?
What this all tells us is that the happy path is just a starting point. Basically, you’d want to start with test cases which automate the happy path, then expand your suite with alternative paths and exception flows.
So do you automate Alternative and Exception paths?
Based on what I’ve seen in dozens of QA teams, it’s safe to assume that you don’t. In real life, nobody has the time to write and maintain this many test cases. Given how often the UI changes, it’s no surprise that most teams would rather test alternative and exception paths manually. In fact, manual testing for exception and alternative paths is something that some people recommend. The problem is that manual UI testing is a false economy, and QA teams never run enough manual testing sessions.
Do you test your happy paths end to end?
This is where things get trickier. If you’re working with something like Selenium or Protractor, there’s a good chance you only test part of your happy paths.
Let’s return to our sign-in case once more. A typical Selenium test will probably check if the form is there, along with the right input fields and buttons. It will also check if typing in user credentials and clicking the Submit button will do the right thing. But doesn’t it seem like there should be more to it? What if there’s a CSS bug that breaks the page layout, fonts or images?
Changes of this sort usually go unnoticed by hand-coded tests. Low-code platforms generally do a better job at catching visual bugs, but they can’t quite match the efficiency of coded tests. Or can they?
Going low-code for better test coverage
Coded tests can’t guarantee sufficient coverage of the UI, even if we’re talking about happy path testing. What’s worse, most QA engineers learn this from experience.
Trying to make coded tests work for my team proved to be too much of a burden. When automating the UI testing of our product AjaxSwing, we spent months playing around with Selenium and several Selenium alternatives. At some point, our quest for the Holy Grail of UI testing automation led us to a tough decision. To make things right, we needed to build a tool of our own.
The tool we created is a low-code record-playback-verification platform. We called it Screenster, and initially we used it for internal UI testing purposes. At some point, we thought it might be a good idea to release it to the public.
Before it starts to look like I’m pitching you this platform, I have to admit that Screenster isn’t the Holy Grail just yet. But I’m sure we’re moving in the right direction. And as far as this direction is concerned, here are several unique features that make Screenster the right choice for testing the happy path and beyond. I strongly believe that the functionality described below is a must-have for UI testing automation in 2018.
Automatic verification of every on-page element
To make happy path testing work, you need a guarantee that the UI is not broken. With Selenium, you only cover the UI elements that you explicitly target. With Screenster, you get automatic verification of every on-page element, on every page. Here’s how this works.
Screenster uses record-playback, but it’s very different from simplistic record-playback IDEs like Selenium IDE. In addition to recording steps and taking UI screenshots, Screenster scans the DOM of the UI. It builds complex self-healing selectors for every element, and it tells you if some of the elements look different. Even if the element isn’t a part of your happy path testing scenario, Screenster will analyze and monitor it during regression testing.
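Screenster’s internals aren’t public, so the following is only a rough approximation of the idea in plain Selenium: walk the DOM, snapshot basic properties of every visible element, and keep the result for later comparison. Everything here, including the URL, is illustrative:

```python
# Rough approximation of "verify every element": snapshot basic properties
# of every visible element so a later run can be diffed against this baseline.
# This illustrates the idea only; it is not Screenster's actual algorithm.
from selenium import webdriver
from selenium.webdriver.common.by import By

def snapshot_dom(driver):
    snapshot = []
    for el in driver.find_elements(By.CSS_SELECTOR, "*"):
        if not el.is_displayed():
            continue
        snapshot.append({
            "tag": el.tag_name,
            "id": el.get_attribute("id"),
            "class": el.get_attribute("class"),
            "text": el.text,
            "rect": el.rect,  # position and size, catches layout shifts
        })
    return snapshot

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical URL
baseline = snapshot_dom(driver)    # store this; diff future runs against it
driver.quit()
```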
Smart detection of differences
When looking for differences in new UI versions, Screenster uses the concept of baselines. A baseline is the UI state captured at each step during test creation. For each baseline, the platform will search for differences of three types (a bare-bones sketch of the first type follows the list):
- Visual differences. Screenster uses pixel-perfect screenshot comparison across different screen resolutions to detect layout shifts, unexpected color changes, wrong fonts, etc.
- Content differences. The platform will also run a content comparison algorithm to detect added, removed, or changed text and images.
- WebDriver differences. Much like Selenium, Screenster runs WebDriver scripts emulating user actions, like page navigation, mouse clicks, and keystrokes. If a UI change alters the UX scenario, the test will return a failure.
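Screenster’s comparison algorithms are proprietary, but a bare-bones version of the first idea, pixel-level screenshot diffing, can be sketched in a few lines of Python using Pillow. The file paths are hypothetical:

```python
# Bare-bones pixel comparison of a baseline screenshot against a new one,
# in the spirit of (but far simpler than) a real visual diffing engine.
from PIL import Image, ImageChops

def screenshots_differ(baseline_path, current_path):
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return True  # resolution changed; treat as a difference
    diff = ImageChops.diff(baseline, current)
    # getbbox() returns None when the two images are pixel-identical
    return diff.getbbox() is not None

if screenshots_differ("baseline/step-3.png", "run-42/step-3.png"):
    print("Visual difference detected: flag this step for review")
```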
Future-proof locators
Selenium locators are inherently fragile. Even within the narrow confines of happy path testing, hundreds of things can go wrong because someone renamed an ID or changed the parent-child structure.
Instead of relying on a single selector, Screenster collects all available targeting criteria for each UI element. In a Screenster test, every element gets a list of selectors based on HTML attributes (id, name, class, etc.) and ancestor-descendant relations. If one selector breaks, the platform will automatically swap it for another one.
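Again, Screenster’s actual selector engine is more involved, but the fallback idea itself is easy to sketch in plain Selenium: record several locators per element and use the first one that still resolves. The sample locators below are hypothetical:

```python
# Sketch of the multi-locator fallback idea: try several selectors recorded
# for the same element and return the first one that still resolves.
# This illustrates the concept, not Screenster's real selector engine.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallback(driver, locators):
    """locators is an ordered list of (By.<strategy>, value) pairs."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this selector broke; try the next one
    raise NoSuchElementException(f"No locator matched: {locators}")

# Hypothetical selectors recorded for one submit button:
submit_locators = [
    (By.ID, "submit"),
    (By.NAME, "submit-button"),
    (By.CSS_SELECTOR, "form.sign-in button[type='submit']"),
    (By.XPATH, "//form//button[contains(text(), 'Sign in')]"),
]
# button = find_with_fallback(driver, submit_locators)
```

Ordering the list from the most specific selector to the most generic one keeps false matches unlikely: the stable ID wins while it exists, and the structural selectors only kick in when it disappears.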
Low-code testing with a smooth learning curve
Screenster offers a low-code solution, meaning that test creation is primarily codeless. You can import coded tests if you want to, but you absolutely don’t need to be a programmer to create or maintain your test suites.
According to the people who have tried Screenster, it takes about 15–30 minutes to get comfortable with the basic functionality. It also takes under 5 minutes to record your first test.
No infrastructure burden
Setting up a testing infrastructure can be a pain, especially if it needs to include collaboration and CI. As far as infrastructure burden goes, cloud platforms liberate you from having to install and set up your own test server and grids. Besides, automating your tests in the browser seems like a more rational approach to UI testing of web applications.
With Screenster, you get to choose between a cloud server and an on-premise server. Your testers can collaborate using web-based dashboards, and there’s a proprietary plugin for CI integration.
Try automating a happy path test with Screenster
There’s a whole bunch of other features that make Screenster truly unique. But instead of reading about them, wouldn’t it be better to try the platform yourself?
You can try Screenster for free by clicking the button below. Automating a simple happy path test will take less than 5 minutes, and most users can pick up the core functionality without a manual. So try Screenster and tell me what you think about it.