5 reasons to choose Screenster over your current UI testing automation tool
March 30, 2017
Looking for an efficient codeless solution for UI testing automation? Meet Screenster, a new cloud-based platform that brings speed and simplicity to UI testing. Here are five reasons why we’re different from (and better than 😉) our competition.
1. Super-short learning curve with no coding skills required
Okay, we all know that every other UI testing automation tool boasts having a smooth learning curve. Here’s why Screenster is different: when we say our platform is easy to pick up, we mean it.
Screenster requires precisely zero programming skills. Instead of relying on scripted tests, the platform revolves around record-playback and screenshot comparison. This lets manual QAs and product owners cover the whole UI and test actual UX scenarios instead of hand-coding verifications. Now isn’t this what UI testing automation is all about?
When working on Screenster, we put a lot of effort into making it accessible to non-testers. You don’t need to read manuals to automate basic UI tests, and most features are easy enough to learn while actually working with the platform.
2. Maintainable tests created in 3 minutes
… And I’m not talking about all those “hello, world!” mock tests. Take a look at the video below, and you’ll see how a basic UI test of Gmail can be automated in under three minutes:
Sure, something more complex will take longer, but even when it comes to automating complex tests, Screenster is pretty darn fast. In real-life projects, where outdelivering your competitors matters, speed is a must for any UI testing automation tool.
More importantly, Screenster does a great job of eliminating maintenance pains. Thanks to Smart Maintenance, all of your tests will always be up to date, and no test will break because someone renamed a class or an ID. But more on that later :).
3. Automatic handling of locators, dynamic UI areas, and timeouts
When recording a test, Screenster captures the DOM tree of the UI and creates a list of locators for every UI element. It will automatically update these lists in case someone changes the code of your UI. As a result, your testers can forget about the “element not found” exceptions and move on to more challenging tasks.
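Screenster doesn’t publish its internals, so the snippet below is only a sketch of the general idea: each recorded element keeps a list of fallback locators, and lookup tries them in order, so one renamed ID doesn’t kill the test. The `find_element` helper and the toy `dom` dict are illustrative, not Screenster’s actual API:

```python
# Toy "DOM" indexed by locator strategy. The element's id was renamed,
# but the css and xpath locators still resolve.
dom = {
    "css": {".btn-primary": "Submit button"},
    "xpath": {"//form/button[1]": "Submit button"},
}

# Locator list captured at record time, tried in order at playback.
recorded_locators = [
    ("id", "submit-btn"),           # stale after a refactoring
    ("css", ".btn-primary"),        # fallback 1
    ("xpath", "//form/button[1]"),  # fallback 2
]

def find_element(dom, locators):
    """Try each recorded locator in order; return the first match."""
    for strategy, value in locators:
        element = dom.get(strategy, {}).get(value)
        if element is not None:
            return element
    raise LookupError("element not found by any recorded locator")

print(find_element(dom, recorded_locators))
```

A hand-coded Selenium test with a single hard-coded locator fails at the first step here; the fallback list simply falls through to the next strategy.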
Dynamic areas are another source of pesky little problems for testers, and another thing that Screenster handles automatically. On the first run of a UI automation test suite, the platform will detect dynamic areas and offer to ignore them. Simple as that :).
Finally, our “automate everything” roadmap includes timeouts. The platform automatically determines the right waiting time for each action. With Screenster, global delays are a thing of the past — and so are hand-coded explicit and implicit waits.
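For contrast, this is the kind of hand-rolled explicit wait that testers write today and that Screenster aims to make unnecessary. It’s a minimal Python sketch of the polling pattern, not Selenium’s actual `WebDriverWait`:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Hand-coded explicit wait: poll a condition until it holds or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulated async UI: the element "appears" 0.3s after the page loads.
appeared_at = time.monotonic() + 0.3
element_visible = lambda: time.monotonic() >= appeared_at

wait_until(element_visible, timeout=2.0)
print("element is visible")
```

The pain point is picking `timeout` and `poll` values for every flaky step; a tool that records how long each action actually takes can infer them instead.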
4. Visual testing of what users actually see
This is, in fact, one of the main reasons to choose a smart record-playback solution over a code-based tool like Selenium or PhantomJS. If you’ve ever worked with one of the latter, I’m sure you’ve noticed how they can only verify the properties of individual UI elements. But UI automation testing requires a more comprehensive approach that tests CSS layouts, images, and the visual aspects of the page.
With a record-playback tool like Screenster, a single test covers both the visual layout and the actions taken by the user. Tests of this kind treat the UI as a whole rather than as a collection of separate parts: they cover the layout, fonts, colors, paddings, margins, and user input, along with every other visual aspect of the UI.
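Screenster doesn’t document its diff algorithm, but the core idea behind screenshot comparison with a tolerance threshold can be sketched in a few lines. This toy version works on flattened lists of pixels; real tools diff full bitmaps and highlight the changed regions:

```python
def screenshots_match(baseline, current, tolerance=0.01):
    """Return True if the fraction of differing pixels is within tolerance."""
    if len(baseline) != len(current):
        return False  # different dimensions always fail
    diffs = sum(1 for a, b in zip(baseline, current) if a != b)
    return diffs / len(baseline) <= tolerance

# Flattened "screenshots": lists of RGB tuples.
baseline = [(255, 255, 255)] * 100
current = baseline.copy()
current[0] = (250, 0, 0)  # one changed pixel out of 100 -> 1% difference

print(screenshots_match(baseline, current, tolerance=0.01))   # within threshold
print(screenshots_match(baseline, current, tolerance=0.005))  # over threshold
```

The tolerance knob is what separates a useful visual test from a firehose of false positives: too strict and every anti-aliasing artifact fails the build, too loose and real regressions slip through.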
5. Web-based platform with collaboration functionality
UI testing automation has long turned into an activity carried out by teams, not individual testers. Given this, it’s surprising to see that only a few UI testing tools support collaboration as a major feature.
Being a web-based platform, Screenster gives QA and Dev teams access to all tests on a shared portal. Depending on the needs of a particular project, teams can use a shared server that runs either on premise or in the cloud.
* * *
There’s a lot we can tell you about Screenster, mainly because we’re really proud of this product. Still, there’s only one way to actually see whether this tool can meet your UI testing automation needs. Check out our free demo, and try automating a simple test for your website. Chances are you’ll like what you see :)
This looks good – for an IDE. No one likes having to wrap their head around an element that has no id assigned to it, so you’ve got that going for you. I will be following this testing automation tool.
I’ll give it a try. Neat video btw
#6
It can do Selenium tests — with or without you handcoding the test. We’ve been working with Screenster since September and I really liked Minimal Verification with datasets. There’s also an option to actually use Selenium or JavaScript tests in Screenster (though this one could’ve been more straightforward).
Selenium support does seem like a good idea. Every UI testing automation tool uses WebDriver nowadays, and I’m sure Screenster is no exception. So why not make tinkering available to advanced users?
Here’s a №5 reason that I think you should add to your list=) Your pricing page creates the impression that Screenster isn’t as expensive as full-sized UI testing environments like UFT/UFT Pro, and this makes for a good chance that more budget-conscious folks will use it. I’m thinking of smaller testing firms that have to maintain teams of manual testers eyeballing other people’s apps
All in all, web-based UI testing automation tools look like they’re improving, which is nice
I like the functionality of your tool, but the UI could really use some attention… Looking at your top bar, there are four buttons and all of them look different!
For UI testing, a more precise headline would be “5 reasons to choose Screenster over not using any UI testing automation tools”. Because that’s what the reality is for most small websites and webapps. Automation is too expensive for most companies
I’ve been a part of teams that used Selenium and Protractor, and I can see your point. When automating the browser, you only work with narrowly focused actions. E.g. your test clicks on a button and checks if the browser will take it to the right UI state. Problem is, it doesn’t really look at anything else. If your header is now 35px off and has a red background, or if there’s a typo in some other button, you won’t know it unless you have specifically written tests targeting these bugs. I’ve worked with screenshot comparison, too, but these things aren’t practical at all. You can set them to fire failed tests whenever the UI looks different, but you’ll have to brace yourself for dealing with quite a lot of false positives (even if you’ve set a tolerance threshold). Basically, once your tool gets faster, sign me up and I will be happy to try it out and share my experience on my blog.
It is a nice tool, indeed. I really wish the UI didn’t look so old-school, but I guess it’s something you get used to. But still, do something about your UI, guys!
As long as we’re talking about visual testing, there’s a big difference between how people use screenshot comparison and image diffs in record-playback. The former is primarily triggered on test fail, because you wouldn’t really want to overload your tests with graphics data (screenshots) or slow them down with frequent calls to pixel-matching functions. In record-playback, some tools are optimized to take a screenshot for every action.