Manual testing checklist for web applications
May 8, 2018
Here’s a fun fact: manual testing accounts for ~75% of functional tests. Let that sink in for a moment. In our brave Agile world that lives by the motto “automate everything”, we only automate 25% of functional testing. Why does this happen?
When do companies choose manual testing over automation?
Sometimes, manual testing is the only viable choice because automation is simply a non-option. There are quite a few cases of this sort.
- Early-stage development. WebDriver scripting is a no-go if you’re building from scratch and most features are in active development. This is why so many projects start with manual UI testing.
- Ever-changing requirements. System-level automation can be cost-effective if (and only if) the requirements are stable. This isn’t the case with some projects. Also, this is never the case with early-stage projects.
- Tight schedules. If you’re hand-coding your tests, test suites will take weeks to write. Manual testing can provide you with the short-term speed gains needed to start shipping faster.
- Exploratory testing. Automation only works when you know what you’re looking for, which is the complete opposite of how exploratory testing works. When running exploratory testing, you need to proactively search for corner cases and unexpected issues. With the exception of really advanced AIs, only humans are capable of this.
Where manual testing outperforms automation
Aside from cases when automation is impossible, manual testing has important advantages over coded frameworks like Selenium. It’s these advantages that make teams stick with manual tests.
- Human attention. Selenium tests ignore everything outside of their scope. Humans, on the other hand, also notice whatever else is wrong with the UI. Because of this, it makes sense to run manual testing at least once in a while for every business-critical feature.
- Flexibility. Web application testing needs to constantly adapt to dozens of variable factors. Unlike Selenium code, manual tests are easy to change.
- Faster test preparation. Manual tests are ideal for ad-hoc testing because they take little time to prepare.
- Low barrier of entry. Let’s be frank, working through a few software testing tutorials is easier than working through those same tutorials plus a programming language and an automation framework. Manual testing is faster and cheaper to set up, and it doesn’t require programming skills. This is the reason why manual testing is the starting point for both companies and specialists. This is also a reason why automation is overkill for simple projects that still require thorough testing (like ecommerce websites).
Manual testing checklists
Now that we know what makes manual testing necessary, how do you actually run it? Read on for checklists on functional, usability, security, and compatibility testing.
Usability testing checklist for UI
- Do all UI elements and content (text, images, animated GIFs, etc) render on the page?
- Can the user navigate the UI?
- Are all links, menus, and submenus accessible, clickable and tappable? Are there any broken links?
- Is there a home link on every screen/page?
- Can the user access all clickable elements (links, buttons, dropdowns, sliders, and boxes) via keyboard only? Do all clickable/tappable elements receive focus via tabbing (i.e. pressing the Tab key)?
- Do disabled fields and read-only elements receive focus via clicks, taps, and tabbing?
- Does the UI automatically place the cursor in the first (uppermost) text input field? Does this behavior match the specification?
- Are there any issues with the text content of buttons, fields, tooltips, messages, navigation items, and menus?
- Are there any spelling errors?
- Does the text content match the specifications and naming conventions?
- Are there any visible layout issues?
- Is the order of menu and submenu items correct?
- Do all widths, margins, and paddings match the specifications?
- Is the content of UI elements clearly visible (e.g. not truncated due to the width of the element)?
- Does the format and size of all buttons, fields, tooltips, messages, navigation items, and menus match the specifications?
- Does the alignment of text and fields match the specifications?
- Do fonts match the specifications?
- Are all context cues in place and working as specified?
- Are the disabled fields “grayed out”? Are there visual clues for distinguishing disabled fields from active ones?
- Are there tooltips (“hover boxes”) available for input fields, buttons, icons, and other UI elements?
- Is there a blinking cursor when an input field receives focus (via clicking, tapping, or tabbing)?
- Are there confirmation messages (e.g. confirmation popups) for every operation that involves updating or deleting something?
- Are there error messages in place for fields? If there’s an error on submit, does the field retain previous user input?
- Are there context cues for successful and failed file downloads and uploads?
- Does the UI display scrollbars of its own? Should it?
- Are there any major issues with the structure of the page?
- Is there a single <h1> element on the page?
- Does the page have a <title> tag and a meta description (<meta name=”description” content=”…”>)?
- Are there any accessibility issues?
- Are all buttons and loading states properly labeled for screen readers?
- Are there clearly-named exit points for modals?
- Is the hierarchy of UI components screen-reader-friendly?
- Do all images have descriptive alt text?
- Is there anything frustrating about any aspect of the UI?
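Some of the structural checks above (a single <h1>, a <title> tag, a meta description, alt text on images) lend themselves to a quick automated smoke test. Here’s a minimal sketch using only Python’s standard library; the sample page and the specific checks are illustrative, not a complete audit:

```python
from html.parser import HTMLParser

class StructureChecker(HTMLParser):
    """Collects basic page-structure facts: <h1> count, presence of
    <title> and a meta description, and images missing alt text."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.has_title = False
        self.has_meta_description = False
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self.has_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.has_meta_description = True
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

def check_page(html: str) -> dict:
    checker = StructureChecker()
    checker.feed(html)
    return {
        "single_h1": checker.h1_count == 1,
        "has_title": checker.has_title,
        "has_meta_description": checker.has_meta_description,
        "images_missing_alt": checker.images_missing_alt,
    }

# Hypothetical page: one <h1>, a title, a meta description,
# and one image without alt text.
page = """<html><head><title>Shop</title>
<meta name="description" content="Product catalogue"></head>
<body><h1>Products</h1><img src="a.png"><img src="b.png" alt="Item B">
</body></html>"""
print(check_page(page))
```

A script like this won’t replace a human looking at the rendered page, but it can flag regressions on every build before a manual pass begins.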
Functional testing checklist
- Do all interactive UI elements work as specified?
- Check all buttons, radio buttons, dropdowns, toggles, checkboxes, text boxes, list boxes, date and time pickers, sliders, search fields, pagination and tags. Do these elements respond correctly to clicks, taps, and other input?
- Is there an error message or page when a particular functionality fails?
- If the UI involves sorting (e.g. catalogue of product items), does the sorting functionality work as specified?
- If there is support for drag-and-drop, does it work consistently?
- Is there validation in place for all fields where necessary?
- Are there constraints for maximum length for alphanumeric input? How does the system handle very large strings?
- Do the input fields handle special symbols?
- Do numeric fields reject alphabetic input?
- If there are calculations, does the application handle very large numbers and division by zero?
- Does the app impose constraints on input involving currency? Does the app handle different currency formats?
- If the supported input includes dates, does the app handle leap years?
- In the case of invalid input, is there an error message?
- Does the user registration functionality work?
- Can the user clearly tell what fields are mandatory?
- Will the application display an on-submit error message if the user doesn’t provide input for all mandatory fields?
- Will the application still submit the form if the user doesn’t provide input for non-mandatory fields?
- If registration is successful, is there a welcome page or message?
- If the registration fails, is there an error page?
- Are there timeouts in place for registrations, payments, and other session types?
- Does clicking or tapping on an email address open an email client?
- Does the app allow the user to download files?
- For downloads, are there error messages / error pages in place?
- Does the app support file uploading?
- For file uploading, are there constraints for file type and size?
- Does the app display an error message if the user tries to upload a file of a wrong type and/or size?
- How does the UI handle the user’s tinkering with the browser?
- What happens when the user deletes cookies / clears browsing history mid-session (i.e. while using the UI)?
- What happens if the user deletes cookies / clears browsing history after the session?
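The validation items above (field length limits, leap years, division by zero) describe behavior the backend should enforce. Here’s a sketch of the kind of server-side validation these checks probe; the 64-character limit and the helper names are hypothetical, stand-ins for whatever your spec defines:

```python
import datetime

MAX_NAME_LENGTH = 64  # hypothetical limit; take the real one from the spec

def validate_name(value: str) -> list[str]:
    """Return validation errors for an alphanumeric text field."""
    errors = []
    if not value:
        errors.append("Field is mandatory")
    elif len(value) > MAX_NAME_LENGTH:
        errors.append(f"Must be at most {MAX_NAME_LENGTH} characters")
    return errors

def validate_date(year: int, month: int, day: int) -> list[str]:
    """Reject impossible dates, including Feb 29 in non-leap years."""
    try:
        datetime.date(year, month, day)
    except ValueError:
        return [f"{year}-{month:02d}-{day:02d} is not a valid date"]
    return []

def safe_divide(a: float, b: float):
    """Surface division by zero as a None result (i.e. a validation
    error) instead of crashing the calculation."""
    if b == 0:
        return None
    return a / b

print(validate_date(2019, 2, 29))  # Feb 29 in a non-leap year: rejected
print(validate_date(2020, 2, 29))  # leap year: valid, no errors
print(validate_name("x" * 100))    # exceeds the length limit
print(safe_divide(1, 0))           # division by zero handled
```

When testing manually, you’re essentially probing for the places where checks like these are missing: paste in a 10,000-character string, enter February 30th, divide by zero, and see whether the app fails gracefully.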
Compatibility testing checklist
- Is the UI layout consistent across different screen resolutions and browsers?
- Do all widths, margins, and paddings behave consistently across browsers and screen resolutions?
- Do fonts and colors render consistently across browsers?
- Do images and animated GIFs load on all browsers?
- If applicable, does the HTML version of the app look consistent across all browsers?
Checklist for basic security testing
- Check for major security flaws associated with user authentication:
- Does logging in involve two-step verification?
- After you’ve logged out, can you access your account (or any pages with sensitive data) without logging in?
- Are there password rules in place on all authentication pages (sign up/registration, sign in, change password, forgot password, etc)?
- Does the password field mask user input?
- After you’ve changed the password, can you still log into your account with the old password?
- How many times can the user input an invalid password? Does the app lock the user out once they exceed the allowed number of attempts?
- Does sensitive data appear in any of the following:
- Error messages of any sort.
- Any pages that don’t require a login.
- The source code of the page. If yes, is the View Page Source / View Source Code option disabled?
- Is there any sensitive data stored in cookies?
- Is sensitive data of any kind still accessible if some functionality of the app is not working?
- Are there traceable log files for storing important information? Does the app update these files as specified?
- Does the app use HTTPS / SSL?
- Does the app use encryption when handling sensitive data (including user credentials, user bio, credit card information, etc)?
- If there are session values in the address bar, does the app encrypt these values?
- Does the app encrypt cookie data?
- What happens if you refresh the page or click/tap Back while a transfer of sensitive data (e.g. credentials, payment info, etc.) is in progress?
- Can the user use your app after the session has expired?
- Can a user access the functionality that is only available to other roles (e.g. can a regular user access admin-only features)?
- How does the app handle SQL injection?
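To make the last item concrete: the classic SQL injection probe is entering something like `' OR '1'='1` into a login or search field. Here’s a minimal, self-contained sketch (using Python’s built-in sqlite3 and a toy in-memory table) contrasting a vulnerable query with a parameterized one; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # VULNERABLE: user input is spliced directly into the SQL string
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: input is bound as data, never parsed as SQL
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injection returns every row
print(find_user_safe(payload))    # returns nothing: no such user exists
```

When testing manually, try payloads like this in every input field that could plausibly reach a database. If the app returns unexpected data, an error page with a raw SQL message, or behaves differently than it does for ordinary invalid input, flag it.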
Looks like quite a checklist to run manually, doesn’t it?
Some of the items above only need to be covered once. For instance, how many times will you need to check if the app uses SSL? Still, you’ll have to run through most of these checklists on a regular basis if quality assurance is a priority. And there really is a smarter way to do this than to rerun all tests manually over and over again.
Codeless automation of manual testing with record/playback
If you find manual test runs too time-consuming, writing WebDriver code isn’t necessarily the best automation tactic. Manual testing teams seeking to optimize their performance can benefit from running their usability and functional tests in a record/playback solution. This approach works for the same reasons manual testing does: it’s quick to set up, easy to adapt, and doesn’t require programming skills.
If record/playback is so awesome, why isn’t it as popular as Selenium?
Record/playback is underrated, and there is a reason why that’s so. Software testers often associate record/playback with simplistic old-school solutions like Selenium IDE or earlier versions of QTP/UFT. If you ever tried one of these testing tools, you’re probably sceptical too. But things have changed a lot since the time Selenium IDE and QTP were the only game in town…
The visual UI testing platforms of today have come a long way to make record/playback work for professional software testing. Here are a few things that our very own platform Screenster can offer.
Codeless test automation, code-optional maintenance
No auto-generated code
One of the main issues with most record/playback solutions is that they only allow test editing via auto-generated code. This is neither pleasant, nor feasible. With Screenster, manual testers can easily edit UI tests without touching a line of code.
Visual testing !== screenshot comparison
Screenster uses a sophisticated visual testing algorithm that matches the DOM structure of individual UI elements to how they look in the UI screenshot. The platform runs precise UI comparison without firing false positives caused by minor content shifts, anti-aliasing, or dynamic content. Screenster also recognizes text, and it can tell if a particular line of text is an automatically-updated timestamp (date).
Brittle locators are one of the main reasons UI testing is so hard. To make things simpler for testers, Screenster captures complete lists of selectors for each UI element. Should someone move the element or rename an ID or class, Screenster will still be able to find the element using the other stored selectors.
Smart automation of timeouts
Unlike Selenium or most record/playback platforms, Screenster doesn’t depend on explicit or implicit waits. Instead, the platform will determine optimal waiting time for every UI element to fully load — much like a human tester.
Smooth learning curve
Most of our current users were able to automate a real UI test in under 15 minutes. Screenster caters to manual testers, and 90% of its functionality is accessible to non-technical users.
No installation or setup pains
Most desktop IDEs weigh gigabytes and take days to set up. Screenster, in contrast, is fully web-based, with no setup pains and no browser plugins required for it to operate. Everything really works out of the box, and there’s an option to run the platform on a local server if you need to.
It’s free to try
If the features above sound interesting, you can try our free demo and see how Screenster can help optimize your workflow. Sign up for our free online demo and tell us what you think!