(Written by Craig Pryde, Front-end Web Developer at After Digital)
When developing projects for the web, testing is often glossed over, but this stage of the development process is vital to ensuring that the end user can complete tasks easily and that the intended functionality behaves consistently across devices and browsers.
In this blog, I want to focus on how you can use testing and tracking to improve the user experience, ensuring that customers enjoy their time on your site, and to look at how we use tracking data to identify potential issues on a site.
What is testing for?
Testing within web development is the fundamental step in ensuring quality and reliability when delivering a product to an end user. It is key to ensuring that the functionality and experience the user receives are the ones the project's designer intended.
The aim of testing is to check that the code produced during the delivery phase works as expected across the devices and browsers on the project's defined support list. Testing allows the developer to confirm that the end product meets the functionality listed in the feature specification and works as intended by the designer's vision.
The end goal of testing is to ensure that the code released to production is stable and works as intended on all supported devices. Through testing, we can ensure that all work produced is of a high standard, and it gives developers time to sense-check their work, involving the designer to ensure that all experiences are optimised for the end user in line with the designs provided.
Why mix in tracking?
Tracking allows us to see the devices our end users are using and allows us to focus efforts on utilising features that are prevalent in those browsers. By looking at the raw data of device stats and user interaction events for the website over a period of 6 months we can see some of the following:
Low mobile traffic
This could indicate that the mobile experience has problems or is incomplete. Low stats tell us that we need to review the user journeys at the site's mobile viewport sizes with a UX Designer. We can then make changes and track the resulting shifts in behaviour.
Having a primarily arts- and theatre-based client list, we take low stats on mobile as a sign that there are potential issues on a specific device. We can then spend time user testing the website and watching recordings of interactions to understand the struggles our users are having. By fixing the issues causing a negative user experience, we can increase the number of users visiting on mobile and ensure the site is accessible and optimised for all types of users.
Google Analytics Tracking
By using custom events within Google Analytics, we can track user interactions, including button clicks and how far the user has progressed through the purchase path. This lets us see which areas of the purchase path cause drop-offs and investigate potential issues, or spots that could be optimised to make the process more streamlined.
This kind of tracking also allows us to trial new features on the site and gather user feedback to see preferences across our full user base. It can surface stats like the following:
- Seat map overlay link changed to a button: 50% increase in interactions.
- By adding a slight bounce when tickets are added to the cart we speed up the seat map selection stage by 5%.
This data is invaluable in helping our in-house team improve the experience of our users.
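As a rough sketch of how purchase-path tracking like this can be wired up, the helper below builds a custom event per step and hands it to an injected sender. In the browser the sender would wrap the real `gtag("event", …)` call; the event and parameter names here are illustrative assumptions, not a real schema.

```typescript
// Illustrative payload shape for a purchase-path step event.
type EventParams = Record<string, string | number>;
type Sender = (name: string, params: EventParams) => void;

// Build the payload for a purchase-path step so drop-off per step
// can be compared between analytics reports.
function purchaseStepEvent(step: number, stepName: string): [string, EventParams] {
  return ["purchase_path_step", { step, step_name: stepName }];
}

// Delegate to an injected sender; in production this would be
// (name, params) => gtag("event", name, params).
function track(send: Sender, step: number, stepName: string): void {
  const [name, params] = purchaseStepEvent(step, stepName);
  send(name, params);
}
```

Injecting the sender keeps the tracking logic testable without a browser, which is in the spirit of the unit testing discussed later in this post.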
Heatmaps & recordings
This kind of tracking allows us to see where the users’ focus primarily lies on the page. With recordings of user interactions, we can also identify areas that need to be improved or made more obvious to the user. A few examples could be:
A user could be struggling on mobile to identify how to progress to the next step, or to see the tickets that have been added to their cart. By spotting this pause in the purchase path within recordings, we can introduce features to solve the issue. An example would be making the cart overlay bounce slightly when the first ticket is added, directing the user's attention to the fact that they can open the cart at any time to see the tickets that have been added.
It's the small things that matter on a website; interaction design is a key step in any project, and this kind of data is vital in allowing our team to come up with innovative solutions to the problems end users are having.
Methods Of Testing
When developing a website or app we need to ensure that the user experience is effective, but we also need to ensure that the features we have implemented work and are reliable. To do so we use the following methods of testing:
Physical & Emulated Devices (Cross Browser Testing)
In our studio, we have a range of test devices that we share across the teams to ensure that quality assurance can be performed on the hardware that our end users are using. We use the device data from our clients' websites to ensure that we are actively testing on the devices that matter most to our end users.
Physical devices have their limits, though, and we need the ability to spin up a device and test at any point in the day. For this, we use a service called BrowserStack to gain access to over 2,000 browsers and emulated iOS and Android devices. This ensures that if an issue is raised, we have the means to test and debug it reliably.
When testing on physical devices, we test sites rigorously to ensure that, no matter what our end users throw at them, the functionality will cope without throwing errors that prevent the user from finishing their journey. Testing the site in both portrait and landscape orientations is a good way to confirm that the site's responsiveness has no flaws and that the site will be future-proofed for the new device screen sizes that launch onto the market after the project goes live.
Unit & End2End Tests
On most large-scale projects we incorporate unit tests into the application, and where budgets allow we also perform end-to-end testing. Jargon aside:
- Unit testing - the procedure of testing individual units/components of software to validate that each performs as designed. An example would be testing that a carousel's next method updates the active index to match the index of the shown slide.
- End2End - the technique used to test whether a flow/journey through an application behaves correctly from beginning to end. An example would be completing a registration form once with all the data provided and once with only a minimal amount of data, testing the form against both potential flows.
By implementing these tests we can ensure that when new functionality is added later in the project life span there are no breaking changes to the rest of the site. This means that if there are multiple components across the site using the same piece of core functionality, and we add additional features to upgrade one of the components, we can quickly confirm if that change will break any of the other components.
We can also use this when deploying new features or fixes to the live environment: by having our deployment tools run the tests and cancel the deployment if any issues are found, we prevent broken code from reaching end users.
Accessibility Testing
This is the kind of testing that is overlooked by most development teams but, in my opinion, is the most important stage of all. It is a method of testing to ensure that what has been developed is accessible to all your users.
When talking about accessibility testing, a lot of people think it just means running through a site with a screen reader, which is aimed at the visually impaired. In reality, accessibility testing has a larger scope and includes testing the experience for these common kinds of impairments:
- Visual impairment
- Motor/dexterity impairment
- Hearing impairment
- Cognitive impairment
In order to cater to these users, we follow guides and standards that aim to standardise accessibility approaches. We mainly follow the WCAG 2.1 spec, which outlines the following for the functionality we intend to build:
- Event handlers - How this component should be interacted with
- Additional features - Any additional considerations to take when implementing the feature
- Insights - Links to examples of code or information on how the user will interact with the functionality
By having a solid understanding of what is defined within the WCAG for the features we are building, we can ensure that the end user’s experience for those using assistive technology is optimised and allows the user to complete the desired actions with minimal difficulty.
By using the data we gather from users, we can ensure that when it comes to designing and building new features into a site, we are making the decisions that are right for the end user. This data allows us to make decisions based on facts and to educate our clients on what users really want.
We can then apply the different methods of testing to ensure that the end product being used is of a high standard, works well and provides a powerful experience for the end user.