With the multiplication of versions, platforms (desktop, mobile, tablets) and operating systems, testing an application that is supposed to run in a browser is not easy. In this article, Alexander Rayskiy proposes an approach to select the set of browsers that will be used during the software testing activities.
Author: Alexander Rayskiy, XB Software, https://xbsoftware.com/
It is hard to imagine a software quality team that is not constrained by time and resources and can test a web application in all available browsers. Thus, you need an algorithm for determining the optimal set of browsers. Page rendering can vary from browser to browser, which means you cannot rely entirely on test automation and must still check manually how your app looks in different environments. In this situation, manual testing is a must.
How to Choose the Right Browser
There are different approaches to the browser selection process. You can rely on browser usage statistics among your target audience to decide which browsers should be included in your list. This guarantees that your users will get the expected experience when using your web application. Another approach is aimed at finding bugs. It implies testing on the browsers that cause the greatest number of problems, such as the native Android browser. Unlike in the first approach, these browsers may not be widely used, but they can give you an understanding of how your app behaves in different usage scenarios and which GUI issues you may face. So, you have to clearly determine the goals of the testing process before getting started. If you want to guarantee a better user experience for the majority of your audience, focus on the most used browsers. If your aim is to find the maximum number of bugs and glitches, the most problematic browsers should be your target.
There is also a three-step approach that is the golden mean between the two previously mentioned. First, run your app on the most popular browsers on a development machine to get a general idea of where bugs may be found. Then run a manual test of the app on the problematic browsers, which will expose most of the bugs. The final step is sanity checking on the most popular browsers, which guarantees that the majority of your audience will get the expected user experience. The first two steps are aimed at finding bugs. If you are convinced that your app is robust and every test case from the checklist passes successfully, you can proceed to step three and test the minimum number of browsers that corresponds to the preferences of your target audience. Such an approach can give you relative confidence that your application will work as planned for a certain percentage of potential users.
Gathering Required Information
Before getting started with the software testing activities, you have to learn the habits of your users. If the app provides services related to a specific geographical location, find the audience statistics for this region. What are the most common browsers? Which browsers are barely used and can be removed from the list? Such data can be gathered via Google Analytics or any similar service. Make the gathered data readable and easy to understand: create a table that visualizes the results of your research. You should be able to see the most popular combinations of operating systems and browsers at a glance, so don't overload the table with unnecessary data.
Here are some tips on simplifying the visualization of browser usage statistics. In most cases, you can ignore the OS and concentrate on the browsers. Differences between browser versions can usually be considered negligible, especially since such software often updates automatically. The exception to this rule is Internet Explorer: its older versions are both problematic and widespread, so different versions of IE should be listed separately. Otherwise, you can merge the versions of desktop browsers. While in most cases the version of the OS is not so important, it is for Safari, because Safari versions are tied to the OS version. Another phenomenon worth your attention is the rapidly growing popularity of in-app browsers, so you have to estimate how many of your users will access the app via Facebook or Twitter and take this into account. Any browser that makes up less than 5% of the audience can be harmlessly removed from the list if it is not important for the target audience or the customer.
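The merging and filtering rules above can be sketched in a few lines of Python. The raw numbers and the `build_target_list` helper are illustrative assumptions; only the 5% cutoff and the "keep IE versions separate" rule come from the text.

```python
from collections import defaultdict

raw_stats = [  # hypothetical export from an analytics service
    ("Chrome", "96", 31.0), ("Chrome", "95", 12.0),
    ("Firefox", "94", 9.0), ("Firefox", "93", 2.5),
    ("IE", "11", 6.0), ("IE", "8", 5.5),
    ("Opera", "80", 1.5),
]

CUTOFF = 5.0  # percent of the audience

def build_target_list(stats):
    merged = defaultdict(float)
    for browser, version, share in stats:
        # Keep each IE version separate; merge versions of everything else.
        key = f"{browser} {version}" if browser == "IE" else browser
        merged[key] += share
    # Drop anything below the cutoff (unless it matters to the customer).
    return {b: s for b, s in merged.items() if s >= CUTOFF}

print(build_target_list(raw_stats))
# → {'Chrome': 43.0, 'Firefox': 11.5, 'IE 11': 6.0, 'IE 8': 5.5}
```

Opera falls below the 5% cutoff and is dropped; Chrome and Firefox versions are merged, while IE 8 and IE 11 stay separate entries.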
After you finish making the list of target browsers, you can proceed with prioritization. Your app should be tested on high-risk browsers first. When you fix a bug related to a specific browser, you want to be sure that the changes you make to the code won't cause new errors in other browsers. To avoid such a situation, you can grade browsers according to the probability of problems. For example, older versions of IE can be considered high-risk browsers, the newest versions can get the "middle-risk" mark, and the latest Chrome version can be considered low-risk. The main idea behind this approach is that if you test a low-risk browser first, find some bugs, and fix them, the probability of new errors appearing in a high-risk browser is still quite high, so you have to recheck over and over again. But if you start with the high-risk browser, find a bug, and fix it, the chances that the low-risk browser will be affected are much lower. Thus, there is no need to jump back and forth between browsers, making the same checks repeatedly.
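As a minimal illustration of this gradation, the risk labels below are hypothetical; the point is simply that sorting by risk puts the browsers most likely to break at the front of the test queue.

```python
# Map risk labels to a sort rank: lower rank = test earlier.
RISK = {"high": 0, "middle": 1, "low": 2}

browsers = [
    ("Chrome latest", "low"),
    ("IE 8", "high"),
    ("IE 11", "middle"),
    ("Firefox latest", "low"),
]

# Stable sort: browsers with equal risk keep their original order.
test_order = sorted(browsers, key=lambda b: RISK[b[1]])
print([name for name, _ in test_order])
# → ['IE 8', 'IE 11', 'Chrome latest', 'Firefox latest']
```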
A natural question arises: how can I identify a problematic browser? The answer depends on the particular features your application uses. There are dozens of websites that provide information about potential issues associated with features such as CSS3 2D Transforms or CSS Grid Layout in a particular browser. You can compare your source code with this information to predict which browser will become a source of headaches for the testers and mark it as "high-risk."
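As a toy sketch of this idea, the support table below is invented (a real project would pull compatibility data from a service such as caniuse.com): a browser counts as high-risk if it lacks any feature the app relies on.

```python
# Hypothetical feature-support table: browser -> set of supported features.
SUPPORT = {
    "IE 8":  set(),
    "IE 11": {"css3-2d-transforms"},
    "Chrome": {"css3-2d-transforms", "css-grid"},
}

# Features the application's source code actually uses (assumed here).
app_features = {"css3-2d-transforms", "css-grid"}

# High-risk: any browser missing at least one required feature.
high_risk = [b for b, feats in SUPPORT.items() if not app_features <= feats]
print(high_risk)
# → ['IE 8', 'IE 11']
```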
As mentioned before, the last step of QA in software testing is sanity checking. Even if you have successfully overcome the issues associated with the most problematic browsers, you can still face rendering differences between the latest versions of Chrome and Firefox, which can become a source of problems at this stage. Since the nature of testing has changed, the approach and the prioritization criteria must change as well.
We should take into account the percentage of users on a particular browser, how easy it is to upgrade, and whether the browser is the default for its operating system. If the overall usage of a browser is less than 0.05%, testing it is a waste of effort. If the overall usage is larger than 10%, the browser should get the highest priority; if it is more than 2%, the middle priority. Browsers that are the default for their operating system should get the middle priority if their overall usage is higher than 0.5% or if it is the latest version of the browser; otherwise, such a browser should get the lowest priority.
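These thresholds can be expressed as a small decision function. The `sanity_priority` name and the label strings are assumptions; the 0.05%, 2%, 10%, and 0.5% thresholds come directly from the rules above.

```python
def sanity_priority(usage_pct, is_os_default=False, is_latest=False):
    """Assign a sanity-check priority from usage share and OS-default status."""
    if usage_pct < 0.05:
        return "skip"    # testing effort would be wasted
    if usage_pct > 10:
        return "high"
    if usage_pct > 2:
        return "middle"
    if is_os_default:
        # OS-default browsers: middle priority if usage > 0.5% or latest version.
        return "middle" if (usage_pct > 0.5 or is_latest) else "low"
    return "low"

print(sanity_priority(31.0))                     # → high
print(sanity_priority(4.0))                      # → middle
print(sanity_priority(1.0, is_os_default=True))  # → middle
print(sanity_priority(0.2, is_os_default=True))  # → low
print(sanity_priority(0.01))                     # → skip
```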
To avoid the need to make the same checks over and over again or even spend your resources on tests that will not affect the final result, you have to choose the prioritization criteria properly. The analysis should combine statistical (percentage of usage) and technical (potentially problematic browser features) approaches. It is important to remember that the choices that seem to be the most obvious at first sight may not be the best options.
About the Author
Alexander Rayskiy is the Head of the QA Department at XB Software. He is an analytical and responsible QA professional with strong leadership and people management skills. Alexander can easily define and implement effective test strategies. He manages QA staff on both internal and external projects and takes care of providing QA staff augmentation services to meet clients' needs.