Agile, DevOps, Continuous Delivery, Testing in Production: all these modern trends put pressure on the traditional approach to software testing in general, and on the performance testing activity in particular. In this article, V.M.Guruprasath explains the issues created by this new context and proposes some possible solutions.
Author: V.M.Guruprasath (AIS,MCTS)
The increasing disruption in information technology from newer ideas such as Agile, Artificial Intelligence, IoT (Internet of Things), Cloud and intelligent Application Performance Management (APM), together with companies' cost-cutting measures on IT outsourcing over the past few years, has given rise to speculation about the future of performance testing and of the performance tester and performance test management roles. Given these trends, some suggest that performance testing is becoming less vital to the organization and that the influence of the role is likely to diminish over time.
This article aims to help the audience understand the fundamental pillars of performance testing, how they have eroded over time (leading to the speculation mentioned above), and the possible corrections to be implemented.
Role of Performance Testing in the Software World
The performance tester is, in effect, a "software doctor". In the medical world, a patient goes to the doctor for a general health check-up or to diagnose a specific health problem. The doctor gathers the patient's basic details, suggests a few tests (blood test, ECG and so on) and, based on the test results, recommends remedial medicine.
The performance tester's role is very similar in the software world. Here the organization reaches out to the performance testing team to evaluate the application's performance or to address specific performance issues. The performance testing team gathers the requirements, suggests a few tests based on them (load test, stress test and so on) and, based on the test results, reports on the application's performance and recommends areas for tuning.
There is a clear parallel between the role of the doctor in the medical world and that of the performance tester in the software world.
The Four Pillars of Performance Testing
Before we take on any disruption, we need to understand the four basic pillars of performance testing, which form its foundation. Let us explore them and try to understand how they have eroded over time.
The four basic pillars are Data, Time, Resources and Cost.
Performance testing is a data intensive and data driven task. The data requirements are broadly at three levels:
a. Requirements gathering: performance testers need to know the application. The more they know, the better they can suggest tests or identify performance problems. Nowadays it has become a trend for the non-functional requirements (NFR) to be reduced to a single statement: "Every page should respond in 'x' seconds" (where 'x' is some number).
Such requirements fail to answer the basics of Hermagoras' 5W1H:
* What are the transactions that need to respond within given service level agreement (SLA)?
* When will the transactions be performed? (Peak load period)
* Who will initiate the transactions? (User roles split and concurrent users)
* Where will the transaction be executed? (Environment)
* Why will the transactions be executed? (Scope)
* How will the transactions be executed? (Transaction rates and Online/ Batch/…)
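The 5W1H checklist above can be captured as structured data instead of a one-line SLA. A minimal sketch in Python follows; the transaction, its values and the field names are all hypothetical, chosen only to illustrate what a complete per-transaction NFR might record.

```python
from dataclasses import dataclass

@dataclass
class TransactionNfr:
    """One transaction's non-functional requirement, answering the 5W1H."""
    name: str                # What: the transaction under an SLA
    peak_period: str         # When: the peak load window
    user_role: str           # Who: the role initiating it
    concurrent_users: int    # Who: expected concurrency
    environment: str         # Where: the environment it runs in
    purpose: str             # Why: the scope of the test
    rate_per_hour: int       # How: the transaction rate
    sla_seconds: float       # The response-time SLA to meet

# A hypothetical requirement, far richer than "every page in 'x' seconds":
checkout = TransactionNfr(
    name="checkout",
    peak_period="Mon-Fri 09:00-11:00",
    user_role="registered customer",
    concurrent_users=500,
    environment="pre-production, production-sized",
    purpose="peak-season readiness",
    rate_per_hour=12_000,
    sla_seconds=2.0,
)
print(f"{checkout.name}: {checkout.rate_per_hour}/h within {checkout.sla_seconds}s")
```

A requirements document with one such record per performance-intensive transaction answers all six questions at once.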
We need to understand that the performance testing team members' expertise is not in application functionality. They are a pool of technical resources with knowledge predominantly of their performance testing tools, such as Micro Focus LoadRunner, Apache JMeter, Neotys NeoLoad or Microsoft VSTS. Each of these tools has its own scripting languages, built-in functions, custom-built functions, execution methodologies and error handling techniques.
When performance testing resources are assigned to a project, they need data to understand it. On average, a person needs four weeks to adjust to a new ecosystem. Hence, the project needs a mechanism for sharing all its vital data: a brief introduction to the project, its architecture, its timelines, the preferred or available tools with their license details, environment details, the purpose of the performance testing, the performance-intensive transactions, their transaction rates and the anticipated SLAs. This helps a newly joined performance testing team member get up to speed on the project quickly.
Nowadays this non-functional requirement data is mostly missing, and a new trend is emerging of asking the performance testing team to define the requirements themselves.
b. Test execution: unlike functional testing, performance testing deals with large volumes of data. Over time this dependency has been forgotten, and fulfilling the bulk data requirements, be it bulk user credentials or transactional data, has turned into a last-minute project management effort. Project management needs to understand that bulk test data requirements will come from performance testing, and plan the appropriate test data management/automation effort and timelines in the overall project plan.
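Bulk test data preparation of the kind described above is usually scriptable well before execution day. A minimal sketch, assuming a load tool that parameterizes virtual users from a CSV data file; the naming scheme and password pattern are illustrative only, and real projects would provision accounts through their own identity tooling.

```python
import csv
import io

def generate_credentials(count, prefix="loadtest_user"):
    """Generate unique throwaway credentials for virtual users (illustrative)."""
    return [
        {"username": f"{prefix}{i:05d}", "password": f"Pw-{i:05d}!"}
        for i in range(1, count + 1)
    ]

def write_csv(rows):
    """Render the rows as CSV text, the format most load tools consume."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["username", "password"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# One thousand virtual-user accounts, ready days in advance rather
# than as a last-minute scramble.
users = generate_credentials(1000)
print(write_csv(users[:2]))
```

The point is the planning, not the script: generating, loading and cleaning up such data takes effort that belongs in the project plan.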
c. Result analysis: in the performance testing world, the size of the result data is huge. An hour of load testing generates an enormous amount of data, both on the client side (throughput, average response times, time to first buffer, hits per second, etc.) and on the server side (CPU, memory and disk utilization, garbage collection trends, threads, top-running database queries, methods consuming the most CPU/elapsed time and so on).
Though the performance testing tools provide ways to collate and view the client-side statistics, that data still needs to be analyzed manually.
Intelligent Application Performance Management (APM) tools do help us analyze the server-side data quickly. However, we need to understand that this covers only one part of the high data requirements (the server side). The software doctor still has large volumes of client-side data to analyze and correlate with the server-side statistics.
From the above data requirements, we can observe that the high data volumes in turn require an equivalent amount of time to handle. It therefore becomes imperative for project planning to cater for the large time requirements of performance testing. A trend is emerging where appropriate timelines are not provided, and a heavy squeeze on effort is the default expectation. This forces the performance testing team to burn the midnight oil, which in turn makes it difficult for the performance test manager to maintain the team's morale.
The timing of the performance testing is also a factor.
Because the performance testing phase sits at the end of the overall project plan, the performance testing team, being last in the cycle, bears all the upstream delays. This is primarily because the go-live date following the performance testing phase cannot be moved, as the contingency time and funds have already been consumed by those upstream delays.
A decade ago, end-to-end performance testing used to happen after user acceptance testing (UAT). Over time this slowly eroded: it started to run at the end of UAT, then in parallel with UAT, then at the end of system integration testing (SIT). Now we see emerging trends of end-to-end performance testing being proposed for execution in the middle of SIT. I understand that shifting software testing left is the need of the hour, but this is not the right way. This kind of left shift jeopardizes the performance testing team's script development effort and ultimately the performance test results, because fundamentally performance testing is conducted on the final version of the build, which is then certified for production deployment.
A careful analysis shows there has been an erosion of the non-functional testing phase over the years. Non-functional testing, which includes performance testing, security testing and OAT (Operational Acceptance Testing, such as disaster recovery and resilience testing), used to have its own time window of one to three months (depending on project size) after UAT and before production readiness kicked in. This phase is no longer available. We need to restore the non-functional testing phase to the overall project timelines.
The ideal shift left for performance testing would be component-level or API-level performance testing (in an Agile or waterfall model) to identify and fix performance problems early, plus full end-to-end performance testing in the non-functional testing phase and its timelines. We can use Agile models (if required) for the early performance testing and ramp up the team for the end-to-end performance testing.
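Component-level performance testing of the kind proposed above can be as simple as a timed check in the build pipeline. A minimal sketch follows; the component under test, the iteration count and the 500 ms threshold are all hypothetical placeholders, not a substitute for the end-to-end test.

```python
import time
from statistics import quantiles

def call_api():
    """Stand-in for the component or API under test (hypothetical)."""
    time.sleep(0.001)  # simulate a short call

def p95_latency(fn, iterations=50):
    """Measure a callable's 95th-percentile latency in seconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile.
    return quantiles(samples, n=20)[-1]

latency = p95_latency(call_api)
# Fail the build early if the component already breaches its budget.
assert latency < 0.5, f"component latency budget breached: p95={latency:.3f}s"
print(f"p95 = {latency * 1000:.1f} ms")
```

Checks like this catch regressions at the component level early, while the full end-to-end test against the final build remains the certification gate.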
The second part of this article (to be published here later) will discuss the resource and cost aspects of performance testing. It will also propose some solutions to the issues mentioned above.
About the Author
V.M.Guruprasath works as a Senior Manager in an organization that is a global leader in consulting, technology services and digital transformation. Guru has over 14 years of experience in performance testing and delivery management across multiple domains, ranging from large multinational financial organizations such as credit bureaus, credit card corporations, and insurance and banking institutions, to retail and construction businesses. Learn more about V.M.Guruprasath on his LinkedIn page: https://www.linkedin.com/in/vmguruprasath/
Disclaimer: the views expressed in this article are the author's own and do not reflect the views of any organization.