There is nothing worse than building the wrong software right. Acceptance testing is the activity that allows the customer to validate that the delivered software meets their needs and specifications. While acceptance testing plays an important role in validating software delivery, it can also cause some issues, as Toby Weston explains in his book “Essential Acceptance Testing”.
Software acceptance testing
The concept of comparative (or back-to-back) testing originates in hardware testing, where the output of the device under test is compared with the output of a pre-tested “ideal” device fed the same input data. In the telecom industry, back-to-back testing of OSS/BSS (Operations/Business Support Systems) solutions is usually applied to systems that manage large amounts of data, in order to get maximum coverage of migration and configuration processes. In this article, Yulia Liber discusses the pros and cons of implementing comparative tests in telecom.
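The core of a back-to-back test fits in a few lines. The Python sketch below is only an illustration of the idea, not code from the article: a hypothetical reference implementation and its migrated replacement are fed the same records, and every divergence is collected for review.

    # Back-to-back test sketch: run the same inputs through a pre-tested
    # reference system and the system under test, then diff the outputs.
    # "reference" and "migrated" are hypothetical stand-ins for the legacy
    # and new OSS/BSS implementations.
    def back_to_back(reference, migrated, records):
        mismatches = []
        for record in records:
            expected = reference(record)
            actual = migrated(record)
            if expected != actual:
                mismatches.append((record, expected, actual))
        return mismatches

    # Toy tariff calculators standing in for real billing logic.
    reference = lambda usage: round(usage * 0.05, 2)
    migrated = lambda usage: round(usage * 0.05, 2)

    assert back_to_back(reference, migrated, [0, 120, 4375]) == []

In practice the records would come from a production-sized data set, which is what gives the technique its coverage of migration and configuration scenarios.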
Behavior-Driven Development (BDD) and Acceptance Testing are heavily intertwined and in many respects are one and the same. Both focus on starting at the outer layers of your application and concentrating on what matters to users: behavior.
Tools like Selenium make writing automated browser tests dead easy. Many teams never look further than this, and are satisfied with just replacing their laborious manual testing efforts with reliable Selenium scripts. They’ve missed a big opportunity.
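To see how low the barrier really is, here is a minimal browser test using Selenium’s Python bindings (Selenium 4 style locators); the URL, element names, and expected text are placeholders for illustration.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Placeholder URL and locators: adapt to the page under test.
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("alice")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()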
This article from David Sale provides a short introduction to Behavior-Driven Development in Python. It presents the principles of Behavior-Driven Development and the syntax of the Gherkin language that can be used with the freshen Python package, a clone of the famous Cucumber BDD framework written for Ruby. Freshen is an open source acceptance testing framework for Python that uses (mostly) the same syntax as Cucumber. A small step-by-step example shows how to use freshen, and alternative tools are proposed.
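To give a flavor of what the article walks through, here is a sketch modeled on the classic Cucumber calculator example: a Gherkin scenario plus freshen step definitions written in Python. Treat it as an approximation of freshen’s documented usage rather than a verified listing, as the exact decorator and context API may vary between versions.

    # division.feature
    Feature: Division
      Scenario: Regular numbers
        Given I have entered 3 into the calculator
        And I have entered 2 into the calculator
        When I press divide
        Then the result should be 1.5 on the screen

    # steps.py - step definitions that freshen discovers next to the feature
    from freshen import Before, Given, When, Then, scc  # scc: per-scenario context

    @Before
    def setup(scenario):
        scc.numbers = []
        scc.result = None

    @Given(r"I have entered (\d+) into the calculator")
    def enter_number(num):
        scc.numbers.append(float(num))

    @When(r"I press divide")
    def press_divide():
        scc.result = scc.numbers[0] / scc.numbers[1]

    @Then(r"the result should be (.*) on the screen")
    def check_result(expected):
        assert str(scc.result) == expected

Since freshen is implemented as a nose plugin, such scenarios are typically run with nosetests --with-freshen.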
This video presents the lessons a team learned from maintaining a large code base of Selenium acceptance tests. It covers the different approaches they developed to track their tests across projects and how this helped them identify flaky tests.
Unit tests are a programmer’s best friend, but relying on them exclusively gives an illusion of overall system integrity. At some level, we need to verify how our components integrate and ensure that unexpected behavior does not creep in when we move the application into its target runtime. It all comes down to whether your application gives the end user what he or she really needs (the tire swing) instead of what anyone thinks they need. How can we save our users from frustration, keep the fail whale at bay, and show stakeholders that the requirements are being met?
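This “illusion of integrity” is easy to demonstrate. In the contrived Python sketch below (all names are hypothetical), each component’s unit test passes in isolation, yet the real collaboration between them is broken; only a test that exercises the integrated path would catch it.

    def fetch_user(user_id):
        # Imagine a refactoring renamed the "name" field to "full_name".
        return {"id": user_id, "full_name": "Ada Lovelace"}

    def greeting(user):
        return "Hello, " + user["name"] + "!"

    # Unit tests, each assuming the other side's old contract, still pass:
    assert fetch_user(1)["full_name"] == "Ada Lovelace"
    assert greeting({"name": "Ada Lovelace"}) == "Hello, Ada Lovelace!"

    # An acceptance-level check that wires the pieces together fails:
    # greeting(fetch_user(1))  # KeyError: 'name'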