Reducing Test Failure Analysis Time with Machine Learning

How many of us have spent hours analyzing automation run reports for test failures, just to determine whether a failure was an actual bug, an environment-specific issue, or a test automation issue? No matter how robust we make our UI automation frameworks, we always encounter automation- or environment-specific failures, which increase the time spent analyzing those failures and spotting actual defects.

At some point, all of us have had a bad day with so many flaky tests that we lose confidence in the reliability of UI test automation results and start to ignore them over time. We propose a solution to this problem using the most trending technology of the last decade: Machine Learning (ML). The fact that we run around 130,000 test runs a day, with around 2.3 million test records saved in MongoDB every month, motivated us to look into Machine Learning as an approach to this problem.

What if we could use ML algorithms to find patterns in the day-to-day UI automation error messages we see, and tell us whether each one is an actual bug or not? We all use various Selenium-based test automation frameworks such as Cucumber, TestNG, ScalaTest, and Nightwatch.js, each with its own libraries for reporting test validation and automation failures, so the report formats vary considerably. It is also difficult to find a common pattern in user-defined error messages.

A typical error message from UI automation contains the message itself, stack traces, and other error data dumped by Selenium. One could argue that the message part alone is enough input to predict the outcome for us. However, we have seen many instances where these messages are not self-explanatory, and we have to look into the trace or error details to determine the actual root cause. Considering the whole error message is therefore not as easy as it sounds.
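To make the trade-off concrete, here is a minimal sketch of the two choices of model input. The failure text below is hypothetical (a typical Selenium `NoSuchElementException` dump); the point is simply that taking only the first line keeps the message, while normalising the whole dump keeps the trace details that often hold the real root cause.

```python
# Hypothetical failure text, roughly what a Selenium-based framework dumps.
raw = (
    'org.openqa.selenium.NoSuchElementException: no such element: '
    'Unable to locate element: {"method":"css selector","selector":"#checkout"}\n'
    "Build info: version: '3.141.59'\n"
    "Driver info: driver.version: ChromeDriver\n"
    "Stacktrace:\n"
    "    at com.example.checkout.CheckoutPage.clickPay(CheckoutPage.java:42)"
)

# Option 1: only the first line, i.e. the "Message" part.
message_only = raw.splitlines()[0]

# Option 2: the whole dump, whitespace-normalised, so trace details
# survive as model input.
full_context = " ".join(raw.split())

print(message_only)
```

Whether option 1 is enough depends on how self-explanatory your framework's messages are; in our experience, it often is not.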

Hence, the problem we have at hand is one of unstructured text data. Our approach includes these steps:

- Collect training data, i.e. pre-classified errors.
- Clean the data before feeding it to the model.
- Identify a simple yet powerful classification algorithm to work with, such as SVM, Random Forest, or Naive Bayes.
- Tune the model and identify the right metrics to measure the reliability of the resulting predictions.

This approach can also be extended to other error messages, such as JavaScript errors, Splunk logs, or trace logs.
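The steps above can be sketched end to end. This is a minimal, stdlib-only illustration using a multinomial Naive Bayes classifier (one of the algorithms named above); the training messages and the three labels (`product_bug`, `automation_issue`, `environment_issue`) are invented for the example, and a real pipeline would train on thousands of pre-classified records rather than six.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    # Cleaning step: lower-case, keep alphabetic tokens, drop very short ones.
    return [t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 2]

# Hypothetical pre-classified training errors (labels are illustrative).
training = [
    ("NoSuchElementException: Unable to locate element #pay-button", "automation_issue"),
    ("TimeoutException: page did not load within 60 seconds", "environment_issue"),
    ("AssertionError: expected order total 59.99 but found 0.00", "product_bug"),
    ("StaleElementReferenceException: element is not attached to the DOM", "automation_issue"),
    ("Connection refused: could not reach staging host", "environment_issue"),
    ("AssertionError: discount banner missing on checkout page", "product_bug"),
]

# Training step: count token frequencies per label (multinomial Naive Bayes).
label_counts = Counter()
token_counts = defaultdict(Counter)
vocab = set()
for text, label in training:
    label_counts[label] += 1
    for tok in tokenize(text):
        token_counts[label][tok] += 1
        vocab.add(tok)

def predict(text):
    """Return the most likely label, using log-probabilities with add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # class prior
        denom = sum(token_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((token_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("TimeoutException: staging host did not respond"))
```

In practice we would replace the hand-rolled counting with a library implementation (e.g. TF-IDF features plus an SVM, Random Forest, or Naive Bayes classifier) and hold out a labelled test set to measure precision and recall per class, which is where the metric-tuning step comes in.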
