Using Command Line in Software QA Activities

There are often discussions in the software testing domain about whether software testers should learn a programming language, but operating system knowledge is rarely mentioned. In this article, Svetlana Ganina explains how mastering the Linux command line can help in daily software quality assurance activities.

Author: Svetlana Ganina, TestMatick, https://testmatick.com/

An employee of the software testing department works constantly under pressure: the number of tasks in the backlog doesn't decrease, production deployment is around the corner and a lot of things need to be checked. To guarantee the quality of the web applications being developed, a tester's efficiency has to improve, and one proven way to achieve this is to master new tools that make daily software testing much easier.

For handling daily tasks and writing simple automation, you can easily use Bash. Bash is a command shell that originated on Linux but is available for all popular operating systems today. It lets you search and edit files, watch running processes, log into remote virtual machines, and perform dozens of other tasks that a QA specialist faces every day.
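As a minimal sketch of such automation (the directory, file name and the ERROR pattern are invented for this illustration, not taken from any real project), a few lines of Bash can scan a set of log files and report how many error entries each one contains:

```shell
#!/usr/bin/env bash
# Create a tiny sample log so the sketch is self-contained.
mkdir -p /tmp/qa-logs
printf 'INFO started\nERROR connection lost\nERROR timeout\nINFO done\n' > /tmp/qa-logs/app.log

# Count ERROR lines in every .log file and print a short report.
for f in /tmp/qa-logs/*.log; do
  count=$(grep -c 'ERROR' "$f")
  echo "$f: $count error(s)"
done
```

Pointed at a real log directory, the same loop gives a quick per-file error summary without opening a single editor window.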

But is it really that easy? Can a plain command line be a panacea given the technologies and opportunities of 2019? Let's analyze this situation in detail.

Is the command line so important in a tester's activity? Command line vs. GUI

Many people have recently been wondering why the command line should be used when there is a nice and intuitive graphical interface. Like it or not, there are two opposing views, and each holds some truth.

Supporters of the command line believe that with its help you can quickly “go through the whole system from the inside, as well as have full access to many computers connected to the local network”.


The others point out that a modern GUI is so functional and intuitive that a single program can drive many processes in multitasking mode.

Both are partly right. The tasks handled through a command line and through a GUI are practically the same. Neither has any built-in intelligence or business logic – only a set of programmed actions that differ merely in how they are launched.

Let’s take a look at one simple example

For example, we have a really simple system operation…

With the command line: we just want to type a command; we accidentally make a mistake, google it, find the correct version, copy and paste it into the command line, and get the desired result.

With the GUI: we actively go through the application's functionality, find the required buttons (as a rule, we can't choose an incorrect one) and achieve the desired result.

Hopefully you see the difference: the GUI option skips the Internet searching and the pasting of results into the command line. If there are only a few commands, this is hardly noticeable; but when there are many, the difference adds up to wasted hours and days. And for frequently used operations the difference is highly significant.

After the first contact with a button in a GUI, you immediately remember its location and can easily find it again later. Memorizing command syntax is harder: it clutters your memory with extra information. If you like, compare it to a topographic map versus a textual description of an area: imagine how many words you would need to describe a country in text, compared with what a simple map can show.

The 10-windows theory, and how to work faster

When we face a complex data structure, this process becomes harder. A customer may open 10 windows at once, sometimes needing 2 or even 3 screens to see the full extent of the interaction between several processes. A tester then has no time to sit and type commands: there is a lot to do, and typing commands is hard work that fully occupies the brain.

So why hasn't the command line become history yet, and why isn't it confined to a small group of specialists? Why do users keep wasting their time on manual, routine command work? Who needs this?

We can suppose that the answer to these questions lies in psychology rather than technology. Using the command line is a quick way to assert oneself – a natural need we all share, which takes many forms. Many programmers use the command line deliberately in order to look like revered gurus of the black screen, in contrast to an ordinary user who just clicks a few buttons.

Of course, you may ask why testers should care about programmers' habits. But in practice such attitudes lead to teams of like-minded people who slow down software development and waste the time of everyone around them (testers included). Moreover, this bandwagon effect pulls in people who could otherwise develop their own independent position and grow professionally, but who instead must learn the commands of outdated technologies.

But we cannot say only bad things about the command line's relevance to a tester's work. There are good sides too. The command line connects you directly with many tools that can make QA work much easier. A developer's online life is regularly complicated by technical issues, and the worst thing is that sometimes you don't know their source. Is the problem in the request sent to the server, in the way a necessary library was set up, or is an external API working incorrectly?


Nowadays there are plenty of applications and programs that can make our life much easier. Let's explore a couple of really effective command line utilities that can be useful and efficient for every software QA specialist.

cURL

cURL is a program for quick data transmission over various protocols, somewhat similar to wget. The basic difference is that wget saves the result into a file, while cURL prints everything to the command line. This makes it easy to look at the content of any website quickly. For example, you can get your current external IP address really fast with a command like this:

$ curl ifconfig.me
93.96.141.93

The options -i (include the response headers in the output) and -I (show only the headers) make cURL a quality tool for debugging HTTP responses and analyzing exactly what a server is sending you.
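As a quick offline sketch of the headers-only behaviour (the file path here is invented; against a real HTTP server the output would also start with the status line, as in the example below):

```shell
# -I asks only for metadata. With a file:// URL cURL reports the size and
# modification time instead of HTTP headers, which is enough to see the
# "headers only, no body" behaviour without touching the network.
printf 'hello\n' > /tmp/curl-demo.txt
curl -sI file:///tmp/curl-demo.txt
```

The body of the file is never printed – exactly what you want when you only care about what the server (or file system) says about the resource.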

$ curl -I habrahabr.ru
HTTP/1.1 200 OK
Server: nginx
Date: nov, 2018 14:15:36 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Keep-Alive: timeout=25

The option -L is also very useful: it makes cURL follow all redirects automatically. cURL also supports HTTP authentication, cookies, tunneling via a chosen HTTP proxy server, and many other manual settings.

Siege

Siege is an interesting tool for load testing. It has a very practical option, -g, similar to curl -iL, which also shows the headers of the HTTP request. Here is a simple example with the Google site:

$ siege -g www.google.com
GET / HTTP/1.1
Host: www.google.com
User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
Connection: close

HTTP/1.1 302 Found
Location: http://www.google.co.uk/
Content-Type: text/html; charset=UTF-8
Server: gws
Content-Length: 221
Connection: close

GET / HTTP/1.1
Host: www.google.co.uk
User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
Connection: close

HTTP/1.1 200 OK
Content-Type: text/html; charset=ISO-8859-1
X-XSS-Protection: 1; mode=block
Connection: close

This tool is perfect for load testing. Like the well-known Apache benchmark tool ab, it lets you fire many concurrent requests and analyze how a site deals with the traffic. Below is an example against the Google site, running 20 concurrent connections for 30 seconds and then showing the final results.

$ siege -c20 www.google.co.uk -b -t30s ...
Lifting the server siege... done.
Transactions: 1400 hits
Availability: 100.00 %
Elapsed time: 29.22 secs
Data transferred: 13.32 MB
Response time: 0.41 secs
Transaction rate: 47.91 trans/sec
Throughput: 0.46 MB/sec
Concurrency: 19.53
Successful transactions: 1400
Failed transactions: 0
Longest transaction: 4.08
Shortest transaction: 0.08

One of the most useful features of this utility is the ability to hit several sites simultaneously, or a large list of URLs read from a file. This is perfect for realistic load testing: you can model real traffic on the site under test instead of simply hammering the same URL over and over.

Here is an example of how Siege can be used to load a server using the addresses taken from your Apache log:

$ cut -d ' ' -f7 /var/log/apache2/access.log > urls.txt
$ siege -c -b -f urls.txt
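To see what the cut command actually extracts, here is a sketch with a fabricated access-log line (assuming Apache's default log format, where the seventh space-separated field is the request path):

```shell
# One fabricated line in Apache combined log format.
line='127.0.0.1 - - [10/Oct/2018:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'

# Field 7, split on single spaces, is the requested URL path.
echo "$line" | cut -d ' ' -f7
# → /index.html
```

Run over the whole log, this yields one URL per line – exactly the file format that siege -f expects.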

Ngrep

For more detailed analysis of network traffic, the Wireshark tool offers a lot of useful settings, configurations, and parameters, and there is a command line version of it called TShark. But for simple tasks Wireshark is overkill; instead you can use the Ngrep utility. It lets you work with network packets much as grep lets you work with text.

When working with web traffic you almost always need the parameter -W byline, which preserves the line formatting of the payload, as well as the parameter -q, which hides redundant data about non-matching packets. Here is a simple example of a command that intercepts packets containing a GET or POST request:

ngrep -q -W byline "^(GET|POST) .*"

You can easily add a filter for the packets you want – for a given host, IP address, or port. Here is an example of a filter for all incoming and outgoing traffic to the site google.com on port 80 that includes the keyword “search”:

ngrep -q -W byline "search" host www.google.com and port 80


Conclusion

As the tools mentioned here show, quality assurance teams have to use many different tools in their everyday work of testing software, each of which can answer certain questions. Using command line tools is not mandatory for a software tester, but it is a useful addition that helps to evaluate comprehensively the quality of the web application under development.

About the Author

Svetlana Ganina is a QA Team Lead at TestMatick. She is experienced in testing web-based applications (including SaaS-based applications), content management systems, and databases, as well as mobile client-server applications on major platforms like iOS and Android.