
Load testing a web server with ab, the Apache HTTP server benchmarking tool




As long as your web server runs stably and reliably serves the requested content to visitors, everything is fine. But have you asked yourself what will happen if the load on the server grows? What if the number of requests per unit of time doubles? Triples? Grows tenfold? How do you find the answer to that ever-relevant "what if?" In today's article we will look at the basics of load testing web servers with ab, the Apache HTTP server benchmarking tool, which will let you determine the maximum number of simultaneous requests your web server installation can handle.


Preparation

The ab utility ships with Apache, so if you have Apache installed, you already have everything you need. In today's note we will practice on a default Apache 2.2.14 installation on Ubuntu Server 10.04.2 LTS. I won't list the hardware configuration (for the curious: it is rather modest) or the Apache settings, since they are beside the point here. The purpose of this note is a brief overview of the ab utility, not an analysis of the performance of specific software on specific hardware. As our example, we will test the server's delivery of:

  • an HTML file, test.html, 177 bytes in size;
  • a small PHP script, test.php, shown below.

Both files are located in the server's DocumentRoot. The PHP script looks like this:

 
<?php
phpinfo();
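
Before benchmarking, it's worth a quick check that both URLs actually respond. A minimal sketch (assuming curl is installed and the host name aserver.ashep resolves from the test machine):

$ curl -sI http://aserver.ashep/test.html    # headers only; expect HTTP/1.1 200 OK
$ curl -s http://aserver.ashep/test.php | head -n 3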

To reduce the impact of network latency, run the tests from a system that has the highest possible network bandwidth to the server being tested.
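
It also helps to know the baseline round-trip time to the server, since it is included in every measurement. A simple check (a sketch; assumes ICMP ping is permitted on your network):

$ ping -c 5 aserver.ashep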

Requesting an HTML file

So, let's begin. First, let's load our server with one thousand consecutive requests. The number of requests is specified with the '-n' option; the port number can be omitted if it is the default 80:

 
$ ab -n 1000 http://aserver.ashep:80/test.html
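
(Note that ab expects a full URL including a path: a bare http://aserver.ashep without at least a trailing slash is rejected as an invalid URL.)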

So, here's what happened (the output is not quoted in full; the unimportant parts are omitted for now). While the test runs, the utility reports its progress:

 
Benchmarking aserver.ashep (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Next you will see the server software version, the host name, which document was requested, and its size:

 
Server Software:        Apache/2.2.14
Server Hostname:        aserver.ashep
Server Port:            80
 
Document Path:          /test.html
Document Length:        177 bytes

And then, finally, the results themselves:

 
Concurrency Level:      1
Time taken for tests:   1.500 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      453000 bytes
HTML transferred:       177000 bytes
Requests per second:    666.58 [#/sec] (mean)
Time per request:       1.500 [ms] (mean)
Time per request:       1.500 [ms] (mean, across all concurrent requests)
Transfer rate:          294.88 [Kbytes/sec] received

As you can see:

  • Concurrency Level: the number of requests sent simultaneously, here 1;
  • Time taken for tests: one thousand requests took the server 1.5 seconds;
  • Complete requests: a response was successfully received for all one thousand requests;
  • Failed requests: zero failed requests;
  • Write errors: zero write errors;
  • Total transferred: total data transferred, 453,000 bytes;
  • HTML transferred: of which “useful” HTML accounts for 177,000 bytes;
  • Requests per second: the average number of requests per second was 666.58;
  • Time per request: the average time per request was 1.5 milliseconds;
  • Transfer rate: data was received from the server at 294.88 kilobytes per second.
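
These figures are consistent with one another: Requests per second is simply Complete requests divided by Time taken for tests, and at a concurrency level of 1 both Time per request lines reduce to the same value:

1000 requests / 1.500 s ≈ 666.7 requests per second (ab reports 666.58 from its higher-precision internal timing)
1.500 s / 1000 requests = 1.5 ms per request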

Next, the output shows the time spent on network connections:

 
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       2
Processing:     1    1   0.7      1      13
Waiting:        1    1   0.5      1       8
Total:          1    1   0.7      1      14
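
To read this table: Connect is the time to establish the TCP connection, Waiting is the time from sending the request until the first byte of the response arrives, Processing is everything after the connection is established, and Total is Connect plus Processing.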

And how quickly the server serviced the requests:

 
Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      2
  95%      3
  98%      4
  99%      5
100%     14 (longest request)

As you can see, the server coped with a thousand consecutive downloads of a small static file without difficulty. Now let's see how the server behaves if the entire thousand requests are sent to it simultaneously, which we specify with the '-c' option:

 
$ ab -n 1000 -c 1000 http://aserver.ashep:80/test.html
 
Benchmarking aserver.ashep (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
apr_socket_recv: Connection reset by peer (104)
Total of 804 requests completed

Here the test could not be completed: after 804 simultaneous requests had been handled, the server stopped accepting incoming connections. By experimentally reducing the number of simultaneous requests, I found that in its current configuration my Apache can painlessly handle roughly 300 simultaneous non-keep-alive requests.
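
Rather than guessing, you can step through concurrency levels automatically. A minimal sketch (assumes a bash shell; it relies on ab exiting with a non-zero status when a test aborts on a socket error, as it did above):

# Try increasing concurrency levels until ab starts failing.
for c in 100 200 300 400 500; do
    if ab -n 1000 -c "$c" http://aserver.ashep:80/test.html >/dev/null 2>&1; then
        echo "concurrency $c: OK"
    else
        echo "concurrency $c: failed"
        break
    fi
done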

 
$ ab -n 1000 -c 300 http://aserver.ashep:80/test.html
 
Server Software:        Apache/2.2.14
Server Hostname:        aserver.ashep
Server Port:            80
 
Document Path:          /test.html
Document Length:        55716 bytes
 
Concurrency Level:      300
Time taken for tests:   13.658 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      55998000 bytes
HTML transferred:       55716000 bytes
Requests per second:    73.22 [#/sec] (mean)
Time per request:       4097.409 [ms] (mean)
Time per request:       13.658 [ms] (mean, across all concurrent requests)
Transfer rate:          4003.91 [Kbytes/sec] received
 
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   17 190.0      6    3015
Processing:   224 1659 2376.3    644   13644
Waiting:      212 1628 2360.6    621   13636
Total:        230 1677 2379.8    648   13650
 
Percentage of the requests served within a certain time (ms)
  50%    648
  66%    654
  75%    668
  80%    785
  90%   7003
  95%   7243
  98%   7384
  99%   7425
100%  13650 (longest request)
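
A note on the two Time per request lines, which differ once the concurrency level is above 1: the first (mean) is the concurrency level multiplied by the total time, divided by the number of requests, while the second is simply the total time divided by the number of requests:

300 × 13.658 s / 1000 ≈ 4097.4 ms (mean)
13.658 s / 1000 = 13.658 ms (mean, across all concurrent requests)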

Naturally, with Keep-Alive requests the situation will be even worse, since the server's resources are not freed as quickly. To test with Keep-Alive connections, just add the '-k' option:

 
$ ab -k -n 1000 -c 1000 http://aserver.ashep:80/test.html
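
If you want to analyze or plot the results afterwards, ab can also save them to files: the '-g' option writes per-request timings in a gnuplot-friendly format, and '-e' writes the percentile table as CSV. A sketch of the same Keep-Alive test with both outputs saved (the file names here are arbitrary):

$ ab -k -n 1000 -c 1000 -g keepalive.tsv -e keepalive.csv http://aserver.ashep:80/test.html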

Requesting a PHP script

With scripts the situation is naturally different. Here the server does not simply hand over a file: it has to start the interpreter, wait for its output, and return that output to the client. The interpreter's work, of course, consumes a certain amount of system resources, which also affects the performance of the server as a whole.

Let's try requesting our simple script a thousand times, 300 requests at a time:

 
$ ab -n 1000 -c 300 http://aserver.ashep:80/test.php
 
Server Software:        Apache/2.2.14
Server Hostname:        aserver.ashep
Server Port:            80
 
Document Path:          /test.php
Document Length:        55469 bytes
 
Concurrency Level:      300
Time taken for tests:   44.110 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      55660000 bytes
HTML transferred:       55469000 bytes
Requests per second:    22.67 [#/sec] (mean)
Time per request:       13232.931 [ms] (mean)
Time per request:       44.110 [ms] (mean, across all concurrent requests)
Transfer rate:          1232.28 [Kbytes/sec] received
 
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  198 934.7      0    9021
Processing:   295 5910 9113.3   2457   44098
Waiting:      227 5711 9149.6   2273   44039
Total:        301 6108 9102.0   2470   44106
 
Percentage of the requests served within a certain time (ms)
  50%   2470
  66%   2983
  75%   4412
  80%   5575
  90%  14254
  95%  32750
  98%  33302
  99%  33589
100%  44106 (longest request)

As you can see, the server successfully coped with the requests, but processing time grew considerably: an average of 44 milliseconds per request (across all concurrent requests) versus about 13.7 milliseconds for a static HTML file of roughly the same size, with throughput dropping from 73.22 to 22.67 requests per second.

