Saturday, July 04, 2009

Using Apache Bench to monitor performance

What about an application that feels 'faster' than yours? Or pages that seem to load more slowly than others? We'll use the ab command line tool to measure the load times of your pages in a scientific way. The examples are based on this Ossigeno installation.
Installation
Apache Bench is a command line tool, open source like the Apache HTTP Server, that performs simultaneous and repeated requests to a URL to simulate traffic, then outputs a statistical analysis of the performance.
On Ubuntu, you can install it with:
$ sudo apt-get install apache2-utils
and then
$ ab ....
to execute Apache Bench.

Usage
Let's see it in action:
$ ab http://ossigeno.sourceforge.net/blog
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking ossigeno.sourceforge.net (be patient).....done


Server Software: nginx/0.6.31
Server Hostname: ossigeno.sourceforge.net
Server Port: 80

Document Path: /blog
Document Length: 334 bytes

Concurrency Level: 1
Time taken for tests: 0.433 seconds
Complete requests: 1
Failed requests: 0
Write errors: 0
Non-2xx responses: 1
Total transferred: 632 bytes
HTML transferred: 334 bytes
Requests per second: 2.31 [#/sec] (mean)
Time per request: 432.576 [ms] (mean)
Time per request: 432.576 [ms] (mean, across all concurrent requests)
Transfer rate: 1.43 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 162 162 0.0 162 162
Processing: 271 271 0.0 271 271
Waiting: 271 271 0.0 271 271
Total: 433 433 0.0 433 433


That is a lot of info, and there is some noise in it. ab made one request to sourceforge.net and printed the elapsed time along with other statistics, like the mean and the median. The most important figure, however, is Requests per second, which represents how many pages the httpd server can serve in one second. Let's cut out the unnecessary statistical data:

$ ab http://ossigeno.sourceforge.net/blog | grep Request
Requests per second: 3.02 [#/sec] (mean)
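If you want just the raw number, for instance to compare runs in a script, awk can pull out the fourth field of that line. In this sketch a here-document stands in for the real ab output, so the snippet runs offline; in practice you would pipe `ab ... 2>/dev/null` into awk instead:

```shell
# Extract the requests-per-second figure from an ab report line.
# The here-document below mimics ab's output for an offline demo.
rps=$(awk '/Requests per second/ {print $4}' <<'EOF'
Requests per second:    3.02 [#/sec] (mean)
EOF
)
echo "$rps"   # prints 3.02
```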

Ok, it seems that this sf.net server can send out three pages every second. And this is only one server: sourceforge.net has many, and when we type ossigeno.sourceforge.net in our browser a load balancer redirects us to one of them.
Well, one request is not statistically significant: it could be noticeably faster than the others because it was sent at a particularly idle instant, or it could be slowed down because some cached elements were refreshed while our request was being served. So let's do what a statistician would do: increase the sample size.

$ ab -n 100 http://ossigeno.sourceforge.net/blog | grep Request
Requests per second: 2.70 [#/sec] (mean)

ab makes one hundred requests in a row and calculates the mean of the loading times. With this figure, we see that this server can serve about two and a half pages every second. That's pretty fast.
What if simultaneous users request pages at the same time? A web server is designed to have multiple processes that work on different HTTP requests. So, let's see how sourceforge.net scales:

$ ab -n 100 -c 5 http://ossigeno.sourceforge.net/blog | grep Request
Requests per second: 12.82 [#/sec] (mean)

ab makes one hundred requests, five at a time, opening five simultaneous connections to sourceforge.net; we see that requests per second has increased to ~13. What does this mean?
Let's put it in these terms: if we request one page, it is sent to us in ~0.3s. If we request several pages at the same time, each is still sent to us in ~0.3s. So the server is not a bottleneck at this level of concurrency, because it can handle 5 simultaneous requests without slowing them down. Let's increase the concurrency level:

$ ab -n 100 -c 20 http://ossigeno.sourceforge.net/blog | grep Time
Time taken for tests: 2.082 seconds
Time per request: 416.397 [ms] (mean)
Time per request: 20.820 [ms] (mean, across all concurrent requests)
Connection Times (ms)

the time for serving one page increases to only ~0.4s, which is kick-ass performance.
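Note how ab's figures fit together: the second "Time per request" line is the first one divided by the concurrency level, and the total test time is the number of requests times that per-request figure. A quick sanity check on the -c 20 numbers above, using awk just for the floating-point arithmetic:

```shell
# Sanity-check the relationship between ab's reported figures
# for the -n 100 -c 20 run above.
awk 'BEGIN {
  n = 100; c = 20; mean_ms = 416.397   # values taken from the ab report
  across = mean_ms / c                 # mean across all concurrent requests
  total  = n * across / 1000           # total test time, in seconds
  printf "across=%.3f ms total=%.3f s\n", across, total
}'
```

This reproduces the 20.820 ms and 2.082 s that ab itself reported.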

Conclusion
Profiling is the essence of optimization: you have to find the bottleneck before you can improve your application.
Apache Bench is a useful tool because it measures the loading times of web pages, going beyond human perceptions of "speed", and provides statistics calculated over a sample of requests whose size you choose. You can enable and disable server modules, like mod_deflate for Apache, APC at the PHP level, or Zend_Cache in your application, and see with ab what makes your server work faster, based on the collected data.
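Such before/after comparisons are easy to script. Here is a sketch of a tiny helper, assuming ab (apache2-utils) is installed; the function name rps and the -n/-c values are my own choices, not anything ab mandates:

```shell
# Hypothetical helper: print the mean requests-per-second for a URL,
# so runs with a module enabled vs. disabled (e.g. mod_deflate on/off)
# can be compared at a glance. Assumes ab is installed.
rps() {
  ab -n 100 -c 5 "$1" 2>/dev/null | awk '/Requests per second/ {print $4}'
}
# Usage, against a server you control:
#   rps http://localhost/blog
```

Run it once with the module off, once with it on, and the two numbers tell you whether the change paid off.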
