ab benchmarks when reverse-proxying to Starman

Starman, a web server written in Perl, is impressively fast, but in my production environment it sits behind lighty's proxy, so keepalive can't be used and Starman's capabilities can't be fully exploited.

So I started wondering how things would look with something other than lighty: apache, nginx, or varnish, a dedicated reverse proxy. I decided to test it.

Test setup

ab benchmark client

  • FreeBSD 7.2-STABLE
  • Pentium E2180 2GHz
  • MEM 2.0G

App server

  • FreeBSD 8.0-RELEASE-p2
  • Celeron 1.7GHz
  • MEM 1.5G

Web servers installed on the app server

  • apache-2.2.14(worker)
  • lighttpd-1.4.26
  • nginx-0.7.65
  • varnish-2.0.6

Each www server listens on port 80 and reverse-proxies those requests to the app running under Starman (localhost:5000 or /tmp/nginx.sock).

Notes:

Since Starman can listen on a UNIX domain socket in addition to TCP/IP, I also benchmarked nginx connecting through a UNIX socket.

Keepalive (persistent connections) on the reverse-proxy side can only be enabled with apache 2.2.14, so for it I took benchmarks with keepalive both on and off.

hello_plack.psgi, the app run under Starman

# THIS SCRIPTNAME: hello_plack.psgi
# perl -IPlack/lib Plack/scripts/plackup -app ../test_waf/hello_plack.psgi
use strict;
use warnings;

my $body = "hello plack!";
my $app = sub {
    # return a PSGI response: status, headers, body
    return [
        200,
        [ "Content-Type" => "text/html", "Content-Length" => length $body ],
        [ $body ],
    ];
};
$app;

starman start (port:5000)

# setuidgid www starman -a hello_plack.psgi --workers=10 (--socket=/tmp/nginx.sock)

ab bench command

# ab -c 10 -t 1 -k http://example.com/ex

Result (reference): direct access to Starman

Concurrency Level:      10
Time taken for tests:   1.000 seconds
Complete requests:      756
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    757
Total transferred:      103709 bytes
HTML transferred:       9084 bytes
Requests per second:    756.00 [#/sec] (mean)
Time per request:       13.23 [ms] (mean)
Time per request:       1.32 [ms] (mean, across all concurrent requests)
Transfer rate:          103.71 [Kbytes/sec] received

Result (reference): static file index.html (apache22)

Concurrency Level:      10
Time taken for tests:   1.000 seconds
Complete requests:      1274
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    1266
Total transferred:      445864 bytes
HTML transferred:       31875 bytes
Requests per second:    1274.00 [#/sec] (mean)
Time per request:       7.85 [ms] (mean)
Time per request:       0.78 [ms] (mean, across all concurrent requests)
Transfer rate:          445.86 [Kbytes/sec] received

apache2.2 config

<VirtualHost *:80>
    DocumentRoot "/path/to/root"
    ServerName example.com:80

    ErrorLog "/var/log/httpd-error.log"
    CustomLog "/var/log/httpd-access.log" common

    ProxyRequests Off

    ProxyPass /ex http://127.0.0.1:5000/  smax=10 max=20  keepalive=On
    ProxyPassReverse /ex http://127.0.0.1:5000/

# -- reverse-proxy keepalive off --
#   SetEnv force-proxy-request-1.0 1
#   SetEnv proxy-nokeepalive 1

    <Directory /path/to/root>
      Options None
      AllowOverride None
      Order allow,deny
      Allow from all
    </Directory>
</VirtualHost>

apache result

Concurrency Level:      10
Time taken for tests:   1.000 seconds
Complete requests:      402
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    400
Total transferred:      67575 bytes
HTML transferred:       4836 bytes
Requests per second:    402.00 [#/sec] (mean)
Time per request:       24.88 [ms] (mean)
Time per request:       2.49 [ms] (mean, across all concurrent requests)
Transfer rate:          67.58 [Kbytes/sec] received

apache result with reverse-proxy keepalive disabled

Concurrency Level:      10
Time taken for tests:   1.000 seconds
Complete requests:      223
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    224
Total transferred:      37641 bytes
HTML transferred:       2688 bytes
Requests per second:    223.00 [#/sec] (mean)
Time per request:       44.84 [ms] (mean)
Time per request:       4.48 [ms] (mean, across all concurrent requests)
Transfer rate:          37.64 [Kbytes/sec] received

lighttpd config

$HTTP["host"] == "example.com" {

      $HTTP["url"] !~ "^/([^.]+\.html)" {
            setenv.add-request-header = ("X-Forwarded-Host" => "example.com" )
            proxy.server    = ("" =>
                (("host" => "127.0.0.1", "port" => "5000" ))
            )
      }

      server.document-root   = "/path/to/root"
}

lighttpd result

Concurrency Level:      10
Time taken for tests:   1.001 seconds
Complete requests:      227
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    188
Total transferred:      36736 bytes
HTML transferred:       2736 bytes
Requests per second:    226.77 [#/sec] (mean)
Time per request:       44.10 [ms] (mean)
Time per request:       4.41 [ms] (mean, across all concurrent requests)
Transfer rate:          36.70 [Kbytes/sec] received

nginx config

worker_processes  1;
error_log  /var/log/httpd-error-nginx.log;

events {
    worker_connections  1024;
    use kqueue;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    access_log  /var/log/httpd-access-nginx.log;
    sendfile       on;
    tcp_nopush     on;
    keepalive_timeout  65;

    upstream backend {
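        # both backends are listed here; presumably only one was enabled per run
        # (UNIX socket vs. TCP) when taking the two separate benchmarks below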
        server unix:/tmp/nginx.sock;
        server localhost:5000;
    }
    server {
        listen       7002;
        server_name  example.com;

        location /ex {

            proxy_pass http://backend;
            proxy_redirect     off;

            proxy_set_header   Host             $host;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;

            proxy_connect_timeout      90;
            proxy_send_timeout         90;
            proxy_read_timeout         90;
        }
    }
}

nginx result (TCP/IP connection)

Concurrency Level:      10
Time taken for tests:   1.005 seconds
Complete requests:      242
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    243
Total transferred:      38637 bytes
HTML transferred:       2916 bytes
Requests per second:    240.80 [#/sec] (mean)
Time per request:       41.53 [ms] (mean)
Time per request:       4.15 [ms] (mean, across all concurrent requests)
Transfer rate:          38.44 [Kbytes/sec] received

nginx result (UNIX domain socket connection)

Concurrency Level:      10
Time taken for tests:   1.005 seconds
Complete requests:      271
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    272
Total transferred:      43248 bytes
HTML transferred:       3264 bytes
Requests per second:    269.65 [#/sec] (mean)
Time per request:       37.08 [ms] (mean)
Time per request:       3.71 [ms] (mean, across all concurrent requests)
Transfer rate:          43.03 [Kbytes/sec] received

varnish result

Note: varnish would cache the responses, which makes it useless as a test of the proxy connection, so the benchmark was taken with a configuration that does not cache.
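The non-caching configuration itself isn't shown above; a minimal sketch for varnish 2.0, assuming Starman is listening on 127.0.0.1:5000, might look like the following (on varnish 2.1 and later the action would be written return (pass); instead).

# minimal no-cache VCL sketch (assumed: varnish 2.0 syntax, Starman on 127.0.0.1:5000)
backend starman {
    .host = "127.0.0.1";
    .port = "5000";
}

sub vcl_recv {
    # bypass the cache so every request is forwarded to the Starman backend
    pass;
}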

Concurrency Level:      10
Time taken for tests:   1.001 seconds
Complete requests:      205
Failed requests:        0
Broken pipe errors:     0
Keep-Alive requests:    206
Total transferred:      38316 bytes
HTML transferred:       2472 bytes
Requests per second:    204.80 [#/sec] (mean)
Time per request:       48.83 [ms] (mean)
Time per request:       4.88 [ms] (mean, across all concurrent requests)
Transfer rate:          38.28 [Kbytes/sec] received

Benchmark results summary

I pretty much knew how this would turn out before running the benchmarks, but Apache's keepalive really makes a difference: it is far ahead of the other web servers. Even so, compared with hitting Starman directly, the overhead added by the proxy turned out to be quite large.

If you want to get the most out of Starman, it seems the only options are to run Starman itself on port 80 or to use a hardware reverse-proxy appliance.

That said, it isn't all "keepalive is fast, hooray": as id:kazuhooku writes in "2010年代には Apache の mpm_prefork とか流行らない (もしくは HTTP keep-alive のメリットとデメリット)" ("Apache's mpm_prefork won't be popular in the 2010s, or: the pros and cons of HTTP keep-alive"), keepalive is a double-edged sword.

                        req/sec            reverse-proxy keepalive
index.html static file  1274.00 [#/sec]    N/A
starman direct access    756.00 [#/sec]    N/A
apache22 worker          402.00 [#/sec]    on
apache22 keepalive-off   223.00 [#/sec]    off
lighttpd                 226.77 [#/sec]    off
nginx                    240.80 [#/sec]    off
nginx UNIX SOCKET        269.65 [#/sec]    off
varnish (Cache Off)      204.80 [#/sec]    off