
Benchmarking Nginx Setups for Serving a Go HTTP Application

There are many ways to serve a Go HTTP application, and the best choice depends on each application's circumstances. Right now Nginx looks like the standard web server for every new project, even though there are plenty of other good web servers. But what is the overhead of serving a Go application through Nginx? Do we need Nginx features (vhosts, load balancing, caching, and so on), or should we serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? These are the questions I try to answer here. The goal of this benchmark is not to prove that Go is faster or slower than Nginx; that would be silly.
Here are the different setups we will compare:

[*]Go HTTP standalone (as the control group)
[*]Nginx proxy to Go HTTP
[*]Nginx fastcgi to Go TCP FastCGI
[*]Nginx fastcgi to Go Unix Socket FastCGI

Hardware
Since we will compare every setup on the same hardware, a cheap machine was chosen. This should not be a big issue.

[*]Samsung laptop NP550P5C-AD1BR
[*]Intel Core i7 3630QM @2.4GHz (quad core, 8 threads)
[*]CPU caches: (L1: 256KiB, L2: 1MiB, L3: 6MiB)
[*]RAM 8GiB DDR3 1600MHz
Software

[*]Ubuntu 13.10 amd64 Saucy Salamander (updated)
[*]Nginx 1.4.4 (1.4.4-1~saucy0 amd64)
[*]Go 1.2 (linux/amd64)
[*]wrk 3.0.4
Setup
Kernel
Only a small tweak is needed: raising the kernel limits. If you have better ideas for these values, please leave a comment below:

The sysctl settings are as follows:
fs.file-max                    9999999
fs.nr_open                     9999999
net.core.netdev_max_backlog    4096
net.core.rmem_max              16777216
net.core.somaxconn             65535
net.core.wmem_max              16777216
net.ipv4.ip_forward            0
net.ipv4.ip_local_port_range   1025 65535
net.ipv4.tcp_fin_timeout       30
net.ipv4.tcp_keepalive_time    30
net.ipv4.tcp_max_syn_backlog   20480
net.ipv4.tcp_max_tw_buckets    400000
net.ipv4.tcp_no_metrics_save   1
net.ipv4.tcp_syn_retries       2
net.ipv4.tcp_synack_retries    2
net.ipv4.tcp_tw_recycle        1
net.ipv4.tcp_tw_reuse          1
vm.min_free_kbytes             65536
vm.overcommit_memory           1
The maximum number of open files (limits) for root and www-data was configured to 200000.
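The standalone Go server also needs a high descriptor limit when it is hit directly. As a hedged sketch (an assumption of mine, not shown in the original article), the limit can also be raised from inside the Go process at startup:

package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    // Raise the soft RLIMIT_NOFILE up to the hard limit so the process can
    // keep many thousands of sockets open during the benchmark.
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        log.Fatal(err)
    }
    lim.Cur = lim.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)
}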
Nginx
A few Nginx tweaks are needed. As was pointed out to me, gzip is disabled to keep the comparison fair. Here is the configuration file /etc/nginx/nginx.conf:

The configuration is as follows:

user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 300;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}

Nginx vhosts



upstream go_http {
    server 127.0.0.1:8080;
    keepalive 300;
}

server {
    listen 80;
    server_name go.http;
    access_log off;
    error_log /dev/null crit;

    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp {
    server 127.0.0.1:9001;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.tcp;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix {
    server unix:/tmp/go.sock;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.unix;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_unix;
    }
}
Go source code

The code is as follows:

package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "net/http/fcgi"
    "os"
    "os/signal"
    "syscall"
)

const (
    SOCK = "/tmp/go.sock"
)

type Server struct {
}

func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body := "Hello World\n"
    // Try to keep the same amount of headers
    w.Header().Set("Server", "gophr")
    w.Header().Set("Connection", "keep-alive")
    w.Header().Set("Content-Type", "text/plain")
    w.Header().Set("Content-Length", fmt.Sprint(len(body)))
    fmt.Fprint(w, body)
}

func main() {
    // Shut down cleanly on SIGINT/SIGTERM so the Unix socket can be removed.
    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, os.Interrupt)
    signal.Notify(sigchan, syscall.SIGTERM)

    server := Server{}

    // Standalone HTTP on :8080
    go func() {
        http.Handle("/", server)
        if err := http.ListenAndServe(":8080", nil); err != nil {
            log.Fatal(err)
        }
    }()

    // FastCGI over TCP on :9001
    go func() {
        tcp, err := net.Listen("tcp", ":9001")
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(tcp, server)
    }()

    // FastCGI over a Unix domain socket
    go func() {
        unix, err := net.Listen("unix", SOCK)
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(unix, server)
    }()

    <-sigchan

    if err := os.Remove(SOCK); err != nil {
        log.Fatal(err)
    }
}

Checking the HTTP headers
To keep things fair, all responses must be the same size.
The check is as follows:
$ curl -sI http://127.0.0.1:8080/
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/plain
Server: gophr
Date: Sun, 15 Dec 2013 14:59:14 GMT

$ curl -sI http://127.0.0.1:8080/ | wc -c
141

$ curl -sI http://go.http/
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Dec 2013 14:59:31 GMT
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive

$ curl -sI http://go.http/ | wc -c
141

$ curl -sI http://go.fcgi.tcp/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 14:59:40 GMT
Server: gophr

$ curl -sI http://go.fcgi.tcp/ | wc -c
141

$ curl -sI http://go.fcgi.unix/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 15:00:15 GMT
Server: gophr

$ curl -sI http://go.fcgi.unix/ | wc -c
141
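The same check can also be scripted. Here is a rough Go sketch (my own addition; it assumes go.http, go.fcgi.tcp and go.fcgi.unix resolve to 127.0.0.1, e.g. via /etc/hosts) that issues HEAD requests and prints the size of the serialized status line and headers for each endpoint; the exact byte count may differ slightly from curl's:

package main

import (
    "fmt"
    "log"
    "net/http"
    "net/http/httputil"
)

func main() {
    urls := []string{
        "http://127.0.0.1:8080/",
        "http://go.http/",
        "http://go.fcgi.tcp/",
        "http://go.fcgi.unix/",
    }
    for _, u := range urls {
        resp, err := http.Head(u)
        if err != nil {
            log.Fatal(err)
        }
        // DumpResponse(resp, false) serializes the status line and headers,
        // roughly what curl -sI prints.
        dump, err := httputil.DumpResponse(resp, false)
        resp.Body.Close()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%-28s %d bytes\n", u, len(dump))
    }
}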
Starting the engines

[*]Configure the kernel with sysctl
[*]Configure Nginx
[*]Configure the Nginx vhosts
[*]Start the service as www-data
[*]Run the benchmarks (see the note on GOMAXPROCS below)
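The benchmarks were run once with GOMAXPROCS=1 and once with GOMAXPROCS=8. The article does not show how that value was switched; presumably the GOMAXPROCS environment variable was set when launching the binary. As a hedged alternative sketch (not part of the original program), it can also be pinned in code with runtime.GOMAXPROCS:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // runtime.GOMAXPROCS(n) sets the number of OS threads executing Go code
    // and returns the previous setting; calling it with 0 only queries the value.
    prev := runtime.GOMAXPROCS(8)
    fmt.Printf("GOMAXPROCS changed from %d to %d\n", prev, runtime.GOMAXPROCS(0))
}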
Benchmarks
GOMAXPROCS = 1
Go standalone

The results are as follows:

# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.96ms   17.76ms 173.96ms   85.31%
    Req/Sec     429.16     49.20   589.00    69.44%
  1281567 requests in 29.98s, 215.11MB read
Requests/sec:  42745.15
Transfer/sec:      7.17MB

Nginx + Go through HTTP

# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   124.57ms   18.26ms 209.70ms   80.17%
    Req/Sec     406.29     56.94    0.87k    89.41%
  1198450 requests in 29.97s, 201.16MB read
Requests/sec:  39991.57
Transfer/sec:      6.71MB

Nginx + Go through FastCGI TCP

The results are as follows:

# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   514.57ms  119.80ms    1.21s    71.85%
    Req/Sec      97.18     22.56   263.00    79.59%
  287416 requests in 30.00s, 48.24MB read
  Socket errors: connect 0, read 0, write 0, timeout 661
Requests/sec:   9580.75
Transfer/sec:      1.61MB

Nginx + Go through FastCGI Unix Socket

# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   425.64ms   80.53ms 925.03ms   76.88%
    Req/Sec     117.03     22.13   255.00    81.30%
  350162 requests in 30.00s, 58.77MB read
  Socket errors: connect 0, read 0, write 0, timeout 210
Requests/sec:  11670.72
Transfer/sec:      1.96MB

GOMAXPROCS = 8
Go standalone

The results are as follows:

# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.25ms    8.49ms  86.45ms   81.39%
    Req/Sec      1.29k    129.27    1.79k    69.23%
  3837995 requests in 29.89s, 644.19MB read
Requests/sec: 128402.88
Transfer/sec:    21.55MB

Nginx + Go through HTTP

The results are as follows:

# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.77ms  297.88ms 632.52ms   60.16%
    Req/Sec      2.36k     2.99k   19.11k    84.83%
  2232068 requests in 29.98s, 374.64MB read
Requests/sec:  74442.91
Transfer/sec:     12.49MB

Nginx + Go through FastCGI TCP

The results are as follows:

# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69ms  121.22ms    1.80s    75.14%
    Req/Sec     263.09    102.78   629.00    62.54%
  721027 requests in 30.01s, 121.02MB read
  Socket errors: connect 0, read 0, write 176, timeout 1343
Requests/sec:  24026.50
Transfer/sec:      4.03MB

Nginx + Go through FastCGI Unix Socket

The results are as follows:

# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   694.32ms  332.27ms    1.79s    62.13%
    Req/Sec     646.86    669.65     6.11k    87.80%
  909836 requests in 30.00s, 152.71MB read
Requests/sec:  30324.77
Transfer/sec:      5.09MB

Conclusion
In the first round of benchmarks some of the Nginx settings were not yet well tuned (gzip was enabled and the Go backend was not using keep-alive connections); after switching to wrk and tuning Nginx as recommended, the results differed considerably.
With GOMAXPROCS=1 the Nginx overhead is not that large, but with GOMAXPROCS=8 the difference becomes significant. I may try other settings later on. If you need Nginx features such as virtual hosts, load balancing, or caching, use the HTTP proxy setup rather than FastCGI. Some people say Go's FastCGI implementation is not well optimized, which may explain the huge differences seen in these results.

Source: http://www.zzvips.com/article/23590.html