Benchmarking Different Ingress Controllers

There are many Ingress Controller implementations: the official Kubernetes one, the official NGINX one, Kong, Traefik, and so on. Below, three of them are benchmarked head to head: Kubernetes Ingress vs. Nginx Ingress (NGINX's official product) vs. Traefik Ingress.

Hardware Configuration

  • CPU: Intel® Xeon® (Cascade Lake) @ 2.5 GHz, turbo up to 3.1 GHz, 8 cores

  • Memory: 32 GB

Test Environment

  • 2 servers with the hardware configuration above

  • wrk: 0896020 [epoll] Copyright (C) 2012 Will Glozer

  • nginx: 1.19.4

  • Kubernetes: 1.18.4

  • CentOS: 7.9.2009

  • Kernel: Linux version 4.14.105-19-0012 (root@TENCENT64.site) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16))

For how to install wrk, see the earlier post: HTTP load-testing tool | wrk. A build-from-source sketch also follows below.

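If that post is unavailable, wrk builds easily from source. A minimal sketch, assuming git, make, a C toolchain, and the OpenSSL development headers are already installed:

 # Fetch and build wrk, then install the binary
 git clone https://github.com/wg/wrk.git
 cd wrk
 make
 cp wrk /usr/local/bin/
 wrk -v    # prints the build banner, e.g. "wrk 0896020 [epoll] ..."
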
Performance Tests

Backend Nginx Server Deployment

 ---
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: my-nginx
   labels:
     app: my-nginx
 spec:
   selector:
     matchLabels:
       app: my-nginx
   template:
     metadata:
       labels:
         app: my-nginx
     spec:
       initContainers:
         - name: setsysctl
           image: busybox
           securityContext:
             privileged: true
           command:
             - sh
             - -c
             - |
               sysctl -w net.core.somaxconn=65535
               sysctl -w net.ipv4.ip_local_port_range="1024 65535"
               sysctl -w net.ipv4.tcp_tw_reuse=1
               sysctl -w fs.file-max=1048576
       containers:
         - image: nginx:1.19.4
           imagePullPolicy: Always
           name: my-nginx
           ports:
             - containerPort: 80
               protocol: TCP
           volumeMounts:
             - name: app-config-volume
               mountPath: /etc/nginx/conf.d
             - name: main-config-volume
               mountPath: /etc/nginx
             - name: binary-payload
               mountPath: /usr/share/nginx/bin
       volumes:
         - name: app-config-volume
           configMap:
             name: app-conf
         - name: main-config-volume
           configMap:
             name: main-conf
         - name: binary-payload
           configMap:
             name: binary
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: app-conf
 data:
   app.conf: |
     server {
       listen 80;
       location / {
         root /usr/share/nginx/bin;
       }
       location = /status {
         stub_status;
       }
     }
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: main-conf
 data:
   nginx.conf: |+
     user  nginx;
     worker_processes  8;
     worker_rlimit_nofile 102400;
     error_log  /var/log/nginx/error.log notice;
     pid        /var/run/nginx.pid;
     events {
         worker_connections  100000;
     }
     http {
         log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                           '$status $body_bytes_sent "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for"';
         sendfile  on;
         tcp_nopush  on;
         tcp_nodelay on;
         access_log off;
         #access_log  /var/log/nginx/access.log  main;
         include /etc/nginx/conf.d/*.conf;
     }
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: binary
 data:
   1kb.bin: "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: my-nginx
 spec:
   type: ClusterIP
   ports:
     - port: 80
       protocol: TCP
       targetPort: 80
   selector:
     app: my-nginx
 ---
 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
   name: my-nginx
   annotations:
     kubernetes.io/ingress.class: traefik  # nginx or traefik
 spec:
   rules:
     - host: nginx.qcloud.com
       http:
         paths:
           - backend:
               serviceName: my-nginx
               servicePort: 80
             path: /

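A note on the binary ConfigMap: the 1kb.bin entry above is 1 KB of NUL bytes escaped by hand. Assuming an all-zero payload is the intent, the same object can be generated from a real file instead of hand-writing escapes (a sketch, not taken from the original post):

 # Create a 1 KB all-zero payload and build the ConfigMap from it
 head -c 1024 /dev/zero > 1kb.bin
 kubectl create configmap binary --from-file=1kb.bin --dry-run=client -o yaml
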
The DaemonSet deploys 2 Nginx Pods, one per server. The privileged init container raises kernel limits (connection backlog, ephemeral port range, TIME_WAIT reuse, max open files) so the backends are not throttled by node defaults.

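Before benchmarking, a quick sanity check (hypothetical commands, output omitted):

 # Confirm one Pod per node and look up the Service ClusterIP
 kubectl get pods -l app=my-nginx -o wide
 kubectl get svc my-nginx
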
First, bypass the Ingress entirely and benchmark the Service ClusterIP directly (8 threads, 200 connections, 30 s); this is the baseline:

 # wrk -t8 -c200 -d30s --latency http://172.19.254.47/1kb.bin
 Running 30s test @ http://172.19.254.47/1kb.bin
   8 threads and 200 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency     1.66ms    2.89ms  76.32ms   90.05%
     Req/Sec    26.79k     7.00k   49.73k    67.91%
   Latency Distribution
      50%  538.00us
      75%    1.62ms
      90%    4.54ms
      99%   13.31ms
   6403242 requests in 30.09s, 7.39GB read
 Requests/sec: 212793.38
 Transfer/sec:    251.63MB

RPS straight to the Service is about 212,793.

Kubernetes Ingress Benchmark

Helm values used for the deployment:

 controller:
   image:
     repository: docker.io/amuguelove/ingress-nginx
     tag: "v0.41.0"
     digest:
     pullPolicy: IfNotPresent
   config:
     server-tokens: "false"
     disable-access-log: "true"
     keep-alive-requests: "10000"
     upstream-keepalive-requests: "10000"
     upstream-keepalive-connections: "1000"
     max-worker-connections: "65536"
   extraInitContainers:
     - name: setsysctl
       image: busybox
       securityContext:
         privileged: true
       command:
         - sh
         - -c
         - |
           sysctl -w net.core.somaxconn=65535
           sysctl -w net.ipv4.ip_local_port_range="1024 65535"
           sysctl -w net.ipv4.tcp_tw_reuse=1
           sysctl -w fs.file-max=1048576

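These values target the community ingress-nginx chart (reference 1). A minimal install sketch, assuming the values above are saved as values-k8s.yaml (the release and file names here are mine):

 # Install the community ingress-nginx controller with the values above
 helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
 helm repo update
 helm install ingress-nginx ingress-nginx/ingress-nginx -f values-k8s.yaml

On the load-generating machine, nginx.qcloud.com must also resolve to the ingress entry point, e.g. via an /etc/hosts entry mapping the hostname to a node IP.
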
Benchmark:

 # wrk -t8 -c200 -d30s --latency http://nginx.qcloud.com/1kb.bin
 Running 30s test @ http://nginx.qcloud.com/1kb.bin
   8 threads and 200 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency     5.23ms    5.90ms  78.82ms   87.22%
     Req/Sec     6.35k     1.18k   12.18k    68.35%
   Latency Distribution
      50%    3.29ms
      75%    7.28ms
      90%   12.46ms
      99%   27.85ms
   1518457 requests in 30.08s, 1.72GB read
 Requests/sec:  50474.50
 Transfer/sec:     58.63MB

RPS through the Kubernetes Ingress is about 50,474, less than a quarter of the direct-to-Service baseline.

Nginx Ingress Benchmark

Helm values used for the deployment:

 controller:
   kind: daemonset
   config:
     entries:
       server-tokens: "false"
       disable-access-log: "true"
       keep-alive-requests: "10000"
       upstream-keepalive-requests: "10000"
       upstream-keepalive-connections: "1000"
       max-worker-connections: "65536"

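These values use NGINX's official chart (reference 2; note the controller.config.entries nesting, which differs from the community chart). A hedged install sketch, assuming the values are saved as values-nginx.yaml:

 # Install NGINX's official ingress controller with the values above
 helm repo add nginx-stable https://helm.nginx.com/stable
 helm repo update
 helm install nginx-ingress nginx-stable/nginx-ingress -f values-nginx.yaml
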
Benchmark:

 # wrk -t8 -c200 -d30s --latency http://nginx.qcloud.com/1kb.bin
 Running 30s test @ http://nginx.qcloud.com/1kb.bin
   8 threads and 200 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency     6.11ms    7.36ms 227.84ms   92.47%
     Req/Sec     4.74k   794.04    10.76k    72.04%
   Latency Distribution
      50%    4.53ms
      75%    7.80ms
      90%   12.04ms
      99%   25.63ms
   1133947 requests in 30.09s, 1.30GB read
 Requests/sec:  37686.72
 Transfer/sec:     44.31MB

RPS through the Nginx Ingress is about 37,686.

Traefik Ingress Benchmark

Helm values used for the deployment:

 deployment:
   enabled: true
   kind: DaemonSet # DaemonSet or Deployment
   initContainers:
     - name: setsysctl
       image: busybox
       securityContext:
         privileged: true
       command:
         - sh
         - -c
         - |
           sysctl -w net.core.somaxconn=65535
           sysctl -w net.ipv4.ip_local_port_range="1024 65535"
           sysctl -w net.ipv4.tcp_tw_reuse=1
           sysctl -w fs.file-max=1048576
 providers:
   kubernetesIngress:
     enabled: true
     publishedService:
       enabled: true

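The Traefik 2.x chart (reference 3) takes the DaemonSet and init container settings under deployment, and the Kubernetes Ingress provider is enabled explicitly. An install sketch, assuming the values are saved as values-traefik.yaml and using the Helm repo published for Traefik 2.x at the time:

 # Install Traefik with the values above
 helm repo add traefik https://helm.traefik.io/traefik
 helm repo update
 helm install traefik traefik/traefik -f values-traefik.yaml
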
Benchmark (the full wrk output was unfortunately not saved):

 wrk -t8 -c200 -d30s --latency http://nginx.qcloud.com/1kb.bin
 Requests/sec over three runs:
 53439.33
 53290.16
 53026.58

RPS through the Traefik Ingress is about 53,439 (best of three; the average is about 53,252).

Conclusion

Fronting traffic with an Ingress carries a significant performance cost: in these tests every controller delivered at best about a quarter of the direct-to-Service throughput (~212,793 RPS). Ranked by RPS: Traefik Ingress (~53,439) > Kubernetes Ingress (~50,474) > Nginx Ingress (~37,686).

Configuration Files

All manifests used in this post are available at: https://github.com/DevOpsDays2020/ingress-benchmark/tree/master/ingress-controller

References

  1. Kubernetes Ingress Helm Chart: https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx

  2. Nginx Ingress Helm Chart: https://artifacthub.io/packages/helm/nginx/nginx-ingress

  3. Traefik Ingress Helm Chart: https://artifacthub.io/packages/helm/traefik/traefik

License: CC BY 4.0