Under high concurrent connection loads, Nginx can become a performance bottleneck. Below are some optimization settings for reference.
The focus here is on tuning the NGINX configuration itself.
These values are only a starting point and need to be adjusted continuously in practice.
Nginx.conf configuration
1.worker_processes
The number of nginx worker processes. It is recommended to set this according to the number of CPUs, typically equal to the number of CPU cores or a multiple of it.

worker_processes 8;
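On newer nginx releases you can also let nginx detect the core count itself instead of hard-coding it, which keeps the setting correct when the machine changes:

```nginx
# Size the worker pool to the number of CPU cores detected at
# startup (supported on modern nginx versions).
worker_processes auto;
```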
2.worker_cpu_affinity
Binds each worker process to a CPU. In the example below, 8 processes are bound to 8 CPUs. You can also list fewer masks, or bind one process to multiple CPUs.

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
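To illustrate the "one process on multiple CPUs" case mentioned above, a hypothetical variant on the same 8-core machine could pin 4 workers to pairs of adjacent cores (each bit in a mask enables one CPU):

```nginx
# 4 workers, each allowed to run on a pair of adjacent cores
worker_processes 4;
worker_cpu_affinity 00000011 00001100 00110000 11000000;
```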
3.worker_rlimit_nofile
The following directive sets the maximum number of file descriptors an nginx process may open. In theory this should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value in line with ulimit -n.

worker_rlimit_nofile 65535;
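To compare worker_rlimit_nofile against the system limit the text refers to, you can query the current limit directly (how to raise it permanently varies by distribution, so this is only the quick check):

```shell
# Per-process open-file limit inherited by processes started from
# this shell; worker_rlimit_nofile should stay in line with it.
ulimit -n
```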
4.epoll
Use the epoll I/O event model to handle asynchronous events efficiently.

use epoll;
5.worker_connections
Sets the maximum number of connections allowed per worker process. In theory, the maximum number of connections per nginx server is worker_processes * worker_connections.

worker_connections 65535;
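Plugging the example values into the formula above gives the theoretical ceiling on simultaneous connections:

```shell
# worker_processes * worker_connections from the examples above
echo $((8 * 65535))   # prints 524280
```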
6.keepalive_timeout
The HTTP keep-alive timeout; adjust it as needed (the default is 60s). It keeps a client-to-server connection open for the set period so that subsequent requests to the server can reuse it instead of establishing a new connection. Remember not to set this too large! Otherwise many idle HTTP connections will occupy nginx's connection slots and can eventually bring nginx down.

keepalive_timeout 60;
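A related directive not covered in this article is keepalive_requests, which caps how many requests a single keep-alive connection may serve before nginx closes it; together with the timeout it bounds connection reuse in both dimensions (the value below is the historical default, shown only as a sketch):

```nginx
keepalive_timeout 60;
# Close a keep-alive connection after it has served this many
# requests, even if it has not yet idled out.
keepalive_requests 100;
```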
7.client_header_buffer_size
The buffer size for the client request header. This can be set according to your system's page size. A request header is generally under 1k, but since system pages are usually larger than 1k, set it to the page size. The page size can be obtained with the command getconf PAGESIZE.

client_header_buffer_size 4k;
8.open_file_cache
The following directive specifies a cache for open file descriptors; it is disabled by default. max sets the number of cache entries and is recommended to match the number of open files; inactive sets how long a file can go unrequested before its entry is removed.

open_file_cache max=102400 inactive=20s;
9.open_file_cache_valid
The following sets how often to validate the cached entries.

open_file_cache_valid 30s;
10.open_file_cache_min_uses
Sets the minimum number of times a file must be accessed within the period given by the inactive parameter of the open_file_cache directive. If a file is accessed at least that many times, its descriptor stays open in the cache; as in the example above, if a file is never used within the inactive period, it is removed.

open_file_cache_min_uses 1;
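Putting sections 8 through 10 together, the three directives cooperate as one descriptor-cache policy:

```nginx
# Cache up to 102400 open descriptors; drop entries not hit at least
# once (min_uses) within 20s (inactive); revalidate entries every 30s.
open_file_cache max=102400 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
```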
11.server_tokens
Hides the operating system and web server (Nginx) version number in the response header, which is good for the security of the web server.

server_tokens off;
12.sendfile
Enables sendfile(). sendfile() copies data between the disk and a TCP socket (or between any two file descriptors). Before sendfile(), serving a file meant allocating a buffer in user space, copying the data from the file into it with read(), and then writing the buffer to the network with write(). sendfile() instead reads the data straight from disk into the OS cache, and because the copy happens entirely in the kernel, it is more efficient than combining read() and write() with a user-space buffer.

sendfile on;
13.tcp_nopush
Tells nginx to send all header files in one packet rather than one after another. In other words, packets are not transmitted immediately; they are sent in one batch once a packet is full, which helps relieve network congestion.

tcp_nopush on;
14.tcp_nodelay
Tells nginx not to buffer small writes but to send them out right away (it disables Nagle's algorithm). Set this when the application needs data delivered promptly, so that small payloads do not sit waiting in the send buffer.

tcp_nodelay on;
For example:

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    ......
}
15.client_header_buffer_size 4k;
The buffer size for the client request header; this can be set according to the system page size. A request header is generally under 1k, but since system pages are usually larger than 1k, set it to the page size.
The page size can be obtained with the command getconf PAGESIZE.

[root@test-nginxer ~]# getconf PAGESIZE
4096

However, there are cases where the header exceeds 4k; whatever the value, client_header_buffer_size must be set to an integer multiple of the system page size.
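So on a system where getconf PAGESIZE reports 4096, a header buffer larger than 4k should grow in whole pages, for example:

```nginx
# 8k = 2 x 4096-byte pages, i.e. an integer multiple of the page size
client_header_buffer_size 8k;
```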
16.open_file_cache
Specifies a cache for open file descriptors; it is disabled by default. max sets the number of cache entries and is recommended to match the number of open files; inactive sets how long a file can go unrequested before its entry is removed.

open_file_cache max=65535 inactive=60s;
17.open_file_cache_valid
Specifies how often to validate the cached entries.

open_file_cache_valid 80s;
18.A complete nginx.conf example for reference:

[root@test-nginxer ~]# cat /usr/local/nginx/conf/nginx.conf
user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;

    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 16k;
    fastcgi_temp_file_write_size 16k;
    fastcgi_cache TEST;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    # log_format is only valid in the http context, so it is declared
    # here rather than inside the server block
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server {
        listen 8080;
        server_name test.nginxer.com;
        index index.php index.htm;
        root /www/wwwroot/default/;

        location /status {
            stub_status on;
        }

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fcgi.conf;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$ {
            expires 30d;
        }

        access_log /www/log/access.log access;
    }
}
FastCGI instruction
1.fastcgi_cache_path
This directive sets the path for the FastCGI cache, the directory hierarchy levels, the name and size of the shared key zone, and the inactive removal time.

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
2.fastcgi_connect_timeout
Specifies the timeout for connecting to the backend FastCGI server.

fastcgi_connect_timeout 300;
3.fastcgi_send_timeout
The timeout for transmitting a request to the FastCGI backend, counted after the connection handshake has completed.

fastcgi_send_timeout 300;
4.fastcgi_read_timeout
The timeout for receiving the FastCGI response, counted after the connection handshake has completed.

fastcgi_read_timeout 300;
5.fastcgi_buffer_size
Specifies the buffer size used to read the first part of the FastCGI response; this can match the buffer size set by the fastcgi_buffers directive. The directive below uses a 16k buffer to read the first part of the response, i.e. the response header. The response header is usually small (under 1k), but if you have set a buffer size in fastcgi_buffers, nginx will allocate a buffer of that size here as well.

fastcgi_buffer_size 16k;
6.fastcgi_buffers
Specifies how many buffers of what size to use for the FastCGI response. With the setting below, if a PHP script produces a 256k page, nginx allocates 16 buffers of 16k to hold it; anything beyond 256k is spooled to the path specified by fastcgi_temp. That spill is bad for server load, since data is processed far faster in memory than on disk, so choose a value around the typical page size your PHP scripts generate. If most pages on your site are around 256k, you could set 16 16k, 4 64k, or 64 4k, but the latter two are clearly poorer choices: for a 32k page, 4 64k would allocate one whole 64k buffer, and 64 4k would allocate eight 4k buffers, whereas 16 16k allocates just two 16k buffers, which seems the most reasonable.

fastcgi_buffers 16 16k;
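The buffer arithmetic in the paragraph above can be checked directly. For a hypothetical 32k response, ceil(32 / buffer_size) buffers are touched under each of the three pool shapes discussed:

```shell
page=32   # response size in KiB (illustrative)
for buf in 16 64 4; do
  used=$(( (page + buf - 1) / buf ))            # buffers actually touched
  echo "${buf}k buffers: ${used} used, $(( used * buf ))k allocated"
done
```

This prints 2, 1, and 8 buffers used respectively, matching the 2 x 16k vs 1 x 64k vs 8 x 4k comparison in the text.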
7.fastcgi_busy_buffers_size
The default value is twice fastcgi_buffer_size.

fastcgi_busy_buffers_size 32k;
8.fastcgi_temp_file_write_size
The size of the data blocks used when writing to fastcgi_temp_path; the default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 32k;
9.fastcgi_cache
Enables the FastCGI cache and assigns it a name. Personally I find enabling the cache very useful: it can effectively reduce CPU load and prevent 502 errors. But it can also cause plenty of problems because it caches dynamic pages, so whether to use it depends on your own needs.

fastcgi_cache TEST;
10.fastcgi_cache_valid
Sets cache times for specific response codes. In the example below, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
11.fastcgi_cache_min_uses
The minimum number of times a response must be requested within the inactive period of the fastcgi_cache_path directive for it to stay cached. For example, if a cached file is not used once within 5 minutes, it is removed.

fastcgi_cache_min_uses 1;
12.fastcgi_cache_use_stale
Defines in which cases an expired (stale) cache entry may be served.
Syntax: fastcgi_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_503 | http_403 | http_404 | off ...;

fastcgi_cache_use_stale error timeout invalid_header http_500;
Standalone FastCGI optimization
FastCGI itself also has settings worth optimizing. If you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:
1.max_children
The number of concurrent requests handled at the same time; with the setting below, php-fpm will start up to 60 child processes to handle concurrent connections.

<value name="max_children">60</value>
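As a rough way to size max_children, divide the memory you can spare for PHP by a typical per-child footprint. The numbers below are illustrative assumptions, not measurements:

```shell
ram_mb=2048        # RAM budget for PHP workers (assumed)
per_child_mb=32    # typical php-fpm child footprint (assumed)
echo $(( ram_mb / per_child_mb ))   # prints 64
```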
2.rlimit_files
The maximum number of open files per process.

<value name="rlimit_files">65535</value>
3.max_requests
The maximum number of requests each child process serves before it is respawned.

<value name="max_requests">65535</value>
This article was first published by V on 2018-10-11. It may be reprinted with permission, but please be sure to credit the original link: http://www.nginxer.com/records/optimization-reference-for-nginx-in-high-concurrency-scenarios/