By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM), it buffers the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk.
This process is outlined on the Nginx ngx_http_fastcgi_module manual page.
Since disk is slow and memory is fast, the aim is to have as many FastCGI responses as possible pass through memory only. On the flip side, we don't want to set excessively large buffers, since they are created and sized on a per-request basis - buffer space is not shared between requests.
The related Nginx options are:
- fastcgi_buffering first appeared in Nginx 1.5.6 (1.6.0 stable) and can be used to turn buffering completely on/off. It's on by default.
- fastcgi_buffer_size is a special buffer space used to hold the first chunk of the FastCGI response, which is going to be the HTTP response headers. You shouldn't need to adjust this from the default - even though Nginx defaults to the smallest page size of 4k (your platform will determine the default of 4k/8k buffers) - it should be able to fit typical HTTP response headers. The one possible exception: frameworks that push large amounts of cookie data via the Set-Cookie HTTP header during user verification/login phases, blowing out the buffer and resulting in HTTP 500 errors. In these instances you will need to increase this buffer to 8k/16k/32k to fully accommodate the largest upstream HTTP header being sent (see the sketch after this list).
- fastcgi_buffers controls the number and size of buffer segments used for the payload of each FastCGI response. Most, if not all, of our adjustments will be around this setting for the remainder of this guide.
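As a point of reference, here is a minimal sketch pulling the three directives together. The values shown are illustrative only - the defaults for a 4k page-size platform, with fastcgi_buffer_size raised to 8k purely to mark where the cookie-heavy case above would be handled (the PHP-FPM socket path simply matches the example later in this guide):
location ~ "\.php$" {
    include       /etc/nginx/fastcgi_params;
    fastcgi_pass  unix:/run/php5/php-fpm.sock;
    fastcgi_buffering   on;     # the default since Nginx 1.5.6
    fastcgi_buffer_size 8k;     # only raise above the page size for oversized response headers
    fastcgi_buffers     8 4k;   # 8 segments * 4k = 32k of payload buffer per request
}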
By grepping our Nginx access logs we can determine both maximum and average response sizes. The basis of this awk recipe was lifted from here:
$ awk '($9 ~ /200/) { i++;sum+=$10;max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n",max,i?sum/i:0); }' access.log
# Maximum: 76716
# Average: 10358
Note
These recipes report on all requests returning an HTTP 200 status code; you might want to split out just the FastCGI requests into a separate access log for reporting, like so (PHP-FPM shown here):
location ~ "\.php$" {
fastcgi_index index.php;
if (!-f $realpath_root$fastcgi_script_name) {
return 404;
}
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/run/php5/php-fpm.sock;
# output just FastCGI requests to its own Nginx log file
access_log /var/log/nginx/phpfpm-only-access.log;
}
With these values in hand we're now much better equipped to set fastcgi_buffers.
As noted earlier, the fastcgi_buffers setting takes two values, buffer segment count and memory size; by default this will be:
fastcgi_buffers 8 4k|8k;
So a total of 8 buffer segments at either 4k/8k, which is determined by the platform memory page size. For Debian/Ubuntu Linux that turns out to be 4096 bytes (4KB) - so a default total buffer size of 8 * 4K = 32KB.
Based on the maximum/average response sizes determined above, we can now raise/lower these values to suit. I typically keep the buffer size at the default (memory page size) and adjust only the buffer segment count to a value that keeps the bulk of responses handled fully in buffer RAM.
The default memory page size (in bytes) for an operating system can be determined by the following command:
$ getconf PAGESIZE
If your average response size tips to the higher side, you might alternatively want to lower the buffer segment count and raise the memory size in page-size multiples (8k/16k/32k), as sketched below.
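As a sketch only - using the example maximum (~75KB) and average (~10KB) response sizes reported above; your own log numbers will differ - either of the following (pick one) would keep the bulk of responses handled in RAM:
# option 1: keep the platform page size, raise the segment count
# 32 * 4k = 128k per request - comfortably above the ~75KB maximum
fastcgi_buffers 32 4k;
# option 2: fewer, larger segments if the average response size is higher
# 8 * 16k = 128k per request
fastcgi_buffers 8 16k;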
We can see how often FastCGI responses are being saved out to temporary files on disk by grepping the Nginx error log(s):
$ cat error.log | grep --extended-regexp "\[warn\].+buffered"
# will return lines like:
YYYY/MM/DD HH:MM:SS [warn] 1234#0: *123456 an upstream response is buffered to a temporary file...
Tip
Remember it's not necessarily bad to have some larger responses buffered to disk, but aim for a balance where ideally only a smaller portion of your larger responses are handled in this way.
The alternative - ramping up fastcgi_buffers to an excessively large segment count and/or size so that every FastCGI response fits purely in RAM - is something I would strongly recommend against. Unless your Nginx server is only receiving a few concurrent requests at any one moment, you risk exhausting your available system memory.
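To put rough, purely illustrative numbers on that (assumed figures, not measurements): with fastcgi_buffers 256 4k each in-flight request can claim up to 256 * 4k = 1MB of buffer space, so 500 concurrent FastCGI requests could tie up around 500MB of RAM in response buffers alone, before counting anything else Nginx and PHP-FPM need.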
FYI, anyone who may be finding this Gist via a Google search: we recently ran into an issue where we were streaming large amounts of data over a long time period, and were seeing the nginx process ballooning in memory (like, 1.5GB of RAM after 5-10 minutes). The client was receiving about a gig of data before the server OOM'd the process and everything fell apart.
Output buffering was off in PHP and our buffers were set to a total of ~4MB. No idea what was going on.
We upgraded from nginx 1.4.7 to 1.7.6 in order to try fastcgi_buffering off;. While this did work, we then removed the fastcgi_buffering off; directive and our issue still hadn't returned. In other words: there may be a memory leak in nginx 1.4.7 when sending large amounts of data from PHP-FPM, through nginx, to a client. If the memory hog is nginx and not your php-fpm process, try upgrading. If you figure out the real cause, tag me, I'm interested. :)