@magnetikonline
Last active November 28, 2024

Nginx FastCGI response buffer sizes

By default, when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM), it will buffer the response in memory before delivering it to the client. Any response larger than the configured buffer size is saved to a temporary file on disk.

This process is outlined on the Nginx ngx_http_fastcgi_module manual page.

Introduction

Since disk is slow and memory is fast, the aim is to have as many FastCGI responses as possible pass through memory only. On the flip side, we don't want to set excessively large buffers, as they are created and sized on a per-request basis - buffer space is not shared between requests.

The related Nginx options are:

  • fastcgi_buffering first appeared in Nginx 1.5.6 (1.6.0 stable) and can be used to turn buffering completely on/off. It's on by default.

  • fastcgi_buffer_size is a special buffer space used to hold the first chunk of the FastCGI response, which is going to be HTTP response headers.

    You shouldn't need to adjust this from the default - even at the smallest page size of 4k (your platform will determine the default of 4k/8k buffers) it should be able to fit typical HTTP response headers.

    The one possible exception - frameworks that push large amounts of cookie data via the Set-Cookie HTTP header during user verification/login phases - which can blow out this buffer and result in HTTP 500 errors. In these instances you will need to increase the buffer to 8k/16k/32k to fully accommodate the largest upstream HTTP header being sent.

  • fastcgi_buffers controls the number and size of buffer segments used for the payload of each FastCGI response. Most, if not all, of our adjustments will be around this setting for the remainder of this guide; a combined configuration example follows this list.
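
Pulling these together, a minimal sketch of how the three directives sit alongside each other in an Nginx configuration (values shown are the defaults discussed above, purely illustrative):

fastcgi_buffering on;      # on by default since Nginx 1.5.6
fastcgi_buffer_size 4k;    # first chunk: the HTTP response headers
fastcgi_buffers 8 4k;      # 8 segments * 4k = 32KB per request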

Determine actual FastCGI response sizes

By grepping our Nginx access logs we can determine both maximum and average response sizes. The basis of this awk recipe was lifted from here:

$ awk '($9 ~ /200/) { i++;sum+=$10;max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n",max,i?sum/i:0); }' access.log

# Maximum: 76716
# Average: 10358
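
A related recipe (same log format, with a hypothetical 32KB threshold matching the default total buffer size) counts how many 200 responses would overflow a candidate buffer total:

$ awk -v limit=32768 '($9 ~ /200/) { i++; if ($10 > limit) over++; } END { printf("Total: %d\nOver %d bytes: %d (%.1f%%)\n", i, limit, over, i ? over * 100 / i : 0); }' access.log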

Note

These recipes will report on all access requests returning an HTTP 200 code. You might want to split out just FastCGI requests into a separate access log for reporting, like so (PHP-FPM here):

location ~ "\.php$" {
  fastcgi_index index.php;
  if (!-f $realpath_root$fastcgi_script_name) {
    return 404;
  }

  include /etc/nginx/fastcgi_params;
  fastcgi_pass unix:/run/php5/php-fpm.sock;

  # output just FastCGI requests to its own Nginx log file
  access_log /var/log/nginx/phpfpm-only-access.log;
}

With these values in hand we're now much better equipped to set fastcgi_buffers.

Setting the buffer size

As noted earlier, the fastcgi_buffers setting takes two values: buffer segment count and memory size. By default this will be:

fastcgi_buffers 8 4k|8k;

So a total of 8 buffer segments at either 4k/8k, which is determined by the platform memory page size. For Debian/Ubuntu Linux that turns out to be 4096 bytes (4KB) - so a default total buffer size of 8 * 4K = 32KB.

Based on the maximum/average response sizes determined above we can now raise/lower these values to suit. I typically keep buffer size at the default (memory page size) and adjust only the buffer segment count to a value that keeps the bulk of responses handled fully in buffer RAM.
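
As a worked example using the numbers measured above (maximum ~75KB, average ~10KB): the default 8 * 4k = 32KB comfortably covers the average response, but the largest responses would spill to disk. If that spill proved frequent, one possible (illustrative) bump keeping page-sized buffers would be:

fastcgi_buffers 32 4k;

That's up to 128KB of buffer space per request - enough for the 75KB maximum, but remember it can be allocated for every concurrent request.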

The default memory page size (in bytes) for an operating system can be determined by the following command:

$ getconf PAGESIZE
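
# 4096 - typical output on Debian/Ubuntu Linux, per the note above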

If your average response size tips on the higher side, you might instead lower the buffer segment count and raise the memory size in page-size multiples (8k/16k/32k), for example as sketched below.
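
A sketch of that trade-off, keeping the same 128KB total as the earlier example:

fastcgi_buffers 8 16k;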

Verifying results

We can see how often FastCGI responses are saved out to temporary disk by grepping Nginx error log(s):

$ cat error.log | grep --extended-regexp "\[warn\].+buffered"

# will return lines like:
YYYY/MM/DD HH:MM:SS [warn] 1234#0: *123456 an upstream response is buffered to a temporary file...
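
A quick tally of these events (same error log assumed):

$ grep --extended-regexp --count "\[warn\].+buffered" error.log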

Tip

Remember, it's not necessarily bad to have some larger responses buffered to disk - but aim for a balance where ideally only a small portion of your larger responses are handled in this way.

The alternative - ramping up fastcgi_buffers to excessively large count and/or size values so that all FastCGI responses fit purely in RAM - is something I would strongly recommend against. Unless your Nginx server is receiving only a few concurrent requests at any one moment, you risk exhausting your available system memory.

@johnmaguire

FYI for anyone finding this Gist via a Google search: we recently ran into an issue where we were streaming large amounts of data over a long time period, and saw the nginx process balloon in memory (like, 1.5GB of RAM after 5-10 minutes). The client was receiving about a gig of data before the server OOM'd the process and everything fell apart.

Output buffering was off in PHP, and our buffers were set to a total of ~4MB. No idea what was going on.

We upgraded from nginx 1.4.7 to 1.7.6 in order to attempt fastcgi_buffering off;. While this did work, we removed the fastcgi_buffering off; flag and our issue still hadn't returned.

In other words: There may be a memory leak in nginx 1.4.7 when sending large amounts of data from PHP-FPM, through nginx, to a client. If the memory hog is nginx and not your php-fpm process, try upgrading. If you figure out the real cause, tag me, I'm interested. :)

@magnetikonline
Author

Thanks for the update. Sure that information will be helpful for some!

@CMCDragonkai

Do you have any information on the busy_buffer_size?

@jeveloper

By the way, I thought I'd share this: Nginx 1.9.12 with PHP 7 FPM on Ubuntu 14 LTS, running an ecommerce site, does produce this warning.

Anyone have a reasonable number (based on e.g. a total RAM of 1.5GB per node) that they use for buffering?

thanks

@GreenReaper

As magnetikonline says, it depends on how big your output is - gzipped, if you're using gzip (and you should be, for everything compressible; check the types it's applied to). Note that gzip has separate buffers; the ones mentioned here are for the output after gzip.

Use your browser's developer tools to see how big your various fastcgi output pages are likely to be, divide by page size (usually 4k), and round up to a power of two. Then apply settings and test.
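
For instance, a rough sketch of that arithmetic for a hypothetical 120KB gzipped page on a 4k page-size system:

$ echo $(( (120 * 1024 + 4095) / 4096 ))
# 30 buffer segments needed -> round up to 32

fastcgi_buffers 32 4k;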

@schkovich

there is an extra quote here:

$ cat error.log | grep -E "\[warn\].+buffered""

it should read:

$ cat error.log | grep -E "\[warn\].+buffered"

@magnetikonline
Author

Awesome @schkovich - have fixed.

@miken32

miken32 commented Oct 12, 2016

LOL those awk commands are somebody playing a bad joke on you. Try this for maximum and average response sizes for PHP requests:

awk '($9 ~ /200/ && $7 ~ /\.php/) {i++; sum+=$10; max=$10>max?$10:max;} END {printf("MAX: %d\nAVG: %d\n", max, i?sum/i:0);}' /var/log/nginx/access.log

Also, there's an extra cat here:

cat error.log | grep -E "\[warn\].+buffered"

it should read:

grep -E "\[warn\].+buffered" /var/log/nginx/error.log

@larssn

larssn commented Dec 20, 2016

Good stuff @miken32

@magnetikonline
Author

magnetikonline commented Dec 23, 2016

Thanks @miken32 - was not aware you could do ternary operators with awk. Have included your improvements.

@fliespl

fliespl commented Mar 21, 2017

@miken32 thanks for your alternative awk command. Unfortunately it won't work with frameworks like Symfony2, which handle all requests with a single PHP file via an internal Nginx redirect (the file extension is not saved in access.log).
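
One hedged workaround in that case: use the separate FastCGI-only access log from the guide above, then run the original recipe (without the .php filter) against it:

$ awk '($9 ~ /200/) { i++; sum+=$10; max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n", max, i?sum/i:0); }' /var/log/nginx/phpfpm-only-access.log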

@campones

campones commented Apr 27, 2018

I am using Nginx with the RTMP module to stream HLS. I have tons of these warnings.

[warn] 3151#0: *145993 an upstream response is buffered to a temporary file /usr/local/nginx/fastcgi_temp/0/22/0000020220 while reading upstream,

currently I have the following conf:

fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;

Should I increase this even more to use more RAM (I have plenty available), or did I misunderstand?

@magnetikonline
Author

@campones again it's about finding the balance to try and get the majority of responses into RAM. With a streaming setup this is going to be harder I would suspect, as your response payloads are going to be generally pretty large for each chunk of video/audio.

@campones

maybe, but the thing is I have 5 other servers doing the same thing, and this one is the only one producing this kind of error in the log...

@minusf

minusf commented Oct 2, 2018

this is not directly related to the buffers issue, but the php block has some issues i believe:

  • with location ~ "\.php$", this block will never see a non-php ending request, hence fastcgi_index can never append index.php; use plain index above the block.
  • use try_files instead of if

@magnetikonline
Author

magnetikonline commented Oct 3, 2018

To @minusf:

  • fastcgi_index only controls the setting of the $fastcgi_script_name variable nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_index. So this is defined correctly.
  • Also, not sure how try_files will help here - I'm not looking for a fallback over several documents - if the .php file doesn't exist, it's a hard 404.

@virgilwashere

@magnetikonline CC @minusf

Peter, don't you think this

try_files $fastcgi_script_name =404;

is easier to use than your proposed if block to achieve a 404?

See /etc/nginx/snippets/fastcgi-php.conf.

# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+\.php)(/.+)$;

# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;

# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;

fastcgi_index index.php;
include fastcgi.conf;

Your block becomes:

location ~ "\.php$" {
	include /etc/nginx/snippets/fastcgi-php.conf;
	fastcgi_pass unix:/run/php5/php-fpm.sock;

	# Not required permanently. Output just FastCGI requests to its own Nginx log file
	access_log /var/log/nginx/phpfpmonly-access.log;
}

Feel free to comment on my fork too: http://bit.ly/nginx_fastcgi_buffers

Virgil

@magnetikonline
Author

@virgilwashere that is a nice alternative, thanks for posting.

@farooqza

I am seeing "unsupported FastCGI protocol version: while reading upstream."

This error is seen for every 2nd request on the same API endpoint.
Ex: /test results in a JSON response; firing it again results in a 502 with the above mentioned error in the error log.
Is this due to buffering?

logging upstream header shows "cat :"
Thanks

@ClosetGeek-Git

ClosetGeek-Git commented Aug 20, 2022

I'm not sure if this is the right place to post these questions, but what is the general behavior of this buffer? Does it store the data flushed by PHP until it gets a full response? Or is it implemented to limit serial writes and to maximize throughput? What are the benefits/consequences of turning off the buffers or limiting them to a small size? What are the buffers' effects on segments of chunked responses? These are all good things to know when tuning.

@magnetikonline
Author

magnetikonline commented Aug 21, 2022

I'm not sure if this is the right place to post these questions, but what is the general behavior of this buffer? Does it store the data flushed by PHP until it gets a full response? Or is it implemented to limit serial writes and to maximize throughput?

@ClosetMonkey I'm not 100% sure of those internal behaviours, but reading through the man page, both in-memory and disk buffers have size limits. My assumption would be these buffers are filled - at that point the response is flushed to the client. You ideally want to minimise those "fill & flush" events/moments.

What are the benefits/consequences of turning off the buffers or limiting them to a small size?

A small/disabled buffer is going to mean excessive pulling and flushing from FastCGI -> Nginx -> Client. This is going to tie up FastCGI and thus an available PHP worker. You really want to push the PHP response over to Nginx (and its buffers) as quickly as possible - this then frees up the allocated PHP worker to accept the next request.

@ClosetGeek-Git

At just 4 buffers of 4KB to 8KB each per connection, I would imagine that its flushing is not dependent on the state of the response. I did a quick download of this page and it was over 300KB, which would be enough to fill and empty each of its buffers multiple times during a single response.

The man page states that buffering can be enabled/disabled per response by passing the response header X-Accel-Buffering: yes or X-Accel-Buffering: no, regardless of the current fastcgi_buffering configuration. I think that is pretty slick all things considered. I can see reasons where one may desire different buffering strategies within the same application.
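
For instance, a minimal sketch from the PHP side (a standard header() call; X-Accel-Buffering is the documented Nginx header):

<?php
// ask Nginx not to buffer this particular response (e.g. a long-lived stream)
header('X-Accel-Buffering: no');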

Another interesting couple of directives which may also affect buffering are limit_rate and its corresponding response header X-Accel-Limit-Rate, as well as limit_rate_after. They don't directly affect the buffering configuration per se, but controlling the rate at which data passes through the buffers may cause different effects.
