<feed xmlns='http://www.w3.org/2005/Atom'>
<title>nginx.git/src/http/modules/ngx_http_upstream_keepalive_module.c, branch release-1.29.2</title>
<subtitle>nginx</subtitle>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/'/>
<entry>
<title>Changed keepalive_requests default to 1000 (ticket #2155).</title>
<updated>2021-04-07T21:16:30+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2021-04-07T21:16:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=eb52de83114e8d98722cd17ec8435c47956b6315'/>
<id>eb52de83114e8d98722cd17ec8435c47956b6315</id>
<content type='text'>
It turns out no browsers implement HTTP/2 GOAWAY handling properly, and
a large enough number of resources on a page results in failures to load
some resources.  In particular, Chrome seems to experience errors if
loading all resources requires more than 1 connection (while it is
usually able to retry requests at least once, even with 2 connections
there are occasional failures for some reason), Safari if loading requires
more than 3 connections, and Firefox if loading requires more than 10
connections (configurable via network.http.request.max-attempts, which
defaults to 10).

It does not seem to be possible to resolve this on the nginx side: even
strict limiting of maximum concurrency does not help, and loading issues
seem to be triggered merely by queueing a request for a particular
connection.  The only available mitigation seems to be using a higher
keepalive_requests value.

The new default is 1000 and matches the previously used default for
http2_max_requests.  It is expected to be enough for 99.98% of pages
(https://httparchive.org/reports/state-of-the-web?start=latest#reqTotal),
even in Chrome.
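
For reference, a minimal configuration sketch making the new default
explicit (the server block details are illustrative; certificate
directives are omitted):

    server {
        listen 443 ssl;
        http2 on;
        server_name example.com;

        # explicit form of the new default; raise it further for pages
        # that load even more resources over a single connection
        keepalive_requests 1000;
    }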
</content>
</entry>
<entry>
<title>Introduced the "keepalive_time" directive.</title>
<updated>2021-04-07T21:15:48+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2021-04-07T21:15:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=d9996d6f27150bfb9c9c00d77fac940712aa1d28'/>
<id>d9996d6f27150bfb9c9c00d77fac940712aa1d28</id>
<content type='text'>
Similar to lingering_time, it limits the total connection lifetime before
keepalive is switched off.  The default is 1 hour, which is close to
the maximum total connection lifetime possible with the default
keepalive_requests and keepalive_timeout.
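
A minimal usage sketch (the value shown simply restates the default):

    http {
        # close a keepalive connection once it has been open this long,
        # regardless of how many requests it has served
        keepalive_time 1h;
    }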
</content>
</entry>
<entry>
<title>Upstream keepalive: clearing of c-&gt;data in cached connections.</title>
<updated>2019-12-05T16:38:06+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2019-12-05T16:38:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=953f53921505a884f3912f2d8db5217a71c0479a'/>
<id>953f53921505a884f3912f2d8db5217a71c0479a</id>
<content type='text'>
Previously, connections returned from the keepalive cache had c-&gt;data
pointing to the keepalive cache item.  While this shouldn't be a problem
for correct code, as c-&gt;data is not expected to be used before it is set,
explicitly clearing it might help to avoid confusion.
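
A minimal sketch of the idea (illustrative, not the literal patch):

    /* while a connection sits in the cache, c-&gt;data points at the
       keepalive cache item; detach it before handing the connection
       back to a request */
    item = c-&gt;data;
    c-&gt;data = NULL;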
</content>
</entry>
<entry>
<title>Upstream keepalive: keepalive_requests directive.</title>
<updated>2018-08-10T18:54:46+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2018-08-10T18:54:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=d817ceae729914e7423c4c206165fc244513f021'/>
<id>d817ceae729914e7423c4c206165fc244513f021</id>
<content type='text'>
The directive configures the maximum number of requests allowed on
a connection kept in the cache.  Once a connection reaches the configured
number of requests, it is no longer saved to the cache.
The default is 100.

Much like keepalive_requests for client connections, this is mostly
a safeguard to make sure connections are closed periodically and the
memory allocated from the connection pool is freed.
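
A minimal sketch of the directive in context (the upstream name and
backend address are illustrative; proxy_http_version 1.1 and a cleared
Connection header are needed for keepalive to apply to proxied requests):

    upstream backend {
        server 127.0.0.1:8080;

        keepalive 16;               # cache up to 16 idle connections
        keepalive_requests 100;     # the default introduced here
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }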
</content>
</entry>
<entry>
<title>Upstream keepalive: keepalive_timeout directive.</title>
<updated>2018-08-10T18:54:23+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2018-08-10T18:54:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=7de808990b8e49e2e18c2a013e4cca82a092fdbc'/>
<id>7de808990b8e49e2e18c2a013e4cca82a092fdbc</id>
<content type='text'>
The directive configures the maximum time a connection can be kept in
the cache.  By configuring a time smaller than the corresponding timeout
on the backend side, one can avoid the race between the backend closing
a connection and nginx trying to send a request over that same
connection at the same time.
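
A minimal sketch (assuming a hypothetical backend that closes idle
connections after 75 seconds):

    upstream backend {
        server 127.0.0.1:8080;

        keepalive 16;
        # keep idle connections for less time than the backend does,
        # so the backend never closes a connection nginx is about
        # to reuse
        keepalive_timeout 60s;
    }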
</content>
</entry>
<entry>
<title>Upstream keepalive: comment added.</title>
<updated>2018-08-10T18:54:17+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2018-08-10T18:54:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=f3d1a925b5e27d8d78fc6da408ccc2f75994939c'/>
<id>f3d1a925b5e27d8d78fc6da408ccc2f75994939c</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Upstream keepalive: clean read delayed flag in stored connections.</title>
<updated>2017-11-28T11:00:00+00:00</updated>
<author>
<name>Roman Arutyunyan</name>
<email>arut@nginx.com</email>
</author>
<published>2017-11-28T11:00:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=e13268714fa3f57adbc7c3891db86b025d79eaf4'/>
<id>e13268714fa3f57adbc7c3891db86b025d79eaf4</id>
<content type='text'>
If a connection with the read delayed flag set was stored in the keepalive
cache, and after picking it from the cache a read timer was set on that
connection, this timer was considered a delay timer rather than a socket read
event timer as expected.  The latter timeout is usually much longer than the
former, which caused a significant delay in request processing.

The issue manifested itself with proxy_limit_rate and upstream keepalive
enabled, and has existed since 973ee2276300 (1.7.7), when proxy_limit_rate
was introduced.
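
For reference, a minimal configuration sketch of the kind of setup that
triggered the issue (addresses and the rate value are illustrative):

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 16;
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            # rate limiting sets the read delayed flag on the
            # upstream connection
            proxy_limit_rate 4k;
        }
    }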
</content>
</entry>
<entry>
<title>Introduced the ngx_sockaddr_t type.</title>
<updated>2016-05-23T13:37:20+00:00</updated>
<author>
<name>Ruslan Ermilov</name>
<email>ru@nginx.com</email>
</author>
<published>2016-05-23T13:37:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=fd064d3b88e59ee71aec508687403539b01d643c'/>
<id>fd064d3b88e59ee71aec508687403539b01d643c</id>
<content type='text'>
It's properly aligned and can hold any supported sockaddr.
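
A sketch of the shape of such a type (member names are illustrative;
the conditional members depend on build-time support):

    typedef union {
        struct sockaddr      sockaddr;
        struct sockaddr_in   sockaddr_in;
    #if (NGX_HAVE_INET6)
        struct sockaddr_in6  sockaddr_in6;
    #endif
    #if (NGX_HAVE_UNIX_DOMAIN)
        struct sockaddr_un   sockaddr_un;
    #endif
    } ngx_sockaddr_t;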
</content>
</entry>
<entry>
<title>Upstream: don't keep connections on early responses (ticket #669).</title>
<updated>2015-12-17T13:39:15+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2015-12-17T13:39:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=fda7d021ca23d43cb7a17ff92cd4ad5c805a156c'/>
<id>fda7d021ca23d43cb7a17ff92cd4ad5c805a156c</id>
<content type='text'>
</content>
</entry>
<entry>
<title>Core: idle connections now closed only once on exiting.</title>
<updated>2015-08-11T13:28:55+00:00</updated>
<author>
<name>Valentin Bartenev</name>
<email>vbart@nginx.com</email>
</author>
<published>2015-08-11T13:28:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=50ff8b3c3a3eba0984ce55c63ab8ac07dcb65265'/>
<id>50ff8b3c3a3eba0984ce55c63ab8ac07dcb65265</id>
<content type='text'>
Iterating through all connections takes a lot of CPU time, especially
with a large number of worker connections configured.  As a result,
nginx processes used to consume CPU time during graceful shutdown.
To mitigate this, we now do a full scan for idle connections only when
the shutdown signal is received.

Transitions of connections to the idle state are now expected to be
avoided if the ngx_exiting flag is set.  The upstream keepalive module
was modified to follow this.
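
A minimal sketch of the resulting pattern in the module (illustrative;
the close helper named here is hypothetical):

    /* while gracefully exiting, do not turn the connection idle;
       close it instead, so shutdown does not need to rescan */
    if (ngx_exiting) {
        close_cached_connection(c);    /* hypothetical helper */
        return;
    }

    c-&gt;idle = 1;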
</content>
</entry>
</feed>
