<feed xmlns='http://www.w3.org/2005/Atom'>
<title>nginx.git/src/http, branch release-1.19.2</title>
<subtitle>nginx</subtitle>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/'/>
<entry>
<title>SSL: disabled sending shutdown after ngx_http_test_reading().</title>
<updated>2020-08-10T15:52:34+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-10T15:52:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=eae2b2fdf15c52f058c0c08763a5c373997d0535'/>
<id>eae2b2fdf15c52f058c0c08763a5c373997d0535</id>
<content type='text'>
Sending shutdown when ngx_http_test_reading() detects that the connection
is closed can result in "SSL_shutdown() failed (SSL: ... bad write retry)"
critical log messages if there are blocked writes.

The fix is to avoid sending shutdown via the c-&gt;ssl-&gt;no_send_shutdown
flag, similarly to how it is done in ngx_http_keepalive_handler() for
kqueue when pending EOF is detected.

Reported by Jan Prachař
(http://mailman.nginx.org/pipermail/nginx-devel/2018-December/011702.html).
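A minimal sketch of the approach described above, assuming the surrounding
ngx_http_test_reading() context (not the verbatim patch):

```c
/* once the connection is known to be closed: */
#if (NGX_HTTP_SSL)
    if (c-&gt;ssl) {
        /* suppress SSL_shutdown(); with blocked writes it would fail
           with "bad write retry" */
        c-&gt;ssl-&gt;no_send_shutdown = 1;
    }
#endif
    ngx_http_finalize_request(r, NGX_HTTP_CLIENT_CLOSED_REQUEST);
```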
</content>
</entry>
<entry>
<title>HTTP/2: fixed c-&gt;timedout flag on timed out connections.</title>
<updated>2020-08-10T15:52:20+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-10T15:52:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=1d696cd37947ef816bde4d54d7b6f97374f1151d'/>
<id>1d696cd37947ef816bde4d54d7b6f97374f1151d</id>
<content type='text'>
Without the flag, SSL shutdown is attempted on such connections,
resulting in useless work and/or bogus "SSL_shutdown() failed
(SSL: ... bad write retry)" critical log messages if there are
blocked writes.
</content>
</entry>
<entry>
<title>Request body: optimized handling of small chunks.</title>
<updated>2020-08-06T02:02:57+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-06T02:02:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=130a5e71269200154b55e85d9e30186feaeb64a7'/>
<id>130a5e71269200154b55e85d9e30186feaeb64a7</id>
<content type='text'>
If there is a previous buffer, copy small chunks into it instead of
allocating an additional buffer.
</content>
</entry>
<entry>
<title>Request body: allowed large reads on chunk boundaries.</title>
<updated>2020-08-06T02:02:55+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-06T02:02:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=150cbb017b4fda599dcda172dca87ca11f6219f1'/>
<id>150cbb017b4fda599dcda172dca87ca11f6219f1</id>
<content type='text'>
If some additional data from a pipelined request happens to be
read into the body buffer, we copy it to r-&gt;header_in or allocate
an additional large client header buffer for it.
</content>
</entry>
<entry>
<title>Request body: all read data are now sent to filters.</title>
<updated>2020-08-06T02:02:44+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-06T02:02:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=9edc93fe0ed60bac336d11f7d20d3c2ed9db3227'/>
<id>9edc93fe0ed60bac336d11f7d20d3c2ed9db3227</id>
<content type='text'>
This is a prerequisite for the next change to allow large reads
on chunk boundaries.
</content>
</entry>
<entry>
<title>Added size check to ngx_http_alloc_large_header_buffer().</title>
<updated>2020-08-06T02:02:22+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-08-06T02:02:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=bd7dad5b0eb9f667a9c66ea5175a017ac51cd027'/>
<id>bd7dad5b0eb9f667a9c66ea5175a017ac51cd027</id>
<content type='text'>
This ensures that copying won't write more than the buffer size
even if the buffer comes from hc-&gt;free and is smaller than the large
client header buffer size in the virtual host configuration.  This might
happen if the size of large client header buffers differs between
name-based virtual hosts, similarly to the problem with the number of
buffers fixed in 6926:e662cbf1b932.
</content>
</entry>
<entry>
<title>FastCGI: fixed zero size buf alerts on extra data (ticket #2018).</title>
<updated>2020-07-27T13:02:15+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-07-27T13:02:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=d2744ad26fef1e4f4f6e9c12e95b57866345c071'/>
<id>d2744ad26fef1e4f4f6e9c12e95b57866345c071</id>
<content type='text'>
After 05e42236e95b (1.19.1) responses with extra data might result in
zero size buffers being generated and "zero size buf" alerts in writer
(if f-&gt;rest happened to be 0 when processing additional stdout data).
</content>
</entry>
<entry>
<title>Xslt: disabled ranges.</title>
<updated>2020-07-22T19:16:19+00:00</updated>
<author>
<name>Roman Arutyunyan</name>
<email>arut@nginx.com</email>
</author>
<published>2020-07-22T19:16:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=4dd43dfca71f3fc2c6768606ff3700a4317a9176'/>
<id>4dd43dfca71f3fc2c6768606ff3700a4317a9176</id>
<content type='text'>
Previously, the document generated by the xslt filter was always fully sent
to the client even if a range was requested and the response status was 206
with an appropriate Content-Range.

The xslt module is unable to serve a range because it suspends the header
filter chain.  By the time the full response XML is buffered by the xslt
filter, the range header filter has not yet been called, but the range body
filter has already been called and did nothing.

The fix is to disable ranges by resetting the r-&gt;allow_ranges flag, much
like the image filter, which employs a similar technique.
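A minimal sketch of the fix, assuming the point in the xslt module where the
transformed response is about to be sent (not the verbatim patch):

```c
/* the buffered XML bypasses the range filters, so forbid ranges
   for this response, as the image filter does: */
r-&gt;allow_ranges = 0;
```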
</content>
</entry>
<entry>
<title>Slice filter: clear original Accept-Ranges.</title>
<updated>2020-07-09T13:21:37+00:00</updated>
<author>
<name>Roman Arutyunyan</name>
<email>arut@nginx.com</email>
</author>
<published>2020-07-09T13:21:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=5cef7de7a116bab3af9097dac5a22f7652be4273'/>
<id>5cef7de7a116bab3af9097dac5a22f7652be4273</id>
<content type='text'>
The slice filter allows ranges for the response by setting the r-&gt;allow_ranges
flag, which enables the range filter.  If a range was not requested, the
range filter adds an Accept-Ranges header to the response to signal
support for ranges.

Previously, if an Accept-Ranges header was already present in the first slice
response, the client received two copies of this header.  Now the slice filter
removes the Accept-Ranges header from the response prior to setting the
r-&gt;allow_ranges flag.
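A minimal sketch of the change, assuming the slice header filter context
(not the verbatim patch):

```c
/* drop the upstream Accept-Ranges header before enabling ranges,
   so the range filter's own header is the only copy: */
if (r-&gt;headers_out.accept_ranges) {
    r-&gt;headers_out.accept_ranges-&gt;hash = 0;
    r-&gt;headers_out.accept_ranges = NULL;
}

r-&gt;allow_ranges = 1;
```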
</content>
</entry>
<entry>
<title>gRPC: generate error when response size is wrong.</title>
<updated>2020-07-06T15:36:25+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2020-07-06T15:36:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=5348706fe607c2b6704b52078cba77ee8fa298b8'/>
<id>5348706fe607c2b6704b52078cba77ee8fa298b8</id>
<content type='text'>
As long as the "Content-Length" header is given, we now make sure
it exactly matches the size of the response.  If it doesn't,
the response is considered malformed and must not be forwarded
(https://tools.ietf.org/html/rfc7540#section-8.1.2.6).  While it
is not really possible to "not forward" the response which is already
being forwarded, we generate an error instead, which is the closest
equivalent.

The previous behaviour was to pass everything to the client, but this
seems suboptimal and causes issues (ticket #1695).  It also directly
contradicts HTTP/2 specification requirements.

Note that the new behaviour for the gRPC proxy is more strict than that
applied in other variants of proxying.  This is intentional, as HTTP/2
specification requires us to do so, while in other types of proxying
malformed responses from backends are well known and historically
tolerated.
</content>
</entry>
</feed>
