<feed xmlns='http://www.w3.org/2005/Atom'>
<title>nginx.git/src/http, branch release-1.1.1</title>
<subtitle>nginx</subtitle>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/'/>
<entry>
<title>The change in adaptive loader behaviour introduced in r3975:</title>
<updated>2011-08-22T10:16:49+00:00</updated>
<author>
<name>Igor Sysoev</name>
<email>igor@sysoev.ru</email>
</author>
<published>2011-08-22T10:16:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=0d18687b03b2ebfe83a70ae4b4612c33129d4e04'/>
<id>0d18687b03b2ebfe83a70ae4b4612c33129d4e04</id>
<content type='text'>
Now the cache loader either processes as many files as specified by
loader_files or works for no longer than the time specified by
loader_threshold during each iteration.

Previously, loader_threshold was used to decrease loader_files or to
increase loader_timeout, and this could eventually reduce loader_files
to 1 and increase loader_timeout to large values, causing cache loading
to go on forever.
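
For reference, these parameters are set on the proxy_cache_path
directive; a minimal illustrative sketch (the path and values here are
made up, not taken from the change):

    proxy_cache_path /data/nginx/cache keys_zone=one:10m
                     loader_files=200 loader_threshold=300ms;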
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Now the cache loader either processes as many files as specified by
loader_files or works for no longer than the time specified by
loader_threshold during each iteration.

Previously, loader_threshold was used to decrease loader_files or to
increase loader_timeout, and this could eventually reduce loader_files
to 1 and increase loader_timeout to large values, causing cache loading
to go on forever.
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix ignored headers handling in fastcgi/scgi/uwsgi.</title>
<updated>2011-08-19T20:11:39+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-19T20:11:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=5a52d67a0899031bfb4d93fbee76e3d0c6c32558'/>
<id>5a52d67a0899031bfb4d93fbee76e3d0c6c32558</id>
<content type='text'>
The bug was introduced in r3561 (fastcgi), r3638 (scgi), and r3567 (uwsgi).
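
Assuming the fix concerns handling of the *_ignore_headers directives
(an assumption based on the commit title, not stated in the message), a
hypothetical configuration exercising that path might look like this
(addresses and header names are illustrative):

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_ignore_headers Cache-Control Expires;
    }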
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The bug was introduced in r3561 (fastcgi), r3638 (scgi), and r3567 (uwsgi).
</pre>
</div>
</content>
</entry>
<entry>
<title>Upstream: properly allocate memory for tried flags.</title>
<updated>2011-08-18T17:04:52+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-18T17:04:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=b7fcb430c156952fce4cb43a0a3cd81c2a5c939e'/>
<id>b7fcb430c156952fce4cb43a0a3cd81c2a5c939e</id>
<content type='text'>
The previous allocation took into account only the number of non-backup
servers, which caused memory corruption when there were many backup
servers.

See report here:
http://mailman.nginx.org/pipermail/nginx/2011-May/026531.html
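
A configuration of roughly this shape, with more backup servers than
primary ones, could trigger the corruption, since the tried-flags
allocation was sized from the primary servers only (addresses are
illustrative, not from the report):

    upstream backend {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081 backup;
        server 127.0.0.1:8082 backup;
        server 127.0.0.1:8083 backup;
    }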
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The previous allocation took into account only the number of non-backup
servers, which caused memory corruption when there were many backup
servers.

See report here:
http://mailman.nginx.org/pipermail/nginx/2011-May/026531.html
</pre>
</div>
</content>
</entry>
<entry>
<title>Fixing CPU hog with all upstream servers marked "down".</title>
<updated>2011-08-18T16:52:38+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-18T16:52:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=624fbe94a23e183dc356f9c3e696a816cb67acc2'/>
<id>624fbe94a23e183dc356f9c3e696a816cb67acc2</id>
<content type='text'>
The following configuration causes nginx to hog the CPU due to an
infinite loop in ngx_http_upstream_get_peer():

    upstream backend {
        server 127.0.0.1:8080 down;
        server 127.0.0.1:8080 down;
    }

    server {
        ...
        location / {
            proxy_pass http://backend;
        }
    }

Make sure we don't loop infinitely in ngx_http_upstream_get_peer() but stop
after resetting peer weights once.

Return 0 if we are stuck.  This is guaranteed to work, as peer 0 always
exists, and eventually ngx_http_upstream_get_round_robin_peer() will do
the right thing, falling back to backup servers or returning NGX_BUSY.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The following configuration causes nginx to hog the CPU due to an
infinite loop in ngx_http_upstream_get_peer():

    upstream backend {
        server 127.0.0.1:8080 down;
        server 127.0.0.1:8080 down;
    }

    server {
        ...
        location / {
            proxy_pass http://backend;
        }
    }

Make sure we don't loop infinitely in ngx_http_upstream_get_peer() but stop
after resetting peer weights once.

Return 0 if we are stuck.  This is guaranteed to work, as peer 0 always
exists, and eventually ngx_http_upstream_get_round_robin_peer() will do
the right thing, falling back to backup servers or returning NGX_BUSY.
</pre>
</div>
</content>
</entry>
<entry>
<title>Fixing proxy_set_body and proxy_pass_request_body with SSL.</title>
<updated>2011-08-18T16:34:24+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-18T16:34:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=9bc8fc4602fb2b39b6000fed060e185ffcf2571b'/>
<id>9bc8fc4602fb2b39b6000fed060e185ffcf2571b</id>
<content type='text'>
The flush flag wasn't set in the constructed buffer, and due to SSL
buffering this prevented any data from actually being sent upstream.
Make sure we always set the flush flag in the last buffer we are going
to send.

See here for report:
http://nginx.org/pipermail/nginx-ru/2011-June/041552.html
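
An illustrative configuration that combines an SSL upstream with
proxy_set_body, the combination affected here (the hostname and body
value are examples only, not from the report):

    location / {
        proxy_pass https://backend.example.com;
        proxy_set_body "fixed=request&body=1";
    }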
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The flush flag wasn't set in the constructed buffer, and due to SSL
buffering this prevented any data from actually being sent upstream.
Make sure we always set the flush flag in the last buffer we are going
to send.

See here for report:
http://nginx.org/pipermail/nginx-ru/2011-June/041552.html
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix names of the referer hash size directives introduced in r3940.</title>
<updated>2011-08-18T16:27:30+00:00</updated>
<author>
<name>Igor Sysoev</name>
<email>igor@sysoev.ru</email>
</author>
<published>2011-08-18T16:27:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=c4ff39ae2b79a6a54535f3c8abc9f2ccefa4ee99'/>
<id>c4ff39ae2b79a6a54535f3c8abc9f2ccefa4ee99</id>
<content type='text'>
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix body with request_body_in_single_buf.</title>
<updated>2011-08-18T15:52:00+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-18T15:52:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=b09ceca2610410f2be67c3c41a92aa80d7952d50'/>
<id>b09ceca2610410f2be67c3c41a92aa80d7952d50</id>
<content type='text'>
If there were preread data and the request body was big enough, the
first part of the request body was duplicated.

See report here:
http://mailman.nginx.org/pipermail/nginx/2011-July/027756.html
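
The single-buffer flag can be requested from the configuration, e.g. via
the client_body_in_single_buffer directive (an illustrative setup; the
location and buffer size are arbitrary):

    location /upload {
        client_body_in_single_buffer on;
        client_body_buffer_size 1m;
        proxy_pass http://backend;
    }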
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
If there were preread data and the request body was big enough, the
first part of the request body was duplicated.

See report here:
http://mailman.nginx.org/pipermail/nginx/2011-July/027756.html
</pre>
</div>
</content>
</entry>
<entry>
<title>Correctly set body if it's preread and there are extra data.</title>
<updated>2011-08-18T15:27:57+00:00</updated>
<author>
<name>Maxim Dounin</name>
<email>mdounin@mdounin.ru</email>
</author>
<published>2011-08-18T15:27:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=f48b45119557790fc26f7d4b3d081ea6f27c9301'/>
<id>f48b45119557790fc26f7d4b3d081ea6f27c9301</id>
<content type='text'>
Previously, all available data was used as the body, resulting in
garbage after the real body, e.g. in the case of pipelined requests.
Make sure to use only as many bytes as the request's Content-Length
specifies.
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Previously, all available data was used as the body, resulting in
garbage after the real body, e.g. in the case of pipelined requests.
Make sure to use only as many bytes as the request's Content-Length
specifies.
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix gzip quantity: "q=0." and "q=1." are valid values according to the RFC</title>
<updated>2011-08-05T08:51:29+00:00</updated>
<author>
<name>Igor Sysoev</name>
<email>igor@sysoev.ru</email>
</author>
<published>2011-08-05T08:51:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=de236d3a2c5320d32f861a0efdb6fb5e322be7b9'/>
<id>de236d3a2c5320d32f861a0efdb6fb5e322be7b9</id>
<content type='text'>
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
</pre>
</div>
</content>
</entry>
<entry>
<title>Refactor gzip quantity parsing introduced in r3981: it ignored "q=1.000"</title>
<updated>2011-08-04T14:50:59+00:00</updated>
<author>
<name>Igor Sysoev</name>
<email>igor@sysoev.ru</email>
</author>
<published>2011-08-04T14:50:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.sigsegv.uk/nginx.git/commit/?id=48d17bca947540c31c077051ab6d7073fd25c986'/>
<id>48d17bca947540c31c077051ab6d7073fd25c986</id>
<content type='text'>
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
</pre>
</div>
</content>
</entry>
</feed>
