When filter finalization is triggered while working with an upstream server,
and error_page redirects request processing to some simple handler,
ngx_http_finalize_request() triggers request termination once the response
is sent. In particular, via the upstream cleanup handler, nginx will close
the upstream connection and the corresponding socket.

Still, this can happen with ngx_event_pipe() on the stack. While
the code will set p->downstream_error due to the NGX_ERROR returned from the
output filter chain by filter finalization, the error will otherwise be
ignored till control returns to ngx_http_upstream_process_request().
Meanwhile, the event pipe might try reading from the (already closed)
socket, resulting in "readv() failed (9: Bad file descriptor) while reading
upstream" errors (or even segfaults with SSL).

Such errors were seen with the following configuration:
    location /t2 {
        proxy_pass http://127.0.0.1:8080/big;
        image_filter_buffer 10m;
        image_filter resize 150 100;
        error_page 415 = /empty;
    }

    location /empty {
        return 204;
    }

    location /big {
        # big enough static file
    }

Fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
and ensure that p->upstream is checked in ngx_event_pipe_read_upstream()
and when handling events at ngx_event_pipe() exit.
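A sketch of the kind of guard described, against the ngx_event_pipe_t
layout from src/event/ngx_event_pipe.h (an illustration, not the literal
patch):

    /* ngx_event_pipe_read_upstream(): the upstream connection may
     * already have been closed by filter finalization */

    if (p->upstream_eof || p->upstream_error || p->upstream_done
        || p->upstream == NULL)
    {
        return NGX_OK;
    }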
|
|
Previous behaviour was to pass everything to the client, but this
seems to be suboptimal and causes issues (ticket #1695). Fix is to
drop extra data instead, as it naturally happens in most clients.

This change covers generic buffered and unbuffered filters as used
in the scgi and uwsgi modules. Appropriate input filter init
handlers are provided by the scgi and uwsgi modules to set corresponding
lengths.

Note that for responses to HEAD requests there is an exception:
we do allow any response length. This is because responses to HEAD
requests might be actual full responses, and it is up to nginx
to remove the response body. If caching is enabled, only full
responses matching the Content-Length header will be cached
(see b779728b180c).
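For instance, an input filter init handler along these lines sets the
expected lengths (modeled on the scgi module's handler; a sketch, not
the committed code):

    static ngx_int_t
    ngx_http_scgi_input_filter_init(void *data)
    {
        ngx_http_request_t   *r = data;
        ngx_http_upstream_t  *u = r->upstream;

        if (u->headers_in.status_n == NGX_HTTP_NO_CONTENT
            || u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED)
        {
            u->pipe->length = 0;            /* no body expected */
            u->length = 0;

        } else if (r->method == NGX_HTTP_HEAD) {
            u->pipe->length = -1;           /* HEAD: allow any length */
            u->length = -1;

        } else {
            u->pipe->length = u->headers_in.content_length_n;
            u->length = u->headers_in.content_length_n;
        }

        return NGX_OK;
    }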
|
|
In SSL connections, data can be buffered by the SSL layer, and it is
wrong to avoid doing c->recv_chain() if c->read->available is 0 and
c->read->pending_eof is set. Moreover, tests show that the optimization
in question can indeed result in incorrect detection of premature
connection close if the upstream closes the connection without sending
a close notify alert at the same time. Fix is to disable the
c->read->available optimization for SSL connections.
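The guarded optimization, schematically (field names as in the source;
a sketch only):

    if (p->upstream->read->available == 0
        && p->upstream->read->pending_eof
    #if (NGX_SSL)
        && !p->upstream->ssl        /* SSL may still hold buffered data */
    #endif
       )
    {
        p->upstream->read->ready = 0;
        p->upstream->read->eof = 1;
        p->upstream_eof = 1;
        break;                      /* within the reading loop */
    }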
|
|
The ngx_event_pipe() function wasn't called on write events with
wev->delayed set. As a result, threaded writing results weren't
properly collected in ngx_event_pipe_write_to_downstream() when a
write event was triggered for a completed write.

Further, this wasn't detected, as p->aio was reset by a thread completion
handler, and the results were later collected in
ngx_event_pipe_read_upstream() instead of scheduling a new write of
additional data. If this happened on the last reading from an upstream,
the last part of the response was never written to the cache file.

Similar problems might also happen in case of timeouts when writing to
the client, as these also result in ngx_event_pipe() not being called on
write events. In this scenario socket leaks were observed.

Fix is to check if p->writing is set in ngx_event_pipe_read_upstream(),
and therefore collect the results of previous write operations on read
events as well, similar to how it is done in
ngx_event_pipe_write_to_downstream(). This is enough to fix the
wev->delayed case. Additionally, we now call ngx_event_pipe() from
ngx_http_upstream_process_request() if there are uncollected write
operations (p->writing and !p->aio). This also fixes the wev->timedout
case.
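Sketched against the event pipe internals (using p->aio and p->writing
as above; the exact patch differs):

    /* ngx_event_pipe_read_upstream(): pick up completed thread
     * writes before reading more data */

    #if (NGX_THREADS)

        if (p->aio) {
            return NGX_AGAIN;       /* a thread task is still running */
        }

        if (p->writing) {
            /* collect the results of the previous write operation */
            rc = ngx_event_pipe_write_chain_to_temp_file(p);

            if (rc != NGX_OK) {
                return rc;
            }
        }

    #endif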
|
|
This fixes a problem with aio threads and sendfile with "aio_write"
switched off, as observed with range requests after fc72784b1f52
(1.9.13). Potential problems with sendfile in threads were previously
described in 9fd738b85fad, and this seems to be one of them.

The problem occurred as the file's thread_handler was set to NULL by the
event pipe code after a sendfile thread task was scheduled. As a result,
no sendfile completion code was executed, and the same buffer was sent
a second time using non-threaded sendfile. Fix is to avoid modifying the
file's thread_handler if "aio_write" is switched off.

Note that with "aio_write on" it is still possible that sendfile will
use a thread_handler as set by the event pipe. This is believed to be
safe though, as the handlers used are compatible.
|
|
When c->recv_chain() returns an error, it is possible that we already
have some data previously read, e.g., in the preread buffer. In some
cases it may even be a complete response. Changed c->recv_chain() error
handling to process that data, much like it is already done when kevent
reports an error.

This change, in particular, fixes processing of small responses when an
upstream fails to properly close the connection with lingering, and
therefore the connection is reset, but the response has already been
fully obtained by nginx (see ticket #1037).
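Roughly, the error path becomes (a sketch mirroring the kevent error
path mentioned above):

    n = p->upstream->recv_chain(p->upstream, chain, limit);

    if (n == NGX_ERROR) {
        /* keep the data already read (e.g., in the preread buffer):
         * record the error and take the regular end-of-input path
         * below, so that the data is still processed */
        p->upstream_error = 1;
        n = 0;
    }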
|
|
The "aio_write" directive is introduced, which enables use of aio
for writing. Currently it is meaningful only with "aio threads".
Note that aio operations can be done by both event pipe and output
chain, so proper mapping between r->aio and p->aio is provided when
calling ngx_event_pipe() and in output filter.
In collaboration with Valentin Bartenev.
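The output filter side of the mapping looks roughly like this (a sketch
of the upstream output filter wrapper; details assumed):

    static ngx_int_t
    ngx_http_upstream_output_filter(void *data, ngx_chain_t *chain)
    {
        ngx_int_t            rc;
        ngx_event_pipe_t    *p;
        ngx_http_request_t  *r = data;

        p = r->upstream->pipe;

        rc = ngx_http_output_filter(r, chain);

        p->aio = r->aio;    /* tell the pipe an aio write was started */

        return rc;
    }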
|
|
The change was missed in f69d1aab6a0f.
|
|
The directives limit the upstream read rate. For example,
"proxy_limit_rate 42" limits the proxy upstream read rate to
42 bytes per second.
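Inside the event pipe, the limit is applied roughly as follows (a sketch
in the reading loop, using p->limit_rate, p->start_sec and
p->read_length; not the exact code):

    off_t       limit;
    ngx_msec_t  delay;

    if (p->limit_rate) {
        if (p->upstream->read->delayed) {
            break;                  /* still waiting out a delay */
        }

        limit = (off_t) p->limit_rate * (ngx_time() - p->start_sec + 1)
                - p->read_length;

        if (limit <= 0) {
            p->upstream->read->delayed = 1;
            delay = (ngx_msec_t) (- limit * 1000 / p->limit_rate + 1);
            ngx_add_timer(p->upstream->read, delay);
            break;
        }
    }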
|
|
|
Previously, nginx closed the client connection in cases when a response
body from the upstream needed to be cached or stored but was not to be
sent to the client. While this is normal for HTTP, it is unacceptable
for SPDY.

Fix is to instead use the p->downstream_error flag to prevent nginx from
sending anything downstream. To make this work, the event pipe code was
modified to properly cache empty responses with the flag set.
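Schematically (a sketch of the branch in the upstream module; the exact
context is assumed):

    if (r->header_only) {

        if (!u->cacheable && !u->store) {
            ngx_http_upstream_finalize_request(r, u, rc);
            return;
        }

        /* the body is needed for the cache/store only: keep reading
         * from the upstream, but never send anything downstream */
        u->pipe->downstream_error = 1;
    }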
|
|
No functional changes.
|
|
Several warnings were silenced; notably, socket operations are now
checked against (ngx_socket_t) -1 instead of -1, as ngx_socket_t is
unsigned on win32 and gcc complains about the comparison.

With this patch, it's now possible to compile nginx using MinGW gcc,
with the options we normally compile with on win32.
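For illustration, the pattern the warnings pointed at:

    ngx_socket_t  s;

    s = ngx_socket(AF_INET, SOCK_STREAM, 0);

    if (s == (ngx_socket_t) -1) {
        /* ngx_socket_t is unsigned on win32, so comparing against a
         * plain -1 draws a signed/unsigned comparison warning */
    }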
|
|
With the previous code, p->temp_file->offset wasn't adjusted if a temp
file was written by the code in ngx_event_pipe_write_to_downstream()
after an EOF, resulting in the cache not being used with empty scgi and
uwsgi responses with Content-Length set to 0.

Fix is to call ngx_event_pipe_write_chain_to_temp_file() there instead
of calling ngx_write_chain_to_temp_file() directly.
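That is, schematically (return-value handling elided in this sketch):

    /* before: p->temp_file->offset was left unadjusted */
    ngx_write_chain_to_temp_file(p->temp_file, p->in);

    /* after: the event pipe wrapper also maintains p->temp_file->offset */
    ngx_event_pipe_write_chain_to_temp_file(p);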
|
|
The input filter might free a buffer if there is no data in it, and in
the case of the first buffer (used for the cache header and request
header, a.k.a. p->buf_to_file) this resulted in cache corruption: the
buffer memory was reused to read the upstream response before the
headers were written to disk.

Fix is to avoid moving pointers to the buffer start in
ngx_event_pipe_add_free_buf() if we were asked to free the buffer used
by p->buf_to_file.
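A sketch of the adjusted logic in ngx_event_pipe_add_free_buf() (close
to, but not literally, the patch):

    if (p->buf_to_file && b->start == p->buf_to_file->start) {
        /* the start of this buffer still holds the cache header and
         * the request header not yet written to disk */
        b->pos = p->buf_to_file->last;
        b->last = p->buf_to_file->last;

    } else {
        b->pos = b->start;
        b->last = b->start;
    }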
This fixes occasional cache file corruption, usually resulting
in "cache file ... has md5 collision" alerts.

Reported by Anatoli Marinov.
|
|
With the previous code a raw buffer might be lost if p->input_filter()
was called on a buffer without any data and used
ngx_event_pipe_add_free_buf() to return it to the free list. This could
eventually cause an "all buffers busy" problem, resulting in a
segmentation fault due to a null pointer dereference in
ngx_event_pipe_write_chain_to_temp_file().

In ngx_event_pipe_add_free_buf() the buffer was added to the list start
due to pos == last, and then "p->free_raw_bufs = cl->next" in
ngx_event_pipe_read_upstream() dropped both chain links to the buffer
from the p->free_raw_bufs list.

Fix is to move "p->free_raw_bufs = cl->next" before the call to
p->input_filter().
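That is, roughly (a sketch of the reordered fragment in
ngx_event_pipe_read_upstream()):

    cl = p->free_raw_bufs;

    /* detach the buffer first: p->input_filter() may hand it back via
     * ngx_event_pipe_add_free_buf(), which prepends it to
     * p->free_raw_bufs */
    p->free_raw_bufs = cl->next;
    cl->next = NULL;

    if (p->input_filter(p, cl->buf) == NGX_ERROR) {
        return NGX_ABORT;
    }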
|
|
If possible, we now just extend an already present file buffer in the
p->out chain instead of keeping an ngx_buf_t for each buffer we've
flushed to disk. This saves about 120 bytes of memory per buffer flushed
to disk, and resolves the high CPU usage observed in edge cases (due to
coalescing these buffers on send).
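Schematically (a sketch; "b" is the last file buffer already in p->out,
and "size" is the amount of data just flushed, both hypothetical names):

    if (b && b->file_last == p->temp_file->offset) {
        b->file_last += size;           /* just extend the file buffer */

    } else {
        b = ngx_calloc_buf(p->pool);    /* a new ngx_buf_t per buffer */
        /* ... set up the file buffer and append it to p->out ... */
    }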
|
|
1. In ngx_event_pipe_write_chain_to_temp_file() make sure to fully write
   all shadow buffers up to last_shadow. With this change recycled
   buffers cannot appear in p->out anymore. This also fixes segmentation
   faults observed due to ngx_event_pipe_write_chain_to_temp_file() not
   freeing any raw buffers while still returning NGX_OK.

2. In ngx_event_pipe_write_to_downstream() we now properly check the
   busy size as the size of the buffers, not the size of the data in
   them. This fixes situations where all available buffers became busy
   (including segmentation faults due to this).

3. The ngx_event_pipe_free_shadow_raw_buf() function is dropped. It's
   incorrect and not needed.
|
|
If the client closed the connection in
ngx_event_pipe_write_to_downstream(), buffers in the "out" chain were
lost. This caused a CPU hog if all available buffers ended up in the
"out" chain. Fix is to call ngx_chain_update_chains() before checking
the return code of the output filter, to avoid losing buffers from the
"out" chain.
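In other words (a sketch of the reordered calls in
ngx_event_pipe_write_to_downstream()):

    rc = p->output_filter(p->output_ctx, out);

    /* reclaim sent buffers first, even if the filter failed; otherwise
     * the links in "out" would be lost */
    ngx_chain_update_chains(p->pool, &p->free, &p->busy, &out, p->tag);

    if (rc == NGX_ERROR) {
        p->downstream_error = 1;
        return ngx_event_pipe_drain_chains(p);
    }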
Note that this situation (all available buffers in the "out" chain)
isn't normal; it should be prevented by the busy buffers limit. Though
right now it may happen with complex protocols like fastcgi. This should
be addressed separately.
|
|
As soon as ngx_event_pipe() has more data read from the upstream than
specified in p->length, it is passed to the input filter even if the
buffer isn't full yet. This allows processing data of known length
without relying on connection close to signal the end of data.

By default p->length is set to -1 in the upstream module, i.e. the end
of data is indicated by connection close. To allow setting it from
per-protocol handlers, the upstream input_filter_init() is now called in
buffered mode as well (not only in unbuffered mode).
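As a sketch, the length accounting at the end of an input filter then
looks like this (modeled on ngx_event_pipe_copy_input_filter(); details
assumed):

    if (p->length == -1) {
        return NGX_OK;                  /* length unknown: rely on close */
    }

    p->length -= b->last - b->pos;      /* account for the data received */

    if (p->length == 0) {
        p->upstream_done = 1;           /* the whole response has arrived */
    }

    return NGX_OK;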
|
|
The ngx_chain_update_chains() function needs a pool to free chain links
used for buffers with non-matching tags. Providing one helps to reduce
memory consumption for long-lived requests.
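The relevant fragment, schematically (inside ngx_chain_update_chains(),
while draining the busy list):

    if (cl->buf->tag != tag) {
        *busy = cl->next;
        ngx_free_chain(p, cl);          /* link goes back to the pool */
        continue;
    }

    cl->buf->pos = cl->buf->start;      /* matching tag: reuse buffer */
    cl->buf->last = cl->buf->start;

    *busy = cl->next;
    cl->next = *free;
    *free = cl;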
|
|
Setting read->eof to 0 seems to be just a typo. It appeared in the
nginx-0.0.1-2003-10-28-18:45:41 import (r164), while the identical code
in ngx_recv.c introduced in the same import does actually set read->eof
to 1.

Failure to set read->eof to 1 results in EOF not being generally
detectable from the connection flags. On the other hand, kqueue won't
report any further read events on such a connection, since we use
EV_CLEAR. This resulted in read timeouts if such a connection was cached
and used for another request.
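The corrected fragment, schematically (as in ngx_readv_chain() with
kqueue; a sketch):

    if (rev->available == 0 && rev->pending_eof) {
        rev->ready = 0;
        rev->eof = 1;       /* was erroneously set to 0 */
    }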
|
|
it seems this affected header-only FastCGI responses only:
proxied header-only responses were cached correctly
|
|
|
|
the previous commit did not fix it either
|
|
r841 did not fix it
|
|
*) Feature: the "restrict_host_names" directive was canceled.
*) Feature: the --with-cpu-opt=ppc64 configuration parameter.
*) Bugfix: on some condition the proxied connection with a client was
terminated prematurely.
Thanks to Vladimir Shutoff.
*) Bugfix: the "X-Accel-Limit-Rate" header line was not taken into
account if the request was redirected using the "X-Accel-Redirect"
header line.
*) Bugfix: the "post_action" directive ran only after a successful
completion of a request.
*) Bugfix: the proxied response body generated by the "post_action"
directive was transferred to a client.
|
|
*) Feature: the new 444 code of the "return" directive to close a
   connection.

*) Feature: the "so_keepalive" directive in the IMAP/POP3 proxy.

*) Bugfix: if there are unclosed connections, nginx now calls abort()
   only on graceful quit and with the "debug_points" directive active.
|
|
*) Feature: the IMAP/POP3 proxy supports STARTTLS and STLS.

*) Bugfix: the IMAP/POP3 proxy did not work with the select, poll, and
   /dev/poll methods.

*) Bugfix: in SSI handling.

*) Bugfix: now Solaris sendfilev() is not used to transfer the client
   request body to a FastCGI server via a unix domain socket.

*) Bugfix: the "auth_basic" directive did not disable authorization;
   the bug had appeared in 0.3.11.
|