path: root/src/http/ngx_http_upstream.c
2018-04-05  Upstream: fixed u->conf->preserve_output (ticket #1519).  (Maxim Dounin, 1 file, -6/+12)
Previously, ngx_http_upstream_process_header() might be called after we've finished reading response headers and switched to a different read event handler, leading to errors with gRPC proxying. Additionally, the u->conf->read_timeout timer might be re-armed during reading response headers (while this is expected to be a single timeout on reading the whole response header).
2018-04-03  Upstream: fixed ngx_http_upstream_test_next() conditions.  (Maxim Dounin, 1 file, -2/+18)
Previously, ngx_http_upstream_test_next() used an outdated condition to decide whether it would be possible to switch to a different server. It did not take into account restrictions on non-idempotent requests, requests with a non-buffered request body, and the next upstream timeout. For such requests, switching to the next upstream server was rejected later in ngx_http_upstream_next(), resulting in nginx's own error page being returned instead of the original upstream response.
2018-03-19  Fixed checking ngx_tcp_push() and ngx_tcp_nopush() return values.  (Ruslan Ermilov, 1 file, -1/+1)
No functional changes.
2018-03-17  Upstream: u->conf->preserve_output flag.  (Maxim Dounin, 1 file, -2/+4)
The flag can be used to continue sending request body even after we've got a response from the backend. In particular, this is needed for gRPC proxying of bidirectional streaming RPCs, and also to send control frames in other forms of RPCs.
2018-03-17  Upstream: u->request_body_blocked flag.  (Maxim Dounin, 1 file, -2/+19)
The flag indicates whether the last ngx_output_chain() call returned NGX_AGAIN. If the flag is set, we arm the u->conf->send_timeout timer. The flag complements the c->write->ready test and makes it possible to stop sending the request body in an output filter due to protocol-specific flow control.
2018-03-17  Upstream: trailers support, u->conf->pass_trailers flag.  (Maxim Dounin, 1 file, -0/+96)
Basic trailer headers support allows one to access response trailers via the $upstream_trailer_* variables. Additionally, the u->conf->pass_trailers flag was introduced. When the flag is set, trailer headers from the upstream response are passed to the client. Like normal headers, trailer headers will be hidden if present in u->conf->hide_headers_hash.
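For illustration only, a minimal sketch of how the new variable might be consumed when proxying gRPC; the listen port, backend address, log path, and log format name are hypothetical, while grpc_pass and $upstream_trailer_* are taken from the nginx documentation:

    http {
        # gRPC backends send the final status as the "grpc-status" trailer,
        # exposed here as $upstream_trailer_grpc_status.
        log_format grpc '$remote_addr "$request" $status '
                        'grpc-status=$upstream_trailer_grpc_status';

        server {
            listen 9000 http2;
            access_log /var/log/nginx/grpc.log grpc;

            location / {
                grpc_pass grpc://127.0.0.1:50051;
            }
        }
    }

Since grpc-status arrives as a trailer rather than a header, it is not visible through $upstream_http_*; the new $upstream_trailer_* variables are the way to reach it.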
2018-02-28  Generic subrequests in memory.  (Roman Arutyunyan, 1 file, -125/+1)
Previously, only the upstream response body could be accessed with the NGX_HTTP_SUBREQUEST_IN_MEMORY feature. Now any response body from a subrequest can be saved in a memory buffer. It is available as a single buffer in r->out and the buffer size is configured by the subrequest_output_buffer_size directive. Upstream, proxy and fastcgi code used to handle the old-style feature is removed.
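As a sketch, the buffer used for in-memory subrequests (for example, those issued by the SSI module) is sized with the directive mentioned above; the 16k value and the location name are arbitrary:

    location /ssi/ {
        ssi on;
        # Response bodies of in-memory subrequests must fit into this buffer.
        subrequest_output_buffer_size 16k;
    }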
2018-02-08  Basic support of the Link response header.  (Ruslan Ermilov, 1 file, -0/+5)
2018-01-30  Upstream: removed X-Powered-By from the list of special headers.  (Ruslan Ermilov, 1 file, -4/+0)
After 1e720b0be7ec, it's neither specially processed nor copied when redirecting with X-Accel-Redirect.
2018-01-11  Upstream: fixed "header already sent" alerts on backend errors.  (Maxim Dounin, 1 file, -3/+4)
Following ad3f342f14ba046c (1.9.13), it is possible that a request whose header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY, triggering an attempt to return an additional error response and the "header already sent" alert as a result.

In particular, it is trivial to reproduce the problem with a HEAD request and caching enabled. With caching enabled nginx will change HEAD to GET and will set u->pipe->downstream_error to suppress sending the response body to the client. When a backend-related error occurs (for example, proxy_read_timeout expires), ngx_http_finalize_upstream_request() will be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY).

Fix is to move u->pipe->downstream_error handling to a later point, where all special response codes are changed to NGX_ERROR.

Reported by Jan Prachar, http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.
2017-12-13  Retain CAP_NET_RAW capability for transparent proxying.  (Roman Arutyunyan, 1 file, -0/+6)
The capability is retained automatically in unprivileged worker processes after changing the UID if transparent proxying is enabled at least once in the nginx configuration. The feature is only available on Linux.
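Transparent proxying is what triggers retaining the capability; a minimal sketch of such a configuration (the upstream address is hypothetical, and the additional routing/firewall setup required on the host is not shown):

    location / {
        # Originate the upstream connection from the client's address;
        # this is what requires CAP_NET_RAW in the worker process.
        proxy_bind $remote_addr transparent;
        proxy_pass http://10.0.0.2:8080;
    }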
2017-12-01  Upstream: flush low-level buffers on write retry.  (Patryk Lesiewicz, 1 file, -1/+1)
If the data to write is bigger than what the socket can send, and the remainder is smaller than NGX_SSL_BUFSIZE, then SSL_write() fails with SSL_ERROR_WANT_WRITE. The remainder of the payload, however, is successfully copied to the low-level buffer and all the output chain buffers are flushed. This means that the retry logic doesn't work, because ngx_http_upstream_process_non_buffered_request() checks only whether there's anything in the output chain buffers and ignores the fact that something may be buffered in low-level parts of the stack.

Signed-off-by: Patryk Lesiewicz <patryk@google.com>
2017-10-11  Upstream: disabled upgrading in subrequests.  (Roman Arutyunyan, 1 file, -0/+7)
Upgrading an upstream connection is usually followed by reading from the client, which a subrequest is not allowed to do. Moreover, accessing the header_in request field while processing an upgraded connection ends up with a null pointer dereference, since the header_in buffer is only created for the main request.
2017-10-11  Upstream: fixed $upstream_status when upstream returns 503/504.  (Ruslan Ermilov, 1 file, -0/+5)
If proxy_next_upstream includes http_503/http_504, and upstream returns 503/504, $upstream_status converted this to 502 for any values except the last one.
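A configuration that exercises this code path might look as follows; the upstream addresses are illustrative only:

    upstream backend {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        location / {
            proxy_pass http://backend;
            # $upstream_status (e.g. in log_format) lists the status received
            # from each tried server, such as "503, 200" after a retry.
            proxy_next_upstream error timeout http_503 http_504;
        }
    }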
2017-10-10  Upstream: fixed error handling of stale and revalidated cache send.  (Sergey Kandaurov, 1 file, -6/+36)
The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates that the upstream was already finalized in ngx_http_upstream_process_headers(). It was treated as a generic error, which resulted in duplicate finalization. Handled NGX_HTTP_UPSTREAM_INVALID_HEADER from ngx_http_upstream_cache_send(). Previously, it could be passed to ngx_http_upstream_finalize_request(), and since it's below NGX_HTTP_SPECIAL_RESPONSE, a client connection could get stuck.
2017-10-09  Upstream: even better handling of invalid headers in cache files.  (Maxim Dounin, 1 file, -0/+1)
When parsing of headers in a cache file fails, already parsed headers need to be cleared, and protocol state needs to be reinitialized. To do so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will be called. This change complements improvements in 46ddff109e72.
2017-10-03  Cache: fixed caching of intercepted errors (ticket #1382).  (Maxim Dounin, 1 file, -5/+15)
When caching intercepted errors, previous behaviour was to use proxy_cache_valid times specified, regardless of various cache control headers present in the response. Fix is to check u->cacheable and use u->cache->valid_sec as set by various cache control response headers, similar to how we do this in the normal caching code path.
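For example, with error interception and caching both enabled, cache control headers on the intercepted error response now take effect; the cache zone "one" is assumed to be declared elsewhere with proxy_cache_path, and the backend address and times are arbitrary:

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache one;
        proxy_intercept_errors on;
        error_page 404 /404.html;
        # Now only a fallback: Cache-Control/Expires from the backend,
        # if present, determine how long the intercepted 404 is cached.
        proxy_cache_valid 404 10m;
    }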
2017-10-02  Upstream: better handling of invalid headers in cache files.  (Maxim Dounin, 1 file, -0/+10)
If cache file is truncated, it is possible that u->process_header() will return NGX_AGAIN. Added appropriate handling of this case by changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.

Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER cases at the "crit" level. Note that this will result in duplicate logging in case of NGX_HTTP_UPSTREAM_INVALID_HEADER. While this is something better to avoid, it is considered to be an overkill to implement cache-specific error logging in u->process_header().

Additionally, u->buffer.start is now reset to be able to receive a new response, and u->cache_status set to MISS to provide the value in the $upstream_cache_status variable, much like it happens on other cache file errors detected by ngx_http_file_cache_read(), instead of HIT, which is believed to be misleading.
2017-08-23  Upstream: unconditional parsing of last_modified_time.  (Maxim Dounin, 1 file, -17/+3)
This fixes at least the following cases, where no last_modified_time (assuming caching is not enabled) resulted in incorrect behaviour:

- slice filter and If-Range requests (ticket #1357);
- If-Range requests with proxy_force_ranges;
- expires modified.
2017-08-01  Variables: macros for null variables.  (Ruslan Ermilov, 1 file, -1/+1)
No functional changes.
2017-07-19  Upstream: keep request body file from removal if requested.  (Roman Arutyunyan, 1 file, -1/+7)
The new request flag "preserve_body" indicates that the request body file should not be removed by the upstream module because it may be used later by a subrequest. The flag is set by the SSI (ticket #585), addition and slice modules. Additionally, it is also set by the upstream module when a background cache update subrequest is started to prevent the request body file removal after an internal redirect. Only the main request is now allowed to remove the file.
2017-07-17  Parenthesized ASCII-related calculations.  (Valentin Bartenev, 1 file, -3/+3)
This also fixes potential undefined behaviour in the range and slice filter modules, caused by local overflows of signed integers in expressions.
2017-06-22  Upstream: introduced ngx_http_upstream_ssl_handshake_handler().  (Maxim Dounin, 1 file, -14/+24)
This change reworks 13a5f4765887 to only run posted requests once, with nothing on the stack. Running posted requests with other request functions on the stack may result in a use-after-free in case of errors, similar to the one reported in #788.

To run posted requests only once, a separate function was introduced to be used as the SSL handshake handler in c->ssl->handler, ngx_http_upstream_ssl_handshake_handler(). The ngx_http_run_posted_requests() function is only called in this function, and not in ngx_http_upstream_ssl_handshake(), which may be called directly on the stack.

Additionally, ngx_http_upstream_ssl_handshake_handler() now does appropriate debug logging of the current subrequest, similar to what is done in other event handlers.
2017-06-14  Upstream: fixed running posted requests (ticket #788).  (Roman Arutyunyan, 1 file, -1/+6)
Previously, the upstream resolve handler always called ngx_http_run_posted_requests() to run posted requests after processing the resolver response. However, if the handler was called directly from the ngx_resolve_name() function (for example, if the resolver response was cached), running posted requests from the handler could lead to the following errors:

- If the request was scheduled for termination, it could actually be terminated in the resolve handler. Upper stack frames could reference the freed request object in this case.

- If a significant number of requests were posted, and for each of them the resolve handler was called directly from the ngx_resolve_name() function, posted requests could be run recursively and lead to stack overflow.

Now ngx_http_run_posted_requests() is only called from asynchronously invoked resolve handlers.
2017-05-31  Upstream: style.  (Piotr Sikora, 1 file, -1/+1)
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
2017-05-26  Introduced ngx_tcp_nodelay().  (Ruslan Ermilov, 1 file, -76/+14)
2017-05-25  Background subrequests for cache updates.  (Roman Arutyunyan, 1 file, -4/+4)
Previously, a background cache update might not work as expected, making the client wait for it to complete before receiving the final part of a stale response. This could happen if the response could not be sent to the client socket in one filter chain call. Now the background cache update is done in a background subrequest. This type of subrequest does not block any other subrequests or the main request.
2017-04-20  Cleaned up r->headers_out.headers allocation error handling.  (Sergey Kandaurov, 1 file, -5/+6)
If initialization of a header failed for some reason after ngx_list_push(), leaving the header as is can result in uninitialized memory access by the header filter or the log module. The fix is to clear partially initialized headers in case of errors. For the Cache-Control header, the fix is to postpone pushing r->headers_out.cache_control until its value is completed.
2017-03-24  Upstream: allow recovery from "429 Too Many Requests" response.  (Piotr Sikora, 1 file, -0/+5)
This change adds "http_429" parameter to "proxy_next_upstream" for retrying rate-limited requests, and to "proxy_cache_use_stale" for serving stale cached responses after being rate-limited.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>
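A sketch of the new parameter in both directives; the upstream name and the cache zone "one" are assumed to be defined elsewhere in the configuration:

    location / {
        proxy_pass http://backend;
        proxy_cache one;
        # Retry a rate-limited request on another server ...
        proxy_next_upstream error timeout http_429;
        # ... or serve a stale cached response instead.
        proxy_cache_use_stale error timeout http_429;
    }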
2017-04-02  Moved handling of wev->delayed to the connection event handler.  (Maxim Dounin, 1 file, -49/+10)
With post_action or subrequests, it is possible that the timer set for wev->delayed will expire while the active subrequest write event handler is not ready to handle this. This results in request hangs as observed with limit_rate / sendfile_max_chunk and post_action (ticket #776) or subrequests (ticket #1228). Moving the handling to the connection event handler fixes the hangs observed, and also slightly simplifies the code.
2017-03-28  Threads: fixed request hang with aio_write and subrequests.  (Maxim Dounin, 1 file, -2/+12)
If the subrequest is already finalized, the handler set with aio_write may still be used by sendfile in threads when using range requests (see also e4c1f5b32868, and the original note in 9fd738b85fad). Calling already finalized subrequest's r->write_event_handler in practice results in request hang in some cases. Fix is to trigger connection event handler if the subrequest was already finalized.
2017-03-06  Added missing "static" specifiers found by gcc -Wtraditional.  (Ruslan Ermilov, 1 file, -1/+1)
2017-03-02  Added missing static specifiers.  (Eran Kornblau, 1 file, -1/+1)
2017-02-10  Upstream: read handler cleared on upstream finalization.  (Maxim Dounin, 1 file, -0/+2)
With "proxy_ignore_client_abort off" (the default), upstream module changes r->read_event_handler to ngx_http_upstream_rd_check_broken_connection(). If the handler is not cleared during upstream finalization, it can be triggered later, causing unexpected effects, if, for example, a request was redirected to a different location using error_page or X-Accel-Redirect.

In particular, it makes "proxy_ignore_client_abort on" non-working after a redirection in a configuration like this:

    location = / {
        error_page 502 = /error;
        proxy_pass http://127.0.0.1:8082;
    }

    location /error {
        proxy_pass http://127.0.0.1:8083;
        proxy_ignore_client_abort on;
    }

It is also known to cause segmentation faults with aio used, see http://mailman.nginx.org/pipermail/nginx-ru/2015-August/056570.html.

Fix is to explicitly set r->read_event_handler to ngx_http_block_reading() during upstream finalization, similar to how it is done in the request body reading code and in the limit_req module.
2017-02-10  Upstream: proxy_cache_background_update and friends.  (Roman Arutyunyan, 1 file, -2/+45)
The directives enable cache updates in subrequests.
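A typical combination, shown as a sketch; the upstream name and the cache zone "one" are assumed to be defined elsewhere:

    location / {
        proxy_pass http://backend;
        proxy_cache one;
        # Serve the stale entry immediately and refresh it in a
        # background subrequest instead of making the client wait.
        proxy_cache_use_stale updating;
        proxy_cache_background_update on;
    }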
2016-12-22  Cache: support for stale-while-revalidate and stale-if-error.  (Roman Arutyunyan, 1 file, -20/+77)
Previously, there was no way to enable the proxy_cache_use_stale behavior by reading the backend response. Now the stale-while-revalidate and stale-if-error Cache-Control extensions (RFC 5861) are supported. They specify how long a stale response can be used when a cache entry is being updated, or in case of an error.
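No extra configuration is needed beyond enabling the cache; the behavior is driven by the backend response. A sketch, with a hypothetical backend address and an example of the header the backend is assumed to send:

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache one;
        # The backend is assumed to respond with, e.g.:
        #   Cache-Control: max-age=60, stale-while-revalidate=30, stale-if-error=300
        # allowing a stale entry to be served for up to 30s while updating,
        # or for up to 300s if the backend returns an error.
    }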
2017-01-31  Variables: generic prefix variables.  (Dmitry Volyntsev, 1 file, -2/+12)
2017-01-20  Upstream: fixed cache corruption and socket leaks with aio_write.  (Maxim Dounin, 1 file, -0/+15)
The ngx_event_pipe() function wasn't called on write events with wev->delayed set. As a result, threaded writing results weren't properly collected in ngx_event_pipe_write_to_downstream() when a write event was triggered for a completed write. Further, this wasn't detected, as p->aio was reset by a thread completion handler, and results were later collected in ngx_event_pipe_read_upstream() instead of scheduling a new write of additional data. If this happened on the last reading from an upstream, last part of the response was never written to the cache file.

Similar problems might also happen in case of timeouts when writing to client, as this also results in ngx_event_pipe() not being called on write events. In this scenario socket leaks were observed.

Fix is to check if p->writing is set in ngx_event_pipe_read_upstream(), and therefore collect results of previous write operations in case of read events as well, similar to how we do so in ngx_event_pipe_write_downstream(). This is enough to fix the wev->delayed case. Additionally, we now call ngx_event_pipe() from ngx_http_upstream_process_request() if there are uncollected write operations (p->writing and !p->aio). This also fixes the wev->timedout case.
2016-12-22  Fixed missing "Location" field with some relative redirects.  (Ruslan Ermilov, 1 file, -2/+2)
Relative redirects did not work with directory redirects and auto redirects issued by nginx.
2016-11-14  Upstream: handling of upstream SSL handshake timeouts.  (Maxim Dounin, 1 file, -0/+7)
Previously SSL handshake timeouts were not properly logged, and resulted in 502 errors instead of 504 (ticket #1126).
2016-11-03  Cache: prefix-based temporary files.  (Maxim Dounin, 1 file, -2/+3)
On Linux, the rename syscall can be slow due to a global file system lock, acquired for the entire rename operation, unless both old and new files are in the same directory. To address this, temporary files are now created in the same directory as the expected resulting cache file when using the "use_temp_path=off" parameter. This change mostly reverts 99639bfdfa2a and 3281de8142f5, restoring the behaviour as of a9138c35120d (with minor changes).
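The parameter in question, as a sketch; the cache path, levels and zone size below are arbitrary:

    # With use_temp_path=off, temporary files are created next to the
    # final cache file, so the rename stays within one directory.
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m
                     use_temp_path=off;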
2016-11-03  Upstream: avoid holding a cache node with upgraded connections.  (Maxim Dounin, 1 file, -0/+17)
Holding a cache node lock doesn't make sense as we can't use caching anyway, and results in "ignore long locked inactive cache entry" alerts if a node is locked for a long time. The same is done for unbuffered connections, as they can be alive for a long time as well.
2016-11-02  Cache: proxy_cache_max_range_offset and friends.  (Dmitry Volyntsev, 1 file, -0/+55)
It configures a threshold in bytes, above which client range requests are not cached. In such a case the client's Range header is passed directly to a proxied server.
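A sketch of the directive; the location and the 1-megabyte threshold are arbitrary, and the cache zone "one" is assumed to exist:

    location /video/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache one;
        # Range requests starting beyond 1 MB bypass the cache;
        # the client's Range header is then sent to the backend as is.
        proxy_cache_max_range_offset 1048576;
    }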
2016-10-17  Upstream: removed ngx_http_upstream_srv_conf_t.default_port.  (Ruslan Ermilov, 1 file, -2/+0)
This is an API change.
2016-10-17  Upstream: don't consider default_port when matching upstreams.  (Ruslan Ermilov, 1 file, -6/+0)
The only thing the default_port comparison did in the current code was to prevent implicit upstreams to the same address/port from being aliased for http and https, e.g.:

    proxy_pass http://10.0.0.1:12345;
    proxy_pass https://10.0.0.1:12345;

This is inconsistent because it doesn't work for a similar case with uwsgi_pass:

    uwsgi_pass uwsgi://10.0.0.1:12345;
    uwsgi_pass suwsgi://10.0.0.1:12345;

or with an explicit upstream:

    upstream u {
        server 10.0.0.1:12345;
    }

    proxy_pass http://u;
    proxy_pass https://u;

Before c9059bd5445b, default_port comparison was needed to differentiate implicit upstreams in

    proxy_pass http://example.com;

and

    proxy_pass https://example.com;

as u->port was not set.
2016-10-17  Upstream: consistently initialize explicit upstreams.  (Ruslan Ermilov, 1 file, -0/+2)
When an upstream{} block follows a proxy_pass reference to it, such an upstream inherited port and default_port settings from proxy_pass. This was different from when they came in another order (see ticket #1059). Explicit upstreams should not have port and default_port in any case.

This fixes the following case:

    server {
        location / {
            proxy_pass http://u;
        }
        ...
    }

    upstream u {
        server 127.0.0.1;
    }

    server {
        location / {
            proxy_pass https://u;
        }
        ...
    }

but not the following:

    server {
        location / {
            proxy_pass http://u;
        }
        ...
    }

    server {
        location / {
            proxy_pass https://u;
        }
        ...
    }

    upstream u {
        server 127.0.0.1;
    }
2016-10-31  Upstream: do not unnecessarily create per-request upstreams.  (Ruslan Ermilov, 1 file, -17/+17)
If proxy_pass (and friends) with variables evaluates an upstream specified with literal address, nginx always created a per-request upstream. Now, if there's a matching upstream specified in the configuration (either implicit or explicit), it will be used instead.
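As an illustration of the intended effect, under the assumption that the matching is by the evaluated literal address (the addresses below are hypothetical): the variable-based proxy_pass used to create a per-request upstream on every request even though it always evaluates to the same address; now it can reuse the implicit upstream created for the first location:

    server {
        location /static/ {
            # creates an implicit upstream for 127.0.0.1:8080 at startup
            proxy_pass http://127.0.0.1:8080;
        }

        location /dynamic/ {
            set $target http://127.0.0.1:8080;
            # evaluated at run time; now matched against existing upstreams
            proxy_pass $target;
        }
    }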
2016-10-19  SSL: compatibility with BoringSSL.  (Maxim Dounin, 1 file, -1/+4)
BoringSSL changed SSL_set_tlsext_host_name() to be a real function with a (const char *) argument, so it now triggers a warning due to conversion from (u_char *). Added an explicit cast to silence the warning. Prodded by Piotr Sikora, Alessandro Ghedini.
2016-10-14  Upstream: hide_headers_hash handling at http level.  (Maxim Dounin, 1 file, -1/+17)
When headers to hide are set at the "http" level and not redefined in a server block, we now preserve the compiled headers hash in the "http" section configuration, so that the hash is inherited by all servers.
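For instance, headers hidden at the "http" level are now compiled into a hash once and inherited by every server{} that does not redefine them; the header names and backend address below are only examples:

    http {
        proxy_hide_header X-Powered-By;
        proxy_hide_header X-Internal-Header;

        server {
            location / {
                # inherits the hide_headers hash compiled at the http level
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }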
2016-10-14  Upstream: hide_headers_hash inherited regardless of cache settings.  (Maxim Dounin, 1 file, -6/+1)
Dependency on cache settings existed prior to 2728c4e4a9ae (0.8.44) as Set-Cookie header was automatically hidden from responses when using cache. This is no longer the case, and hide_headers_hash can be safely inherited regardless of cache settings.