path: root/src/http/ngx_http_upstream_round_robin.h
2024-11-07  Upstream: copy upstream zone DNS valid time during config reload.  (Mini Hawthorne, 1 file, -0/+1)
Previously, all upstream DNS entries were re-resolved immediately on configuration reload. With a large number of upstreams, this creates a spike of DNS resolution requests that can overwhelm the DNS server or cause drops on the network. This patch preserves the TTL of previous resolutions across reloads by copying each upstream name's expiry time across configuration cycles; as a result, no additional resolutions are needed.
2024-11-07  Upstream: construct upstream peers from DNS SRV records.  (Dmitry Volyntsev, 1 file, -1/+2)
2024-11-07  Upstream: re-resolvable servers.  (Ruslan Ermilov, 1 file, -3/+83)
Specifying an upstream server by hostname together with the "resolve" parameter causes the hostname to be periodically re-resolved, with upstream servers added or removed as necessary. This requires a "resolver" defined at the "http" configuration block. The "resolver_timeout" parameter also affects when failed DNS requests will be retried; responses with NXDOMAIN are retried in 10 seconds.

An upstream has a configuration generation number that is incremented each time servers are added to or removed from the primary/backup lists. This number is remembered by the peer.init method, and if peer.get detects a change in configuration, it returns NGX_BUSY.

Each server has a reference counter: it is incremented by peer.get and decremented by peer.free. When a server is removed, it is taken off the list of servers and marked as a "zombie"; the memory allocated for a zombie peer is freed only when its reference count reaches zero.

Co-authored-by: Roman Arutyunyan <arut@nginx.com>
Co-authored-by: Sergey Kandaurov <pluknet@nginx.com>
Co-authored-by: Vladimir Homutov <vl@nginx.com>
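The zombie-peer reference counting described above can be sketched as follows; this is a hypothetical illustration (the struct, the `freed` test hook, and the function names are invented here, not nginx's actual API):

```c
#include <stdlib.h>

/* Sketch: peer.get increments the counter, peer.free decrements it,
 * and a removed ("zombie") peer is deallocated only once the counter
 * drops to zero. */

typedef struct {
    unsigned   refs;     /* incremented by peer.get, decremented by peer.free */
    unsigned   zombie;   /* set when the server is removed from the list */
    int       *freed;    /* test hook: set to 1 when memory is released */
} peer_ref_t;

static void
peer_get(peer_ref_t *p)
{
    p->refs++;
}

static void
peer_free(peer_ref_t *p)
{
    if (--p->refs == 0 && p->zombie) {
        *p->freed = 1;
        free(p);
    }
}

static void
peer_remove(peer_ref_t *p)
{
    p->zombie = 1;               /* unlink; keep memory while referenced */

    if (p->refs == 0) {
        *p->freed = 1;
        free(p);
    }
}
```

In-flight requests holding a reference keep the zombie's memory valid; only the last peer_free actually releases it.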
2020-11-27  Upstream: excluded down servers from the next_upstream tries.  (Ruslan Ermilov, 1 file, -0/+1)
Previously, the number of next_upstream tries included servers marked "down", resulting in a "no live upstreams" error with code 502 instead of the code derived from an attempt to connect to the last tried "up" server (ticket #2096).
2016-10-10  Modules compatibility: compatibility with NGX_HTTP_SSL.  (Maxim Dounin, 1 file, -1/+1)
With this change it is now possible to load modules compiled without the "--with-http_ssl_module" configure option into an nginx binary compiled with it, and vice versa (if a module doesn't use ssl-specific functions), assuming both use the "--with-compat" option.
2016-09-29  Introduced the NGX_COMPAT macro.  (Ruslan Ermilov, 1 file, -0/+3)
When enabled, some structures are padded to be size compatible with their NGINX Plus versions.
2016-09-29  Modules compatibility: down flag promoted to a bitmask.  (Maxim Dounin, 1 file, -1/+1)
It is to be used as a bitmask, with various bits set or reset when appropriate. Any bit set means that the peer should not be used; that is exactly what the current checks do, so no additional changes are required.
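As a minimal sketch of the idea (the bit names below are invented for illustration, not nginx's):

```c
/* Any nonzero bit in the "down" bitmask means the peer must not be
 * used, which is exactly the check existing code already performs. */

#define PEER_DOWN_CONF   0x1U   /* marked "down" in the configuration */
#define PEER_DOWN_OTHER  0x2U   /* e.g. set by a third-party module */

static int
peer_is_usable(unsigned down)
{
    return down == 0;           /* any bit set disqualifies the peer */
}
```

This lets third-party modules mark a peer unusable for their own reasons without colliding with the configuration's "down" flag.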
2016-09-29  Modules compatibility: upstream config field.  (Maxim Dounin, 1 file, -0/+1)
It is to be used to track the version of the upstream configuration used for request processing.
2016-09-29  Modules compatibility: slow start fields.  (Maxim Dounin, 1 file, -0/+2)
2016-09-22  Upstream: max_conns.  (Ruslan Ermilov, 1 file, -0/+1)
2016-07-25  Upstream: style, ngx_http_upstream_rr_peer_t.next moved.  (Maxim Dounin, 1 file, -2/+2)
2015-06-16  Upstream: fixed shared upstreams on win32.  (Ruslan Ermilov, 1 file, -0/+1)
2015-04-14  Upstream: the "zone" directive.  (Ruslan Ermilov, 1 file, -1/+48)
Upstreams with the "zone" directive are kept in shared memory, giving all worker processes a consistent view of them.
2015-04-14  Upstreams: locking.  (Ruslan Ermilov, 1 file, -0/+7)
2015-04-10  Upstream: store peers as a linked list.  (Ruslan Ermilov, 1 file, -4/+8)
This is an API change.
2015-04-10  Upstream: track the number of active connections to upstreams.  (Ruslan Ermilov, 1 file, -0/+2)
This also simplifies the implementation of the least_conn module.
2015-03-23  Removed stub implementation of win32 mutexes.  (Ruslan Ermilov, 1 file, -2/+0)
2014-06-02  Upstream: generic hash module.  (Roman Arutyunyan, 1 file, -0/+1)
2013-03-25  Upstream: removed rudiments of upstream connection caching.  (Ruslan Ermilov, 1 file, -2/+0)
This functionality is now provided by ngx_http_upstream_keepalive_module.
2012-06-03  Upstream: weights support in ip_hash balancer.  (Maxim Dounin, 1 file, -1/+5)
2012-05-14  Upstream: smooth weighted round-robin balancing.  (Maxim Dounin, 1 file, -0/+1)
For edge-case weights like { 5, 1, 1 } we now produce the sequence { a, a, b, a, c, a, a } instead of the previously produced { c, b, a, a, a, a, a }.

The algorithm is as follows: on each peer selection we increase the current_weight of each eligible peer by its weight, select the peer with the greatest current_weight, and reduce its current_weight by the total number of weight points distributed among the peers.

In the case of { 5, 1, 1 } weights this gives the following sequence of current_weight's:

     a  b  c
     0  0  0  (initial state)

     5  1  1  (a selected)
    -2  1  1

     3  2  2  (a selected)
    -4  2  2

     1  3  3  (b selected)
     1 -4  3

     6 -3  4  (a selected)
    -1 -3  4

     4 -2  5  (c selected)
     4 -2 -2

     9 -1 -1  (a selected)
     2 -1 -1

     7  0  0  (a selected)
     0  0  0

To preserve weight reduction in case of failures, the effective_weight variable was introduced; it usually matches the peer's weight but is reduced temporarily on peer failures.

This change also fixes a loop with backup servers and proxy_next_upstream http_404 (ticket #47), and the skipping of alive upstreams in some cases when there are multiple dead ones (ticket #64).
2012-01-18  Copyright updated.  (Maxim Konovalov, 1 file, -0/+1)
2011-10-12  Better recheck of dead upstream servers.  (Maxim Dounin, 1 file, -0/+1)
Previously, nginx marked a backend as live again as soon as fail_timeout (10s by default) had passed since the last failure. On the other hand, detecting a dead backend takes up to 60s (proxy_connect_timeout) in the typical situation where the backend is down and doesn't respond to any packets. This resulted in suboptimal behaviour (with default settings, up to 23% of requests were directed to the dead backend).

A more detailed description of the problem may be found here (in Russian): http://mailman.nginx.org/pipermail/nginx-ru/2011-August/042172.html

The fix is to allow only one request after fail_timeout passes, and to mark the backend as "live" only if this request succeeds.

Note that with the new code a backend will not be marked "live" until the "check" request completes, which may take a while in some specific workloads (e.g. streaming). This is believed to be acceptable.
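The one-probe gate can be sketched as follows; field and function names are hypothetical, not nginx's actual code:

```c
/* After fail_timeout expires, exactly one probe request is admitted,
 * and the peer is marked live again only when that probe succeeds. */

typedef struct {
    int   max_fails;     /* failures before the peer is considered down */
    int   fails;         /* consecutive failures seen so far */
    long  fail_timeout;  /* seconds to wait before re-probing */
    long  checked;       /* time of the last admitted probe */
} rr_peer_t;

/* Return nonzero if the peer may be tried at time `now`. */
static int
peer_may_be_tried(rr_peer_t *p, long now)
{
    if (p->max_fails == 0 || p->fails < p->max_fails) {
        return 1;                          /* not considered down */
    }

    if (now - p->checked > p->fail_timeout) {
        p->checked = now;                  /* admit a single probe */
        return 1;
    }

    return 0;                              /* down; no probe due yet */
}

/* Called when a probe (or any request) to the peer succeeds. */
static void
peer_succeeded(rr_peer_t *p)
{
    p->fails = 0;                          /* live again */
}
```

Recording the probe time in `checked` is what prevents a burst of requests from all being routed to a still-dead backend at once.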
2009-11-02  style fix  (Igor Sysoev, 1 file, -1/+0)
2007-11-27  proxy_pass variables support  (Igor Sysoev, 1 file, -0/+2)
2007-08-09  backup upstream servers  (Igor Sysoev, 1 file, -2/+7)
2007-07-28  fair upstream weight balancer  (Igor Sysoev, 1 file, -5/+3)
2007-07-10  fix segfault when session was freed twice  (Igor Sysoev, 1 file, -2/+5)
2006-12-24  style fix: remove trailing spaces  (Igor Sysoev, 1 file, -1/+1)
2006-12-04  upstream choice modules  (Igor Sysoev, 1 file, -0/+77)