|
Previously, bidirectional shutdown never worked, due to two issues
in the code:
1. The code only tested SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE
when there was an error in the error queue, which cannot happen.
The bug was introduced in an attempt to fix unexpected error logging
as reported with OpenSSL 0.9.8g
(http://mailman.nginx.org/pipermail/nginx/2008-January/003084.html).
2. The code never called SSL_shutdown() for the second time to wait for
the peer's close_notify alert.
This change fixes both issues.
Note that after this change bidirectional shutdown is expected to work for
the first time, so c->ssl->no_wait_shutdown now makes a difference. This
is not a problem for HTTP code which always uses c->ssl->no_wait_shutdown,
but might be a problem for stream and mail code, as well as 3rd party
modules.
To minimize the effect of the change, the timeout, which used to be 30
seconds and not configurable (though never actually used), is now set to
3 seconds. It is also expanded to apply to both SSL_ERROR_WANT_READ and
SSL_ERROR_WANT_WRITE, so the timeout is properly set if writing to the
socket buffer is not possible.
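For illustration, a minimal sketch (not the nginx code; the function name
and return convention are assumptions) of a bidirectional shutdown with
OpenSSL, calling SSL_shutdown() a second time to wait for the peer's
close_notify:

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    /* Returns 0 when the shutdown is complete, 1 when the caller should
     * wait for the socket and retry, -1 on a fatal error. */
    static int
    full_shutdown(SSL *ssl)
    {
        int  n, sslerr;

        ERR_clear_error();

        n = SSL_shutdown(ssl);         /* send our close_notify */

        if (n == 0) {
            n = SSL_shutdown(ssl);     /* wait for the peer's close_notify */
        }

        if (n == 1) {
            return 0;
        }

        sslerr = SSL_get_error(ssl, n);

        if (sslerr == SSL_ERROR_WANT_READ || sslerr == SSL_ERROR_WANT_WRITE) {
            return 1;                  /* wait for a read/write event, retry */
        }

        return -1;
    }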
|
|
|
|
The previous behaviour was to pass everything to the client, but this
seems to be suboptimal and causes issues (ticket #1695). The fix is to
drop the extra data instead, as naturally happens in most clients.
This change covers generic buffered and unbuffered filters as used
in the scgi and uwsgi modules. Appropriate input filter init
handlers are provided by the scgi and uwsgi modules to set corresponding
lengths.
Note that for responses to HEAD requests there is an exception:
we do allow any response length. This is because responses to HEAD
requests might be actual full responses, and it is up to nginx
to remove the response body. If caching is enabled, only full
responses matching the Content-Length header will be cached
(see b779728b180c).
|
|
Using SSL_CTX_set_verify(SSL_VERIFY_PEER) implies that OpenSSL will
send a certificate request during an SSL handshake, leading to unexpected
client certificate prompts in browsers whenever any client certificates
are installed. Given that ngx_ssl_trusted_certificate()
is called unconditionally by the ngx_http_ssl_module, this affected
all HTTPS servers. Broken by 699f6e55bbb4 (not released yet).
The fix is to set the verify callback in the ngx_ssl_trusted_certificate()
function without changing the verify mode.
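A minimal sketch of the approach (assumed names, not the actual nginx
code): the callback is installed while preserving whatever verify mode is
already set, so SSL_VERIFY_PEER and the resulting CertificateRequest are
not enabled as a side effect:

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    static int
    trusted_verify_callback(int ok, X509_STORE_CTX *store)
    {
        /* accept the chain here; the verification result is examined later */
        return 1;
    }

    static void
    set_trusted_verify_callback(SSL_CTX *ctx)
    {
        /* keep the current verify mode, only replace the callback */
        SSL_CTX_set_verify(ctx, SSL_CTX_get_verify_mode(ctx),
                           trusted_verify_callback);
    }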
|
|
When validating the second and further certificates, the ssl callback
could be called twice to report an error. After the first call the
client connection is terminated and its memory is released; both before
the second call and within it, that released connection memory is
accessed.
Errors triggering this behavior:
- failure to create the request
- failure to start resolving OCSP responder name
- failure to start connecting to the OCSP responder
The fix is to rearrange the code to eliminate the second call.
|
|
This ensures that certificate verification is properly logged to the
debug log during upstream server certificate verification. This should
help with debugging various certificate issues.
|
|
|
|
When enabled, certificate status is stored in cache and is used to validate
the certificate in future requests.
New directive ssl_ocsp_cache is added to configure the cache.
|
|
OCSP validation for client certificates is enabled by the "ssl_ocsp" directive.
The OCSP responder can optionally be specified by "ssl_ocsp_responder".
When a session is reused, the peer chain is not available for validation.
If the verified chain contains certificates from the peer chain that are
not available at the server, validation will fail.
|
|
Previously, only the first responder address was used for each stapling
update. Now, in case of a network or parsing error, the next address is
used. This also fixes an issue with unsupported responder address
families (ticket #1330).
|
|
|
|
|
|
The previous change, 1ce3f01a4355, incorrectly introduced processing of
the ngx_posted_next_events queue at the end of operation, effectively
making posted next events a nop, since at the end of an event loop
iteration the queue is always empty. The correct approach is to move
events to the ngx_posted_events queue at the start of an iteration, as
was done previously.
Further, in some cases the c->read event might already be in the
ngx_posted_events queue, and calling ngx_post_event() with the
ngx_posted_next_events queue won't do anything. To make sure the event
will be correctly placed into the ngx_posted_next_events queue
we now check if it is already posted.
|
|
Available bytes handling in SSL, introduced in 9d2ad2fb4423, relied on
the connection read handler being overwritten to set the ready flag and
the amount of available bytes. This approach, however, does not work
properly when the connection read handler is changed, for example, when
switching to the next pipelined request, and can result in unexpected
connection timeouts, see here:
http://mailman.nginx.org/pipermail/nginx-devel/2019-December/012825.html
The fix is to introduce ngx_event_process_posted_next() instead, which
sets ready and available regardless of how the event handler is set.
|
|
Added code to track the number of bytes available in the socket.
This makes it possible to avoid looping for a long time while working
with a fast enough peer, when data are added to the socket buffer faster
than we are able to read and process them.
When the kernel does not provide the number of bytes available, it is
retrieved using ioctl(FIONREAD) whenever a buffer is completely filled
by SSL_read().
It is assumed that the number of bytes returned by SSL_read() is close
to the number of bytes read from the socket, as we do not use
SSL compression. But even if this is not true for some reason, it is
not important, as we post an additional reading event anyway.
Note that data can be buffered at the SSL layer, so it is not possible
to simply stop reading at some point and wait until the event is
reported by the kernel again. This can only be done when there are
no data in the SSL buffers, and there is no good way to find out
whether that is the case.
Instead of trying to figure out whether the SSL buffers are empty, this
patch introduces events posted for the next event loop iteration - such
events will be processed only on the next event loop iteration,
after going into the kernel and retrieving additional events. This
seems to be a simple and reliable approach.
|
|
This makes it possible to avoid looping for a long time while working
with a fast enough peer when data are added to the socket buffer faster
than we are able to read and process them (ticket #1431). This is
basically what we already do on FreeBSD with kqueue, where information
about the number of bytes in the socket buffer is returned by
the kevent() call.
With other event methods rev->available is now set to -1 when the socket
is ready for reading. Later, in ngx_recv() and ngx_recv_chain(), if a
full buffer is received, the real number of bytes in the socket buffer is
retrieved using ioctl(FIONREAD). Reading more than this number of bytes
ensures that even with edge-triggered event methods the event will be
triggered again, so it is safe to stop processing of the socket and
switch to other connections.
Using ioctl(FIONREAD) only after reading a full buffer is an optimization.
With this approach we only call ioctl(FIONREAD) when there are at least
two recv()/readv() calls.
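A minimal sketch of the ioctl(FIONREAD) part (assumed helper name, not
the actual ngx_recv() code):

    #include <sys/ioctl.h>

    /* After recv() has filled the whole buffer, ask the kernel how many
     * bytes are still queued in the socket buffer.  Reading more than
     * this number guarantees that edge-triggered event methods will
     * report the socket again, so it is safe to stop and move on to
     * other connections. */
    static int
    socket_bytes_available(int fd)
    {
        int  nread;

        if (ioctl(fd, FIONREAD, &nread) == -1) {
            return -1;
        }

        return nread;
    }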
|
|
As long as there are data to read in the socket, yet the amount of data
is less than the total size of the buffers in the chain, this saves one
unneeded read() syscall. Before this change, reading only stopped if
ngx_ssl_recv() returned no data, that is, two read() syscalls in a row
returned EAGAIN.
|
|
In SSL connections, data can be buffered by the SSL layer, and it is
wrong to avoid doing c->recv_chain() if c->read->available is 0 and
c->read->pending_eof is set. And tests show that the optimization in
question indeed can result in incorrect detection of premature connection
close if upstream closes the connection without sending a close notify
alert at the same time. The fix is to disable the c->read->available
optimization for SSL connections.
|
|
Winsock uses ECONNABORTED instead of ECONNRESET in some cases.
For non-SSL connections this is already handled since baad3036086e.
Reported at
http://mailman.nginx.org/pipermail/nginx-ru/2019-August/062363.html.
|
|
|
|
If OCSP stapling was enabled with dynamic certificate loading, with some
OpenSSL versions (1.0.2o and older, 1.1.0h and older; fixed in 1.0.2p,
1.1.0i, 1.1.1) a segmentation fault might happen.
The reason is that during an abbreviated handshake the certificate
callback is not called, but the certificate status callback is
(https://github.com/openssl/openssl/issues/1662), leading to NULL being
returned from SSL_get_certificate().
The fix is to explicitly check the SSL_get_certificate() result.
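A minimal sketch of the check (assumed names, not the actual nginx
handler):

    #include <openssl/ssl.h>

    static int
    status_callback(SSL *ssl, void *arg)
    {
        X509  *cert;

        cert = SSL_get_certificate(ssl);

        if (cert == NULL) {
            /* abbreviated handshake: no certificate was selected,
             * so there is nothing to staple */
            return SSL_TLSEXT_ERR_NOACK;
        }

        /* ... look up and attach the stapled OCSP response for cert ... */

        return SSL_TLSEXT_ERR_OK;
    }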
|
|
An OCSP response uses the DER format and as such needs to be opened in
binary mode. This only has an effect under Win32.
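For illustration, a hedged sketch of loading a DER-encoded OCSP response
with the "rb" mode flag (assumed function name):

    #include <openssl/bio.h>
    #include <openssl/ocsp.h>

    static OCSP_RESPONSE *
    load_ocsp_response(const char *file)
    {
        BIO            *bio;
        OCSP_RESPONSE  *resp;

        /* "rb": binary mode; the "b" only changes behaviour on Win32,
         * where text mode would translate line endings and corrupt the
         * DER data */
        bio = BIO_new_file(file, "rb");
        if (bio == NULL) {
            return NULL;
        }

        resp = d2i_OCSP_RESPONSE_bio(bio, NULL);
        BIO_free(bio);

        return resp;
    }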
|
|
If X509_get_issuer_name() or X509_get_subject_name() returned NULL,
this could lead to a certificate reference leak. It cannot happen
in practice though, since each function returns an internal pointer
to a mandatory subfield of the certificate successfully decoded by
d2i_X509() during certificate message processing (closes #1751).
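A minimal sketch of the pattern being fixed (assumed names, not the
actual nginx code): the reference obtained from SSL_get_peer_certificate()
must be released on every error path:

    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static int
    check_peer_names(SSL *ssl)
    {
        X509       *cert;
        X509_NAME  *subject, *issuer;

        cert = SSL_get_peer_certificate(ssl);   /* increments the refcount */
        if (cert == NULL) {
            return -1;
        }

        subject = X509_get_subject_name(cert);
        issuer = X509_get_issuer_name(cert);

        if (subject == NULL || issuer == NULL) {
            X509_free(cert);                    /* avoid the reference leak */
            return -1;
        }

        /* ... use subject and issuer ... */

        X509_free(cert);
        return 0;
    }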
|
|
This makes it possible to provide certificates directly via variables
in ssl_certificate / ssl_certificate_key directives, without using
intermediate files.
|
|
It was accidentally introduced in 77436d9951a1 (1.15.9). In MSVC 2015
and more recent MSVC versions it triggers warning C4456 (declaration of
'pkey' hides previous local declaration). Previously, all such warnings
were resolved in 2a621245f4cf.
Reported by Steve Stevenson.
|
|
The SSL_OP_NO_CLIENT_RENEGOTIATION option was introduced in LibreSSL 2.5.1.
Unlike OpenSSL's SSL_OP_NO_RENEGOTIATION, it only disables client-initiated
renegotiation, and hence can be safely used on all SSL contexts.
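A minimal sketch of how such an option is typically applied (guarded by
an ifdef so it also compiles with libraries that do not define it; not
the actual nginx code):

    #include <openssl/ssl.h>

    static void
    disable_client_renegotiation(SSL_CTX *ctx)
    {
    #ifdef SSL_OP_NO_CLIENT_RENEGOTIATION
        /* LibreSSL 2.5.1+: only client-initiated renegotiation is disabled */
        SSL_CTX_set_options(ctx, SSL_OP_NO_CLIENT_RENEGOTIATION);
    #endif
        (void) ctx;
    }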
|
|
Notably this affects various allocation errors, and should generally
improve things if an allocation error actually happens during a callback.
Depending on the OpenSSL version, returning an error can result in
either SSL_R_CALLBACK_FAILED or SSL_R_CLIENTHELLO_TLSEXT error from
SSL_do_handshake(), so both errors were switched to the "info" level.
|
|
Dynamic certificates re-introduce the problem of incorrect session
reuse (AKA "virtual host confusion", CVE-2014-3616), since there are
no server certificates to generate the session id context from.
To prevent this, the session id context is now generated from the
ssl_certificate directives as specified in the configuration. This
approach prevents incorrect session reuse in most cases, while still
allowing sessions to be shared across multiple machines with
ssl_session_ticket_key set, as long as the configurations are identical.
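For illustration, a hedged sketch of deriving a session id context from
the configured certificate names; the exact inputs nginx hashes are not
shown here, and the names below are assumptions:

    #include <string.h>

    #include <openssl/sha.h>
    #include <openssl/ssl.h>

    static int
    set_session_id_context(SSL_CTX *ctx, char **cert_names, int n)
    {
        int            i;
        SHA_CTX        sha;
        unsigned char  digest[SHA_DIGEST_LENGTH];

        SHA1_Init(&sha);

        for (i = 0; i < n; i++) {
            SHA1_Update(&sha, cert_names[i], strlen(cert_names[i]));
        }

        SHA1_Final(digest, &sha);

        /* the context length is limited (32 bytes); a SHA-1 digest fits */
        return SSL_CTX_set_session_id_context(ctx, digest, sizeof(digest));
    }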
|
|
Passwords have to be copied to the configuration pool to be used
at runtime. Also, to prevent blocking on stdin (with "daemon off;")
an empty password list is provided.
To make things simpler, password handling was modified to allow
an empty array (with 0 elements and elts set to NULL) as an equivalent
of an array with 1 empty password.
|
|
|
|
This makes it possible to reuse certificate loading at runtime,
as introduced in the following patches.
Additionally, this improves error logging, so nginx will now log
human-friendly messages "cannot load certificate" instead of only
referring to sometimes cryptic names of OpenSSL functions.
|
|
The "(SSL:)" snippet currently appears in logs when nginx code uses
ngx_ssl_error() to log an error, but OpenSSL's error queue is empty.
This can happen either because the error wasn't in fact from OpenSSL,
or because OpenSSL did not indicate the error in the error queue
for some reason.
In particular, currently "(SSL:)" can be seen in errors at least in
the following cases:
- When SSL_write() fails due to a syscall error,
"[info] ... SSL_write() failed (SSL:) (32: Broken pipe)...".
- When loading a certificate with no data in it,
"[emerg] PEM_read_bio_X509_AUX(...) failed (SSL:)".
This can easily happen due to an additional empty line before
the end line, so all lines of the certificate are interpreted
as header lines.
- When trying to configure an unknown curve,
"[emerg] SSL_CTX_set1_curves_list("foo") failed (SSL:)".
Likely there are other cases as well.
With this change, "(SSL:...)" will only be added to the error message
if there is something in the error queue. This is expected to make
logs more readable in the above cases. Additionally, with this change
it is now possible to use ngx_ssl_error() to log errors when some
of the possible errors are not from OpenSSL and are not expected to
have anything in the error queue.
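A minimal sketch of the resulting behaviour (assumed names, with plain
snprintf() instead of nginx's logging machinery): the OpenSSL part of the
message is appended only when the error queue is non-empty:

    #include <stdio.h>

    #include <openssl/err.h>

    static void
    format_ssl_error(char *buf, size_t len, const char *text)
    {
        int  n;

        n = snprintf(buf, len, "%s", text);

        if (n < 0 || (size_t) n >= len) {
            return;
        }

        if (ERR_peek_error() != 0) {
            /* something was actually queued by OpenSSL */
            snprintf(buf + n, len - n, " (SSL: %s)",
                     ERR_error_string(ERR_get_error(), NULL));
        }
    }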
|
|
|
|
Checking multiple errors at once is a bad practice, as in general
it is not guaranteed that an object can be used after the error.
In this particular case, checking errors after multiple allocations
can result in excessive errors being logged when there is no memory
available.
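A minimal illustration of the point (hypothetical objects, not nginx
code): each allocation is checked on its own, so nothing is used after a
failure and at most one error is reported:

    #include <stdlib.h>

    static int
    create_pair(void **a, void **b)
    {
        *a = malloc(64);
        if (*a == NULL) {
            return -1;            /* report the error once, stop here */
        }

        *b = malloc(64);
        if (*b == NULL) {
            free(*a);             /* *a is still valid and must be released */
            *a = NULL;
            return -1;
        }

        return 0;
    }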
|
|
|
|
|
|
|
|
On Windows, connect() errors are only reported via exceptfds descriptor set
from select(). Previously exceptfds was set to NULL, and connect() errors
were not detected at all, so connects to closed ports were waiting till
a timeout occurred.
Since an ongoing connect() means that there will be a write event
active, the except descriptor set is copied from the write one. While
it is possible to construct the except descriptor set as a combination
of both the read and write descriptor sets, this looks unneeded.
With this change, connect() errors are now properly detected when using
select(). Note well that it is not possible to detect connect() errors
with WSAPoll() (see https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/).
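A minimal sketch of the descriptor set handling (assumed names, not the
actual ngx_select_module code):

    #include <winsock2.h>

    static int
    wait_for_events(fd_set *master_read, fd_set *master_write,
        struct timeval *tv)
    {
        fd_set  work_read, work_write, work_except;

        work_read = *master_read;
        work_write = *master_write;
        work_except = *master_write;    /* connect() errors show up here */

        /* the first argument is ignored by Winsock's select() */
        return select(0, &work_read, &work_write, &work_except, tv);
    }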
|
|
WSAPoll() is only available with Windows Vista and newer (and only
available during compilation if _WIN32_WINNT >= 0x0600). To make
sure the code works with Windows XP, we do not redefine _WIN32_WINNT,
but instead load WSAPoll() dynamically if it is not available during
compilation.
Also, sockets are not guaranteed to be small integers on Windows.
So an index array is used instead of NGX_USE_FD_EVENT to map
events to connections.
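A hedged sketch of the dynamic loading part (assumed names; on an SDK
where _WIN32_WINNT < 0x0600 the WSAPOLLFD structure itself would also
have to be declared locally, which is omitted here):

    #include <winsock2.h>

    typedef int (WSAAPI *wsapoll_pt)(WSAPOLLFD *fds, ULONG nfds, INT timeout);

    static wsapoll_pt  wsapoll;

    static int
    load_wsapoll(void)
    {
        HMODULE  ws2;

        ws2 = GetModuleHandleA("ws2_32.dll");
        if (ws2 == NULL) {
            return -1;
        }

        wsapoll = (wsapoll_pt) GetProcAddress(ws2, "WSAPoll");
        if (wsapoll == NULL) {
            return -1;    /* e.g. Windows XP: fall back to select() */
        }

        return 0;
    }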
|
|
Previously, the code incorrectly assumed "ngx_event_t *" elements
instead of "struct pollfd".
This is a mostly cosmetic change, as this code is currently never called.
|
|
A shared connection does not own its file descriptor, which means that
ngx_handle_read_event/ngx_handle_write_event calls should do nothing for it.
Currently the c->shared flag is checked in several places in the stream
proxy module prior to calling these functions; however, it was not done
everywhere. The missing checks could lead to calling
ngx_handle_read_event/ngx_handle_write_event on shared connections.
The problem manifested itself when using proxy_upload_rate and resulted
in either a duplicate file descriptor error (e.g. with epoll) or
incorrect further UDP packet processing (e.g. with kqueue).
The fix is to set and reset the event active flag in a way that prevents
ngx_handle_read_event/ngx_handle_write_event from scheduling socket events.
|
|
If SSL_write_early_data() returned SSL_ERROR_WANT_WRITE, stop further
reading using the newly introduced c->ssl->write_blocked flag, as
otherwise this would result in the SSL error "ssl3_write_bytes:bad
length". Eventually, normal reading will be restored by a read event
posted from a successful SSL_write_early_data().
While here, a "SSL_write_early_data: want write" debug message is added
on this path.
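A minimal sketch of the handling (assumed names; the write_blocked flag
here is a plain int rather than nginx's c->ssl->write_blocked bit):

    #include <openssl/ssl.h>

    /* Returns 1 if data was written, 0 if writing is blocked and the same
     * buf/len must be retried later, -1 on a fatal error. */
    static int
    send_early_data(SSL *ssl, const void *buf, size_t len,
        size_t *written, int *write_blocked)
    {
        int  n;

        n = SSL_write_early_data(ssl, buf, len, written);

        if (n == 1) {
            *write_blocked = 0;
            return 1;
        }

        if (SSL_get_error(ssl, n) == SSL_ERROR_WANT_WRITE) {
            /* stop further reading until the socket becomes writable,
             * otherwise the next SSL_write_early_data() call fails with
             * "ssl3_write_bytes:bad length" */
            *write_blocked = 1;
            return 0;
        }

        return -1;
    }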
|
|
|
|
The directive allows dropping the binding between a client and an
existing UDP stream session after receiving a specified number of
packets. The first packet from the same client address and port will
then start a new session. The old session continues to exist and will
terminate at the moment defined by the configuration: either after
receiving the expected number of responses, or after a timeout, as
specified by the "proxy_responses" and/or "proxy_timeout" directives.
By default, proxy_requests is zero (disabled).
|
|
The session handler may result in session termination, thus a connection
pool (from which c->udp was allocated) may be destroyed.
|
|
|
|
With the maximum version explicitly set, TLSv1.3 will not be
unexpectedly enabled if nginx compiled with OpenSSL 1.1.0 (without
TLSv1.3 support) is run with OpenSSL 1.1.1 (with TLSv1.3 support).
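A minimal sketch of pinning the maximum version (TLS1_2_VERSION is used
here just as an example value; in nginx the value would follow the
configured ssl_protocols):

    #include <openssl/ssl.h>

    static void
    set_max_tls_version(SSL_CTX *ctx)
    {
    #ifdef SSL_CTX_set_max_proto_version
        /* available since OpenSSL 1.1.0; 0 would mean "highest supported" */
        SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
    #endif
        (void) ctx;
    }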
|
|
The directives enable the use of the SO_KEEPALIVE option on
upstream connections. By default, the value is left unchanged.
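A minimal sketch of enabling the option on a socket (assumed helper name;
when the directive is not set, the option is simply left at the system
default):

    #include <sys/socket.h>

    static int
    set_keepalive(int fd)
    {
        int  on = 1;

        return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE,
                          (const void *) &on, sizeof(int));
    }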
|
|
|
|
The "no suitable signature algorithm" errors are reported by OpenSSL 1.1.1
when using TLSv1.3 if there are no shared signature algorithms. In
particular, this can happen if the client limits available signature
algorithms to something we don't have a certificate for, or to an empty
list. For example, the following command:
openssl s_client -connect 127.0.0.1:8443 -sigalgs rsa_pkcs1_sha1
will always result in the "no suitable signature algorithm" error,
as the "rsa_pkcs1_sha1" algorithm refers solely to signatures which
appear in certificates and is not defined for use in TLS 1.3 handshake
messages.
The SSL_R_NO_COMMON_SIGNATURE_ALGORITHMS error is what BoringSSL returns
in the same situation.
|