Age | Commit message | Author |
|
by using the API instead of accessing an internal structure.
This is required starting with OpenSSL 1.1.0-pre3.
Closes #650
|
|
libssh2_scp_recv2 is introduced in libssh2 1.7.0 - to be released "any
day now".
Closes #451
|
|
Bug: https://github.com/curl/curl/pull/651
|
|
On 32bit systems, make sure we don't overflow and return funky values
for very large time differences.
Reported-by: Anders Bakken
Closes #646
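A minimal sketch of the clamping idea (a hypothetical helper, not curl's
actual timeval code): when the millisecond difference does not fit in a
long on a 32-bit system, return the clamped limit instead of an
overflowed value.

    #include <limits.h>
    #include <time.h>

    /* Hypothetical helper: millisecond difference between two timestamps,
     * clamped so it cannot overflow a 32-bit long. */
    static long diff_ms_clamped(time_t newer, time_t older)
    {
      double diff = difftime(newer, older) * 1000.0;
      if(diff > (double)LONG_MAX)
        return LONG_MAX;
      if(diff < (double)LONG_MIN)
        return LONG_MIN;
      return (long)diff;
    }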
|
|
It is wasteful to search it backwards if we look for _any_ slash.
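For illustration only (generic C, not the actual curl code): a forward
scan answers "is there any slash?" without first walking to the end of
the string the way a backward search must.

    #include <stdbool.h>
    #include <string.h>

    /* A forward search stops at the first '/' it meets; strrchr() is only
     * needed when the *last* slash matters. */
    static bool has_slash(const char *path)
    {
      return strchr(path, '/') != NULL;
    }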
|
|
We only care if at least one cipher-suite is enabled, so it does
not make any sense to iterate till the end and count all enabled
cipher-suites.
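A generic sketch of the early-exit idea (the struct and field names are
illustrative, not the backend's real types): return as soon as one
enabled suite is found instead of counting them all.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical cipher-suite table entry. */
    struct suite {
      int id;
      bool enabled;
    };

    /* True as soon as any suite is enabled; no full count needed. */
    static bool any_suite_enabled(const struct suite *suites, size_t count)
    {
      size_t i;
      for(i = 0; i < count; i++) {
        if(suites[i].enabled)
          return true;
      }
      return false;
    }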
|
|
|
|
Closes #626
|
|
Since we didn't keep the input argument around after having called
mbedtls, it could end up accessing the wrong memory when figuring out
the ALPN protocols.
Closes #642
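A hedged sketch of the lifetime issue: mbedtls_ssl_conf_alpn_protocols()
stores the pointer it is given rather than copying the list, so the array
must live as long as the TLS session (the struct and field names below are
illustrative, not curl's own).

    #include <mbedtls/ssl.h>

    /* Illustrative per-connection state; the ALPN list must not live on
     * the stack of the function that configures it. */
    struct tls_backend_data {
      mbedtls_ssl_config config;
      const char *alpn[3];      /* kept for the whole connection */
    };

    static int setup_alpn(struct tls_backend_data *backend)
    {
      backend->alpn[0] = "h2";
      backend->alpn[1] = "http/1.1";
      backend->alpn[2] = NULL;  /* the list is NULL-terminated */

      /* mbedTLS keeps this pointer; passing a short-lived array here was
       * the kind of bug this commit fixes. */
      return mbedtls_ssl_conf_alpn_protocols(&backend->config, backend->alpn);
    }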
|
|
As of https://boringssl-review.googlesource.com/#/c/6980/, almost all of
the BoringSSL #ifdefs in cURL should be unnecessary:
- BoringSSL provides no-op compatibility stubs which replace most
#ifdefs.
- DES_set_odd_parity has been in BoringSSL for nearly a year now. Remove
the compatibility codepath.
- With a small tweak to an extend_key_56_to_64 call, the NTLM code
builds fine.
- Switch OCSP-related #ifdefs to the more generally useful
OPENSSL_NO_OCSP.
The only #ifdefs which remain are Curl_ossl_version and the #undefs to
work around OpenSSL and wincrypt.h name conflicts. (BoringSSL leaves
that to the consumer. The in-header workaround makes things sensitive to
include order.)
This change errs on the side of removing conditionals despite many of
the restored codepaths being no-ops. (BoringSSL generally adds no-op
compatibility stubs when possible. OPENSSL_VERSION_NUMBER #ifdefs are
bad enough!)
Closes #640
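For illustration, one way such an OCSP guard can look after the switch (a
simplified sketch, not the exact curl code; it assumes the build defines
OPENSSL_NO_OCSP whenever OCSP support is unavailable):

    #include <openssl/ssl.h>

    /* Request OCSP stapling only when the library ships OCSP support. */
    static void maybe_request_ocsp(SSL *ssl)
    {
    #ifndef OPENSSL_NO_OCSP
      SSL_set_tlsext_status_type(ssl, TLSEXT_STATUSTYPE_ocsp);
    #else
      (void)ssl; /* OCSP not available in this build */
    #endif
    }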
|
|
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
It also turned out that the code meant to strip off trailing space from
cookie names didn't work. That is fixed now.
Test case 8 was modified to verify both these changes.
Closes #639
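A small generic sketch of the trailing-space stripping (not the actual
cookie.c code):

    #include <ctype.h>
    #include <string.h>

    /* Trim trailing whitespace from a cookie name in place. */
    static void strip_trailing_space(char *name)
    {
      size_t len = strlen(name);
      while(len && isspace((unsigned char)name[len - 1]))
        name[--len] = '\0';
    }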
|
|
When trying to verify a peer without having any root CA certificates
set, this makes libcurl use the TLS library's built in default as
fallback.
Closes #569
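With the OpenSSL backend, for instance, the fallback can be expressed
with the library's built-in default locations (a sketch, not the exact
libcurl logic):

    #include <openssl/ssl.h>

    /* If no CA file/path was set by the application, fall back to the
     * TLS library's own default verify locations. */
    static int setup_ca(SSL_CTX *ctx, const char *cafile, const char *capath)
    {
      if(cafile || capath)
        return SSL_CTX_load_verify_locations(ctx, cafile, capath);
      return SSL_CTX_set_default_verify_paths(ctx);
    }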
|
|
RFC 7230 says we should stop. Firefox already stopped.
Bug: https://github.com/curl/curl/issues/633
Reported-By: Brad Fitzpatrick
Closes #633
|
|
sk_X509_EXTENSION_num may return an unsigned integer; however, the value
will fit in an int.
Bug: https://github.com/curl/curl/commit/dd1b44c#commitcomment-15913896
Reported-by: Gisle Vanem
|
|
.. also fix a conversion bug in the unused function
curl_win32_ascii_to_idn().
And remove wprintfs on error (Jay).
Bug: https://github.com/curl/curl/pull/637
|
|
|
|
Free an existing domain before replacing it.
Bug: https://github.com/curl/curl/issues/635
Reported-by: silveja1@users.noreply.github.com
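A minimal sketch of the fix's pattern (generic code, not the actual
cookie handling):

    #include <stdlib.h>
    #include <string.h>

    /* Replace a previously stored domain without leaking the old copy. */
    static int set_domain(char **stored, const char *newdomain)
    {
      char *copy = strdup(newdomain);
      if(!copy)
        return 1;      /* out of memory */
      free(*stored);   /* harmless when *stored is still NULL */
      *stored = copy;
      return 0;
    }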
|
|
Closes #632
|
|
|
|
|
|
It isn't used by the code under current conditions, but for safety it seems
sensible to at least not crash on such input.
Extended unit test 1395 to verify this, as well as a plain "/" input.
|
|
|
|
Closes https://github.com/bagder/curl/pull/618
|
|
Proxy NTLM authentication should compare credentials when
re-using a connection, similar to host authentication, as it
authenticates the connection.
Example:
curl -v -x http://proxy:port http://host/ -U good_user:good_pwd
--proxy-ntlm --next -x http://proxy:port http://host/
[-U fake_user:fake_pwd --proxy-ntlm]
CVE-2016-0755
Bug: http://curl.haxx.se/docs/adv_20160127A.html
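A hedged sketch of the connection-matching rule (struct and field names
are illustrative, not curl's internals): an NTLM-authenticated proxy
connection is only reused when the proxy credentials match.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative cached-connection candidate. */
    struct conn_candidate {
      bool proxy_ntlm;          /* connection was NTLM-authenticated */
      const char *proxy_user;
      const char *proxy_passwd;
    };

    static bool same_cred(const char *a, const char *b)
    {
      if(!a || !b)
        return a == b;          /* both unset counts as a match */
      return strcmp(a, b) == 0;
    }

    /* NTLM authenticates the connection itself, so credentials must match
     * before the connection may be picked for another transfer. */
    static bool proxy_conn_reusable(const struct conn_candidate *conn,
                                    const char *want_user,
                                    const char *want_passwd)
    {
      if(!conn->proxy_ntlm)
        return true;            /* not credential-bound */
      return same_cred(conn->proxy_user, want_user) &&
             same_cred(conn->proxy_passwd, want_passwd);
    }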
|
|
- Switch from verifying a pinned public key in a callback during the
certificate verification to inline after the certificate verification.
The callback method had three problems:
1. If a pinned public key didn't match, CURLE_SSL_PINNEDPUBKEYNOTMATCH
was not returned.
2. If peer certificate verification was disabled the pinned key
verification did not take place as it should.
3. (related to #2) If there was no certificate of depth 0 the callback
would not have checked the pinned public key.
Though all those problems could have been fixed it would have made the
code more complex. Instead we now verify inline after the certificate
verification in mbedtls_connect_step2.
Ref: http://curl.haxx.se/mail/lib-2016-01/0047.html
Ref: https://github.com/bagder/curl/pull/601
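A rough sketch of the inline check (check_pinned_pubkey() is a
hypothetical helper standing in for the DER extraction and comparison):

    #include <mbedtls/ssl.h>

    /* Hypothetical helper that compares the peer certificate's public key
     * with the configured pin. */
    int check_pinned_pubkey(const mbedtls_x509_crt *cert, const char *pin);

    /* Run the pin check right after the handshake, independent of how (or
     * whether) the certificate chain itself was verified. */
    static int verify_pinned_key(mbedtls_ssl_context *ssl, const char *pin)
    {
      const mbedtls_x509_crt *peercert;

      if(!pin)
        return 0;                 /* no pin configured */

      peercert = mbedtls_ssl_get_peer_cert(ssl);
      if(!peercert)
        return -1;                /* nothing to check against */

      return check_pinned_pubkey(peercert, pin);
    }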
|
|
The CURLOPT_SSH_PUBLIC_KEYFILE option has been documented to handle
empty strings specially since curl-7_25_0-31-g05a443a but the behavior
was unintentionally removed in curl-7_38_0-47-gfa7d04f.
This commit restores the original behavior and clarifies in the
documentation that NULL and "" both have the same meaning when passed
to CURLOPT_SSH_PUBLIC_KEYFILE.
Bug: http://curl.haxx.se/mail/lib-2016-01/0072.html
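A minimal sketch of the restored behavior (generic option handling, not
the exact ssh.c code): NULL and "" are treated the same.

    #include <stdlib.h>
    #include <string.h>

    /* An empty string behaves like NULL: no public key file was given. */
    static char *store_public_keyfile(const char *value)
    {
      if(!value || !*value)
        return NULL;             /* NULL and "" both mean "not set" */
      return strdup(value);      /* caller owns the copy */
    }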
|
|
... by extracting the LIB + REASON from the OpenSSL error code. OpenSSL
1.1.0+ returns a new function number for another certificate failure, so
this required a fix, and this is the better way to catch this error anyway.
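A sketch of the LIB + REASON classification (simplified from the real
connect-error handling):

    #include <openssl/err.h>
    #include <openssl/ssl.h>

    /* Classify a failure by library and reason instead of by the function
     * code, which changed in OpenSSL 1.1.0. */
    static int is_cert_verify_error(unsigned long errdetail)
    {
      return (ERR_GET_LIB(errdetail) == ERR_LIB_SSL) &&
             (ERR_GET_REASON(errdetail) == SSL_R_CERTIFICATE_VERIFY_FAILED);
    }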
|
|
|
|
When an HTTP/2 upgrade request fails (no protocol switch), it would
previously detect that as still possible to pipeline on (which is
correct) and do that when PIPEWAIT was enabled, even if pipelining was
not explicitly enabled.
It should only pipeline if explicitly asked to.
Closes #584
|
|
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes #597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
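A minimal sketch of the prefixing (a hypothetical helper, not the actual
URL parser):

    #include <ctype.h>
    #include <stdio.h>

    /* Prefix a scheme-less URL with the default protocol in lower case,
     * e.g. "https" + "hostname/path" -> "https://hostname/path". */
    static void make_effective_url(char *out, size_t outlen,
                                   const char *scheme, const char *url)
    {
      char lower[16];
      size_t i;
      for(i = 0; scheme[i] && i < sizeof(lower) - 1; i++)
        lower[i] = (char)tolower((unsigned char)scheme[i]);
      lower[i] = '\0';
      snprintf(out, outlen, "%s://%s", lower, url);
    }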
|
|
Closes #596
|
|
|
|
|
|
Previously, when HTTP/2 is enabled and used, and a stream has a known
content length, Curl_read was not called when there were no bytes left
to read. Because of this, we could not make sure that
http2_handle_stream_close was called for every stream. Since we use
http2_handle_stream_close to emit trailer fields, they were
effectively ignored. This commit changes the code so that Curl_read is
called even if there are no bytes left to read, to ensure that
http2_handle_stream_close is called for every stream.
Discussed in https://github.com/bagder/curl/pull/564
|
|
This regression landed in 5778e6f5 and made libcurl not act on received
settings and instead stay with its internal defaults.
Bug: http://curl.haxx.se/mail/lib-2016-01/0031.html
Reported-by: Bankde
|
|
This reverts commit 46cb70e9fa81c9a56de484cdd7c5d9d0d9fbec36.
Bug: http://curl.haxx.se/mail/lib-2016-01/0031.html
|
|
Discussed in https://github.com/bagder/curl/pull/564
|
|
Use the ACE form of IDN hostnames as key in the connection cache. Add
new tests.
Closes #592
|
|
- Fix ALPN reply detection.
- Wrap nghttp2 code in ifdef USE_NGHTTP2.
Prior to this change ALPN and HTTP/2 did not work properly in mbedTLS.
|
|
Check that the trailer buffer exists before attempting a client write
for trailers on stream close.
Refer to comments in https://github.com/bagder/curl/pull/564
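A tiny sketch of the guard (field names are illustrative, not curl's
HTTP/2 stream struct):

    #include <stddef.h>

    /* Illustrative per-stream state. */
    struct stream_ctx {
      char *trailer_buf;    /* NULL when no trailer fields arrived */
      size_t trailer_len;
    };

    /* Only attempt the client write when trailers were actually buffered. */
    static int deliver_trailers(struct stream_ctx *stream,
                                int (*client_write)(const char *, size_t))
    {
      if(!stream->trailer_buf || !stream->trailer_len)
        return 0;           /* nothing to deliver */
      return client_write(stream->trailer_buf, stream->trailer_len);
    }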
|
|
Mistake from commit a464f33843ee1
|
|
To make sure curl doesn't allow multiplexing before a connection is
upgraded to HTTP/2 (like when Upgrade: h2c fails), we must make sure the
connection uses HTTP/2 as well and not only check what's wanted.
Closes #584
Patch-by: c0ff
|
|
Previously file.txt[CR][LF] would have been returned as file.tx
(without the last t) if the file type is a symlink. Now the t is
included and the internal item_length includes the zero byte.
Spotted using test 576 on Windows.
|
|
Try harder to prevent libcurl from opening up an additional socket when
CURLOPT_PIPEWAIT is set. Accomplished by letting ongoing TCP and TLS
handshakes complete first before the decision is made.
Closes #575
|
|
The function is only present in wolfssl/cyassl if it was built with
--enable-opensslextra. With these checks added, pinning support is disabled
unless the TLS lib has that function available.
Also fix the mistake in configure that checks for the wrong lib name.
Closes #566
|
|
|
|
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked, but trailer fields are now
always available.
Since trailer fields are relatively rare these days (gRPC uses them
extensively though), allocating a buffer for trailer fields is done when
we detect that a HEADERS frame containing trailer fields has started. We
use Curl_add_buffer_* functions to buffer all trailers, just like we
do for regular header fields, and then deliver them when the stream is
closed. We have to be careful here so that all data is delivered to
the upper layer before sending trailers to the application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic. I have not tested it, but it could add more overhead.
Closes #564
|
|
- In Curl_verifyhost check all altnames in the certificate.
Prior to this change only the first altname was checked. Only the GSKit
SSL backend was affected by this bug.
Bug: http://curl.haxx.se/mail/lib-2015-12/0062.html
Reported-by: John Kohl
|
|
|
|
Closes #565
|