Age | Commit message | Author |
|
NTLM support with mbedTLS was added in 497e7c9 but requires that mbedTLS
is built with the MD4 functions available, which it isn't in default
builds. The build now adapts: if the function isn't there, libcurl is
built without NTLM support.
Fixes #1004
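A rough sketch of the detection (MBEDTLS_MD4_C is mbedTLS' own config
symbol; the define used to drop NTLM below is illustrative, not curl's
real internal name):

  #include <mbedtls/config.h>  /* defines MBEDTLS_MD4_C when MD4 is built in */

  #if !defined(MBEDTLS_MD4_C)
  /* no MD4 in this mbedTLS build: leave NTLM out */
  #define SKETCH_DISABLE_NTLM 1
  #endif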
|
|
... like when an HTTP/0.9 response comes back without any headers at
all and just a body, this now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes #973
Assisted-by: Ray Satiro
|
|
Detect support for compiler symbol visibility flags and apply them
according to the CURL_HIDDEN_SYMBOLS option.
It should work true to the autotools build, except that it tries to
unhide symbols on Windows when requested and prints a warning if it
fails.
Ref: https://github.com/curl/curl/issues/981#issuecomment-242665951
Reported-by: Daniel Stenberg
|
|
... by partially reverting f975f06033b1. The allocation could be made
by OpenSSL, so the free must be done with OPENSSL_free() to avoid
problems.
Reported-by: Harold Stuart
Fixes #1005
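For illustration, the general rule the fix follows, shown here with
X509_NAME_oneline() which also hands back an OpenSSL-allocated buffer:

  #include <stdio.h>
  #include <openssl/x509.h>
  #include <openssl/crypto.h>

  /* buffers allocated inside OpenSSL must be released with OPENSSL_free(),
     not with the C library's free() */
  static void print_subject(X509 *cert)
  {
    /* with a NULL buffer argument, OpenSSL allocates the string itself */
    char *subject = X509_NAME_oneline(X509_get_subject_name(cert), NULL, 0);

    if(subject) {
      printf("%s\n", subject);
      OPENSSL_free(subject);  /* and not free(subject) */
    }
  }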
|
|
... by making sure we don't count down the "upload left" counter when
the upload size is unknown, so that such an upload is allowed to
continue for as long as it needs.
Fixes #996
|
|
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp
as more of a generic "failed to parse" error, introduce an alias
without FTP in the name.
Closes https://github.com/curl/curl/pull/975
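A minimal usage sketch, assuming the new alias is named
CURLE_WEIRD_SERVER_REPLY (the name isn't spelled out above):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode res;

    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/");
    res = curl_easy_perform(curl);

    /* the alias maps to the same value as CURLE_FTP_WEIRD_SERVER_REPLY,
       so either name matches here */
    if(res == CURLE_WEIRD_SERVER_REPLY)
      fprintf(stderr, "server sent a reply libcurl could not parse\n");

    curl_easy_cleanup(curl);
    return 0;
  }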
|
|
... as that function slipped through once before.
|
|
This hash is used to verify the original downloaded certificate bundle
and also included in the generated bundle's comment header. Also
rename related internal symbols to algorithm-agnostic names.
|
|
CURLINFO_SSL_VERIFYRESULT does not get the certificate verification
result when SSL_connect fails because of a certificate verification
error.
This fix saves the result of SSL_get_verify_result so that it is
returned by CURLINFO_SSL_VERIFYRESULT.
Closes https://github.com/curl/curl/pull/995
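A minimal sketch of reading the stored result after a failed handshake
(the URL is made up):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long verifyresult = 0;
    CURLcode res;

    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://badcert.example.com/");
    res = curl_easy_perform(curl);

    /* with this fix the verify result is filled in even when the TLS
       handshake itself failed; the value is an OpenSSL X509_V_ERR_* code */
    if(curl_easy_getinfo(curl, CURLINFO_SSL_VERIFYRESULT, &verifyresult) ==
       CURLE_OK)
      printf("transfer: %d, certificate verify result: %ld\n",
             (int)res, verifyresult);

    curl_easy_cleanup(curl);
    return 0;
  }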
|
|
While noErr and errSecSuccess are defined as the same value, the API
documentation states that SecPKCS12Import() returns errSecSuccess if
there were no errors in importing. Ensure that a future change of the
defined value doesn't break this check (however unlikely) and stay
consistent with the API docs.
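A sketch of the check, using a hypothetical helper around
SecPKCS12Import() for illustration:

  #include <Security/Security.h>

  static CFArrayRef import_p12(CFDataRef p12data, CFStringRef passphrase)
  {
    CFArrayRef items = NULL;
    const void *keys[] = { kSecImportExportPassphrase };
    const void *values[] = { passphrase };
    CFDictionaryRef options =
      CFDictionaryCreate(NULL, keys, values, 1,
                         &kCFTypeDictionaryKeyCallBacks,
                         &kCFTypeDictionaryValueCallBacks);
    OSStatus status = SecPKCS12Import(p12data, options, &items);

    CFRelease(options);

    /* compare against errSecSuccess, the value the API documents, rather
       than relying on noErr happening to have the same value */
    if(status != errSecSuccess)
      return NULL;
    return items;
  }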
|
|
With OPENSSL_API_COMPAT=0x10100000L (OpenSSL 1.1 API), the cleanup
functions are unavailable (they're no-ops anyway in OpenSSL 1.1). The
replacements for SSL_load_error_strings, SSLeay_add_ssl_algorithms, and
OpenSSL_add_all_algorithms are called automatically [1][2]. SSLeay() is
now called OpenSSL_version_num().
[1]: https://www.openssl.org/docs/man1.1.0/ssl/OPENSSL_init_ssl.html
[2]: https://www.openssl.org/docs/man1.1.0/crypto/OPENSSL_init_crypto.html
Closes #992
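Roughly the shape of the compatibility mapping (the wrapper names here
are illustrative, not curl's internal ones):

  #include <openssl/ssl.h>
  #include <openssl/crypto.h>
  #include <openssl/evp.h>

  #if OPENSSL_VERSION_NUMBER >= 0x10100000L
  /* 1.1.0+: init and cleanup happen automatically, SSLeay() was renamed */
  #define our_ssl_version_num() OpenSSL_version_num()
  static void our_ssl_init(void) { /* nothing to do */ }
  #else
  #define our_ssl_version_num() SSLeay()
  static void our_ssl_init(void)
  {
    SSL_load_error_strings();
    SSL_library_init();
    OpenSSL_add_all_algorithms();
  }
  #endif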
|
|
Fixes #982
|
|
|
|
|
|
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer. While
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as the man page puts it).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively capping
bandwidth use (at least for quite some time).
So instead, we now use a "moving starting point" as the reference, and
every time at least as much as the limit has been transferred, we reset
this starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
Closes #971
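A simplified sketch of the moving-reference-point idea, not the actual
libcurl code (time is kept in whole seconds here for brevity):

  #include <time.h>

  struct ratelimit {
    time_t ref_time;     /* the moving starting point */
    long long ref_size;  /* bytes counted at that starting point */
  };

  /* returns milliseconds to pause, or 0 to keep transferring */
  static long rate_wait(struct ratelimit *rl, long long total_size,
                        long long limit /* bytes/second */, time_t now)
  {
    long long span = total_size - rl->ref_size;
    long long elapsed = (long long)(now - rl->ref_time) + 1;
    long long allowed = limit * elapsed;

    if(span > allowed)
      /* ahead of the limit: wait until the average measured from the
         reference point drops back under the limit */
      return (long)(((span - allowed) * 1000) / limit);

    if(span >= limit) {
      /* a full "limit's worth" was moved at an acceptable rate: slide the
         reference point forward so an old slow period cannot hide a later
         burst behind a low cumulative average */
      rl->ref_time = now;
      rl->ref_size = total_size;
    }
    return 0;
  }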
|
|
* Added description to Curl_sspi_free_identity()
* Added parameter and return explanations to Curl_sspi_global_init()
* Added parameter explanation to Curl_sspi_global_cleanup()
|
|
CURLDEBUG is for the memory debugging
DEBUGBUILD is for the extra debug stuff
Pointed-out-by: Steve Holme
|
|
Follow-up to c3e906e9cd0f, seems like a more appropriate error code
Suggested-by: Jay Satiro
|
|
Fixes #986
|
|
With HTTP/2 each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
be force-closed now just kill the stream and leave the connection
alone.
Fixes #941
|
|
... instead of an if() before the switch(), add a default to the switch
so that compilers no longer emit "warning: enumeration value
'PLATFORM_DONT_CARE' not handled in switch".
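For illustration, with a hypothetical enum (the real code uses curl's
own platform values):

  enum platform {
    PLATFORM_DOS,
    PLATFORM_WINDOWS,
    PLATFORM_DONT_CARE
  };

  static const char *platform_name(enum platform p)
  {
    switch(p) {
    case PLATFORM_DOS:
      return "DOS";
    case PLATFORM_WINDOWS:
      return "Windows";
    default:
      /* covers PLATFORM_DONT_CARE and any future value, so -Wswitch style
         warnings stay quiet */
      return "unknown";
    }
  }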
|
|
- Disable ALPN on Wine.
- Don't pass input secbuffer when ALPN is disabled.
When ALPN support was added a change was made to pass an input secbuffer
to initialize the context. When ALPN is enabled the buffer contains the
ALPN information, and when it's disabled the buffer is empty. In either
case this input buffer caused problems with Wine and connections would
not complete.
Bug: https://github.com/curl/curl/issues/983
Reported-by: Christian Fillion
|
|
Serialise the call to PK11_FindSlotByName() to avoid spurious errors in
a multi-threaded environment. The underlying cause is a race condition
in nssSlot_IsTokenPresent().
Bug: https://bugzilla.mozilla.org/1297397
Closes #985
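A minimal sketch of the serialisation, using a plain pthread mutex for
illustration (curl's actual lock differs):

  #include <pthread.h>
  #include <pk11pub.h>  /* NSS: PK11_FindSlotByName() */

  static pthread_mutex_t findslot_lock = PTHREAD_MUTEX_INITIALIZER;

  static PK11SlotInfo *find_slot_locked(const char *name)
  {
    PK11SlotInfo *slot;

    /* only one thread at a time may reach nssSlot_IsTokenPresent()
       indirectly through this call */
    pthread_mutex_lock(&findslot_lock);
    slot = PK11_FindSlotByName(name);
    pthread_mutex_unlock(&findslot_lock);

    return slot;
  }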
|
|
... when we are not asked to use a certificate from file
|
|
|
|
|
|
- unknown protocols probably won't send more headers (e.g. WebSocket)
- improved comments and moved them to the correct case statements
Closes #899
|
|
synced with OpenSSL git master commit cc06906707
|
|
... also remove the same from scp
|
|
When we're uploading using FTP and the server issues a tiny pause
between opening the connection to the client's secondary socket, the
client's initial poll() times out, which leads to a second poll() which
does not wait for POLLIN on the secondary socket. So that poll() also
has to time out, creating a long (200ms) pause.
This patch adds the correct flag to the secondary socket, making the
second poll() correctly wait for the connection there too.
Signed-off-by: Ales Novak <alnovak@suse.cz>
Closes #978
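An illustrative sketch of the flag change (not curl's actual code):

  #include <poll.h>

  /* wait for readability on the secondary (data) socket too, so the
     second poll() wakes up as soon as the server connects instead of
     running into its timeout */
  static int wait_for_sockets(int ctrl_sock, int data_sock, int timeout_ms)
  {
    struct pollfd pfd[2];

    pfd[0].fd = ctrl_sock;
    pfd[0].events = POLLIN;
    pfd[1].fd = data_sock;
    pfd[1].events = POLLIN | POLLOUT;  /* POLLIN here was the missing flag */

    return poll(pfd, 2, timeout_ms);
  }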
|
|
Closes #820
|
|
Only choose the GSSAPI authentication mechanism when the user name
contains a Windows domain name or the user is a valid UPN.
Fixes #718
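A hypothetical check in that spirit; a real UPN test would do more than
look for an '@':

  #include <stdbool.h>
  #include <string.h>

  /* treat the name as carrying a domain when it looks like either
     DOMAIN\user or a UPN such as user@example.com */
  static bool name_has_domain(const char *user)
  {
    return user && (strchr(user, '\\') || strchr(user, '@'));
  }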
|
|
Completing commits 00417fd66c and 2708d4259b.
|
|
From commit 2708d4259b.
|
|
Instead of displaying the requested hostname, the one returned by the
SOCKS5 proxy server is used in case of a connection error.
The requested hostname is displayed earlier in the connection sequence.
The upper byte of the port is moved to a temporary variable and
replaced with a 0-byte to make sure the hostname is 0-terminated.
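A simplified sketch of the 0-termination trick (the buffer layout is
reduced to just the hostname followed by the port bytes):

  #include <stddef.h>
  #include <stdio.h>

  static void print_reply_host(unsigned char *host, size_t hostlen)
  {
    unsigned char saved = host[hostlen];  /* upper byte of the port */

    host[hostlen] = 0;                    /* temporarily 0-terminate */
    fprintf(stderr, "connect to %s failed\n", (char *)host);
    host[hostlen] = saved;                /* put the port byte back */
  }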
|
|
As of 7.25.0 and commit 5430007222.
|
|
Replace custom string formatting with Curl_printable_address.
Add additional debug and error output in case of failures.
|
|
Calling sscanf is not required since the raw IPv4 address is
available and the protocol can be detected using ai_family.
|
|
Made by Visual Studio's auto-correct feature and missed by me in my own
code reviews!
|
|
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c existing functionality is maintained for now.
|
|
|
|
Hooked up the SASL authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
For now existing functionality is maintained.
|
|
|
|
As Windows SSPI authentication calls fail when a particular mechanism
isn't available, introduced these functions for DIGEST, NTLM, Kerberos 5
and Negotiate to allow both HTTP and SASL authentication the
opportunity to query support for a mechanism before selecting it.
For now each function returns TRUE to maintain compatibility with the
existing code when called.
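One possible shape for such a query, shown only for illustration since
the functions currently just return TRUE:

  #include <windows.h>
  #define SECURITY_WIN32
  #include <security.h>

  /* ask SSPI whether a package such as "NTLM", "Kerberos", "Negotiate"
     or "WDigest" is actually installed */
  static BOOL sspi_mech_supported(TCHAR *package)
  {
    PSecPkgInfo info = NULL;

    if(QuerySecurityPackageInfo(package, &info) != SEC_E_OK)
      return FALSE;

    FreeContextBuffer(info);
    return TRUE;
  }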
|
|
|
|
This allows for better memory debugging and torture tests.
|
|
This reverts commit 113f04e664b16b944e64498a73a4dab990fe9a68.
|
|
Follow-up to a96319ebb93
|
|
I discovered some people have been using "https://example.com" style
strings as proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring and defaulting to HTTP.
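A usage sketch of the new behaviour; with the libcurl of this era
(which has no HTTPS-proxy support) the https:// scheme is simply not
recognised, so the transfer is now expected to fail up front:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode res;

    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:8080");

    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "rejected: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return 0;
  }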
|