Age | Commit message | Author |
|
|
|
Since the header given by the application code is prefix-matched
against the stored header name/value pairs in the format "NAME:VALUE",
and the VALUE part can contain ":", we have to be careful about the
presence of ":" in the header parameter. A leading ":" should be
allowed so that HTTP/2 pseudo-header fields can be matched, but any
other use of ":" in the header must be treated as an error, and
curl_pushheader_byname should return NULL. This commit implements that
behaviour.
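A minimal sketch of the kind of name check this implies (illustrative,
not the actual libcurl code):

    #include <string.h>

    /* Validate a header name passed to curl_pushheader_byname() before
     * prefix-matching it against the stored "NAME:VALUE" pairs. A single
     * leading ':' is fine (HTTP/2 pseudo-header fields such as ":path");
     * any other ':' is an error. */
    static int pushheader_name_ok(const char *header)
    {
      if(!header || !header[0])
        return 0;                    /* empty name: nothing to match */
      if(!strcmp(header, ":"))
        return 0;                    /* a bare ":" is not a valid name */
      if(strchr(header + 1, ':'))
        return 0;                    /* ':' beyond the first byte: error */
      return 1;                      /* plain name or pseudo-header */
    }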
|
|
... to properly support setting options on the handle after it has
been added to the multi handle.
Bug: http://curl.haxx.se/mail/lib-2015-06/0122.html
Reported-by: Stefan Bühler
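For reference, a minimal sketch of the usage pattern this enables
(illustrative application code):

    #include <curl/curl.h>

    int main(void)
    {
      CURLM *multi = curl_multi_init();
      CURL *easy = curl_easy_init();
      int running = 0;

      curl_multi_add_handle(multi, easy);

      /* options set after the add - the case this fix covers */
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(easy, CURLOPT_FOLLOWLOCATION, 1L);

      curl_multi_perform(multi, &running);
      /* ... drive the transfer with curl_multi_wait()/perform() ... */

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      return 0;
    }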
|
|
In 3013bb6 I had changed cookie export to ignore any-domain cookies;
however, the logic I used to do so was incorrect and would lead to a
busy loop when exporting a cookie list that contained any-domain
cookies. The result is worse than just the busy loop, though: in that
case the other cookies would not be written either, leaving an empty
file once the application is terminated to stop the busy loop.
|
|
|
|
It is similar to the existing CURL_CFLAG_EXTRAS, but for extra linker
options.
|
|
Coverity CID 1306668
|
|
Make sure that the error buffer is always initialized and simplify the
use of it to make the logic easier.
Bug: https://github.com/bagder/curl/issues/318
Reported-by: sneis
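The change is internal, but the same principle is easy to show on the
application side; a minimal sketch using CURLOPT_ERRORBUFFER
(illustrative only):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      char errbuf[CURL_ERROR_SIZE];

      if(!curl)
        return 1;

      /* start with an empty string so the buffer is well-defined even
         if libcurl never writes to it */
      errbuf[0] = '\0';

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);

      if(curl_easy_perform(curl) != CURLE_OK)
        fprintf(stderr, "failed: %s\n", errbuf);

      curl_easy_cleanup(curl);
      return 0;
    }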
|
|
OPENSSL_load_builtin_modules does not exist in BoringSSL. Regression
from cae43a1
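A sketch of the kind of guard this implies, relying on the
OPENSSL_IS_BORINGSSL macro that BoringSSL defines (not necessarily the
exact fix):

    #include <openssl/conf.h>

    static void ossl_load_modules(void)
    {
    #ifndef OPENSSL_IS_BORINGSSL
      /* real OpenSSL: load the built-in configuration modules */
      OPENSSL_load_builtin_modules();
    #endif
    }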
|
|
The symbol SSL3_MT_NEWSESSION_TICKET appears to have been introduced
around OpenSSL 0.9.8f, and the use of it in lib/vtls/openssl.c breaks
builds with older OpenSSL versions (certainly with 0.9.8b, which is the
newest of the older versions I have available to test with).
|
|
** WORK-AROUND **
The introduced non-blocking general behaviour for Curl_proxyCONNECT()
didn't work for the data connection establishment unless it was very
fast. The newly introduced function argument makes it operate in a more
blocking manner, more like it used to work in the past. This blocking
approach is only used when the FTP data connection is set up through an
HTTP proxy.
Blocking like this is bad. A better fix would make it work more
asynchronously.
Bug: https://github.com/bagder/curl/issues/278
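A purely hypothetical sketch of the blocking vs non-blocking
distinction described above (the real Curl_proxyCONNECT() signature and
state machine differ):

    #include <stdio.h>

    /* pretend each call advances the CONNECT handshake one step */
    static int connect_step(int *state)
    {
      return ++(*state) >= 3;          /* "done" after three passes */
    }

    static int proxy_connect(int *state, int blocking)
    {
      if(!blocking)
        return connect_step(state);    /* caller retries later if not done */
      while(!connect_step(state))
        ;                              /* block until the tunnel is up */
      return 1;
    }

    int main(void)
    {
      int state = 0;
      printf("done: %d\n", proxy_connect(&state, 1));
      return 0;
    }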
|
|
CVE-2015-3236
This partially reverts commit curl-7_39_0-237-g87c4abb
Reported-by: Tomas Tomecek, Kamil Dudka
Bug: http://curl.haxx.se/docs/adv_20150617A.html
|
|
CVE-2015-3237
Detected by Coverity. CID 1299430.
Bug: http://curl.haxx.se/docs/adv_20150617B.html
|
|
This commit is several drafts squashed together. The changes from each
draft are noted below. If any changes are similar and possibly
contradictory the change in the latest draft takes precedence.
Bug: https://github.com/bagder/curl/issues/244
Reported-by: Chris Araman
%%
%% Draft 1
%%
- return 0 if len == 0. that will have to be documented.
- continue on and process the caches regardless of raw recv
- if decrypted data will be returned then set the error code to CURLE_OK
and return its count
- if decrypted data will not be returned and the connection has closed
(eg nread == 0) then return 0 and CURLE_OK
- if decrypted data will not be returned and the connection *hasn't*
closed then set the error code to CURLE_AGAIN --only if an error code
isn't already set-- and return -1
- narrow the Win2k workaround to only Win2k
%%
%% Draft 2
%%
- Trying out a change in flow to handle corner cases.
%%
%% Draft 3
%%
- Back out the lazier decryption change made in draft 2.
%%
%% Draft 4
%%
- Some formatting and branching changes
- Decrypt all encrypted cached data when len == 0
- Save connection closed state
- Change special Win2k check to use connection closed state
%%
%% Draft 5
%%
- Default to CURLE_AGAIN in cleanup if an error code wasn't set and the
connection isn't closed.
%%
%% Draft 6
%%
- Save the last error only if it is an unrecoverable error.
Prior to this I saved the last error state in all cases; unfortunately,
the logic needed to cover every case would get muddled, and I'm
concerned that could lead to a bug in the future, so now only an
unrecoverable error is recorded and that state persists.
- Do not recurse on renegotiation.
Instead we'll continue on to process any trailing encrypted data
received during the renegotiation only.
- Move the err checks in cleanup after the check for decrypted data.
In either case decrypted data is always returned but I think it's easier
to understand when those err checks come after the decrypted data check.
%%
%% Draft 7
%%
- Regardless of len value go directly to cleanup if there is an
unrecoverable error or a close_notify was already received. Prior to
this change we only acknowledged those two states if len != 0.
- Fix a bug in connection closed behavior: Set the error state in the
cleanup, because we don't know for sure it's an error until that time.
- (Related to above) In the case the connection is closed go "greedy"
with the decryption to make sure all remaining encrypted data has been
decrypted even if it is not needed at that time by the caller. This is
necessary because we can only tell if the connection closed gracefully
(close_notify) once all encrypted data has been decrypted.
- Do not renegotiate when an unrecoverable error is pending.
%%
%% Draft 8
%%
- Don't show 'server closed the connection' info message twice.
- Show an info message if server closed abruptly (missing close_notify).
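A rough sketch of the resulting return-value contract, with
hypothetical names rather than the actual schannel code:

    #include <string.h>
    #include <sys/types.h>   /* ssize_t */
    #include <curl/curl.h>   /* CURLcode */

    /* 'decrypted' is data already decoded from the cached ciphertext,
     * 'closed' says whether the peer has closed the connection and
     * 'pending_err' is a previously recorded unrecoverable error. */
    static ssize_t deliver(char *buf, size_t len,
                           const char *decrypted, size_t ndecrypted,
                           int closed, CURLcode pending_err, CURLcode *err)
    {
      *err = CURLE_OK;

      if(ndecrypted) {            /* decrypted data is always handed out */
        size_t n = ndecrypted < len ? ndecrypted : len;
        memcpy(buf, decrypted, n);
        return (ssize_t)n;
      }

      if(pending_err) {           /* unrecoverable error recorded earlier,
                                     acknowledged regardless of len */
        *err = pending_err;
        return -1;
      }

      if(closed)
        return 0;                 /* graceful end of connection */

      if(!len)
        return 0;                 /* nothing requested, nothing pending */

      *err = CURLE_AGAIN;         /* nothing yet, call again later */
      return -1;
    }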
|
|
"At condition p_request, the value of p_request cannot be NULL."
Coverity CID 1306668.
|
|
... by removing the "do {} while (0)" block.
Coverity CID 1306669
|
|
... to simplify checking when PUT _or_ POST has completed.
Reported-by: Frank Meier
Bug: http://curl.haxx.se/mail/lib-2015-06/0019.html
|
|
Some servers will request a client certificate, but not require one.
This change allows libcurl to connect to such servers when using
schannel as its ssl/tls backend. When a server requests a client
certificate, libcurl will now continue the handshake without one,
rather than terminating the handshake. The server can then decide
if that is acceptable or not. Prior to this change, libcurl would
terminate the handshake, reporting a SEC_I_INCOMPLETE_CREDENTIALS
error.
|
|
|
|
and a conversion to markdown. Removed the lib/README.* files; the idea
is to move toward having INTERNALS as the one and only "book" of
internals documentation.
Added a TOC to the top of the document.
|
|
Although OpenSSL 1.1.0+ deprecated SSLv23_client_method in favor of
TLS_client_method, LibreSSL and BoringSSL didn't and still use
SSLv23_client_method.
Bug: https://github.com/bagder/curl/commit/49a6642#commitcomment-11578009
Reported-by: asavah@users.noreply.github.com
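A sketch of the kind of preprocessor selection this implies, assuming
the LIBRESSL_VERSION_NUMBER and OPENSSL_IS_BORINGSSL macros identify
those forks (not the exact curl code):

    #include <openssl/ssl.h>

    static const SSL_METHOD *pick_client_method(void)
    {
    #if (OPENSSL_VERSION_NUMBER >= 0x10100000L) && \
        !defined(LIBRESSL_VERSION_NUMBER) && !defined(OPENSSL_IS_BORINGSSL)
      return TLS_client_method();     /* OpenSSL 1.1.0+ */
    #else
      return SSLv23_client_method();  /* older OpenSSL, LibreSSL, BoringSSL */
    #endif
    }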
|
|
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
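For illustration, an application-side CURLOPT_OPENSOCKETFUNCTION
callback that refuses to create a socket (POSIX, illustrative only):

    #include <sys/socket.h>
    #include <curl/curl.h>

    /* Returning CURL_SOCKET_BAD means "no socket here", which the
     * transfer now maps to CURLE_COULDNT_CONNECT if no usable socket is
     * created at all. */
    static curl_socket_t open_socket_cb(void *clientp, curlsocktype purpose,
                                        struct curl_sockaddr *address)
    {
      (void)clientp;
      if(purpose != CURLSOCKTYPE_IPCXN || address->family != AF_INET)
        return CURL_SOCKET_BAD;        /* refuse anything but plain IPv4 */
      return socket(address->family, address->socktype, address->protocol);
    }

    int main(void)
    {
      CURLcode rc;
      CURL *curl = curl_easy_init();
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_socket_cb);
      rc = curl_easy_perform(curl);    /* CURLE_COULDNT_CONNECT if refused */
      curl_easy_cleanup(curl);
      return (int)rc;
    }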
|
|
- Try building a chain using issuers in the trusted store first to avoid
problems with server-sent legacy intermediates.
Prior to this change server-sent legacy intermediates with missing
legacy issuers would cause verification to fail even if the client's CA
bundle contained a valid replacement for the intermediate and an
alternate chain could be constructed that would verify successfully.
https://rt.openssl.org/Ticket/Display.html?id=3621&user=guest&pass=guest
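For illustration, how such a preference can be expressed with OpenSSL's
verify flags, guarded for availability (an assumption, not necessarily
the exact change):

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    static void prefer_trusted_issuers(SSL_CTX *ctx)
    {
    #ifdef X509_V_FLAG_TRUSTED_FIRST
      /* prefer issuers from the trusted store over server-sent ones */
      X509_STORE_set_flags(SSL_CTX_get_cert_store(ctx),
                           X509_V_FLAG_TRUSTED_FIRST);
    #else
      (void)ctx;                       /* flag not available in this OpenSSL */
    #endif
    }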
|
|
ERR_error_string_n() was introduced in 0.9.6, so there is no need to
#ifdef it anymore.
|
|
Code for OpenSSL 0.9.4 serves no purpose anymore!
|
|
It was present for OpenSSL 0.9.5 code but we only support 0.9.7 or
later.
|
|
The existing callback served no purpose.
|
|
Prior to this change any-domain cookies (cookies without a domain that
are sent to any domain) were exported with domain name "unknown".
Bug: https://github.com/bagder/curl/issues/292
|
|
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107915198
Reported-by: Gisle Vanem
|
|
Follow-up to e8423f9ce150, with discussion in
https://github.com/bagder/curl/pull/258
This check scans for fopen() with a mode string without 'b' present, as
it may indicate that an FOPEN_* define should rather be used.
|
|
- Change fopen calls to use FOPEN_READTEXT instead of "r" or "rt"
- Change fopen calls to use FOPEN_WRITETEXT instead of "w" or "wt"
This change is to explicitly specify when we need to read/write text.
Unfortunately 't' is not part of POSIX fopen so we can't specify it
directly. Instead we now have FOPEN_READTEXT, FOPEN_WRITETEXT.
Prior to this change we had an issue on Windows if an application that
uses libcurl overrides the default file mode to binary. The default file
mode in Windows is normally text mode (translation mode) and that's what
libcurl expects.
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107093055
Reported-by: Orgad Shaneh
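Illustrative definitions and usage, roughly along these lines (the real
defines live in curl's setup headers):

    #include <stdio.h>

    /* 't' is not in POSIX fopen(), so the text-mode strings are only
     * used where the platform understands them */
    #if defined(WIN32) || defined(MSDOS)
    #define FOPEN_READTEXT  "rt"
    #define FOPEN_WRITETEXT "wt"
    #else
    #define FOPEN_READTEXT  "r"
    #define FOPEN_WRITETEXT "w"
    #endif

    int main(void)
    {
      FILE *in = fopen("headers.txt", FOPEN_READTEXT); /* was fopen(.., "r") */
      if(in)
        fclose(in);
      return 0;
    }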
|
|
Bug: https://github.com/bagder/curl/issues/256
|
|
|
SSLv23_client_method is deprecated starting in OpenSSL 1.1.0. The
equivalent is TLS_client_method.
https://github.com/openssl/openssl/commit/13c9bb3#diff-708d3ae0f2c2973b272b811315381557
|
|
Previously, after seeing the upgrade to HTTP/2, we fed the data that
followed the upgrade response headers directly to
nghttp2_session_mem_recv() in Curl_http2_switched(). But it turns out
that the passed buffer, mem, is part of stream->mem, and callbacks
invoked by nghttp2_session_mem_recv() will write stream-specific data
into stream->mem, overwriting the input data. This corrupts the input,
and most likely a frame length error is then detected by the nghttp2
library. The fix is to first copy the passed data to the HTTP/2
connection buffer, httpc->inbuf, and then call
nghttp2_session_mem_recv().
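A hypothetical sketch of that fix (names like h2_conn and
feed_upgrade_data are illustrative, not the real code):

    #include <string.h>
    #include <stdint.h>
    #include <nghttp2/nghttp2.h>

    struct h2_conn {
      nghttp2_session *h2;
      char inbuf[16384];
    };

    /* Copy the upgrade payload into a buffer owned by the connection
     * before feeding it to nghttp2, so callbacks that write into
     * stream->mem can no longer overwrite the input being parsed. */
    static ssize_t feed_upgrade_data(struct h2_conn *c,
                                     const char *mem, size_t nread)
    {
      if(nread > sizeof(c->inbuf))
        return -1;                             /* would not fit */
      memcpy(c->inbuf, mem, nread);            /* detach from stream->mem */
      return nghttp2_session_mem_recv(c->h2, (const uint8_t *)c->inbuf,
                                      nread);
    }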
|
|
|
|
By (void) prefixing it and adding a comment. Did some minor related
cleanups.
Coverity CID 1299423.
|
|
Coverity CID 1299424 identified dead code because of checks that could
never equal true (if the mechanism's name was NULL).
Simplified the function by removing a level of pointers and removing the
loop and array that weren't used.
|
|
Replace use of assert with code that properly catches bad input at
run-time even in non-debug builds.
This flaw was sort of detected by Coverity CID 1299425 which claimed the
"case RTSPREQ_NONE" was dead code.
|
|
A failed calloc() would lead to NULL pointer use.
Coverity CID 1299427.
|
|
A non-HTTP proxy implies not using CURLOPT_HTTPPROXYTUNNEL.
Bug: http://curl.haxx.se/mail/lib-2015-05/0056.html
Reported-by: Sean Boudreau
|