Following the fix made in 903b6e05565bf.
... to properly support setting options on the easy handle after it has
been added to the multi handle.
Bug: http://curl.haxx.se/mail/lib-2015-06/0122.html
Reported-by: Stefan Bühler
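A minimal sketch of the ordering this is meant to support, using the
public multi API (the URL and options are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      CURLM *multi = curl_multi_init();
      int running = 0;

      /* add the easy handle to the multi handle first ... */
      curl_multi_add_handle(multi, easy);

      /* ... and only afterwards set options on it; this is the ordering
         the fix is meant to support */
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L);

      /* a real program would drive this in a loop with curl_multi_wait() */
      curl_multi_perform(multi, &running);

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      return 0;
    }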
Advise that WinSSL on Windows versions <= XP will not be able to connect
to servers that no longer support the legacy handshakes and algorithms
used by those Windows versions, and that an alternate backend such as
OpenSSL should be used instead.
Bug: https://github.com/bagder/curl/issues/253
Reported-by: zenden2k <zenden2k@gmail.com>
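An application can check at run time which TLS backend its libcurl was
built with, using the existing curl_version_info() API; a small sketch:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* report which TLS backend this libcurl build uses, e.g.
         "WinSSL" or "OpenSSL/1.0.2a" */
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
      printf("libcurl %s, TLS backend: %s\n",
             info->version, info->ssl_version ? info->ssl_version : "none");
      return 0;
    }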
... in commit curl-7_43_0-18-g570076e
In 3013bb6 I had changed cookie export to ignore any-domain cookies;
however, the logic I used to do so was incorrect and would lead to a
busy loop when exporting a cookie list that contained any-domain
cookies. The end result is even worse, because in that case the other
cookies would not be written either, leaving an empty file once the
application is terminated to stop the busy loop.
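For context, the export path in question is the one driven by
CURLOPT_COOKIEJAR; a minimal sketch (file name and URL are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* enable the cookie engine without reading an input file */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
        /* export all known cookies to this file when the handle is
           cleaned up; the busy loop happened while writing a list that
           contained any-domain cookies */
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "/tmp/cookies.txt");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }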
libcurl can still be built with it, even if the tool is not. Maintain
independence!
It is similar to the existing CURL_CFLAG_EXTRAS, but for extra linker
options.
Coverity CID 1306668
Make sure that the error buffer is always initialized and simplify the
use of it to make the logic easier.
Bug: https://github.com/bagder/curl/issues/318
Reported-by: sneis
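On the application side the long-recommended pattern fits this fix: pass
a buffer of at least CURL_ERROR_SIZE bytes and clear its first byte
before the transfer, so a never-written buffer cannot be mistaken for an
error message. A sketch with an illustrative URL:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      char errbuf[CURL_ERROR_SIZE];
      CURLcode res;

      if(!curl)
        return 1;

      /* make sure the buffer starts out empty */
      errbuf[0] = '\0';

      curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "failed: %s\n",
                errbuf[0] ? errbuf : curl_easy_strerror(res));

      curl_easy_cleanup(curl);
      return 0;
    }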
Using this fixed format for example descriptions, we can generate a
better list on the web site.
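As I understand the fixed format (the exact markup is whatever this
change specifies, so the tag layout below is an assumption), each example
source carries a short description wrapped in <DESC> markers in a comment
near the top:

    /* <DESC>
     * Download a given URL into a local file.
     * </DESC>
     */
    #include <curl/curl.h>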
OPENSSL_load_builtin_modules does not exist in BoringSSL. Regression
from cae43a1.
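A guard along these lines keeps the call out of BoringSSL builds;
OPENSSL_IS_BORINGSSL is the macro BoringSSL's headers define, though
whether the actual fix uses exactly this check is an assumption (the
wrapper name below is made up):

    #include <openssl/conf.h>

    static void load_conf_modules(void)
    {
    #ifndef OPENSSL_IS_BORINGSSL
      /* BoringSSL does not provide this; only call it for OpenSSL builds */
      OPENSSL_load_builtin_modules();
    #endif
    }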
The symbol SSL3_MT_NEWSESSION_TICKET appears to have been introduced
around OpenSSL 0.9.8f, and its use in lib/vtls/openssl.c breaks builds
with older OpenSSL versions (certainly with 0.9.8b, which is the newest
pre-0.9.8f version I have available to test with).
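One way to stay buildable with pre-0.9.8f OpenSSL is to compile the use
of that symbol only when it is defined; a sketch with a made-up helper
name, not necessarily the exact fix that was applied:

    #include <openssl/ssl.h>

    /* hypothetical helper: name the handshake message type, guarding the
       symbol that only exists in OpenSSL 0.9.8f and later */
    static const char *msg_type_name(int type)
    {
      switch(type) {
    #ifdef SSL3_MT_NEWSESSION_TICKET
      case SSL3_MT_NEWSESSION_TICKET:
        return "NewSessionTicket";
    #endif
      default:
        return "unknown";
      }
    }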
** WORK-AROUND **
The introduced non-blocking general behaviour for Curl_proxyCONNECT()
didn't work for the data connection establishment unless it was very
fast. The newly introduced function argument makes it operate in a more
blocking manner, more like it used to work in the past. This blocking
approach is only used when the FTP data connection is set up through an HTTP proxy.
Blocking like this is bad. A better fix would make it work more
asynchronously.
Bug: https://github.com/bagder/curl/issues/278
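For reference, the affected scenario from the application side is an FTP
transfer tunneled through an HTTP proxy, roughly like this (proxy address
and URL are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* FTP transfer going through an HTTP proxy; the data connection
           is what needed the blocking CONNECT work-around */
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
        curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }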
CVE-2015-3236
This partially reverts commit curl-7_39_0-237-g87c4abb
Reported-by: Tomas Tomecek, Kamil Dudka
Bug: http://curl.haxx.se/docs/adv_20150617A.html
CVE-2015-3237
Detected by Coverity. CID 1299430.
Bug: http://curl.haxx.se/docs/adv_20150617B.html
This commit is several drafts squashed together. The changes from each
draft are noted below. If any changes are similar and possibly
contradictory, the change in the latest draft takes precedence.
Bug: https://github.com/bagder/curl/issues/244
Reported-by: Chris Araman
%%
%% Draft 1
%%
- return 0 if len == 0. that will have to be documented.
- continue on and process the caches regardless of raw recv
- if decrypted data will be returned then set the error code to CURLE_OK
and return its count
- if decrypted data will not be returned and the connection has closed
(e.g. nread == 0) then return 0 and CURLE_OK
- if decrypted data will not be returned and the connection *hasn't*
closed then set the error code to CURLE_AGAIN --only if an error code
isn't already set-- and return -1
- narrow the Win2k workaround to only Win2k
%%
%% Draft 2
%%
- Trying out a change in flow to handle corner cases.
%%
%% Draft 3
%%
- Back out the lazier decryption change made in draft 2.
%%
%% Draft 4
%%
- Some formatting and branching changes
- Decrypt all encrypted cached data when len == 0
- Save connection closed state
- Change special Win2k check to use connection closed state
%%
%% Draft 5
%%
- Default to CURLE_AGAIN in cleanup if an error code wasn't set and the
connection isn't closed.
%%
%% Draft 6
%%
- Save the last error only if it is an unrecoverable error.
Prior to this I saved the last error state in all cases; unfortunately,
the logic needed to cover every case would have led to some muddle, and
I'm concerned that could then cause a bug in the future, so I've changed
it to record only an unrecoverable error, and that state will persist.
- Do not recurse on renegotiation.
Instead we'll continue on to process any trailing encrypted data
received during the renegotiation only.
- Move the err checks in cleanup after the check for decrypted data.
In either case decrypted data is always returned but I think it's easier
to understand when those err checks come after the decrypted data check.
%%
%% Draft 7
%%
- Regardless of len value go directly to cleanup if there is an
unrecoverable error or a close_notify was already received. Prior to
this change we only acknowledged those two states if len != 0.
- Fix a bug in connection closed behavior: Set the error state in the
cleanup, because we don't know for sure it's an error until that time.
- (Related to above) In the case the connection is closed go "greedy"
with the decryption to make sure all remaining encrypted data has been
decrypted even if it is not needed at that time by the caller. This is
necessary because we can only tell if the connection closed gracefully
(close_notify) once all encrypted data has been decrypted.
- Do not renegotiate when an unrecoverable error is pending.
%%
%% Draft 8
%%
- Don't show 'server closed the connection' info message twice.
- Show an info message if server closed abruptly (missing close_notify).
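A compressed sketch of the return-value contract the final drafts
describe, with made-up names standing in for the real schannel
internals:

    #include <sys/types.h>   /* ssize_t */
    #include <curl/curl.h>

    /* hypothetical helper, not the real schannel code: 'decrypted' is the
       amount of already-decrypted cached data, 'closed' the saved
       connection-closed state, 'fatal' a saved unrecoverable error */
    static ssize_t sketch_recv(size_t len, size_t decrypted,
                               int closed, int fatal, CURLcode *err)
    {
      *err = CURLE_OK;

      /* draft 7: an unrecoverable error or an already-received
         close_notify is acted on regardless of len */
      if(!fatal && !closed && len == 0)
        return 0;                    /* draft 1: nothing was asked for */

      if(decrypted)
        return (ssize_t)decrypted;   /* decrypted data is always handed out */

      if(fatal) {
        *err = CURLE_RECV_ERROR;     /* the persisted unrecoverable error */
        return -1;
      }

      if(closed)
        return 0;                    /* connection done: 0 bytes, CURLE_OK */

      *err = CURLE_AGAIN;            /* nothing yet and still connected */
      return -1;
    }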
s/curret/current/
* use SSL/TLS where available
* follow permanent redirects
"At condition p_request, the value of p_request cannot be NULL."
Coverity CID 1306668.
... by removing the "do {} while (0)" block.
Coverity CID 1306669
use \fI-style instead of .BR for references
... to simplify checking when PUT _or_ POST has completed.
Reported-by: Frank Meier
Bug: http://curl.haxx.se/mail/lib-2015-06/0019.html