|
|
|
|
|
|
|
Issue posted to the list by malinowsky AT FTW DOT at.
|
|
I just noticed that OS X no longer supports SSLv2. Other TLS engines return
an error if the requested protocol isn't supported by the underlying
engine, so we do that now for SSLv2 if the framework returns an error
when trying to turn on SSLv2 support. (Note: As always, SSLv2 support is
only enabled in curl when starting the app with the -2 argument; it's off
by default. SSLv2 is really old and insecure.)
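A rough sketch of the idea described above (the Secure Transport call involved is presumably SSLSetProtocolVersionEnabled(); the wrapper function and return convention here are just illustrative):

  #include <Security/SecureTransport.h>

  /* sketch: if the framework refuses to enable SSLv2, report that upward
     instead of silently continuing without it */
  static int enable_sslv2(SSLContextRef ctx)
  {
    OSStatus err = SSLSetProtocolVersionEnabled(ctx, kSSLProtocol2, 1);
    if(err != noErr)
      return -1;   /* caller maps this to a TLS connect error */
    return 0;
  }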
|
|
This commit fixes a regression introduced in
fddb7b44a79d78e05043e1c97e069308b6b85f79.
Reported by: Markus Moeller
Bug: http://curl.haxx.se/mail/archive-2013-06/0052.html
|
|
|
|
Use the new improved Curl_rand() to generate better random nonce for
Digest auth.
|
|
When doing multi-part formposts, libcurl used a pseudo-random value that
was seeded with time(). This turns out to be bad for applications that
formpost data provided by users, since such users can then guess what
the boundary string will look like, forge a different formpost part and
trick the receiver.
My advice to such implementors is (still, even after this change) not to
rely on the boundary strings being cryptographically strong. Fix your
code and logic to not depend on them that much!
I moved the Curl_rand() function into the sslgen.c source file now to be
able to take advantage of the SSL library's random function if it
provides one. If not, try to use the RANDOM_FILE for seeding and as a
last resort keep the old logic, just modified to also add microseconds
which makes it harder to properly guess the exact seed.
The formboundary() function in formdata.c is now using 64 bit entropy
for the boundary and therefore the string of dashes was reduced by 4
letters and there are 16 hex digits following it. The total length is
thus still the same.
Bug: http://curl.haxx.se/bug/view.cgi?id=1251
Reported-by: "Floris"
|
|
When using %x, the number must be treated as unsigned, as otherwise it
gets sign-extended on for example 64-bit machines and produces the wrong
output. This problem showed when doing printf("%08x", 0xffeeddcc) on a
64-bit host.
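A standalone illustration of the effect - this uses libc printf rather than
curl's internal formatter, but the sign-extension issue is the same:

  #include <stdio.h>

  int main(void)
  {
    int v = (int)0xffeeddcc;   /* negative when interpreted as signed */
    long wide = v;             /* sign-extended when long is 64 bits */

    printf("%016lx\n", (unsigned long)wide); /* ffffffffffeeddcc - sign bits kept */
    printf("%08x\n", (unsigned int)wide);    /* ffeeddcc - treated as unsigned */
    return 0;
  }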
|
|
Follow-up fix from 7d80ed64e43515.
The SessionHandle may not be around to use when we restore the sigpipe
sighandler so we store the no_signal boolean in the local struct to know
if/how to restore.
|
|
When the c-ares based resolver backend failed to resolve a name, it
tried to show the failing name by picking it out of existing structs.
This caused the wrong hostname to be shown in the output when, for
example, --interface [hostname] was used and that name failed to
resolve.
Now the error message uses the hostname from the actual resolve attempt.
Bug: http://curl.haxx.se/bug/view.cgi?id=1191
Reported-by: Kim Vandry
|
|
When we recently started to treat a zero return code from SSL_read() as
an error, we also got false positives - primarily, it seems, because the
OpenSSL documentation is wrong and a zero return code is not at all an
error case in many situations.
Now ossl_recv() will check with ERR_get_error() to see if there is a
stored error and only then consider it to be a true error if SSL_read()
returned zero.
Bug: http://curl.haxx.se/bug/view.cgi?id=1249
Reported-by: Nach M. S.
Patch-by: Nach M. S.
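A sketch of the decision described above (not the exact ossl_recv() code;
the want-read/want-write cases are left out for brevity):

  #include <openssl/ssl.h>
  #include <openssl/err.h>

  /* only treat a zero return from SSL_read() as an error when OpenSSL has
     actually queued an error for it */
  static int recv_sketch(SSL *ssl, char *buf, int len, int *nread)
  {
    int rc = SSL_read(ssl, buf, len);
    if(rc > 0) {
      *nread = rc;
      return 0;                      /* got data */
    }
    if(rc == 0 && !ERR_get_error()) {
      *nread = 0;
      return 0;                      /* no stored error: not a failure */
    }
    return -1;                       /* genuine error */
  }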
|
|
|
|
Something (a recent security update maybe?) changed in Lion:
SSLCopyPeerTrust may now return noErr yet hand us a null trust
reference, which caught us off guard and caused an eventual crash.
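The defensive check is roughly this (a sketch; the wrapper function and
return convention are illustrative):

  #include <Security/SecureTransport.h>

  /* noErr alone is not enough - the trust reference can still be NULL */
  static int copy_trust(SSLContextRef ctx, SecTrustRef *trust)
  {
    OSStatus err = SSLCopyPeerTrust(ctx, trust);
    if(err != noErr || !*trust)
      return -1;   /* treat as failed peer verification, don't dereference */
    return 0;
  }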
|
|
... and restore the ordinary handling again when it returns. This is
done for curl_easy_perform() and curl_easy_cleanup() only for now - and
only when built to use OpenSSL as backend as this is the known culprit
for the spurious SIGPIPEs people have received.
Bug: http://curl.haxx.se/bug/view.cgi?id=1180
Reported by: Lluís Batlle i Rossell
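The general POSIX pattern looks roughly like this; curl keeps the saved
handler state in a small internal struct, so this standalone sketch only
shows the save/ignore/restore dance:

  #include <signal.h>
  #include <string.h>

  /* ignore SIGPIPE around a call, then put the old handler back */
  static void with_sigpipe_ignored(void (*work)(void *), void *arg)
  {
    struct sigaction ign, old;
    memset(&ign, 0, sizeof(ign));
    sigemptyset(&ign.sa_mask);
    ign.sa_handler = SIG_IGN;
    sigaction(SIGPIPE, &ign, &old);   /* remember whatever was installed */
    work(arg);                        /* e.g. the body of curl_easy_perform() */
    sigaction(SIGPIPE, &old, NULL);   /* restore the original handler */
  }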
|
|
This doesn't need to be in the release notes. I cleaned up a lot of the #if
lines in the code to use MAC_OS_X_VERSION_MIN_REQUIRED and
MAC_OS_X_VERSION_MAX_ALLOWED instead of checking whether things like
__MAC_10_6 were defined, because in some SDKs Apple has released those
macros were defined out of place.
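For illustration, the cleaned-up pattern is roughly this (1060 is the
numeric value behind MAC_OS_X_VERSION_10_6; the guarded bodies are
placeholders):

  #include <AvailabilityMacros.h>

  /* compare against explicit version numbers instead of testing whether
     macros such as __MAC_10_6 happen to be defined by a given SDK */
  #if MAC_OS_X_VERSION_MAX_ALLOWED >= 1060
    /* may reference 10.6-or-later APIs, guarded at runtime if needed */
  #endif

  #if MAC_OS_X_VERSION_MIN_REQUIRED >= 1060
    /* can assume it will only ever run on 10.6 or later */
  #endif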
|
|
RFC 3986 details how a path part passed in as part of a URI should be
"cleaned" from dot sequences before getting used. The described
algorithm is now implemented in lib/dotdot.c, with an accompanying test
case, test 1395.
Bug: http://curl.haxx.se/bug/view.cgi?id=1200
Reported-by: Alex Vinnik
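A compact standalone sketch of the RFC 3986 section 5.2.4 algorithm - an
illustration, not the code in lib/dotdot.c:

  #include <stdlib.h>
  #include <string.h>

  /* remove_dot_segments per RFC 3986 5.2.4, e.g.
     "/a/b/c/./../../g" -> "/a/g" and "mid/content=5/../6" -> "mid/6" */
  static char *dedot(const char *in)
  {
    char *out = malloc(strlen(in) + 2);
    size_t o = 0;
    if(!out)
      return NULL;

    while(*in) {
      if(!strncmp(in, "../", 3))                 /* rule A */
        in += 3;
      else if(!strncmp(in, "./", 2))             /* rule A */
        in += 2;
      else if(!strncmp(in, "/./", 3))            /* rule B: "/./" -> "/" */
        in += 2;
      else if(!strcmp(in, "/.")) {               /* rule B at end of input */
        out[o++] = '/';
        break;
      }
      else if(!strncmp(in, "/../", 4) || !strcmp(in, "/..")) {  /* rule C */
        int at_end = (in[3] == '\0');
        in += 3;
        while(o && out[o - 1] != '/')            /* drop last output segment */
          o--;
        if(o)
          o--;                                   /* ...and its leading '/' */
        if(at_end) {
          out[o++] = '/';
          break;
        }
      }
      else if(!strcmp(in, ".") || !strcmp(in, ".."))   /* rule D */
        break;
      else {                                     /* rule E: copy one segment */
        out[o++] = *in++;                        /* leading '/' or first char */
        while(*in && *in != '/')
          out[o++] = *in++;
      }
    }
    out[o] = '\0';
    return out;
  }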
|
|
Security problem: CVE-2013-2174
If a program gave a string like "%FF" to curl_easy_unescape() but asked
for only the first byte to be decoded, the function would still parse
and decode the full hex sequence. It then not only read beyond the
allowed buffer, it also decreased the *unsigned* counter of how many
more bytes are left to read in the buffer by two, making the counter
wrap. Continuing like this, the function would go on reading beyond the
buffer and soon also writing beyond the allocated target buffer...
Bug: http://curl.haxx.se/docs/adv_20130622.html
Reported-by: Timo Sirainen
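For reference, the triggering call pattern looks like this (the surrounding
function is just for illustration):

  #include <curl/curl.h>

  /* ask for only the first byte of "%FF" to be considered; before the fix
     the decoder still consumed the whole escape sequence and wrapped its
     unsigned length counter, reading and writing out of bounds */
  static void demo(CURL *curl)
  {
    int outlen = 0;
    char *out = curl_easy_unescape(curl, "%FF", 1, &outlen);
    if(out)
      curl_free(out);
  }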
|
|
As a remedy to the problem where a socket gets closed and a new one is
opened with the same file descriptor number, so that
multi.c:singlesocket() doesn't detect the difference, the new function
Curl_multi_closed() now gets told when a socket is closed so that the
socket can be removed from the socket hash. Once the old entry has been
removed, the new socket is detected fine by singlesocket() on the next
invoke.
Bug: http://curl.haxx.se/bug/view.cgi?id=1248
Reported-by: Erik Johansson
|
|
When performing COOKIELIST operations the cookie lock needs to be taken
for the cases where the cookies are shared among multiple handles!
Verified by Benjamin Gilbert's updated test 506
Bug: http://curl.haxx.se/bug/view.cgi?id=1215
Reported-by: Benjamin Gilbert
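A small usage sketch of the affected setup - cookies shared through a share
object and then manipulated with CURLOPT_COOKIELIST (error checking
omitted):

  #include <curl/curl.h>

  static void share_cookies(void)
  {
    CURLSH *share = curl_share_init();
    CURL *a = curl_easy_init();
    CURL *b = curl_easy_init();

    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
    curl_easy_setopt(a, CURLOPT_SHARE, share);
    curl_easy_setopt(b, CURLOPT_SHARE, share);

    /* touches the shared cookie store - now done with the cookie lock held */
    curl_easy_setopt(a, CURLOPT_COOKIELIST, "ALL");   /* erase all cookies */

    curl_easy_cleanup(a);
    curl_easy_cleanup(b);
    curl_share_cleanup(share);
  }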
|
|
When curl_multi_wait() finds no file descriptor to wait for, it returns
instantly and this must be handled gracefully within curl_easy_perform()
or it causes a busy-loop. Starting now, repeated fast returns without
any file descriptors are detected and a gradually increasing sleep is
used (up to a max of 1000 milliseconds) before continuing the loop.
Bug: http://curl.haxx.se/bug/view.cgi?id=1238
Reported-by: Miguel Angel
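The backoff idea, roughly (an illustrative sketch; the growth step and the
use of poll() as a plain sleep are assumptions, only the 1000 ms cap comes
from the change itself):

  #include <curl/curl.h>
  #include <poll.h>

  /* back off progressively when curl_multi_wait() has nothing to wait on */
  static void drive(CURLM *multi)
  {
    int running = 1;
    long backoff_ms = 0;

    while(running) {
      int numfds = 0;
      curl_multi_perform(multi, &running);
      curl_multi_wait(multi, NULL, 0, 1000, &numfds);
      if(numfds)
        backoff_ms = 0;                     /* activity: reset the pause */
      else {
        backoff_ms = backoff_ms ? backoff_ms * 2 : 10;
        if(backoff_ms > 1000)
          backoff_ms = 1000;                /* never sleep longer than 1s */
        poll(NULL, 0, (int)backoff_ms);     /* plain sleep, no descriptors */
      }
    }
  }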
|
|
The initial fix to only compare full path names was done in commit
04f52e9b4db0 but turned out to be incomplete. This take should make the
change more complete and there are now two additional tests to verify it
(tests 31 and 62).
|
|
|
|
By always returning the md5 for an empty body when auth-int is asked
for, libcurl now at least sometimes does the right thing.
Bug: http://curl.haxx.se/bug/view.cgi?id=1235
Patched-by: Nach M. S.
|
|
Allow less room for "triggered too early" mistakes by applications /
timers on non-windows platforms. Starting now, we assume that a timeout
call is never made earlier than 3 milliseconds before the actual
timeout. This greatly improves timeout accuracy on Linux.
Bug: http://curl.haxx.se/bug/view.cgi?id=1228
Reported-by: Hang Su
|
|
In the pkcs12 code, we get a list of X509 records back from
PKCS12_parse, but when iterating over the list and passing each to
SSL_CTX_add_extra_chain_cert() we didn't also remove them from the
"stack", which made them get freed twice (first in sk_X509_pop_free()
and then again later in SSL_CTX_free()).
This isn't really documented anywhere...
Bug: http://curl.haxx.se/bug/view.cgi?id=1236
Reported-by: Nikaiw
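One ownership-correct pattern looks roughly like this (a sketch, not the
applied fix; p12, password and ctx come from the caller):

  #include <openssl/pkcs12.h>
  #include <openssl/ssl.h>

  /* SSL_CTX_add_extra_chain_cert() takes ownership of the X509, so pop each
     cert off the stack rather than freeing the stack with its contents */
  static int add_chain(SSL_CTX *ctx, PKCS12 *p12, const char *password)
  {
    EVP_PKEY *pri = NULL;
    X509 *x509 = NULL;
    STACK_OF(X509) *ca = NULL;

    if(!PKCS12_parse(p12, password, &pri, &x509, &ca))
      return -1;

    while(ca && sk_X509_num(ca)) {
      X509 *cert = sk_X509_pop(ca);              /* detach from the stack... */
      SSL_CTX_add_extra_chain_cert(ctx, cert);   /* ...ctx now owns it */
    }
    if(ca)
      sk_X509_free(ca);   /* free only the (now empty) container */

    /* pri and x509 would be used and freed here in real code */
    return 0;
  }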
|
|
|
|
When VERIFYHOST == 0, libcurl should let invalid certificates pass.
|
|
commit 29bf0598aad5 introduced a problem: the "internal" timeout is
preferred to the given one if shorter, but the code didn't consider the
case where -1 was returned. Now the internal timeout is only considered
if it isn't -1.
Reported-by: Tor Arntsen
Bug: http://curl.haxx.se/mail/lib-2013-06/0015.html
|
|
If the multi handle's pending timeout is shorter than what is passed
into this function, it will now opt to use the shorter time anyway,
since that is a very good hint that the handle wants to process
something in a shorter time than what would otherwise happen.
curl_multi_wait.3 was updated accordingly to clarify this.
This is the reason for bug #1224
Bug: http://curl.haxx.se/bug/view.cgi?id=1224
Reported-by: Andrii Moiseiev
|
|
... because there's an identical check right next to it, so using the
operators in the same order in both checks increases readability.
|
|
|
|
|
|
|
|
When sending the HTTP Authorization: header for Digest, the user name
needs to be escaped if it contains a double-quote or backslash.
Test 1229 was added to verify this.
Reported and fixed by: Nach M. S.
Bug: http://curl.haxx.se/bug/view.cgi?id=1230
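A minimal sketch of the escaping involved (the helper name escape_userid is
made up, not the name used in the patch):

  #include <stdlib.h>
  #include <string.h>

  /* backslash-escape '"' and '\' so the user name can sit safely inside the
     quoted username="..." parameter of the Digest Authorization header */
  static char *escape_userid(const char *name)
  {
    size_t i, o = 0;
    char *out = malloc(strlen(name) * 2 + 1);   /* worst case: all escaped */
    if(!out)
      return NULL;
    for(i = 0; name[i]; i++) {
      if(name[i] == '"' || name[i] == '\\')
        out[o++] = '\\';
      out[o++] = name[i];
    }
    out[o] = '\0';
    return out;
  }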
|
|
SSL_read can return 0 for "not successful", according to the OpenSSL
documentation: http://www.openssl.org/docs/ssl/SSL_read.html
|
|
We found that in specific cases, if the connection is abruptly closed,
the underlying socket is left in a CLOSE_WAIT state. We continue to
call curl_multi_perform, curl_multi_fdset etc. and none of these APIs
report the socket closed / connection finished. Since we have cases
where the multi connection is only used once, this can pose a problem
for us. I've read that if another connection were to come in, curl would
see the socket as bad and attempt to close it at that time -
unfortunately, this does not work for us.
I found that in specific situations, if SSL_write returns 0, curl did
not recognize the socket as closed (or errored out) and did not report
it to the application. I believe we need to change the code slightly, to
check if SSL_write returns 0. If so, treat it as an error - the same as
a negative return code.
For OpenSSL - the ssl_write documentation is here:
http://www.openssl.org/docs/ssl/SSL_write.html
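In OpenSSL terms, the suggested check is roughly this (a sketch; the
non-blocking want-read/want-write handling is omitted):

  #include <openssl/ssl.h>

  /* treat a zero return from SSL_write() like a negative one: inspect the
     error and report the connection as broken instead of carrying on */
  static int send_sketch(SSL *ssl, const void *buf, int len)
  {
    int rc = SSL_write(ssl, buf, len);
    if(rc <= 0) {
      int err = SSL_get_error(ssl, rc);
      (void)err;   /* map SSL_ERROR_SYSCALL, SSL_ERROR_SSL, ... to a failure */
      return -1;
    }
    return rc;
  }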
|
|
1 - don't skip host names with a colon in them in an attempt to bail out
on HTTP headers in the cookie file parser. It was only a shortcut anyway,
and a file with HTTP headers will still be handled, only slightly
slower.
2 - don't skip domain names based on the number of dots. The original
netscape cookie spec mentioned this oddity, and while our code had
reduced the check to require only two dots, the current cookie spec
requires no such dot counting at all.
Bug: http://curl.haxx.se/bug/view.cgi?id=1221
Reported-by: Stefan Neis
|
|
I found a bug where cURL sends cookies to paths it should not aim at.
For example:
- cURL sends a request to http://example.fake/hoge/
- the server returns a cookie with path=/hoge;
the point is that there is NO '/' at the end of the path string.
- cURL then sends a request to http://example.fake/hogege/ with that cookie.
The reason for this old "feature" is that this behavior is what is
described in the original netscape cookie spec:
http://curl.haxx.se/rfc/cookie_spec.html
The current cookie spec (RFC6265) clarifies the situation:
http://tools.ietf.org/html/rfc6265#section-5.2.4
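A standalone sketch of the RFC 6265 section 5.1.4 path-match rule, for
illustration only:

  #include <stdbool.h>
  #include <string.h>

  /* cookie path "/hoge" matches "/hoge" and "/hoge/fuga" but not "/hogege/" */
  static bool pathmatch(const char *cookie_path, const char *request_path)
  {
    size_t clen = strlen(cookie_path);
    if(!clen)
      return false;
    if(strncmp(cookie_path, request_path, clen))
      return false;                          /* not even a prefix */
    if(!request_path[clen])
      return true;                           /* identical paths */
    if(cookie_path[clen - 1] == '/')
      return true;                           /* cookie path ends with '/' */
    if(request_path[clen] == '/')
      return true;                           /* prefix ends on a '/' boundary */
    return false;
  }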
|
|
|
|
This reverts commit 8ec2cb5544b86306b702484ea785b6b9596562ab.
We don't have any code anywhere in libcurl (or the curl tool) that uses
wcsdup, so there's no such memory use to track. It also seems to cause
mild problems with the Borland compiler, which we can avoid by reverting
this change again.
Bug: http://curl.haxx.se/mail/lib-2013-05/0070.html
|
|
|
|
Reported by: David Strauss
Bug: http://curl.haxx.se/mail/lib-2013-05/0088.html
|
|
Bug: http://curl.haxx.se/bug/view.cgi?id=1220
Patch by: John Gardiner Myers
|
|
|
|
|
|
comparison between signed and unsigned integer expressions
|
|
|
|
If the mail sent during the transfer contains a terminating <CRLF> then
we should not send the first <CRLF> of the EOB, as specified in RFC 5321.
Additionally, don't send the <CRLF> if there is "no mail data", as the
DATA command already includes it.
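A simplified sketch of the resulting decision (not the code in lib/smtp.c;
send() stands in for curl's buffered sending):

  #include <sys/socket.h>

  /* choose the end-of-body marker based on what has already gone out */
  static void send_eob(int sock, size_t body_bytes_sent, int ended_with_crlf)
  {
    if(body_bytes_sent == 0 || ended_with_crlf)
      send(sock, ".\r\n", 3, 0);      /* a CRLF is already on the wire */
    else
      send(sock, "\r\n.\r\n", 5, 0);  /* terminate the line, then end the body */
  }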
|