curl.h, adjusting the auto-makefiles' include path, to enhance portability to
OSes without an orthogonal directory tree structure, such as OS/400.
|
|
wrong percentage for small files, most notably for those below 1000 bytes, and
it could easily end up showing more than 100% at the end. It also didn't show
any percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
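Purely as an illustration (this is not the actual progress meter code), the
kind of guarded percentage math the fix is about looks like this:

  /* Hypothetical sketch, not libcurl internals: a percentage calculation
     that stays sane for tiny transfers. Integer math along the lines of
     now / (total/100) breaks when the total is below 100 bytes, since
     total/100 then becomes zero. */
  #include <stdio.h>

  static int percent_done(long long now, long long total)
  {
    if(total <= 0)
      return 0;               /* unknown or zero size: show no percentage */
    if(now > total)
      now = total;            /* never report more than 100% */
    return (int)((100.0 * (double)now) / (double)total);
  }

  int main(void)
  {
    printf("%d%%\n", percent_done(42, 97)); /* small file: prints 43% */
    return 0;
  }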
|
|
CURLINFO_SIZE_DOWNLOAD (the -w variable size_download) didn't work when
getting data from ldap!
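For reference, this is how that counter is read (a minimal sketch; the LDAP
URL is only a placeholder):

  /* Minimal sketch: read CURLINFO_SIZE_DOWNLOAD after a completed
     transfer. The URL is a placeholder. */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double dlsize = 0.0;
      curl_easy_setopt(curl, CURLOPT_URL, "ldap://ldap.example.com/o=example");
      if((curl_easy_perform(curl) == CURLE_OK) &&
         (curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &dlsize) == CURLE_OK))
        printf("downloaded %.0f bytes\n", dlsize);
      curl_easy_cleanup(curl);
    }
    return 0;
  }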
|
|
download was 0 bytes, as libcurl would then return the size as unknown (-1)
and not 0. I wrote a fix and test case 566 to verify it.
|
|
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
|
|
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
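The triggering combination is roughly the following (a sketch with
placeholder callback, URL and credentials, not the test case itself):

  /* Sketch: a POST fed from a read callback, with Digest auth and an
     enforced chunked transfer-encoding. A real application would also
     set CURLOPT_SEEKFUNCTION so the body can be rewound and re-sent
     after the initial 401 response. Everything here is a placeholder. */
  #include <string.h>
  #include <curl/curl.h>

  struct upload {
    const char *data;
    size_t left;
  };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    struct upload *u = userp;
    size_t room = size * nitems;
    size_t n = (u->left < room) ? u->left : room;
    memcpy(buf, u->data, n);
    u->data += n;
    u->left -= n;
    return n;                           /* 0 ends the request body */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct upload u = { "field=value", strlen("field=value") };
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Transfer-Encoding: chunked");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_POST, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &u);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }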
|
|
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified test 8 and 31 to verify this.
|
|
during configure.
|
|
by user 'koresh' introduced the --crlfile option to curl, which makes curl
tell libcurl about a file with CRL (certificate revocation list) data to
read.
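On the library side this presumably maps to the existing CURLOPT_CRLFILE
option; a minimal sketch (file name and URL are placeholders):

  /* Minimal sketch: point libcurl at a CRL file for an HTTPS transfer.
     The file name and URL are placeholders. */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_CRLFILE, "revoked.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }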
|
|
that now makes curl_getdate(3) actually handle RFC 822 formatted dates that
use the "single letter military timezones".
http://www.rfc-ref.org/RFC-TEXTS/822/chapter5.html has the details.
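A quick check of the new behavior (the date string is just an example; "Z"
is one of the single-letter military zones and equals UTC):

  /* Sketch: parse an RFC 822 date that uses a single-letter military
     time zone. curl_getdate() returns -1 if it cannot parse the date. */
  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    time_t t = curl_getdate("Fri, 30 Oct 2009 14:30:00 Z", NULL);
    if(t != (time_t)-1)
      printf("parsed to %ld seconds since the epoch\n", (long)t);
    return 0;
  }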
|
|
data!
|
|
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem which
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
|
|
libcurl called NSS to close the SSL "session" it also closed the actual
socket.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out exactly
why.
|
|
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
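The logic in rough, generic socket terms (a sketch only, not the actual
patch; 16384 merely stands in for the internal CURL_WRITE_SIZE value):

  /* Sketch of the described logic with plain sockets: only raise
     SO_SNDBUF to the write size, never shrink a buffer that is already
     larger. The 16384 value is a stand-in for CURL_WRITE_SIZE. */
  #include <sys/types.h>
  #include <sys/socket.h>

  static void adjust_sndbuf(int sockfd)
  {
    const int wanted = 16384;
    int current = 0;
    socklen_t len = sizeof(current);

    if((getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &current, &len) == 0) &&
       (current < wanted))
      setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));
  }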
|
|
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow users to fairly easily rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server by mistake or willingly sends a header that is more or
less without an end. The limit is set to 100K.
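For context, that buffer is what eventually gets handed to the header
callback one complete header at a time; a minimal sketch of such a callback
(the handler itself is a placeholder):

  /* Minimal sketch: a header callback receives each complete header,
     which is why libcurl buffers (and therefore has to cap) whole
     headers. The data is not necessarily zero terminated. */
  #include <stdio.h>
  #include <curl/curl.h>

  static size_t header_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    (void)userp;
    fwrite(buf, size, nitems, stderr);  /* log the full header line */
    return size * nitems;               /* any other value aborts */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }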
|
|
saving received cookies with no given path, if the path in the request had a
query part. That means a question mark (?) and the characters to the right
of it. I wrote test case 1105 and fixed this problem.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2861587) identifying that libcurl used
the OpenSSL function X509_load_crl_file() wrongly and failed if it would
load a CRL file with more than one certificate within. This is now fixed.
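For reference, the usual OpenSSL pattern for loading a (possibly
multi-entry) CRL file into a verification store looks roughly like this; the
file name is a placeholder and error handling is trimmed:

  /* Rough sketch, not the libcurl code: load a PEM CRL file into an
     X509_STORE and enable CRL checking. X509_load_crl_file() returns
     the number of objects loaded, which can legitimately be larger
     than one for a file holding several CRLs. */
  #include <openssl/ssl.h>
  #include <openssl/x509_vfy.h>

  static int add_crl(SSL_CTX *ctx, const char *crlfile)
  {
    X509_STORE *store = SSL_CTX_get_cert_store(ctx);
    X509_LOOKUP *lookup = X509_STORE_add_lookup(store, X509_LOOKUP_file());

    if(!lookup || (X509_load_crl_file(lookup, crlfile, X509_FILETYPE_PEM) < 1))
      return 0;                              /* nothing could be loaded */

    X509_STORE_set_flags(store,
                         X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
    return 1;
  }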
|
|
powered libcurl in 7.19.6. If there was an X509v3 Subject Alternative Name
field in the certificate it had to match, so even if a non-DNS and non-IP
entry was present it caused the verification to fail.
|
|
start second "Thu Jan 1 00:00:00 GMT 1970", as the date parser then returns 0,
which internally is treated as a session cookie. That particular date
is now made to get the value of 1.
|
|
errors.
|
|
libcurl to resolve 'localhost' whatever name you use in the URL *if* you set
the --interface option to (exactly) "LocalHost". This will enable us to
write tests for custom host names but still use a local host server.
|
|
when cross-compiling. The key to success is to properly set up
PKG_CONFIG_PATH before invoking configure.
I also improved how NSS is detected, by trying nss-config if pkg-config isn't
present, and as a last resort just using the lib name and forcing the user to
set up the LIBS/LDFLAGS/CFLAGS etc properly. The previous last resort would
add a range of various libs that would almost never be quite correct.
|
|
QUOTE commands and the request used the same path as the connection had
already changed to, it would decide that no commands were necessary for
the "DO" action, and that case was not handled properly; libcurl would
instead hang.
|
|
properly and provided a fix. http://curl.haxx.se/bug/view.cgi?id=2843008
|
|
read stdin in a non-blocking fashion. This also brings back -T- (minus) to
the previous blocking behavior since it could break stuff for people at
times.
|
|
strdup() that could lead to a segfault if it returned NULL. I extended his
suggested patch to have Curl_retry_request() return a regular return code
and to check it better.
|
|
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
|
|
sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.
|
|
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
|
|
should introduce an option to disable SNI, but as we're in feature freeze
now I've addressed the obvious bug here (pointed out by Peter Sylvester): we
shouldn't try to enable SNI when SSLv2 or SSLv3 is explicitly selected.
Code for OpenSSL and GnuTLS was fixed. NSS doesn't seem to have a particular
option for SNI, or are we simply not using it?
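The OpenSSL side of that guard amounts to something along these lines (a
sketch, not the committed code; the sslversion and host parameters are
assumptions standing in for the CURLOPT_SSLVERSION selection and the name
from the URL):

  /* Sketch: only request SNI when the user did not explicitly force
     SSLv2 or SSLv3. The sslversion and host parameters are assumptions
     standing in for libcurl's internal state. */
  #include <curl/curl.h>
  #include <openssl/ssl.h>

  static void maybe_enable_sni(SSL *ssl, long sslversion, const char *host)
  {
  #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
    if((sslversion != CURL_SSLVERSION_SSLv2) &&
       (sslversion != CURL_SSLVERSION_SSLv3))
      SSL_set_tlsext_host_name(ssl, host);
  #else
    (void)ssl; (void)sslversion; (void)host;
  #endif
  }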
|
|
(http://curl.haxx.se/bug/view.cgi?id=2829955) mentioning the recent SSL cert
verification flaw found and exploited by Moxie Marlinspike. The presentation
he did at Black Hat is available here:
https://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Marlinspike
Apparently at least one CA allowed a subjectAltName or CN that contained a
zero byte, and thus clients that assumed such names would never have zero
bytes were exploited to OK a certificate that didn't actually match the site.
Like if the name in the cert was "example.com\0theactualsite.com", libcurl
would happily verify that cert for example.com.
libcurl now properly uses the length of the extracted name and doesn't assume
it is zero terminated.
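The essence of the safer check (a generic sketch, not the exact libcurl
code) is to trust the length ASN.1 reports and to reject names carrying
embedded zero bytes, rather than relying on strlen():

  /* Generic sketch: compare a certificate name against the host name
     using the ASN.1 length, and reject names with embedded zero bytes
     instead of assuming zero termination. Not the exact libcurl code. */
  #include <string.h>
  #include <openssl/asn1.h>

  static int name_matches(ASN1_STRING *name, const char *hostname)
  {
    const unsigned char *data = ASN1_STRING_data(name);
    int len = ASN1_STRING_length(name);

    if((len <= 0) || memchr(data, '\0', (size_t)len))
      return 0;                         /* embedded zero byte: no match */

    return (strlen(hostname) == (size_t)len) &&
           (memcmp(data, hostname, (size_t)len) == 0);
  }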
|
|
only in some OpenSSL installs - like on Windows) isn't thread-safe and we
agreed that moving it to the global_init() function is a decent way to deal
with this situation.
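The practical advice for applications stays the same and bears repeating: do
the one-time global setup before any threads start using libcurl. A minimal
sketch:

  /* Minimal sketch: curl_global_init() is not thread-safe, so call it
     once from the main thread before any other thread touches libcurl. */
  #include <curl/curl.h>

  int main(void)
  {
    curl_global_init(CURL_GLOBAL_ALL);

    /* ... start threads, each using its own CURL easy handle ... */

    curl_global_cleanup();
    return 0;
  }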
|
|
CURLOPT_NOPROXY to "*", or to a host that should not use a proxy, you could
actually still end up using a proxy if a proxy environment variable was set.
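For reference, the option in question (URL and pattern are placeholders):

  /* Minimal sketch: CURLOPT_NOPROXY set to "*" should make libcurl
     bypass any proxy, including ones picked up from the environment. */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example.com/");
      curl_easy_setopt(curl, CURLOPT_NOPROXY, "*");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }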
|
|
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
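Usage looks like this (a sketch; commands, URL and credentials are
placeholders), where the leading '*' makes a failing response non-fatal:

  /* Sketch: the leading '*' tells libcurl to ignore a failure response
     to that particular command instead of aborting. Commands, URL and
     credentials are placeholders. */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *cmds = NULL;
      cmds = curl_slist_append(cmds, "*DELE maybe-missing.tmp"); /* may fail */
      cmds = curl_slist_append(cmds, "MKD upload"); /* a failure here aborts */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_setopt(curl, CURLOPT_QUOTE, cmds);
      curl_easy_perform(curl);
      curl_slist_free_all(cmds);
      curl_easy_cleanup(curl);
    }
    return 0;
  }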
|
|
out that OpenSSL-powered libcurl didn't support the SHA-2 digest algorithm,
and provided the solution too: to use OpenSSL_add_all_algorithms() instead
of the older SSLeay_* alternative. OpenSSL_add_all_algorithms was added in
OpenSSL 0.9.5.
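In application terms the difference is simply which init call registers the
digests; a tiny sketch (the "sha256" lookup just demonstrates that SHA-2 is
then available):

  /* Sketch: OpenSSL_add_all_algorithms() registers the SHA-2 digests
     that the older SSLeay-style init did not make available. */
  #include <stdio.h>
  #include <openssl/evp.h>

  int main(void)
  {
    OpenSSL_add_all_algorithms();
    printf("sha256 is %savailable\n",
           EVP_get_digestbyname("sha256") ? "" : "not ");
    return 0;
  }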
|
|
They introduce known_host support for SSH keys to libcurl. See docs for
details.
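At its simplest the new support looks like this (a sketch; paths and URL are
placeholders, and a real application would typically also install a
CURLOPT_SSH_KEYFUNCTION callback to decide what to do on a mismatch):

  /* Sketch: point libcurl at a known_hosts file so the server's SSH key
     gets checked against it. Paths and URL are placeholders. */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://sftp.example.com/file.txt");
      curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                       "/home/user/.ssh/known_hosts");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }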
|
|
(https://bugzilla.novell.com/523919). When looking at the code, I found
that the ptr pointer can also leak.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
Url A is accessed using auth. Url A redirects to Url B (on a different
server). Url B reuses a persistent connection. Url B has auth, even though
it's on a different server.
Note: if Url B does not reuse a persistent connection, auth is not sent.
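The scenario translates to roughly this setup (a sketch; URL and credentials
are placeholders), where the credentials given for Url A must not be re-sent
to the different host Url B lives on:

  /* Sketch of the reported scenario: credentials are set for the first
     URL and redirects are followed; they must not leak to the different
     host the redirect points to. Everything here is a placeholder. */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://a.example.com/start");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }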