sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.
|
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
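A minimal sketch of the rule this fix enforces, written with plain OpenSSL
calls (not libcurl's actual code; the helper names are made up): once the
certificate carries any dNSName subjectAltName entries, only those may be
matched and the CN must not be used as a fallback.

    #include <openssl/x509v3.h>
    #include <string.h>
    #include <strings.h>

    /* naive host comparison using the explicit length; real code also
       handles wildcards */
    static int name_matches(const char *name, size_t len, const char *host)
    {
      return len == strlen(host) && strncasecmp(name, host, len) == 0;
    }

    static int verify_host(X509 *cert, const char *host)
    {
      int matched = 0, saw_dns_san = 0, i;
      GENERAL_NAMES *sans =
        X509_get_ext_d2i(cert, NID_subject_alt_name, NULL, NULL);

      if(sans) {
        for(i = 0; i < sk_GENERAL_NAME_num(sans); i++) {
          const GENERAL_NAME *gn = sk_GENERAL_NAME_value(sans, i);
          if(gn->type == GEN_DNS) {
            saw_dns_san = 1;
            if(name_matches((const char *)ASN1_STRING_data(gn->d.dNSName),
                            (size_t)ASN1_STRING_length(gn->d.dNSName), host))
              matched = 1;
          }
        }
        GENERAL_NAMES_free(sans);
      }

      if(saw_dns_san)
        return matched; /* a non-matching SAN list must not fall back to CN */

      /* only when no dNSName SAN exists may the subject CN be consulted */
      return 0;
    }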
|
should introduce an option to disable SNI, but as we're in feature freeze
now I've addressed the obvious bug here (pointed out by Peter Sylvester): we
shouldn't try to enable SNI when SSLv2 or SSLv3 is explicitly selected.
Code for OpenSSL and GnuTLS was fixed. NSS doesn't seem to have a particular
option for SNI, or are we simply not using it?
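A rough illustration of the fix (a sketch, not the actual libcurl code; the
version enum is a made-up stand-in for the application's own setting): the
server_name extension only exists for TLS, so SNI setup is skipped when
SSLv2 or SSLv3 is explicitly requested.

    #include <openssl/ssl.h>

    enum ssl_choice { SSLVER_DEFAULT, SSLVER_TLSv1, SSLVER_SSLv2, SSLVER_SSLv3 };

    static void maybe_set_sni(SSL *ssl, enum ssl_choice ver, const char *host)
    {
      /* no server_name extension in SSLv2/SSLv3, so don't even try */
      if(ver == SSLVER_SSLv2 || ver == SSLVER_SSLv3)
        return;
    #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
      SSL_set_tlsext_host_name(ssl, host);
    #else
      (void)ssl;
      (void)host;
    #endif
    }

The GnuTLS side would guard its gnutls_server_name_set() call the same way.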
|
(http://curl.haxx.se/bug/view.cgi?id=2829955) mentioning the recent SSL cert
verification flaw found and exploited by Moxie Marlinspike. The presentation
he did at Black Hat is available here:
https://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Marlinspike
Apparently at least one CA allowed a subjectAltName or CN that contains a
zero byte, and thus clients that assumed they would never have zero bytes
were exploited to OK a certificate that didn't actually match the site. For
example, if the name in the cert was "example.com\0theactualsite.com",
libcurl would happily verify that cert for example.com.
libcurl now properly uses the length of the extracted name instead of
assuming it is zero terminated.
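A tiny illustration of the point (a sketch, not libcurl's code): an ASN.1
string may legally contain a zero byte, so a C-string comparison gets
truncated at that byte, while a length-aware comparison does not.

    #include <openssl/asn1.h>
    #include <string.h>

    static int cert_name_equals(ASN1_STRING *name, const char *host)
    {
      const char *data = (const char *)ASN1_STRING_data(name);
      size_t len = (size_t)ASN1_STRING_length(name);

      /* WRONG: strcmp() stops at the embedded zero byte, so
         "example.com\0evil.example" would appear to equal "example.com" */
      /* return strcmp(data, host) == 0; */

      /* RIGHT: the full extracted length has to match */
      return len == strlen(host) && memcmp(data, host, len) == 0;
    }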
|
only in some OpenSSL installs - like on Windows) isn't thread-safe and we
agreed that moving it to the global_init() function is a decent way to deal
with this situation.
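A usage sketch of what that means for applications: do the single
curl_global_init() call (which now also covers that engine loading) before
any threads are started, since that part is not thread-safe.

    #include <curl/curl.h>
    #include <pthread.h>

    static void *worker(void *arg)
    {
      CURL *h = curl_easy_init();
      /* ... per-thread transfers ... */
      curl_easy_cleanup(h);
      return arg;
    }

    int main(void)
    {
      pthread_t tid;

      curl_global_init(CURL_GLOBAL_ALL); /* once, while single-threaded */

      pthread_create(&tid, NULL, worker, NULL);
      pthread_join(tid, NULL);

      curl_global_cleanup();             /* once, after all threads ended */
      return 0;
    }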
|
CURLOPT_NOPROXY to "*", or to a host that should not use a proxy, I could
actually still end up using a proxy if a proxy environment variable was set.
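For reference, a minimal example of the option in question; with the fix,
this setting wins over any proxy environment variables:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h;
      curl_global_init(CURL_GLOBAL_ALL);
      h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(h, CURLOPT_NOPROXY, "*"); /* never go through a proxy */
      curl_easy_perform(h);
      curl_easy_cleanup(h);
      curl_global_cleanup();
      return 0;
    }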
|
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
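A short usage sketch of the new prefix (the host name and commands are just
examples): the '*' marks a command whose failure should be ignored.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h;
      struct curl_slist *cmds = NULL;

      curl_global_init(CURL_GLOBAL_ALL);
      h = curl_easy_init();

      cmds = curl_slist_append(cmds, "*DELE old.txt"); /* may fail, ignored */
      cmds = curl_slist_append(cmds, "MKD uploads");   /* failure aborts */

      curl_easy_setopt(h, CURLOPT_URL, "ftp://ftp.example.com/");
      curl_easy_setopt(h, CURLOPT_QUOTE, cmds);
      curl_easy_perform(h);

      curl_slist_free_all(cmds);
      curl_easy_cleanup(h);
      curl_global_cleanup();
      return 0;
    }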
|
out that OpenSSL-powered libcurl didn't support the SHA-2 digest algorithm,
and provided the solution too: to use OpenSSL_add_all_algorithms() instead
of the older SSLeay_* alternative. OpenSSL_add_all_algorithms() was added in
OpenSSL 0.9.5.
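A small standalone illustration of why that call matters: on OpenSSL
versions of that era, EVP lookups of the SHA-2 digests come back empty
unless the full algorithm table has been registered.

    #include <openssl/evp.h>
    #include <stdio.h>

    int main(void)
    {
      OpenSSL_add_all_algorithms(); /* also registers SHA-224/256/384/512 */

      if(EVP_get_digestbyname("sha256"))
        puts("SHA-256 digest available");
      else
        puts("SHA-256 digest NOT available");
      return 0;
    }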
|
They introduce known_host support for SSH keys to libcurl. See docs for
details.
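A minimal usage sketch of the new option (the URL and file path are
placeholders; the key callback that the docs describe is left out):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h;
      curl_global_init(CURL_GLOBAL_ALL);
      h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "sftp://user@example.com/file.txt");
      curl_easy_setopt(h, CURLOPT_SSH_KNOWNHOSTS,
                       "/home/user/.ssh/known_hosts");
      curl_easy_perform(h);
      curl_easy_cleanup(h);
      curl_global_cleanup();
      return 0;
    }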
|
(https://bugzilla.novell.com/523919). When looking at the code, I found
that the ptr pointer can also leak.
|
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
URL A is accessed using auth. URL A redirects to URL B (on a different
server). URL B reuses a persistent connection. URL B has auth, even though
it's on a different server.
Note: if URL B does not reuse a persistent connection, auth is not sent.
|
range if given colon-separated after the host name/address part. Like
"192.168.0.1:2000-10000"
|
(no newline translations). Use -B/--use-ascii if you would rather get the
ASCII approach.
|
provided in the URL, add it there (Squid needs this).
|
This allows curl(1) to be used as a client-side tunnel for arbitrary stream
protocols by abusing chunked transfer encoding in both the HTTP request and
HTTP response. This requires server support for sending a response while a
request is still being read, of course.
If attempting to read from stdin returns EAGAIN, then we pause our sender.
This lets curl keep trying to read from the socket while reading from stdin
(and thus sending) is paused.
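At the libcurl level, the pausing idea corresponds roughly to a read
callback that returns CURL_READFUNC_PAUSE when its source isn't ready; a
sketch (not the curl tool's actual code) with a non-blocking descriptor:

    #include <curl/curl.h>
    #include <errno.h>
    #include <unistd.h>

    /* set with CURLOPT_READFUNCTION; userdata set with CURLOPT_READDATA */
    static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
    {
      int fd = *(int *)userdata;            /* e.g. non-blocking stdin */
      ssize_t n = read(fd, buf, size * nitems);

      if(n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return CURL_READFUNC_PAUSE;         /* pause the sending side */
      if(n <= 0)
        return 0;                           /* end of the upload */
      return (size_t)n;
    }

    /* later, when the descriptor is readable again:
       curl_easy_pause(handle, CURLPAUSE_CONT); */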
|
contributed a range of patches to fix them.
|
issue with client certs that caused problems such as segfaults.
http://curl.haxx.se/mail/lib-2009-05/0316.html
|
to detect GnuTLS build options with pkg-config only and no longer with
libgnutls-config, since GnuTLS has stopped distributing that tool. If an
explicit path is given to configure, we will instead guess how to link and
use that lib. I did not use the patch from the bug report.
|
- Added some cmake docs and fixed socklen_t in the build.
|
broken since 7.19.0
|
is almost always a VERY BAD IDEA. Yet there are still apps out there doing
this, and now recently it triggered a bug/side-effect in libcurl as when
libcurl sends a POST or PUT with NTLM, it sends an empty POST first when it
knows it will just get a 401/407 back. If the app then replaced the
Content-Length header, it caused the server to wait for input that libcurl
wouldn't send. Aaron Oneal reported this problem in bug report #2799008
(http://curl.haxx.se/bug/view.cgi?id=2799008) and helped us verify the fix.
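To make the "bad idea" concrete, this is the kind of thing such apps do (a
sketch; the header value and credentials are placeholders). libcurl already
sets Content-Length itself from CURLOPT_POSTFIELDSIZE, so overriding it via
CURLOPT_HTTPHEADER clashes with the deliberately empty NTLM probe request:

    #include <curl/curl.h>

    static void dont_do_this(CURL *h, const char *data, long len)
    {
      struct curl_slist *hdrs = NULL;

      /* overriding the header that libcurl manages itself */
      hdrs = curl_slist_append(hdrs, "Content-Length: 1048576");
      curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);

      curl_easy_setopt(h, CURLOPT_POSTFIELDS, data);
      curl_easy_setopt(h, CURLOPT_POSTFIELDSIZE, len);
      curl_easy_setopt(h, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);
      curl_easy_setopt(h, CURLOPT_USERPWD, "user:secret");
      /* the list must outlive the transfer and be freed afterwards with
         curl_slist_free_all(hdrs) */
    }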
|
without pkg-config.
|
PK11_CreateGenericObject() function.
|
the auth credentials back in 7.19.0 and earlier, while now you have to set
"" to get the same effect. His patch brings back the ability to use NULL.
|
for a failure properly.
|
fine with Nokia 5th edition 1.0 SDK for Symbian.
|
out that the cookie parser would leak memory when it parses cookies that are
received with domain, path etc. set multiple times in the same header. While
such cookies are questionable, they occur in the wild and libcurl no longer
leaks memory for them. I added such a header to test case 8.
|
https://bugzilla.redhat.com/show_bug.cgi?id=427966 which was libcurl closing
a bad file descriptor when closing down the FTP data connection. Caolan
McNamara seems to be the original author of it.
|
no_proxy which made it not skip the proxy if the URL entered contained a
user name. I added test case 1101 to verify.
|
of streams that had some parts (legitimately) missing. We now provide and use
a proper cleanup function for the content encoding submodule.
http://curl.haxx.se/mail/lib-2009-05/0092.html
|
at https://bugzilla.redhat.com/show_bug.cgi?id=453612#c12
If an incorrect password is given while loading a private key, libcurl ends
up in an infinite loop consuming memory. The bug is critical.
|
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
|
to work so I'll let it remain and here's the comment about it! From Lenaic's
mail posted to curl-library Date: Fri, 1 May 2009 22:46:14 +0200.
|
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem to
connect to SOCKS proxies when using the multi interface. It turned out to
hardly work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
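A bare-bones sketch of the combination the fix targets (the proxy address
and URL are placeholders; a real application would wait on
curl_multi_fdset() instead of spinning):

    #include <curl/curl.h>

    int main(void)
    {
      CURLM *multi;
      CURL *h;
      int running = 1;

      curl_global_init(CURL_GLOBAL_ALL);
      multi = curl_multi_init();
      h = curl_easy_init();

      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(h, CURLOPT_PROXY, "127.0.0.1:1080");
      curl_easy_setopt(h, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);

      curl_multi_add_handle(multi, h);
      while(running)
        curl_multi_perform(multi, &running);

      curl_multi_remove_handle(multi, h);
      curl_easy_cleanup(h);
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }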
|
reported in the Debian package.
|
(http://curl.haxx.se/bug/view.cgi?id=2723236) identifying a problem with
libcurl's TFTP code and its lack of dealing with the OACK packet.
|
corresponding fix in the GnuTLS code: make sure to store the new session id
in case the re-used one is rejected.
|
(http://curl.haxx.se/bug/view.cgi?id=2786255) with a patch, identifying how
libcurl did not deal with SSL session ids properly if the server rejected a
re-use of one. Starting now, it will forget the rejected one and remember
the new one. This change was for OpenSSL only; it is likely that other SSL
lib code needs similar fixes.
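A rough OpenSSL-level sketch of the new behaviour (not libcurl's actual
cache code): after the handshake, if the offered session was not re-used,
the cached one is dropped and the fresh session is stored instead.

    #include <openssl/ssl.h>

    static void refresh_cached_session(SSL *ssl, SSL_SESSION **cached)
    {
      if(!SSL_session_reused(ssl)) {
        /* the server rejected the old id: forget it, remember the new one */
        if(*cached)
          SSL_SESSION_free(*cached);
        *cached = SSL_get1_session(ssl); /* ref-counted copy of the session */
      }
    }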
|
platform HTTP requests" patch
|