|
easy handle if curl_easy_reset() was used between them. I fixed it and Brian
verified that it cured his problem.
- Brian Ulm reported that if you first tried to download a non-existing SFTP
file and then fetched an existing one and re-used the handle, libcurl would
still report the second one as non-existing as well! I fixed it and Brian
verified that it cured his problem.
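
A minimal sketch of the re-use pattern this fix addresses; the URLs and the
missing authentication details are illustrative, not from the entry:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode rc;
      if(!curl)
        return 1;
      /* first request: a file that does not exist on the server */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "sftp://user@example.com/no-such-file.txt");
      rc = curl_easy_perform(curl); /* expected to fail */
      /* second request: an existing file, re-using the same handle;
         before the fix this could wrongly be reported as missing too */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "sftp://user@example.com/existing-file.txt");
      rc = curl_easy_perform(curl); /* should now succeed */
      curl_easy_cleanup(curl);
      return (int)rc;
    }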
|
|
Michael Calmer)
|
|
select/poll calls will only be retried upon EINTR failures, as was
previously the case in lib/select.c revision 1.29. In this way
Curl_socket_ready() and Curl_poll() will again fail on any select/poll
errors other than EINTR.
|
|
|
|
a re-used connection where both requests used Negotiate.
|
|
forces it to prefer SSLv3.
|
|
http://curl.haxx.se/bug/feature.cgi?id=1900014 that makes libcurl (built to
use OpenSSL) support a full chain of certificates in a given PKCS12
certificate.
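
A hedged sketch of loading a PKCS12 client certificate with an
OpenSSL-built libcurl; 'curl' is an already initialized easy handle and the
file name and passphrase are assumptions:

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "P12");
    curl_easy_setopt(curl, CURLOPT_SSLCERT, "client.p12");
    curl_easy_setopt(curl, CURLOPT_KEYPASSWD, "secret");
    /* with this change, intermediate certs bundled in the PKCS12 file are
       sent along as a full chain */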
|
|
options as the lib/Makefile.vc6 already did.
|
|
happened if you set the connection cache size to 1 and for example failed to
log in to an FTP site. Bug report #1896698
(http://curl.haxx.se/bug/view.cgi?id=1896698)
|
|
|
|
better control over the exact state of the connection's SSL status, so that
we know exactly whether the SSL negotiation has completed or not and there
won't be accidental re-use of connections that are wrongly believed to be in
an SSL-completed-negotiate state.
|
|
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with, and since it doesn't work, hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
|
|
get a fresh one downloaded and created with 'make ca-bundle' or you can get
one from here => http://curl.haxx.se/docs/caextract.html if you want a fresh
new one extracted from Mozilla's recent list of ca certs.
The configure option --with-ca-bundle now lets you specify what file to use
as default ca bundle for your build. If not specified, the configure script
will check a few known standard places for a global ca cert to use.
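
For reference, an application can also point libcurl at a specific bundle at
run time regardless of the configured default; the path below is only an
example and 'curl' is an initialized easy handle:

    curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);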
|
|
connection by force when it was called before the entire request had
completed, simply because we can't know whether the connection really can be
re-used safely at that point.
|
|
verification is requested. Previously it would even return failure if gnutls
failed to get the server cert even though no verification was asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code
|
|
|
|
HTTP request and then you'd reuse the handle and replace the Accept: header,
as then libcurl would send two Accept: headers!
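
A small sketch of the header-replacement case that used to produce the
duplicate; 'curl' is a re-used easy handle and the header values are just
examples:

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Accept: application/xml");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);        /* first request */
    curl_slist_free_all(hdrs);
    hdrs = curl_slist_append(NULL, "Accept: text/html");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);        /* re-used handle: only one Accept: sent */
    curl_slist_free_all(hdrs);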
|
|
out and provides a test program that demonstrates that libcurl might not set
error description message for error CURLE_COULDNT_RESOLVE_HOST for Windows
threaded name resolver builds. Fixed now.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
layer, cleaning-up and reinitializing curl ends up with https requests
failing with "ASN1 parser: Element was not found" errors. Obviously a
regression added in 7.16.3.
|
|
creates a suitable ca-bundle.crt file in PEM format for use with curl. The
recommended way to run it is to use 'make ca-bundle' in the build tree root.
|
|
them all use the same (hopefully correct) logic to make it less error-prone
and easier to introduce library-wide where it should be used.
|
|
|
|
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
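
A minimal sketch of an application that exercises the cookie engine and a
cookie jar (HttpOnly cookies received in headers are now kept and written
out too); the jar file name is illustrative and 'curl' is an easy handle:

    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");           /* enable engine */
    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt"); /* written at cleanup */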
|
|
crash!
|
|
working on other IP-addresses or port numbers.
|
|
|
|
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
proxy authentication and the proxy replies with an auth (like NTLM) and then
closes the connection after that initial informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it was supposed to. It does now, however,
as it will detect if one or more authentication methods were available and
asked for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.
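
A hedged sketch of the scenario: tunneling through a proxy with NTLM auth
and a progress callback that now also fires during CONNECT. The proxy
address, credentials and callback body are assumptions:

    static int progress_cb(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
    {
      /* now also called while the proxy CONNECT is being negotiated */
      return 0; /* non-zero would abort the transfer */
    }

    /* on the easy handle: */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
    curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_NTLM);
    curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:password");
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);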
|
|
|
|
did "SESS". Fixed now.
|
|
that it is bad anyway. Starting now, removing a handle that is in use in a
pipeline will break the pipeline - it'll be set back up again but still...
|
|
|
|
|
|
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
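
The affected call pattern, roughly: a plain select() loop driven by
curl_multi_fdset(), where 'multi' is an assumed CURLM handle:

    fd_set rd, wr, exc;
    int maxfd = -1;
    int still_running = 0;
    struct timeval tv = { 1, 0 };
    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&exc);
    curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);
    /* during the CONNECT state the socket is now reported here too */
    if(maxfd >= 0)
      select(maxfd + 1, &rd, &wr, &exc, &tv);
    curl_multi_perform(multi, &still_running);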
|
|
|
|
out what valgrind to run.
|
|
data url encoded HTTP POSTs when reading it from a file.
|
|
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline nearly as nicely
as one would think... Test case 530 was also updated to take the
improved functionality into account.
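
A sketch of the pattern the improvement targets: enable pipelining on the
multi handle and add several easy handles at once. The transfer count and
URL are illustrative:

    CURLM *multi = curl_multi_init();
    int i;
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
    for(i = 0; i < 5; i++) {
      CURL *e = curl_easy_init();
      curl_easy_setopt(e, CURLOPT_URL, "http://example.com/file.txt");
      /* requests to the same host can now pipeline from the start */
      curl_multi_add_handle(multi, e);
    }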
|
|
function itself adds that. Fixed in 50 or so strings!
|
|
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
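
The hang case looked roughly like this: a large POST whose body is supplied
by a read callback. 'fp' and 'file_size' are placeholders for application
state:

    static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
    {
      /* supply up to size*nmemb bytes of request body, 0 when done */
      return fread(ptr, size, nmemb, (FILE *)userp);
    }

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, fp);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE,
                     (curl_off_t)file_size);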
|
|
their long option names and all descriptions are one-liners.
|
|
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep a consistent naming and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
|
|
already worked for FTP:// URLs.
|
|
(it already skipped /usr/lib before). /usr/lib64 is the default library
directory on many 64bit systems and it's unlikely that anyone would use the
path privately on systems where it's not.
|
|
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming begins
(which of course can be a slow operation depending on file size, I/O
bandwidth and more). This new callback will also be preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
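
A hedged sketch of the new seek callback used together with a resumed
upload; the file handling and 'resume_offset' are illustrative assumptions:

    static int seek_cb(void *userp, curl_off_t offset, int origin)
    {
      FILE *fp = (FILE *)userp;
      /* origin is SEEK_SET/SEEK_CUR/SEEK_END as for fseek() */
      return fseek(fp, (long)offset, origin) ? 1 : 0; /* 0 means success */
    }

    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READDATA, fp);
    curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
    curl_easy_setopt(curl, CURLOPT_SEEKDATA, fp);
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE,
                     (curl_off_t)resume_offset);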
|
|
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line that the parser wasn't written to deal with.
|
|
and the write callbacks that now can make a connection's reading and/or
writing get paused.
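
A hedged sketch of the new pause mechanism: a write callback can return
CURL_WRITEFUNC_PAUSE to stop delivery, and curl_easy_pause() resumes it
later. 'pause_wanted' and 'handle' stand in for application state:

    static int pause_wanted; /* hypothetical application flag */

    static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *userp)
    {
      if(pause_wanted)
        return CURL_WRITEFUNC_PAUSE;  /* stop delivery until unpaused */
      /* ... otherwise consume size*nmemb bytes from ptr ... */
      return size * nmemb;
    }

    /* later, when the application is ready for more data again: */
    curl_easy_pause(handle, CURLPAUSE_CONT);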
|
|
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone which actually is used
fairly often (like New Zealand's Daylight Saving Time), so I modified the
parser to now accept up to and including -1400 to +1400.
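
A quick check of the widened time zone range with curl_getdate(); the date
string is just an example:

    #include <curl/curl.h>

    time_t t = curl_getdate("Sun, 06 Jan 2008 12:00:00 +1300", NULL);
    /* used to come back as -1 (parse failure) for zones beyond +1200,
       now returns a valid time_t */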
|
|
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
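
A hedged sketch of asking for proxy-side name resolving; the proxy address
and target URL are examples and 'curl' is an easy handle:

    /* let the SOCKS5 proxy resolve the host name instead of doing it
       locally */
    curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5_HOSTNAME);
    /* command line equivalent:
       curl --socks5-hostname socksproxy.example.com:1080 http://example.com/ */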
|
|
|
|
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used do we resolve the host name locally
and pass only the IP address on to the proxy.
|