- Added macros for curl_share_setopt and curl_multi_setopt that at least
check for the correct number of arguments.
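  As an aside (not from the original entry), a minimal sketch of what the
  macros are meant to enforce: both functions always take exactly three
  arguments, a handle, an option and a value.

    #include <curl/curl.h>

    int main(void)
    {
      CURLSH *share = curl_share_init();
      CURLM *multi = curl_multi_init();

      /* every *_setopt call takes a handle, an option and one value */
      curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
      curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

      curl_multi_cleanup(multi);
      curl_share_cleanup(share);
      return 0;
    }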
- reproduce the --ftp-create-dirs problem reported by Brian Ulm, but that
seems to need a call to curl_easy_reset(), which this test case doesn't do.
- CURLOPT_FTP_CREATE_MISSING_DIRS to create a directory, and then re-used the
handle and uploaded another file to another directory that needed to be
created, the second upload would fail. Another case of a state variable that
wasn't properly reset between requests; see the sketch after this entry.
- I rewrote the 100-continue code to use a single state variable instead of
the previous two. I think it made the logic somewhat clearer.
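  A minimal sketch of the re-use scenario described above (the file names and
  the ftp://example.com/ URLs are made up for illustration):

    #include <stdio.h>
    #include <curl/curl.h>

    /* upload one local file to an FTP URL, creating missing directories */
    static CURLcode upload(CURL *curl, const char *file, const char *url)
    {
      CURLcode res;
      FILE *src = fopen(file, "rb");
      if(!src)
        return CURLE_READ_ERROR;

      curl_easy_setopt(curl, CURLOPT_URL, url);
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS, 1L);
      /* the default read callback reads from this FILE * with fread() */
      curl_easy_setopt(curl, CURLOPT_READDATA, src);

      res = curl_easy_perform(curl);
      fclose(src);
      return res;
    }

    int main(void)
    {
      CURL *curl;
      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();

      /* both target directories must get created on the server */
      upload(curl, "a.txt", "ftp://example.com/new-dir-1/a.txt");
      upload(curl, "b.txt", "ftp://example.com/new-dir-2/b.txt");

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
    }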
- (http://curl.haxx.se/bug/view.cgi?id=1911069) that identified a race
condition in the name resolver code when the DNS cache is shared between
multiple easy handles each running in its own thread, which could cause
crashes.
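  For context, this is roughly how applications share a DNS cache between
  easy handles running in separate threads; the pthread mutex and the URL are
  illustrative assumptions, not taken from the report.

    #include <pthread.h>
    #include <curl/curl.h>

    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

    static void lock_cb(CURL *handle, curl_lock_data data,
                        curl_lock_access access, void *userp)
    {
      (void)handle; (void)data; (void)access; (void)userp;
      pthread_mutex_lock(&mtx);
    }

    static void unlock_cb(CURL *handle, curl_lock_data data, void *userp)
    {
      (void)handle; (void)data; (void)userp;
      pthread_mutex_unlock(&mtx);
    }

    static void *fetch(void *arg)
    {
      CURL *curl = curl_easy_init();
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* attach the share object that holds the DNS cache */
      curl_easy_setopt(curl, CURLOPT_SHARE, (CURLSH *)arg);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return NULL;
    }

    int main(void)
    {
      pthread_t thread[2];
      int i;
      CURLSH *share;

      curl_global_init(CURL_GLOBAL_ALL);
      share = curl_share_init();
      curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
      curl_share_setopt(share, CURLSHOPT_LOCKFUNC, lock_cb);
      curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, unlock_cb);

      for(i = 0; i < 2; i++)
        pthread_create(&thread[i], NULL, fetch, share);
      for(i = 0; i < 2; i++)
        pthread_join(thread[i], NULL);

      curl_share_cleanup(share);
      curl_global_cleanup();
      return 0;
    }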
- does nothing with them, just to make sure libcurl users always use three
arguments to this function. Due to its use of ... for the third argument, it
is otherwise hard to detect abuse.
- works in C mode at the moment
(http://curl.haxx.se/mail/lib-2008-02/0267.html,
http://curl.haxx.se/mail/lib-2008-02/0292.html)
- (test 620 tests the just-fixed problem reported by Brian Ulm).
- easy handle if curl_easy_reset() was used between them. I fixed it and Brian
verified that it cured his problem.
- Brian Ulm reported that if you first tried to download a non-existing SFTP
file and then fetched an existing one and re-used the handle, libcurl would
still report the second one as non-existing as well! I fixed it and Brian
verified that it cured his problem.
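  A sketch of the re-use pattern from that report (host, paths and
  credentials are placeholders); the second transfer must not inherit the
  first one's "file not found" result:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      CURLcode res;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

      /* first transfer: a file that does not exist on the server */
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/no-such-file");
      res = curl_easy_perform(curl);
      printf("first: %d (%s)\n", (int)res, curl_easy_strerror(res));

      /* second transfer on the same handle: an existing file must succeed */
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/existing-file");
      res = curl_easy_perform(curl);
      printf("second: %d (%s)\n", (int)res, curl_easy_strerror(res));

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
    }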
- Michael Calmer)
- select/poll calls will only be retried upon EINTR failures, as was
previously the case in lib/select.c revision 1.29. In this way
Curl_socket_ready() and Curl_poll() will again fail on any select/poll errors
other than EINTR.
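  In generic terms (an illustrative loop, not libcurl's internal code), the
  retry policy amounts to this:

    #include <errno.h>
    #include <poll.h>

    /* wait for readability; retry only when interrupted by a signal */
    static int wait_readable(int fd, int timeout_ms)
    {
      struct pollfd pfd;
      int rc;

      pfd.fd = fd;
      pfd.events = POLLIN;

      do {
        rc = poll(&pfd, 1, timeout_ms);
      } while(rc == -1 && errno == EINTR); /* only EINTR is retried */

      return rc; /* any other error is passed back to the caller */
    }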
- files, as questioned by Mike Protts. SFTP does for me but SCP doesn't, so
test 617 is disabled for now.
- a re-used connection where both requests used Negotiate.
- Patch submitted by Kaspar Brand.
- forces it to prefer SSLv3.
- http://curl.haxx.se/bug/feature.cgi?id=1900014 that makes libcurl (built to
use OpenSSL) support a full chain of certificates in a given PKCS12
certificate.
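  A sketch of how an application passes such a PKCS12 certificate to libcurl
  when built against OpenSSL (file name and pass phrase are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "P12");
      curl_easy_setopt(curl, CURLOPT_SSLCERT, "client.p12");
      curl_easy_setopt(curl, CURLOPT_KEYPASSWD, "secret");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }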
- options as the lib/Makefile.vc6 already did.
- happened if you set the connection cache size to 1 and for example failed
to log in to an FTP site. Bug report #1896698
(http://curl.haxx.se/bug/view.cgi?id=1896698)
- better control of the exact state of the connection's SSL status, so that
we know exactly when it has completed the SSL negotiation or not and there
won't be accidental re-use of connections that are wrongly believed to have
completed the negotiation.
- such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it were a
Location: following. The patch that introduced this feature was done for
7.11.0, but this code and functionality have been broken since about 7.15.4
(March 2006) with the introduction of non-blocking OpenSSL "connects". It was
a hack to begin with, and since it doesn't work and hasn't worked correctly
for a long time and nobody has even noticed, I consider it a very suitable
subject for plain removal. And so it was done.
- get a fresh one downloaded and created with 'make ca-bundle', or you can
get one from http://curl.haxx.se/docs/caextract.html if you want a fresh one
extracted from Mozilla's recent list of CA certs.
The configure option --with-ca-bundle now lets you specify what file to use
as the default CA bundle for your build. If not specified, the configure
script will check a few known standard places for a global CA cert to use.
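  At run-time an application points libcurl at whichever bundle it installed,
  roughly like this (the path is only an example):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* verify the peer against an explicitly chosen CA bundle */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }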
- connection by force when it was called before the entire request had
completed, simply because we can't know if the connection really can be
re-used safely at that point.
- verification is requested. Previously it would return failure even if
gnutls failed to get the server cert, despite no verification having been
asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code.
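  The "no verification requested" case corresponds to a setup like this
  sketch (disabling verification is of course only sensible for testing):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* explicitly ask for no certificate verification at all */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }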
- http request and then you'd reuse the handle and replace the Accept:
header, as then libcurl would send two Accept: headers!
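  A sketch of the header replacement in question, re-using one easy handle
  for two requests (URL and header values are examples):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_slist *first = NULL, *second = NULL;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

      /* first request replaces the default Accept: header */
      first = curl_slist_append(first, "Accept: text/html");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, first);
      curl_easy_perform(curl);

      /* second request on the same handle replaces it again; only a single
         Accept: header must go out on the wire */
      second = curl_slist_append(second, "Accept: application/xml");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, second);
      curl_easy_perform(curl);

      curl_slist_free_all(first);
      curl_slist_free_all(second);
      curl_easy_cleanup(curl);
      return 0;
    }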
- Feb 7 that didn't abort properly on timeouts. These are actually old
problems but now they should be fixed.
- out and provides a test program that demonstrates that libcurl might not
set the error description message for error CURLE_COULDNT_RESOLVE_HOST for
Windows threaded name resolver builds. Fixed now.
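  The error description in question is the one applications read out along
  these lines (a generic sketch; the URL is deliberately unresolvable):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      char errbuf[CURL_ERROR_SIZE] = "";
      CURLcode res;

      curl_easy_setopt(curl, CURLOPT_URL, "http://no-such-host.invalid/");
      curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);

      res = curl_easy_perform(curl);
      if(res == CURLE_COULDNT_RESOLVE_HOST)
        printf("resolve failed: %s (%s)\n", errbuf, curl_easy_strerror(res));

      curl_easy_cleanup(curl);
      return 0;
    }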
- Removed a few unnecessary "requires SSL" statements.
- (http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
layer, cleaning up and reinitializing curl ends up with https requests
failing with "ASN1 parser: Element was not found" errors. Obviously a
regression added in 7.16.3.
- all curl's tests, the generated configuration and key files are fine and a
real connection is established to the test harness sftp server,
authenticating and running a simple sftp remote pwd command.
The verification is done using OpenSSH's or SunSSH's sftp client tool with a
configuration file that uses the same options as the test harness socks
server, except that dynamic forwarding is not used for sftp.
- creates a suitable ca-bundle.crt file in PEM format for use with curl. The
recommended way to run it is to use 'make ca-bundle' in the build tree root.
- --vernum
- them all use the same (hopefully correct) logic to make it less error-prone
and easier to introduce library-wide where it should be used.
- use of the "is_in_pipeline" struct field.
- "HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx. HttpOnly is
now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
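  Application code needs nothing special for this; HttpOnly is handled by the
  ordinary cookie engine, roughly as in this sketch:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* read cookies (HttpOnly ones included) from an existing jar ... */
      curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies.txt");
      /* ... and write them back, keeping the HttpOnly information */
      curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl); /* the cookie jar is written at cleanup */
      return 0;
    }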
- the SingleRequest one to make pipelining better. It is a bit tricky to keep
things related to the actual request and things related to the actual
connection in their right places.
- crash!
- working on other IP-addresses or port numbers.
- pipelining. A broken connection is not restored and we get into an infinite
loop. It happens because of wrong is_in_pipeline values.
- (http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), asking to do
any proxy authentication, and the proxy replies with an auth method (like
NTLM) and then closes the connection after that initial informational
response. libcurl would not properly re-initialize the connection to the
proxy and continue the auth negotiation as it is supposed to. It does now
however, as it will now detect if one or more authentication methods were
available and asked for, and will thus retry the connection and continue
from there.
- I made the progress callback get called properly during proxy CONNECT.
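  A sketch of the affected setup, an authenticating proxy tunnel with a
  progress callback (proxy address and credentials are placeholders):

    #include <stdio.h>
    #include <curl/curl.h>

    static int progress_cb(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
    {
      (void)clientp; (void)dltotal; (void)ultotal; (void)ulnow;
      fprintf(stderr, "downloaded %.0f bytes\n", dlnow);
      return 0; /* returning non-zero aborts the transfer */
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");

      curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
      curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }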
- did "SESS". Fixed now.
- it when sys/poll.h is unavailable
- that it is bad anyway. Starting now, removing a handle that is in use in a
pipeline will break the pipeline - it'll be set back up again but still...
- CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
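  The affected application pattern is the classic curl_multi_fdset() driven
  loop, condensed here as a sketch without full error handling:

    #include <sys/select.h>
    #include <curl/curl.h>

    /* drive a multi handle with select(); curl_multi_fdset() must report
       the socket even while the proxy CONNECT is being negotiated */
    static void drive(CURLM *multi)
    {
      int running = 1;

      while(running) {
        fd_set rd, wr, ex;
        int maxfd = -1;
        long timeout_ms = -1;
        struct timeval tv;

        FD_ZERO(&rd);
        FD_ZERO(&wr);
        FD_ZERO(&ex);

        curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
        curl_multi_timeout(multi, &timeout_ms);
        if(timeout_ms < 0)
          timeout_ms = 1000;

        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        /* with maxfd == -1 there is nothing to wait on yet; the timeout
           alone limits how long we sleep */
        select(maxfd + 1, &rd, &wr, &ex, &tv);

        while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
          ;
      }
    }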