- Patch submitted by Kaspar Brand.

- forces it to prefer SSLv3.

- http://curl.haxx.se/bug/feature.cgi?id=1900014 that makes libcurl (built to
  use OpenSSL) support a full chain of certificates in a given PKCS12
  certificate.

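  For illustration, a minimal sketch of pointing libcurl at a PKCS12 client
  certificate with the OpenSSL backend; the file name and passphrase here
  are made up:

    /* sketch: use a PKCS12 file as client certificate (names are made up) */
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "P12");
        curl_easy_setopt(curl, CURLOPT_SSLCERT, "client.p12");
        curl_easy_setopt(curl, CURLOPT_KEYPASSWD, "secret");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }
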
- options as the lib/Makefile.vc6 already did.

- happened if you set the connection cache size to 1 and for example failed
  to log in to an FTP site. Bug report #1896698
  (http://curl.haxx.se/bug/view.cgi?id=1896698)

- better control of the exact state of the connection's SSL status, so that
  we know exactly when it has completed the SSL negotiation and there won't
  be accidental re-uses of connections that are wrongly believed to be in an
  SSL-completed-negotiate state.

- such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it were a
  Location: following. The patch that introduced this feature was done for
  7.11.0, but this code and functionality has been broken since about 7.15.4
  (March 2006) with the introduction of non-blocking OpenSSL "connects". It
  was a hack to begin with, and since it doesn't work, hasn't worked
  correctly for a long time and nobody has even noticed, I consider it a
  very suitable subject for plain removal. And so it was done.

- get a fresh one downloaded and created with 'make ca-bundle', or you can
  get one from http://curl.haxx.se/docs/caextract.html if you want a fresh
  one extracted from Mozilla's recent list of CA certs. The configure option
  --with-ca-bundle now lets you specify what file to use as default CA
  bundle for your build. If not specified, the configure script will check a
  few known standard places for a global CA cert to use.

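  As a usage note, an application can also override the built-in default
  bundle at runtime with CURLOPT_CAINFO; a minimal sketch, assuming a bundle
  path of your own:

    /* sketch: use an explicit CA bundle file (the path is made up) */
    curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/pki/tls/certs/ca-bundle.crt");
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
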
- connection by force when it was called before the entire request was
  completed, simply because we can't know if the connection really can be
  re-used safely at that point.

- verification is requested. Previously it would even return failure if
  GnuTLS failed to get the server cert, even though no verification was
  asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code.

- HTTP request and then you'd reuse the handle and replace the Accept:
  header, as then libcurl would send two Accept: headers!

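  The scenario in question, sketched against the public API (URL and header
  values are placeholders): set a custom Accept: header, perform, then
  replace the list before reusing the handle:

    struct curl_slist *hdrs = curl_slist_append(NULL, "Accept: text/xml");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);

    /* reuse the handle but replace the Accept: header */
    curl_slist_free_all(hdrs);
    hdrs = curl_slist_append(NULL, "Accept: application/json");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl); /* this used to send two Accept: headers */
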
- Feb 7 that didn't abort properly on timeouts. These are actually old
  problems but now they should be fixed.

- out and provides a test program that demonstrates that libcurl might not
  set the error description message for error CURLE_COULDNT_RESOLVE_HOST for
  Windows threaded name resolver builds. Fixed now.

- Removed a few unnecessary "requires SSL" statements.

- (http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
  layer, cleaning up and reinitializing curl ends up with https requests
  failing with "ASN1 parser: Element was not found" errors. Obviously a
  regression added in 7.16.3.

- all curl's test-generated configuration and key files are fine, a real
  connection is established to the test harness sftp server, authenticating
  and running a simple sftp remote pwd command. The verification is done
  using OpenSSH's or SunSSH's sftp client tool with a configuration file
  with the same options as the test harness socks server, with the exception
  that dynamic forwarding is not used for sftp.

- creates a suitable ca-bundle.crt file in PEM format for use with curl. The
  recommended way to run it is to use 'make ca-bundle' in the build tree
  root.

- --vernum

- them all use the same (hopefully correct) logic to make it less
  error-prone and easier to introduce library-wide where it should be used.

- use of the "is_in_pipeline" struct field.

"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
|
|
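  A minimal sketch of the cookie engine round-trip this affects (the file
  name is made up); cookies received with the HttpOnly attribute are written
  to the jar, where the entry gets an "#HttpOnly_" prefix, and are parsed
  back in again on the next run:

    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies.txt"); /* read jar */
    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");  /* write jar */
    curl_easy_perform(curl);
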
- the SingleRequest one to make pipelining better. It is a bit tricky to
  keep things related to the actual request and things related to the actual
  connection in the right place.

- crash!

- working on other IP-addresses or port numbers.

- pipelining. A broken connection is not restored and we get into an
  infinite loop. It happens because of wrong is_in_pipeline values.

- (http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
  got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do
  any proxy authentication and the proxy replies with an auth (like NTLM)
  and then closes the connection after that initial informational response.
  libcurl would not properly re-initialize the connection to the proxy and
  continue the auth negotiation as it is supposed to. It does now, however,
  as it will detect if one or more authentication methods were available and
  asked for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.

did "SESS". Fixed now.
|
|
- it when sys/poll.h is unavailable.

- that it is bad anyway. Starting now, removing a handle that is in use in a
  pipeline will break the pipeline - it'll be set back up again, but
  still...

- CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
  properly during that state, due to a missing case in the switch in the
  multi_getsock() function.

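  For context, a condensed sketch of the curl_multi_fdset() driven select
  loop that was affected ('multi' is assumed to be a CURLM handle with the
  transfer already added):

    fd_set rd, wr, ex;
    int maxfd = -1;
    int running;

    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
    curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
    /* before the fix, the CONNECT state could leave maxfd at -1 even
       though a socket was in use, so the app had nothing to wait on */
    if(maxfd >= 0) {
      struct timeval tv = { 1, 0 };
      select(maxfd + 1, &rd, &wr, &ex, &tv);
    }
    curl_multi_perform(multi, &running);
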
- out what valgrind to run.

- data url-encoded HTTP POSTs when reading the data from a file.

- previously had a number of flaws, perhaps most notably when an application
  fired up N transfers at once, as then they wouldn't pipeline nearly as
  nicely as anyone would think. Test case 530 was also updated to take the
  improved functionality into account.

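  For reference, enabling pipelining on a multi handle so that several easy
  handles added at once can share a connection; a minimal sketch ('easy1'
  and 'easy2' are assumed to be prepared easy handles):

    CURLM *multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
    /* with the improved logic, handles added together get distributed
       over the pipeline instead of each opening its own connection */
    curl_multi_add_handle(multi, easy1);
    curl_multi_add_handle(multi, easy2);
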
- function itself adds that. Fixed 50 or so strings!

- silly code left from when we switched to letting the multi handle "hold"
  the DNS cache when using the multi interface... Of course this only
  triggered when a certain function call returned an error at the correct
  moment.

- (http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang
  problem that occurred when doing a large HTTP POST request with the
  request body read from a callback.

- their long option names and all descriptions are one-liners.

- --keepalive-time to curl to set the keepalive probe interval. I also took
  the opportunity to rename the recently added no-keep-alive option to
  no-keepalive to keep the naming consistent and to avoid getting two dashes
  in these option names. Eric also provided an update to the man page for
  the new option.

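  libcurl itself had no dedicated keepalive option at this point; a sketch
  of how an application could achieve the equivalent with the existing
  CURLOPT_SOCKOPTFUNCTION callback (the interval value is made up and
  TCP_KEEPIDLE is platform-specific):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <curl/curl.h>

    static int sockopt_cb(void *clientp, curl_socket_t fd,
                          curlsocktype purpose)
    {
      int on = 1;
      int idle = 300; /* made-up probe interval in seconds */
      (void)clientp;
      (void)purpose;
      setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    #ifdef TCP_KEEPIDLE
      /* Linux-specific knob; other systems use different options */
      setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    #endif
      return 0; /* zero tells libcurl to proceed */
    }

    /* curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */
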
- already worked for FTP:// URLs.

- spanking new CURLOPT_SEEKFUNCTION simply to take advantage of the improved
  performance for the upload resume cases where you want to upload the last
  few bytes of a very large file. To implement this decently, I had to
  switch the client code for uploading from fopen()/fread() to plain
  open()/read() so that we can use lseek() to do >32 bit seeks (as fseek()
  doesn't allow that) on systems that offer support for that.

- (it already skipped /usr/lib before). /usr/lib64 is the default library
  directory on many 64-bit systems and it's unlikely that anyone would use
  the path privately on systems where it's not.

- libcurl to seek in a given input stream. This is particularly important
  when doing upload resumes when there's already a huge part of the file
  present remotely. Before, and still if this callback isn't used, libcurl
  will read and throw away the entire file up to the point where the
  resuming begins (which of course can be a slow operation depending on file
  size, I/O bandwidth and more). This new function will also be preferred
  over CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing
  multi-stage HTTP auth with POST/PUT.

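  A minimal sketch of such a seek callback, assuming the client reads the
  upload data from a plain file descriptor (the struct and names are made
  up); in this libcurl version the callback returns zero for success:

    #include <curl/curl.h>
    #include <sys/types.h>
    #include <unistd.h>

    struct upload {
      int fd; /* descriptor the read callback also uses */
    };

    static int seek_cb(void *userp, curl_off_t offset, int origin)
    {
      struct upload *u = userp;
      /* lseek() takes an off_t, so >32 bit offsets work where off_t
         is 64 bits */
      if(lseek(u->fd, (off_t)offset, origin) == (off_t)-1)
        return 1; /* non-zero means the seek failed */
      return 0;
    }

    /*
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, &up);
    */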