- better control of the exact state of the connection's SSL status so that we
  know exactly whether it has completed the SSL negotiation or not, so that
  there won't be accidental reuses of connections that are wrongly believed to
  be in an SSL-negotiation-completed state.
- such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
  following. The patch that introduced this feature was done for 7.11.0, but
  this code and functionality have been broken since about 7.15.4 (March 2006)
  with the introduction of non-blocking OpenSSL "connects".
  It was a hack to begin with, and since it doesn't work, hasn't worked
  correctly for a long time and nobody has even noticed, I consider it a very
  suitable subject for plain removal. And so it was done.
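  For context, a minimal sketch of how an application installs such a callback
  with CURLOPT_SSL_CTX_FUNCTION; the callback body, the URL and the decision
  to return CURLE_OK are illustrative assumptions only:

      #include <curl/curl.h>

      /* For OpenSSL builds, ssl_ctx points to the SSL_CTX about to be used.
         Returning anything but CURLE_OK makes the transfer fail. */
      static CURLcode sslctx_cb(CURL *handle, void *ssl_ctx, void *userptr)
      {
        (void)handle; (void)ssl_ctx; (void)userptr;
        /* adjust the SSL_CTX here if needed */
        return CURLE_OK;
      }

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
          curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }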
- http://sourceforge.net/tracker/index.php?func=detail&aid=1767276&group_id=976&atid=350976
  Submitted by Kaspar Brand.
- 'enumerated type mixed with another type' and
  'variable was set but never used'
- get a fresh one downloaded and created with 'make ca-bundle', or you can get
  one from here => http://curl.haxx.se/docs/caextract.html if you want one
  freshly extracted from Mozilla's recent list of ca certs.
  The configure option --with-ca-bundle now lets you specify what file to use
  as the default ca bundle for your build. If not specified, the configure
  script will check a few known standard places for a global ca cert to use.
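  As a runtime-side complement (an illustration only, with an assumed bundle
  path and URL), an application can also point libcurl at a specific bundle
  with CURLOPT_CAINFO instead of relying on the built-in default:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
          /* example path; use wherever 'make ca-bundle' put your bundle */
          curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");
          curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }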
- DONE before the entire request operation is complete and thus we can't know
  in what state it is for reuse, so we're forced to close it. In a perfect
  world we could add code that keeps track of whether we really must close it
  here or not, but currently we have no such detailed knowledge.
  Jerome Muffat-Meridol helped us work this out.
- prematurely set TRUE, which means it was done before the request completed.
  It could then very well not have received any data.
- removed unused ldap module dependency since the module didn't auto-unload
  from protected address space.
- verification is requested. Previously it would even return failure if gnutls
  failed to get the server cert even though no verification was asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code
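  For reference, the "no verification was asked for" case above corresponds to
  an application doing something like this minimal sketch (the URL is an
  assumption):

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
          /* explicitly ask for no certificate verification at all */
          curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
          curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }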
- http request and then you'd reuse the handle and replace the Accept: header,
  as then libcurl would send two Accept: headers!
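  A sketch of the kind of handle reuse the entry describes; the header value,
  URL and second transfer are illustrative assumptions. Replacing the internal
  Accept: header is done through CURLOPT_HTTPHEADER:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          /* replace the internally generated Accept: header */
          struct curl_slist *hdrs = curl_slist_append(NULL, "Accept: text/xml");
          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
          curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
          curl_easy_perform(curl);

          /* reuse the same handle; only one Accept: header must be sent */
          curl_easy_perform(curl);

          curl_easy_cleanup(curl);
          curl_slist_free_all(hdrs);
        }
        return 0;
      }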
- before the help option; an attempt to add a version number.
- Feb 7 that didn't abort properly on timeouts. These are actually old
  problems but now they should be fixed.
- added -t switch to make text info of CAs optional;
  added -q switch to be really quiet.
- out and provides a test program that demonstrates that libcurl might not set
  the error description message for error CURLE_COULDNT_RESOLVE_HOST for
  Windows threaded name resolver builds. Fixed now.
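  A small sketch of how an application reads that description; the host name
  here is a deliberately unresolvable placeholder:

      #include <stdio.h>
      #include <curl/curl.h>

      int main(void)
      {
        char errbuf[CURL_ERROR_SIZE];
        CURLcode res;
        CURL *curl = curl_easy_init();
        if(curl) {
          errbuf[0] = '\0';
          curl_easy_setopt(curl, CURLOPT_URL, "http://no-such-host.example/");
          curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
          res = curl_easy_perform(curl);
          if(res == CURLE_COULDNT_RESOLVE_HOST)
            /* with the fix, errbuf holds a description even in threaded
               resolver builds; fall back to the generic string otherwise */
            fprintf(stderr, "%s\n", errbuf[0] ? errbuf : curl_easy_strerror(res));
          curl_easy_cleanup(curl);
        }
        return 0;
      }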
- file.
- (http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
  layer, cleaning up and reinitializing curl ends up with https requests
  failing with "ASN1 parser: Element was not found" errors. Obviously a
  regression added in 7.16.3.
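  The failing scenario boils down to an init/cleanup cycle like this minimal
  sketch (the URL is an assumption):

      #include <curl/curl.h>

      static void one_https_request(const char *url)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_URL, url);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
      }

      int main(void)
      {
        curl_global_init(CURL_GLOBAL_ALL);
        one_https_request("https://example.com/");
        curl_global_cleanup();

        /* the second cycle is where the 7.16.3 regression made gnutls-built
           libcurl fail with "ASN1 parser: Element was not found" */
        curl_global_init(CURL_GLOBAL_ALL);
        one_https_request("https://example.com/");
        curl_global_cleanup();
        return 0;
      }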
- added a switch to display the certdata.txt version header.
- them all use the same (hopefully correct) logic to make it less error-prone
  and easier to introduce library-wide where it should be used.
- ca-bundle.crt file as outdated and in need of replacement by anyone who
  wants to verify modern peers, as the one we have is from the year 2000!
- use of the "is_in_pipeline" struct field.
- "HttpOnly" feature introduced by Microsoft and apparently also supported by
  Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
  is now supported when received from servers in HTTP headers, when written to
  cookie jars and when read from existing cookie jars.
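  A minimal sketch of a transfer that exercises this: the cookie engine is
  enabled and a jar is written at cleanup (file name and URL are assumptions):

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
          /* enable the cookie engine and read any existing jar */
          curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies.txt");
          /* received cookies, HttpOnly ones included, get written here */
          curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");
          curl_easy_perform(curl);
          curl_easy_cleanup(curl); /* the jar file is written at this point */
        }
        return 0;
      }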
- the SingleRequest one to make pipelining better. It is a bit tricky to keep
  them in the right place: things related to the actual request and things
  related to the actual connection each need to live in the right struct.
- added curl.res to the clean target.
- crash!
- pipelining. The broken connection is not restored and we get into an
  infinite loop. It happens because of wrong is_in_pipeline values.
- (http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
  got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
  proxy authentication and the proxy replies with an auth (like NTLM) and then
  closes the connection after that initial informational response.
  libcurl would not properly re-initialize the connection to the proxy and
  continue the auth negotiation as it is supposed to. It does now however, as
  it will now detect if one or more authentication methods were available and
  asked for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.
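  A sketch of the proxy-tunnel scenario described above; the proxy host,
  credentials and URL are placeholders, and CURLAUTH_ANY simply lets libcurl
  pick among whatever methods the proxy offers (such as NTLM):

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          /* HTTPS over a proxy makes libcurl issue a CONNECT tunnel request */
          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
          curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
          curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
          curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_ANY);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }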