- since they're already included through "setup.h".
- the multi interface. Note that it still does a part of the connection in a
  blocking manner.
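  For context, the multi interface lets the application drive transfers from
  its own loop instead of blocking inside curl_easy_perform(). A minimal
  sketch (error handling and the select()/poll() waiting between calls are
  left out):

    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 0;

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    curl_multi_add_handle(multi, easy);
    do {
      /* performs as much as can be done without blocking */
      curl_multi_perform(multi, &running);
      /* a real app waits on libcurl's sockets here before looping */
    } while(running);
    curl_multi_remove_handle(multi, easy);
    curl_multi_cleanup(multi);
    curl_easy_cleanup(easy);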
- the multi interface and connection re-use that could make a
  curl_multi_remove_handle() ruin a pointer in another handle.
  The second problem was less of an actual problem and more of a minor quirk:
  the re-use of connections wasn't properly checking whether the connection
  was marked for closure.
- SSL/TLS layer. http://www.mozilla.org/projects/security/pki/nss/
- and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
  timeouts with millisecond resolution instead. The only restriction is the
  alarm() call (sometimes) used to abort name resolves, as that uses full
  seconds. I fixed the FTP response timeout part of the patch.
  Internally we now count and keep the timeouts in milliseconds, but it also
  means we multiply the set timeouts by 1000. The effect of this is that no
  timeout can be set to more than 2^31 milliseconds (on 32 bit systems),
  which equals 24.86 days. We probably couldn't go beyond that before either,
  since the code already did *1000 on the timeout values in several places.
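  A minimal sketch of how an application would use these options (a plain
  easy-interface transfer; error handling omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* abort if connecting takes longer than 800 milliseconds */
        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 800L);
        /* abort if the entire transfer takes longer than 2500 milliseconds */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }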
- header, you got _two_ User-Agent headers in the CONNECT request...! Added
  test case 287 to verify the fix.
- doing an FTP transfer is removed from a multi handle before completion. The
  fix also made the "alive counter" correct on "premature removal" for all
  protocols.
- non-ASCII platforms. It does add some complexity, most notably with more
  #ifdefs, but I want to see this support added and I can't see how we can
  add it without the extra stuff.
- non-ASCII platforms.
- (http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
  patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
  enters an infinite loop, while curl 7.16.1-20061218 does one additional
  unnecessary request.
  Fix: during the "Major overhaul introducing http pipelining support and
  shared connection cache within the multi handle" change, headerbytecount
  was moved to live in the Curl_transfer_keeper structure. But that structure
  is reset in the Transfer method, losing the information that we had about
  the header size. This patch moves it back to the connectdata struct.
- KNOWN_BUGS #25, which happens when a proxy closes the connection when
  libcurl has sent CONNECT, as part of an authentication negotiation.
  Starting now, libcurl will re-connect accordingly and continue the
  authentication as it should.
- could very well cause a negative number to get passed in and thus cause
  reading outside of the array usually used for this purpose.
  We avoid this by using the uppercase macro versions introduced just now,
  which do some extra typecasts to prevent byte codes > 127 from causing
  negative int values.
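  The idea behind those macros, sketched (the exact names in the curl source
  may differ):

    #include <ctype.h>

    /* Casting through unsigned char first guarantees that a char holding a
       byte code > 127 cannot become a negative int, which would trigger
       undefined behavior in the <ctype.h> functions. */
    #define ISSPACE(x)  (isspace((int)((unsigned char)(x))))
    #define ISDIGIT(x)  (isdigit((int)((unsigned char)(x))))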
- to the CURLOPT_DEBUGFUNCTION callback has been fixed (it was erroneously
  included as part of the header). A message was also added to the
  command line tool to show when data is being sent, enabled when
  --verbose is used.
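  A sketch of a debug callback that relies on this separation, telling
  outgoing headers apart from outgoing data (illustrative only):

    #include <stdio.h>
    #include <curl/curl.h>

    static int my_trace(CURL *handle, curl_infotype type,
                        char *data, size_t size, void *userp)
    {
      (void)handle;
      (void)userp;
      if(type == CURLINFO_HEADER_OUT)
        fwrite(data, 1, size, stderr);   /* request headers only */
      else if(type == CURLINFO_DATA_OUT)
        fprintf(stderr, "[%lu bytes of request body]\n",
                (unsigned long)size);
      return 0;
    }

    /* enabled with:
       curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
       curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); */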
- Major overhaul introducing http pipelining support and shared connection
  cache within the multi handle.
|
|
" !defined(__GNUC__) || defined(__MINGW32__)" implies
CygWin.
- command on subsequent requests on a re-used connection unless it has to.
- send the whole request at once, even though the Expect: header was disabled
  by the application. An effect of this change is also that small (< 1024
  bytes) POSTs are now always sent without the Expect: header, since we deem
  it more costly to bother with that than to risk sending the data in vain.
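  For reference, disabling the Expect: header is done with the standard
  custom-header mechanism, by passing an empty one:

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Expect:");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    /* ... set up and perform the POST ... */
    curl_slist_free_all(headers);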
- on a persistent connection and allowed the first to use that header, you
  could not disable it for the second request.
- if the header "Transfer-Encoding: chunked" was set by the application.
  http://curl.haxx.se/bug/view.cgi?id=1531838
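  The trigger here is an application-set header, along these lines (sketch):

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    /* the request body is then sent chunked-encoded */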
- formposts work exactly the way you want them to (and the way you'd assume
  they work).
- Proxy-Connection: header when using a proxy and not doing CONNECT.
- like this in this source file. The quick fix for now is to provide a simple
  version for GnuTLS builds. The GnuTLS version of libcurl doesn't yet allow
  fully non-blocking connects anyway, so this function doesn't get used.
- (http://curl.haxx.se/bug/view.cgi?id=1481217), with follow-ups by Michele
  Bini and David Byron. libcurl previously wrongly used GetLastError() on
  Windows to get error details after socket-related function calls, when it
  really should use WSAGetLastError() instead.
  With this change, the former function Curl_ourerrno() is instead called
  Curl_sockerrno(), to make clear that it must only be used to get the error
  from socket-related functions, as otherwise it won't work as intended on
  Windows.
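  Conceptually, the distinction looks like this (a sketch; the real macro or
  function in the source may be named differently):

    #ifdef WIN32
    /* WinSock keeps its own error state; errno is not updated */
    #define SOCKERRNO  ((int)WSAGetLastError())
    #else
    #define SOCKERRNO  (errno)
    #endif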
- code rearrange to fit the future better.
- (when using OpenSSL).
- an app can use to let libcurl only connect to a remote host and then
  extract the socket from libcurl. libcurl will then not attempt to do any
  transfer at all after the connect is done.
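  Presumably this refers to what shipped as CURLOPT_CONNECT_ONLY, with the
  socket then fetched through CURLINFO_LASTSOCKET. A sketch of the intended
  pattern:

    long sockextr;
    curl_socket_t sockfd;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    curl_easy_perform(curl);   /* connects, then returns at once */
    curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockextr);
    sockfd = (curl_socket_t)sockextr;
    /* the app now does its own send()/recv() on sockfd */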
- actually used a new connection and not sent the second request on the
  first socket!
- (wrongly) sends *two* WWW-Authenticate headers for Digest. While this
  should never happen in a sane world, libcurl previously got into an
  infinite loop when this occurred. Dave added test 273 to verify this.
- fix the CONNECT authentication code with multi-pass auth methods (such as
  NTLM), as it didn't previously properly ignore response bodies - in fact
  it stopped reading after all response headers had been received. This
  could lead to libcurl sending the next request and reading the body from
  the first request as the response to the second request. (I also renamed
  the function, which wasn't strictly necessary but...)
  The best fix would be to once and for all make the CONNECT code use the
  ordinary request sending/receiving code, treating it as any ordinary
  request instead of the special-purpose function we have now. That should
  make it work better with the multi interface too, and possibly lead to
  less code.
  Added test case 265 for this. It doesn't work as a _really_ good test case
  since the test proxy is too stupid, but the test case helps when running
  the debugger to verify.
- A) Normal non-proxy HTTP:
  - no more "Pragma: no-cache" (this only makes sense to proxies)
  B) Non-CONNECT HTTP request over proxy:
  - "Pragma: no-cache" is used (like before)
  - "Proxy-Connection: Keep-alive" (for older style 1.0-proxies)
  C) CONNECT HTTP request over proxy:
  - "Host: [name]:[port]"
  - "Proxy-Connection: Keep-alive"
- .netrc, and when following a Location: the subsequent requests didn't
  properly use the auth as found in the netrc file. Added test case 257 to
  verify my fix.
- libcurl didn't properly send an Expect: 100-continue header. It does now.
- internally, with code provided by sslgen.c. All SSL-layer-specific code is
  then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
  As far as possible, the internals should not need to know which SSL layer
  is in use. Building with GnuTLS currently makes two test cases fail.
  TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
  GnuTLS support is enabled with configure --with-gnutls.
- also affecting NTLM and Negotiate.) It turned out that if the server
  responded with 100 Continue before the initial 401 response, libcurl
  didn't take care of the response properly. Test cases 245 and 246 were
  added to verify this.