header in a third format, not supported by libcurl, and we agreed that we
could make the parser more forgiving so that it accepts all three variations
found.
|
|
holding the conn->data value
|
|
responded with a single status line and neither headers nor body. Starting
now, an HTTP response on a persistent connection (i.e. one not set to be
closed after the response has been taken care of) must have Content-Length or
chunked encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test HTTP server (sws) responded badly
when the test system queried it to check that it really is the test server.
So although the actual fix for the problem was tiny, going through all the
newly failing test cases got really painful and boring.
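To illustrate what the new rule expects (a made-up example, not taken from any
particular test case), a response that keeps the connection alive now needs
its body delimited by Content-Length or chunked encoding, for instance:

  HTTP/1.1 200 OK
  Content-Length: 5
  Connection: keep-alive

  hello

A response on such a connection that carries neither Content-Length nor
chunked encoding is from now on treated as having no body at all.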
|
|
case when 401 or 407 are returned, *IF* no auth credentials have been given.
The CURLOPT_FAILONERROR option cannot be made fool-proof for the 401 and 407
cases when auth credentials are given, but we've now covered this somewhat
more.
You might get some amount of headers transferred before this situation is
detected, such as when a "100-continue" is received as a response to a
POST/PUT and a 401 or 407 is received immediately afterwards.
Added test 281 to verify this change.
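For reference, a minimal sketch of how an application enables this behaviour;
the URL is just a placeholder and the outcome for 401/407 depends on whether
credentials were set, as described above:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      /* fail on HTTP codes >= 400; with this change 401/407 now also make
         the transfer fail when no auth credentials have been given */
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
      res = curl_easy_perform(curl);
      if(res == CURLE_HTTP_RETURNED_ERROR)
        fprintf(stderr, "server returned an HTTP error code\n");
      curl_easy_cleanup(curl);
    }
    return 0;
  }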
|
|
re-use connections (for pipelining) before the name resolving is done.
|
|
could very well cause a negative number to get passed in and thus cause
reading outside of the array usually used for this purpose.
We avoid this by using the uppercase macro versions introduced just now, which
do some extra crazy typecasts to prevent byte codes > 127 from producing
negative int values.
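The idea behind those uppercase macros, as a minimal sketch (not the exact
definitions used in the source), is to route the argument through unsigned
char before it reaches the is*() functions:

  #include <ctype.h>

  /* cast through unsigned char so that bytes >= 0x80 never turn into
     negative int values, which could index outside the classification table
     the is*() functions may use */
  #define ISSPACE(x)  (isspace((int)((unsigned char)(x))))
  #define ISDIGIT(x)  (isdigit((int)((unsigned char)(x))))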
|
|
cache within the multi handle.
|
|
" !defined(__GNUC__) || defined(__MINGW32__)" implies
CygWin.
|
|
command on subsequent requests on a re-used connection unless it has to.
|
|
on a persistent connection and allowed the first to use that header, you
could not disable it for the second request.
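The start of this entry is cut off, so the exact header involved isn't shown
here. As a general sketch of the scenario, assuming a custom header managed
with CURLOPT_HTTPHEADER (the URLs and header name are made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;

    if(curl) {
      /* first request on the connection uses the header */
      hdrs = curl_slist_append(hdrs, "X-Example: yes");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_perform(curl);

      /* second request re-uses the connection but should not send it */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, NULL);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
      curl_slist_free_all(hdrs);
    }
    return 0;
  }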
|
|
CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate at which libcurl is
allowed to send or receive data. This more or less adds the command line
tool's --limit-rate option to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
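A minimal sketch of the new options from an application's point of view (the
URL and the 100 KB/s limit are arbitrary); both options take a curl_off_t
number of bytes per second:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/bigfile");
      /* cap both upload and download speed at about 100 KB/second */
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, (curl_off_t)102400);
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)102400);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }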
|
|
transfers. They are done on non-Windows systems and translate CRLF to LF.
|
|
code rearrange to fit the future better.
|
|
occurred when asking libcurl to follow HTTP redirects and the original URL had
more than one question mark (?). Added test case 276 to verify.
|
|
content when libcurl didn't honor the internal ignorebody flag.
|
|
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
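The first line of this entry is missing above, so the option names don't
show; this presumably refers to the connect-only mode reached through
CURLOPT_CONNECT_ONLY together with CURLINFO_LASTSOCKET. A minimal sketch,
with a placeholder URL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long sockfd = -1;

    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* only connect, don't send a request or transfer anything */
      curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
      if(curl_easy_perform(curl) == CURLE_OK) {
        /* extract the connected socket and use it directly */
        curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }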
|
|
with re-used FTP connections. If the second request on the same connection was
set not to fetch a "body", libcurl could get confused and consider it an
attempt to use a dead connection and would go acting mighty strange.
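The entry is truncated, so it doesn't show exactly how the second request was
made; here is a sketch of the kind of sequence meant, assuming the "no body"
request is done with CURLOPT_NOBODY on a re-used handle (server name made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* first request downloads the file as usual */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      curl_easy_perform(curl);

      /* second request on the re-used control connection asks for no body,
         e.g. to only learn the file's date and size */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
      curl_easy_setopt(curl, CURLOPT_FILETIME, 1L);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }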
|
|
connection setup as a follow-redirect. It turns out that 1) this fails when
an FTP connection is re-setup and 2) it makes the max-redirs counter behave
wrongly. This fix was not verified since the reporter vanished, but I believe
this is the right fix nonetheless.
|
|
configure.
|
|
string in the given error buffer to address the flaw mentioned on 21 Sep 2005.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which would then make the very first Location: found return an
error code. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS set
to 0, or -1 for infinity. Added test case 274 to verify.
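A minimal sketch of the new 0/-1 semantics (placeholder URL); with the limit
set to 0, the very first Location: makes the transfer fail instead of being
followed:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirects");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* 0 = error out on the first Location:, -1 = follow any number */
      curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }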
|
|
(http://curl.haxx.se/bug/view.cgi?id=1299181) that identified a silly problem
with Content-Range: headers in which the 'bytes' keyword was written in a
case other than all lowercase. It would cause a segfault!
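Presumably the parser needs to recognize the keyword regardless of case; a
minimal sketch of such a check, using POSIX strncasecmp() as an assumption
and not necessarily the exact call the source uses:

  #include <strings.h>

  /* accept "bytes", "Bytes", "BYTES" etc. in a Content-Range: header value */
  int starts_with_bytes(const char *p)
  {
    return strncasecmp(p, "bytes", 5) == 0;
  }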
|
|
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of the transfer. I wrote test case 269 that
runs a simple test to verify that this works.
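Typical command line usage (the URL is just a placeholder); curl then reads
until the server closes the connection, as described above:

  curl --ignore-content-length -O http://example.com/huge-file.iso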
|
|
CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
write the result to a given cookie jar and then never actually call
curl_easy_perform() - the given file(s) to read were never read, but the
output file was written and thus it caused a "funny" result (a sketch of
this setup is shown after this entry).
- While doing some tests for the bug above, I noticed that Firefox generates
large numbers (for the expire time) in the cookies.txt file and libcurl
didn't treat them properly. Now it does.
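A minimal sketch of the sequence described in the first item, with made-up
file names and cookie contents; note the complete absence of
curl_easy_perform():

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* read cookies from this file (this also enables the cookie engine) */
      curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies-in.txt");
      /* add a single cookie, here given in Netscape cookie file format */
      curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                       ".example.com\tTRUE\t/\tFALSE\t2145916800\tname\tvalue");
      /* write all known cookies to this file when the handle is closed */
      curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies-out.txt");

      /* no curl_easy_perform() at all - the scenario the fix covers */
      curl_easy_cleanup(curl);
    }
    return 0;
  }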
|
|
site responds with a bad HTTP response that doesn't contain any header at
all, only a response body, and the write callback returns 0 to abort the
transfer, the abort didn't have any real effect and the write callback would
be called once more anyway.
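For context, aborting from the write callback is done by returning 0 (or any
number other than the amount of data handed to the callback), which makes the
transfer end with CURLE_WRITE_ERROR. A minimal sketch with a placeholder URL:

  #include <curl/curl.h>

  /* returning anything other than size*nmemb tells libcurl to abort */
  static size_t abort_write(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)ptr; (void)size; (void)nmemb; (void)userp;
    return 0;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, abort_write);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }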
|
|
trailer is then sent to the normal header callback/stream.
|
|
binary zeroes within the headers. They confused libcurl into doing the wrong
thing, so the downloaded headers became incomplete. The fix is now verified
with test case 262.
|
|
be returned at times when we want to ignore them. Test case 160 fails on Linux,
so I modify the comparison to check for _only_ the error bit set...
|
|
body there is nothing chunked-encoded!
|
|
VS2005.
|
|