- string in the given error buffer to address the flaw mentioned on 21 Sep
  2005.
- (http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
  feature request, but anyway. It pointed out that --max-redirs did not allow
  it to be set to 0, which would then make curl return an error on the first
  Location: header found. Based on Nis' patch, libcurl now supports
  CURLOPT_MAXREDIRS set to 0, or -1 for infinity. Added test case 274 to
  verify.
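  A minimal sketch of how an application might use this; the URL is just a
  placeholder:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* 0 makes the transfer fail on the first Location: header seen,
           -1 means follow redirects with no limit */
        curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }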
- (http://curl.haxx.se/bug/view.cgi?id=1299181) that identified a silly
  problem with Content-Range: headers with the 'bytes' keyword written in a
  case other than all lowercase (e.g. "Content-Range: Bytes 0-99/100"). It
  would cause a segfault!
- from the command line tool with --ignore-content-length. This will make it
  easier to download files from Apache 1.x (and similar) servers that still
  have problems serving files larger than 2 or 4 GB. When this option is
  enabled, curl will simply have to wait for the server to close the
  connection to signal the end of the transfer. I wrote test case 269, which
  runs a simple test to verify that this works.
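  The corresponding library option is CURLOPT_IGNORE_CONTENT_LENGTH; a rough
  sketch, with a made-up URL:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.iso");
        /* ignore the Content-Length: value and read until the server
           closes the connection */
        curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }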
- CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
  write the result to a given cookie jar and then never actually call
  curl_easy_perform() - the given file(s) to read were never read, but the
  output file was written and thus it caused a "funny" result (see the
  sketch below).
- While doing some tests for the bug above, I noticed that Firefox generates
  large numbers (for the expire time) in the cookies.txt file and libcurl
  didn't treat them properly. Now it does.
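  A sketch of the kind of call sequence that triggered the first bug above;
  the file names and the cookie are made up for illustration:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* enable the cookie engine and name a file to read cookies from */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies-in.txt");
        /* add a single cookie, given in Set-Cookie: header format */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                         "Set-Cookie: name=value; domain=example.com;");
        /* the jar is written when the handle is cleaned up */
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies-out.txt");
        /* note: no curl_easy_perform() here - the bug was that the jar
           still got written while the input file was never read */
        curl_easy_cleanup(curl);
      }
      return 0;
    }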
- site responds with a bad HTTP response that doesn't contain any header at
  all, only a response body, and the write callback returns 0 to abort the
  transfer, the abort had no real effect: the write callback would be called
  once more anyway.
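  For reference, aborting from the write callback works like this sketch;
  returning a count different from the one handed in makes libcurl stop the
  transfer with CURLE_WRITE_ERROR:

    #include <curl/curl.h>

    static size_t abort_cb(void *ptr, size_t size, size_t nmemb, void *data)
    {
      (void)ptr; (void)size; (void)nmemb; (void)data;
      /* returning 0 instead of size * nmemb aborts the transfer */
      return 0;
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, abort_cb);
        curl_easy_perform(curl); /* should fail with CURLE_WRITE_ERROR */
        curl_easy_cleanup(curl);
      }
      return 0;
    }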
- trailer is then sent to the normal header callback/stream.
- binary zeroes within the headers. They confused libcurl into doing the
  wrong thing, so the downloaded headers became incomplete. The fix is now
  verified with test case 262.
- be returned at times when we want to ignore them. Test case 160 fails on
  Linux, so I modified the comparison to check for _only_ the error bit being
  set...
- body there is nothing chunked-encoded!
- VS2005.
- internally, with code provided by sslgen.c. All SSL-layer-specific code is
  then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS). As far as
  possible, the internals should not need to know what SSL layer is in use.
  Building with GnuTLS currently makes two test cases fail. TODO.gnutls
  contains a few known outstanding issues for the GnuTLS support. GnuTLS
  support is enabled with configure --with-gnutls.
- that picks NTLM. Thanks to David Byron, who let me test NTLM against his
  servers, I could quickly repeat and fix the problem. It turned out to be:
  when libcurl POSTs without knowing/using an authentication method and it
  gets back a list of types from which it picks NTLM, it needs to either
  continue sending its data, if it keeps the connection alive, or not send
  the data and instead close the connection, and then do the first step of
  the NTLM authentication. libcurl neither sent the data nor closed the
  connection; it simply read the response body and then sent the first
  negotiation step, which then of course failed miserably. The fixed version
  forces a connection close if there are more than 2000 bytes left to send.
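  The client side of the scenario looks roughly like this sketch; the URL,
  credentials and POST data are all placeholders:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
        /* let libcurl pick the best method the server offers, which may
           turn out to be NTLM */
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANY);
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data=example");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }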
- do pretransfer stuff like Curl_pretransfer().
- at fixing this issue.
- The tag 'before_ftp_statemachine' was set just before this commit in case
  of future need.
- operation to the caller. Disconnecting has the disadvantage that the conn
  pointer gets completely invalidated, and this is not handled in lots of
  places in the code.
- the buffer is already BUFSIZE + 1 bytes big, to fit the extra trailing
  zero. This change is reported to fix David's weird SSL problem...
- gets closed just after the request has been sent failed, and did not
  re-issue a request on a fresh reconnect like the easy interface did. Now it
  does! (define CURL_MULTIEASY, run test case 160)
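  For context, a minimal multi interface transfer looks roughly like this;
  the URL is a placeholder and the busy-loop is a deliberate simplification:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      CURLM *multi = curl_multi_init();
      int running = 1;
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_multi_add_handle(multi, easy);
      while(running)
        curl_multi_perform(multi, &running); /* real code would select() */
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      return 0;
    }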
- select() overhaul fix.
- using a custom Host: header and curl fails to send a request on a re-used
  persistent connection and thus creates a new connection and resends it. It
  would then send two Host: headers. Cyrill's analysis was posted here:
  http://curl.haxx.se/mail/archive-2005-01/0022.html
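  For reference, a custom Host: header is typically set like this sketch;
  the header value and URL are made up:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_slist *headers =
        curl_slist_append(NULL, "Host: www.example.com");
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://192.0.2.1/");
        /* replaces the Host: header libcurl would generate itself */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_slist_free_all(headers);
      return 0;
    }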
- libcurl without cookie support. This is mainly useful if you want to build
  a minimalistic libcurl with no cookie support at all, like for embedded
  systems or similar.
- at a chunk boundary, it was not considered an error and thus went
  unnoticed. Added test case 207 to verify.
- clash with djgpp ioctl() macro in setup.h.
- (http://qa.mandrakesoft.com/show_bug.cgi?id=12289), curl would print a
  newline to "finish" the progress meter after each redirect and not only
  after a completed transfer.
- errors.
- file that was already completely downloaded caused an error, while it
  doesn't if you don't use --fail! I added test case 194 to verify the fix.
  Grrr. CURLOPT_FAILONERROR is now added to the list of stuff to remove in
  libcurl v8, due to all the kludges needed to support it.
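  In libcurl terms, --fail maps to CURLOPT_FAILONERROR; combined with resume
  it looks like this sketch, where the URL and byte offset are made up:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file.bin");
        /* fail on HTTP response codes >= 400 instead of delivering the
           error page body */
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
        /* resume from this byte offset; if the file is already complete
           the server typically responds 416 */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM, 1000L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }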
- both source and destination being the same host. It can be useful if you
  want to move a file on a server or similar.
- CURLOPT_FOLLOWLOCATION, libcurl reported an error if a redirect happened,
  even if the new URL would provide the resumed file. Test case 188 added to
  verify the fix (together with existing test 99).
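  The option combination the fix concerns, roughly; URL and offset are
  placeholders:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/moved.bin");
        /* ask for a resumed transfer... */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM, 500L);
        /* ...and follow any redirect to where the file actually is */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }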
- Roman Koifman found out.
- byte file is downloaded.
- picky warnings
- linked list for name resolve data, even on hosts/systems with only IPv4
  stacks, as this simplifies a lot of code.