low-level ftp_readresp() function. Hopefully addressing bug #1779054.
out that libcurl didn't deal with large responses to server commands, when a
single response consisted of multiple lines but had a total size of 16KB or
more. Dan Fandrich improved the ftp test script and provided test case 1006
to repeat the problem, and I fixed the code to make sure this new test case
runs fine.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will truncate them (so the client app will
not get the full line) but will otherwise survive and handle them fine. Test
case 1003 was added to verify this.
download transfer size much earlier, so that it can be read with
CURLINFO_CONTENT_LENGTH_DOWNLOAD as soon as possible.
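
A minimal sketch of how an application reads that value with
curl_easy_getinfo(); the URL is a placeholder and the straightforward
read-after-perform shown here is just the simplest way to get at it:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double size = 0.0;
      /* placeholder URL, used only for illustration */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.bin");
      if(curl_easy_perform(curl) == CURLE_OK) {
        /* the reported download size is returned as a double */
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
        printf("content length: %.0f\n", size);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }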
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify, yet it gets modified (on some platforms).
(http://curl.haxx.se/bug/view.cgi?id=1776235) about how ftp requests with
NOBODY on a directory would do a "SIZE (null)" request. This is now fixed and
test case 1000 was added to verify it.
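
For reference, a small sketch of the kind of request that used to trigger the
bogus "SIZE (null)" command: a NOBODY (header-only) request on an FTP
directory URL. Host and path are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* a directory URL (note the trailing slash); placeholder host */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/pub/");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* no body, info only */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }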
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl was done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
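
A small sketch of what this change allows, assuming a placeholder URL: the
buffer handed to curl_easy_setopt() may be modified or discarded right away,
since libcurl now keeps its own copy.

  #include <string.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char url[] = "ftp://ftp.example.com/file.txt"; /* placeholder URL */

      curl_easy_setopt(curl, CURLOPT_URL, url);

      /* libcurl has cloned the string, so wiping the local buffer
         before the transfer is now perfectly fine */
      memset(url, 0, sizeof(url));

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }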
of a socket after it has been closed, when the FTP-SSL data connection is taken
down.
a few internal identifiers to avoid conflicts, which could be useful on
other platforms.
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
a control connection that was deemed "dead" to still get re-used in a following
request. We must make sure the connection gets closed in this situation.
define the symbols for backwards source compatibility)
libcurl. This also makes the options change name to --krb (from --krb4) and
CURLOPT_KRBLEVEL (from CURLOPT_KRB4LEVEL) but the old names are still
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't properly do
no-body requests on FTP files on re-used connections, or at least it didn't
provide the info back properly in the header callback in the subsequent
requests.
from the #1739100 bug report
server!
using custom timeout values.
returning an error code, to allow connections to be torn down
cleanly since this function can be called AFTER an OOM situation
has already been reached.
anyway and we can just as well rely on it being valid.
CID 12, coverity.com scan
Removed the NULL check since the pointer must be valid already.
and added tests 290 and 291 to check.
since they're already included through "setup.h".
different server behaviour
5).
CURLOPT_RANGE back to no range on an easy handle when using FTP.
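
A sketch of the scenario, assuming a placeholder URL: a ranged FTP download
followed by a full download on the same easy handle, where passing NULL
resets the option back to "no range".

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.bin");

      /* first transfer: fetch only the first 100 bytes */
      curl_easy_setopt(curl, CURLOPT_RANGE, "0-99");
      curl_easy_perform(curl);

      /* second transfer, same handle: reset to "no range" so the whole
         file is fetched */
      curl_easy_setopt(curl, CURLOPT_RANGE, NULL);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }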
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction is the
alarm() (sometimes) used to abort name resolves, as that uses full seconds.
I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, but it also
means we multiply the set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't before either, since the code already
did *1000 on the timeout values in several places.
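
A minimal sketch of the new options in use, with arbitrary values and a
placeholder URL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/"); /* placeholder */

      /* give up on the connect phase after 2500 milliseconds ... */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L);
      /* ... and on the whole transfer after 7500 milliseconds */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 7500L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }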
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for all
protocols.
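
A rough sketch of the "premature removal" case the fix covers, with a
placeholder URL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int still_running = 0;

    curl_easy_setopt(easy, CURLOPT_URL, "ftp://ftp.example.com/big.bin"); /* placeholder */
    curl_multi_add_handle(multi, easy);

    /* drive the transfer a bit ... */
    curl_multi_perform(multi, &still_running);

    /* ... then remove the easy handle before the transfer has completed;
       this is the "premature removal" situation */
    curl_multi_remove_handle(multi, easy);

    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }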
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
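
In libcurl terms, a minimal sketch; the host is a placeholder and the
CURLOPT_FTP_SSL line just asks for TLS on the control connection so that CCC
has something to shut down:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");

      /* require SSL/TLS for the control connection (FTP-SSL) */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL, (long)CURLFTPSSL_CONTROL);

      /* send CCC after authentication and fall back to a clear control
         connection; PASSIVE waits for the server to shut down TLS */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, (long)CURLFTPSSL_CCC_PASSIVE);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }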
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading two zero-byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: during the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle" change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset by the Transfer method, losing the information we had about the
header size. This patch moves it back to the connectdata struct.
something went wrong, like getting a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also fixed that the
path would be "remembered" in vain for cases where the connection was about
to get closed.