is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
|
|
while () => while()
and some other minor re-indentings
|
|
consistency
|
|
was even mentioned to be bad in a comment! This should make tests 2000 and 2001
work fine.
Also, freedirs() now takes an ftp_conn struct pointer, which saves some
unnecessary variable assignments.
|
|
https://bugzilla.novell.com/show_bug.cgi?id=332917 about an HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
|
|
when assigning a NULL pointer to a function pointer var.
|
|
|
in the connectdata structure by a single handler table ptr.
|
|
variables to avoid shadowing global declarations.
|
|
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I went over the code and
made it do this properly, and added test case 542 to verify it.
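A minimal sketch of the use case this improves, assuming an illustrative FTP
URL: with CURLOPT_NOBODY set (and CURLOPT_HEADER left off) the remote file
size can be fetched without downloading the file, and read back afterwards.

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double size = 0.0;
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* no body transfer */
      /* note: CURLOPT_HEADER deliberately not enabled */
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
        printf("remote file size: %.0f bytes\n", size);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }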
|
|
URLs ending with a slash properly (it should list the contents of that
directory). Test case 351 was brought back and test 1010 was added.
|
|
second transfer as it didn't store and remember the "" path from the
previous transfer so it would instead CWD to the entry path as stored. This
worked, but did a superfluous command. Thus, test case 541 now also verifies
this fix.
|
|
underlying ftp_readresp() function has a separate "cache" where there might
in fact be leftover data...
|
|
Renamed the curl_ftpssl enum to curl_usessl, along with its enumerated
constants, creating macros for backward compatibility.
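For illustration only, a sketch of the renamed identifiers (the URL is made
up, and the old CURLOPT_FTP_SSL / CURLFTPSSL_* spellings are assumed to
remain available through the compatibility macros):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
      /* new names; older code using CURLOPT_FTP_SSL with CURLFTPSSL_TRY is
         assumed to keep compiling via the compatibility macros */
      curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_TRY);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }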
|
|
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3).
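A minimal sketch of such a check, assuming CURLE_REMOTE_ACCESS_DENIED is one
of the new error-code names: defining CURL_NO_OLDIES before including the
header (or passing -DCURL_NO_OLDIES to the compiler) turns any remaining use
of an obsolete name into a compile error.

  /* define before including curl.h so the obsolete names are not declared */
  #define CURL_NO_OLDIES
  #include <curl/curl.h>

  int main(void)
  {
    /* compiles with the new name; the old spelling (assumed here to be
       CURLE_FTP_ACCESS_DENIED) would now fail to compile */
    CURLcode rc = CURLE_REMOTE_ACCESS_DENIED;
    return (int)rc;
  }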
|
|
low-level ftp_readresp() function. Hopefully addressing bug #1779054.
|
|
out that libcurl didn't deal with large responses from server commands, when
a single response consisted of multiple lines with a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
|
|
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
|
|
download transfer size much earlier, making it possible to read it with
CURLINFO_CONTENT_LENGTH_DOWNLOAD as soon as possible.
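A sketch of what this makes possible, under the assumption that calling
curl_easy_getinfo() from inside a write callback is allowed (the callback
and URL are made up): the expected size can be queried while the transfer is
still in progress.

  #include <curl/curl.h>
  #include <stdio.h>

  /* illustrative write callback that peeks at the expected total size */
  static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    CURL *curl = (CURL *)userdata;
    double expected = 0.0;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &expected);
    if(expected > 0.0)
      fprintf(stderr, "expecting %.0f bytes in total\n", expected);
    (void)ptr;
    return size * nmemb; /* claim all data as consumed */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
      curl_easy_setopt(curl, CURLOPT_WRITEDATA, curl); /* hand handle to callback */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }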
|
|
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify and yet it does (on some platforms).
|
|
(http://curl.haxx.se/bug/view.cgi?id=1776235) about how ftp requests with
NOBODY on a directory would do a "SIZE (null)" request. This is now fixed and
test case 1000 was added to verify it.
|
|
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
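A small sketch of what the new copying behaviour permits (URL and buffer are
illustrative): the string handed to curl_easy_setopt() may be modified or
freed right away, since libcurl keeps its own copy.

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char url[64];
      snprintf(url, sizeof(url), "http://example.com/item/%d", 42);
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* the buffer can be reused immediately; libcurl has cloned the string */
      snprintf(url, sizeof(url), "scratch buffer reused");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }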
|
|
of a socket after it has been closed, when the FTP-SSL data connection is taken
down.
|
|
some few internal identifiers to avoid conflicts, which could be useful on
other platforms.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
|
|
a control connection that was deemed "dead" to still be re-used in a following
request. We must make sure the connection gets closed in this situation.
|
|
define the symbols for backwards source compatibility)
|
|
libcurl. This also makes the options change name to --krb (from --krb4) and
CURLOPT_KRBLEVEL (from CURLOPT_KRB4LEVEL), but the old names are still
working.
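A hedged sketch of the renamed option (URL and level are illustrative, and
the build is assumed to have Kerberos/GSS support): new code sets
CURLOPT_KRBLEVEL, while code still using CURLOPT_KRB4LEVEL is expected to
keep building through the old define. On the command line the equivalent
would be: curl --krb private ftp://example.com/secret.txt

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/secret.txt");
      /* new option name; CURLOPT_KRB4LEVEL remains as a compatibility define */
      curl_easy_setopt(curl, CURLOPT_KRBLEVEL, "private");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }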
|
|
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't do no-body
requests on FTP files on re-used connections properly, or at least it didn't
provide the info back in the header callback properly in the subsequent
requests.
|
|
from the #1739100 bug report
|
|
server!
|
|
using custom timeout values.
|
|
returning an error code, to allow connections to be torn down
cleanly since this function can be called AFTER an OOM situation
has already been reached.
|
|
anyway and we can just as well rely on it being valid.
CID 12, coverity.com scan
|
|
Removed the NULL check since the pointer must be valid already.
|
|
and added tests 290 and 291 to check.
|