is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the number of duplicate struct
members in HandleData and Curl_transfer_keeper, since it was really messy to
keep track of two variables with the same name and basically the same purpose!
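
For readers not deep in the source, a very rough sketch of the resulting
layout follows; only the struct names come from the text above, and the
member names are made up purely for illustration:

  /* rough sketch, not the actual curl source */
  struct SingleRequest {
    long bytecount;           /* counters that only live for one request */
    /* ... */
  };

  struct UrlState {
    char *pathbuffer;         /* members that must be set before DO or
                                 survive after DONE */
    /* ... */
  };

  struct SessionHandle {
    struct UrlState state;
    struct SingleRequest req; /* the per-request data, SessionHandle->req */
    /* ... */
  };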
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value since it was the result
of a size_t value wrapping around below zero. This happened when the whole
HTTP request failed to get sent in one single send.
http://curl.haxx.se/mail/lib-2007-11/0165.html
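
For illustration only (this is not the curl code, just the class of bug
described above): when an unsigned size_t subtraction goes below zero it
wraps around to a huge value, which is what then got passed on as the
header size.

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
    size_t header_len = 100;    /* bytes of HTTP header left to send */
    size_t already_sent = 150;  /* more than header_len was consumed */

    /* a signed result of -50 turns into a gigantic unsigned number */
    size_t remaining = header_len - already_sent;
    printf("remaining: %zu\n", remaining); /* ~1.8*10^19 on a 64 bit system */
    return 0;
  }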
#ifdef CURL_DOES_CONVERSIONS anyway! I turned it into a DEBUGASSERT() instead.
huge send buffer sizes
consistency
https://bugzilla.novell.com/show_bug.cgi?id=332917 about an HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
     at connection cleanup, at which time the struct HandleData could be
     used by another connection.
     Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
     data->reqdata.proto.* on connection setup if needed (that is if the
     SessionHandle was used by a different connection).
when assigning a NULL pointer to a function pointer var.
CURLOPT_COPYPOSTFIELDS option added for dynamic.
Fix some OS400 features.
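
A minimal sketch of how the new option is meant to be used (URL and field
data are made up for the example): with CURLOPT_COPYPOSTFIELDS libcurl makes
its own copy of the data, so the application buffer may be changed or freed
as soon as curl_easy_setopt() returns.

  #include <curl/curl.h>
  #include <string.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char fields[64];
      strcpy(fields, "name=daniel&tool=curl");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/submit");
      /* with plain CURLOPT_POSTFIELDS the buffer would have to stay intact
         until the transfer is over; COPYPOSTFIELDS removes that requirement */
      curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, fields);

      memset(fields, 0, sizeof(fields)); /* safe, libcurl kept its own copy */

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }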
in the connectdata structure by a single handler table ptr.
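
A rough sketch of the handler-table idea (the names below are made up for
illustration and are not curl's actual identifiers): each protocol provides
one table of function pointers, and the connection keeps a single pointer to
it instead of a bunch of protocol-specific members.

  struct connection; /* forward declaration for the sketch */

  struct proto_handler {
    const char *scheme;                      /* "http", "ftp", ... */
    int (*do_request)(struct connection *c); /* perform the request */
    int (*done)(struct connection *c, int status);
    int default_port;
  };

  struct connection {
    const struct proto_handler *handler;     /* the single table pointer */
    /* ... */
  };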
a response that was larger than 16KB is now improved slightly, so that the
restriction at 16KB is for the headers only and it should be a rare
situation where the response headers exceed 16KB. Thus, I consider #47 fixed
and the header limitation is now known as known bug #48.
Added test case 1008 to verify. Note that #47 is still there.
the --proxy-negotiate command line option to allow a user to explicitly
select it.
proxies for FTP urls.
HTTP PUT using Digest authentication. Test cases 5320 and 5322 were also
added to verify the functionality.
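
For context, a minimal sketch of the scenario that was fixed: an HTTP PUT
with Digest authentication through libcurl. The URL, credentials and file
name are made up for the example.

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    FILE *src = fopen("upload.bin", "rb");

    if(curl && src) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload.bin");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);    /* PUT the data */
      curl_easy_setopt(curl, CURLOPT_READDATA, src); /* body read with fread() */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
      curl_easy_perform(curl);
    }
    if(src)
      fclose(src);
    curl_easy_cleanup(curl);
    return 0;
  }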
was set; this should make it work for all cases!
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
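
What this means in practice, as a small sketch (URL made up for the example):
the buffer handed to curl_easy_setopt() no longer needs to outlive the call,
since libcurl keeps its own copy of the string.

  #include <curl/curl.h>
  #include <string.h>

  static void set_url(CURL *curl)
  {
    char url[64];
    strcpy(url, "http://example.com/");
    /* 'url' goes away when this function returns; libcurl's private copy
       is what the transfer uses */
    curl_easy_setopt(curl, CURLOPT_URL, url);
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      set_url(curl);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }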
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusion when libcurl tried to send the rest of the data in order to
keep the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
a few internal identifiers to avoid conflicts, which could be useful on
other platforms.
to a proxy during CONNECT auth negotiation.
since they're already included through "setup.h".
the multi interface. Note that it still performs part of the connection
procedure in a blocking manner.
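
For context, a bare-bones sketch of driving a transfer with the multi
interface (URL made up, error checking left out, and a real application
would wait on the file descriptors from curl_multi_fdset() instead of
busy-looping):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 0;

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    curl_multi_add_handle(multi, easy);

    do {
      curl_multi_perform(multi, &running); /* never blocks for long */
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }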
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-use of connections wasn't properly checking if the connection was
marked for closure.
SSL/TLS layer. http://www.mozilla.org/projects/security/pki/nss/
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves, as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, but it also
means we multiply set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't before either, since the code did
*1000 on the timeout values in several places already.
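
A small sketch of the new options in use (URL and the particular timeout
values are made up for the example):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L); /* 2.5 s to connect */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 7500L);        /* 7.5 s in total */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }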
header, you got _two_ User-Agent headers in the CONNECT request...! Added
test case 287 to verify the fix.
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for all
protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we can
add it without the extra stuff added.
non-ASCII platforms.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading two zero-byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 makes one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
could very well cause a negative number to get passed in and thus cause
reading outside of the array usually used for this purpose.
We avoid this by using the uppercase macro versions introduced just now,
which do some extra crazy typecasts to keep byte codes > 127 from turning
into negative int values.
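
The idiom looks roughly like this (macro and function names below are made
up for illustration, not curl's exact ones): casting through unsigned char
first keeps bytes above 127 from becoming negative int values, which the
<ctype.h> functions are not required to handle.

  #include <ctype.h>

  #define ISSPACE_SAFE(x)  (isspace((int)((unsigned char)(x))))
  #define ISDIGIT_SAFE(x)  (isdigit((int)((unsigned char)(x))))

  int count_digits(const char *s)
  {
    int n = 0;
    while(*s) {
      if(ISDIGIT_SAFE(*s)) /* safe even if *s is e.g. 0xE5 and char is signed */
        n++;
      s++;
    }
    return n;
  }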