proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it needed to be, so that it could
be sent properly (again) to the subsequent URL. This is now fixed and these
test cases are no longer disabled.
|
|
--disable-http
|
|
overrun" (http://curl.haxx.se/bug/view.cgi?id=2026240) identifying two
problems, and providing the fix for them:
- CURL_READFUNC_PAUSE did in fact not pause the _sending_ of data (which it
is designed for) but instead paused the _receiving_ of data!
- libcurl didn't internally set the read counter to zero when this return
code was detected, which would potentially lead to junk getting sent to
the server.
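A minimal sketch of the intended use of this return code, with the upload
context struct and its fields made up just for illustration: the read
callback returns CURL_READFUNC_PAUSE when it has nothing to send, and the
application later resumes the upload with curl_easy_pause(handle,
CURLPAUSE_CONT).

  #include <string.h>
  #include <curl/curl.h>

  struct upload_ctx {        /* hypothetical application state */
    const char *data;        /* data left to upload */
    size_t len;
    int ready;               /* is data available right now? */
  };

  /* read callback: pauses the _sending_ of data until unpaused */
  static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userp)
  {
    struct upload_ctx *ctx = (struct upload_ctx *)userp;
    size_t room = size * nitems;

    if(!ctx->ready)
      return CURL_READFUNC_PAUSE;  /* nothing to send yet, pause the upload */

    if(ctx->len < room)
      room = ctx->len;
    memcpy(buffer, ctx->data, room);
    ctx->data += room;
    ctx->len -= room;
    return room;                   /* returning 0 ends the upload */
  }

Set it up with CURLOPT_READFUNCTION and CURLOPT_READDATA as usual.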
|
|
response codes. Previously libcurl would hang on such occurrences. I added
test case 1033 to verify.
|
|
how the HTTP redirect following code didn't properly follow to a new URL if
the new URL was only a query string such as "Location: ?moo=foo". Test case
1031 was added to verify this fix.
|
|
Added a few comments while at it.
|
|
when using CURL_AUTH_ANY" (http://curl.haxx.se/bug/view.cgi?id=1945240).
The problem was that when libcurl rewound a stream meant for upload while
preparing for a second request, it could accidentally continue sending the
rewound data on the first request instead of on the second.
Ben also provided test case 1030 that verifies this fix.
|
|
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
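A minimal sketch of how an application might use this, with the URL just for
illustration: after a transfer performed without CURLOPT_FOLLOWLOCATION, the
would-be redirect target can be read out and followed (or not) by the
application itself.

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char *redirect = NULL;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/old");
      /* note: CURLOPT_FOLLOWLOCATION is deliberately not enabled */
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect);
        if(redirect)
          printf("server redirects to: %s\n", redirect);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }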
|
|
case 617 (which was added by Daniel Fandrich 5 Mar 2008).
|
|
a single state variable to make the code easier to follow and understand.
|
|
happened if you set the connection cache size to 1 and for example failed to
log in to an FTP site. Bug report #1896698
(http://curl.haxx.se/bug/view.cgi?id=1896698)
|
|
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with, and since it doesn't work, hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
|
|
the SingleRequest one to make pipelining better. It is a bit tricky to keep
them in the right place: things related to the actual request and things
related to the actual connection each need to go into the right struct.
|
|
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline anywhere near as
nicely as one would expect... Test case 530 was also updated to take the
improved functionality into account.
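A minimal sketch of the application side of this, with the function name and
handle array made up for illustration: pipelining is enabled on the multi
handle and the easy handles are then all added up front.

  #include <curl/curl.h>

  /* enable pipelining, then add all N easy handles at once */
  static CURLM *setup_pipelined_multi(CURL **easies, int count)
  {
    CURLM *multi = curl_multi_init();
    int i;

    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

    for(i = 0; i < count; i++)
      curl_multi_add_handle(multi, easies[i]);

    return multi;
  }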
|
|
function itself adds that. Fixed 50 or so strings!
|
|
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming begins
(which of course can be a slow operation depending on file size, I/O
bandwidth and more). This new callback will also be preferred over the
CURLOPT_IOCTLFUNCTION one for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
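The entry's first line is missing here, but this describes what libcurl
exposes as the seek callback; a minimal sketch using the CURLOPT_SEEKFUNCTION
and CURLOPT_SEEKDATA names, assuming the upload source is a plain FILE
stream:

  #include <stdio.h>
  #include <curl/curl.h>

  /* called when libcurl needs to rewind or reposition the upload stream,
     e.g. for a resumed upload or when an HTTP auth negotiation requires
     the request body to be sent again */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    FILE *upload = (FILE *)userp;
    /* origin is SEEK_SET/SEEK_CUR/SEEK_END, as for fseek() */
    if(fseek(upload, (long)offset, origin) != 0)
      return CURL_SEEKFUNC_FAIL;    /* seek failed */
    return CURL_SEEKFUNC_OK;        /* seek worked */
  }

  static void setup_seek(CURL *curl, FILE *upload)
  {
    curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
    curl_easy_setopt(curl, CURLOPT_SEEKDATA, upload);
  }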
|
|
and the write callbacks that can now make a connection's reading and/or
writing get paused.
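A minimal sketch of the receiving side of this, with the fixed-size buffer
struct made up for illustration: the write callback returns
CURL_WRITEFUNC_PAUSE when it has no room, and the application unpauses later
with curl_easy_pause(handle, CURLPAUSE_CONT).

  #include <string.h>
  #include <curl/curl.h>

  struct sink {                /* hypothetical application buffer */
    char data[16384];
    size_t used;
  };

  /* write callback: pauses _receiving_ when the local buffer is full */
  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    struct sink *s = (struct sink *)userp;
    size_t bytes = size * nmemb;

    if(s->used + bytes > sizeof(s->data))
      return CURL_WRITEFUNC_PAUSE;   /* no room right now */

    memcpy(s->data + s->used, ptr, bytes);
    s->used += bytes;
    return bytes;
  }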
|
|
use that prefix, since we use it only for library-wide internal global
symbols.
|
|
is initialized at the start of the DO action. I removed the
Curl_transfer_keeper struct completely, and I had to move out a few struct
members (that had to be set before DO or used after DONE) to the UrlState
struct. The SingleRequest struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper, since it was really messy to
keep track of two variables with the same name and basically the same
purpose!
|
|
do_init() and do_complete() which now are called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
|
|
for the cases where there's nothing to do in here, like for SFTP directory
listings that are already complete when this function gets called. The init
stuff clears byte counters, which isn't really desired.
|
|
consistency
|
|
CURLOPT_COPYPOSTFIELDS option added for dynamic data.
Fix some OS400 features.
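A minimal sketch of what the new option allows, with the buffer contents just
for illustration: since libcurl copies the string, a short-lived local buffer
is fine here (with plain CURLOPT_POSTFIELDS, the data would have to outlive
the transfer).

  #include <stdio.h>
  #include <curl/curl.h>

  static void post_value(CURL *curl, int value)
  {
    char body[64];                       /* temporary, stack-allocated */
    snprintf(body, sizeof(body), "value=%d", value);
    /* libcurl makes its own copy of the zero-terminated string */
    curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, body);
  }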
|
|
Added test case 1008 to verify. Note that #47 is still there.
|
|
variables to avoid shadowing global declarations.
|
|
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change the
method to GET in the subsequent request (like it does for response code 302,
because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again, i.e. keep POST after 301.
The curl tool got this option as --post301.
Test cases 1011 and 1012 were added to verify.
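A minimal sketch of enabling the new behaviour, with the request body just
for illustration:

  #include <curl/curl.h>

  static void setup_post_follow(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* keep the POST method when a 301 redirect is followed */
    curl_easy_setopt(curl, CURLOPT_POST301, 1L);
  }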
|
|
out a problem with doing an empty upload over FTP on a re-used connection.
I added test case 541 to reproduce it and to verify the fix.
|
|
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3)
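A minimal sketch of the suggested check: define the macro before including
the header (or pass -DCURL_NO_OLDIES to the compiler) and any use of an
obsolete name becomes a compile error.

  #define CURL_NO_OLDIES        /* hide the obsolete compatibility names */
  #include <curl/curl.h>

  int main(void)
  {
    /* current names still compile; obsolete aliases no longer do */
    return (int)CURLE_OK;
  }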
|
|
size earlier when doing HTTP downloads, so that applications and the
progress meter etc now get the info earlier in the flow than before.
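A minimal sketch of an application-side consumer of that info, using the
CURLOPT_PROGRESSFUNCTION callback; the percentage output is just for
illustration:

  #include <stdio.h>
  #include <curl/curl.h>

  /* dltotal now becomes known earlier in the transfer, so the percentage
     can be shown sooner */
  static int progress_cb(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    if(dltotal > 0.0)
      fprintf(stderr, "\r%.1f%%", dlnow * 100.0 / dltotal);
    return 0;                    /* non-zero aborts the transfer */
  }

  static void setup_progress(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);
  }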
|
|
LIBSSH2_APINO
|
|
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
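A minimal sketch of what this makes safe, with the URL pattern just for
illustration: the buffer can be reused or go out of scope right after the
call, since libcurl keeps its own copy of string options.

  #include <stdio.h>
  #include <curl/curl.h>

  static void set_numbered_url(CURL *curl, int i)
  {
    char url[256];                       /* temporary buffer */
    snprintf(url, sizeof(url), "http://example.com/file%d.txt", i);
    curl_easy_setopt(curl, CURLOPT_URL, url);   /* libcurl copies the string */
  }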
|
|
some few internal identifiers to avoid conflicts, which could be useful on
other platforms.
|
|
|
* Move scp:// into a state machine so it won't block in multi mode
* When available use the full directory entry from the sftp:// server
|
|
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
|
|
bug report #1715394 (http://curl.haxx.se/bug/view.cgi?id=1715394), and the
transfer-related info "variables" were indeed wrongly overwritten with zeroes
and have now been adjusted. The upload size still isn't accurate.
|
|
when CURLOPT_HTTP200ALIASES is used to avoid the problem mentioned below is
not very nice if the client wants to be able to use _either_ a HTTP 1.1
server or one within the aliases list... so starting now, libcurl will
simply consider 200-alias matches to be HTTP 1.0 compliant.
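For reference, a minimal sketch of how an application sets up such an alias
list; the "ICY 200 OK" status line used by SHOUTcast-style servers is the
classic example:

  #include <curl/curl.h>

  static struct curl_slist *allow_icy(CURL *curl)
  {
    struct curl_slist *aliases = curl_slist_append(NULL, "ICY 200 OK");
    curl_easy_setopt(curl, CURLOPT_HTTP200ALIASES, aliases);
    /* with this change, such a match is treated as an HTTP 1.0 response */
    return aliases;   /* keep the list alive during the transfer and free
                         it with curl_slist_free_all() afterwards */
  }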
|
|
libcurls, which turned out to be the 25-nov-2006 change which treats HTTP
responses without Content-Length or chunked encoding as having no bodies. We
have now added the condition that the above-mentioned response is only
without a body if the response is HTTP 1.1.
|
|
was 16385 bytes (16K+1) and it turned out we didn't properly always "suck
out" all data from libssh2. The effect being that libcurl would hang on the
socket waiting for data when libssh2 had in fact already read it all...
|