Age | Commit message | Author |
|
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this area,
as the FTP code now attempts to keep (better) track of the pending responses
that still need to be read in ftp_done().
|
|
|
|
When using the multi interface over HTTP and the server returns a Location
header, the running easy handle will get stuck in the CURLM_STATE_PERFORM
state, leaving the external event loop stuck waiting for data from the
incoming socket (when using the curl_multi_socket_action stuff). While this
bug was pretty hard to find, it seems to require only a one-line fix. The
break statement on line 1374 in multi.c caused the function to skip the call
to multistate().
How to reproduce this bug? Well, that's another question. evhiperfifo.c in
the examples directory chokes on this bug only _sometimes_, probably
depending on how fast the URLs are added. One way of testing the bug out is
writing to hiper.fifo from more than one source at the same time.
|
|
pipelining, as libcurl could then easily get confused and A) work on the
handle that was not "first in queue" on a pipeline, or even B) tell the app
to REMOVE a socket while it was in use by a second handle in a pipeline. Both
errors caused hanging or stalling applications.
|
|
was actually ready to get done, as the internal time resolution is higher
than the returned millisecond timer. Therefore it could cause applications
running on fast processors to do short bursts of busy-loops.
curl_multi_timeout() will now only return 0 if the timeout has actually
already triggered.
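For reference, a minimal sketch of how an application typically consumes
this value (the surrounding event loop and error handling are left out;
the helper name is made up):

  #include <curl/curl.h>

  /* Sketch: decide how long to wait before acting on the multi handle.
     With this fix, a returned 0 really means "already expired", so
     calling curl_multi_perform() right away no longer busy-loops. */
  static void act_or_wait(CURLM *multi)
  {
    long timeout_ms = -1;
    int running = 0;

    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms == 0)
      curl_multi_perform(multi, &running); /* timeout already triggered */
    else if(timeout_ms < 0)
      timeout_ms = 1000; /* no timeout set, pick an own default */
    /* otherwise: wait with select()/poll() for at most timeout_ms */
  }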
|
|
now has an improved ability to do the right thing when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
|
|
removing easy handles from multi handles when the easy handle is/was within
an HTTP pipeline. His bug report #2351653
(http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
eventually fixed by a patch by Igor himself.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in an HTTP pipeline.
|
|
Use a wrapper function to call the system's getaddrinfo().
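A minimal sketch of the idea (the wrapper name and its exact behavior here
are illustrative, not the actual function added):

  #include <stddef.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  /* Hypothetical wrapper: funnel every getaddrinfo() call through one
     place so platform quirks and result checking live in a single spot. */
  static int wrapped_getaddrinfo(const char *node, const char *service,
                                 const struct addrinfo *hints,
                                 struct addrinfo **result)
  {
    int error;

    *result = NULL;
    error = getaddrinfo(node, service, hints, result);
    if(error)
      *result = NULL; /* never hand back a dangling pointer on failure */
    return error;
  }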
|
|
|
|
|
|
eventually identified a flaw in how the multi_socket interface in some cases
failed to call the timeout callback when easy handles are removed and
added within the same millisecond.
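For context, the "timeout callback" is the function an application installs
with CURLMOPT_TIMERFUNCTION; a rough sketch (printf stands in for arming a
real timer in the app's event library):

  #include <stdio.h>
  #include <curl/curl.h>

  /* Sketch of an application's timeout callback. libcurl calls it with
     the number of milliseconds to wait before acting, or -1 to delete
     the timer. This is the call that was missed in the case above. */
  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    (void)multi;
    (void)userp;
    printf("(re)arm timer: %ld ms\n", timeout_ms);
    return 0;
  }

  /* installed with:
     curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb); */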
|
|
CURLINFO_REDIRECT_URL in multi mode" also contained a patch that fixed the
problem.
|
|
|
|
|
|
FreeBSD ports system.
|
|
|
|
|
|
|
|
interface, and the proxy would send Connection: close during the
authentication phase. http://curl.haxx.se/bug/view.cgi?id=2069047
|
|
request to prematurely end.
|
|
libcurl to not tell the app properly when a socket was closed (when the name
resolve done by c-ares completed) and then immediately re-created and put to
use again (for the actual connection). Since the closure makes the
"watch status" get lost in several event-based systems, libcurl needs to
tell the app about this close/re-create case.
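To illustrate the app side of this, here is a rough sketch of a
CURLMOPT_SOCKETFUNCTION callback (printf stands in for a real event
library):

  #include <stdio.h>
  #include <curl/curl.h>

  /* Sketch: with the fix, the resolver socket is reported with
     CURL_POLL_REMOVE when c-ares closes it, even though the same
     descriptor number may reappear right away for the actual
     connection, so the app can drop and re-create its watch. */
  static int socket_cb(CURL *easy, curl_socket_t s, int what,
                       void *userp, void *socketp)
  {
    (void)easy;
    (void)userp;
    (void)socketp;
    if(what == CURL_POLL_REMOVE)
      printf("stop watching fd %d\n", (int)s);
    else
      printf("(re)watch fd %d for %s%s\n", (int)s,
             (what & CURL_POLL_IN) ? "read " : "",
             (what & CURL_POLL_OUT) ? "write" : "");
    return 0;
  }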
|
|
is for the TCP connect. I changed it to TIMER_PRETRANSFER which seems to be
what was intended here.
|
|
the curl_multi_socket() API with HTTP pipelining enabled and could lead to
the pipeline basically stalling for a very long period of time until it took
off again.
|
|
go straight to DO
we had multiple states for which the internal function returned no socket at
all to wait for, with the effect that libcurl calls the socket callback (when
curl_multi_socket() is used) with REMOVE prematurely (as it would be added
again very shortly)
|
|
and doing CONNECT to a proxy. The app would then busy-loop until the proxy
completed its response.
|
|
the use of microsecond resolution keys for internal splay trees.
http://curl.haxx.se/mail/lib-2008-04/0513.html
|
|
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
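A minimal usage sketch (the URL is made up and error handling is trimmed):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char *redirect = NULL;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/old");
      /* CURLOPT_FOLLOWLOCATION deliberately not enabled */
      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect) == CURLE_OK
         && redirect)
        printf("would have redirected to: %s\n", redirect);
      curl_easy_cleanup(curl);
    }
    return 0;
  }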
|
|
- Added macros for curl_share_setopt and curl_multi_setopt to check at least
the correct number of arguments.
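The technique, roughly (the exact text in the public headers may differ): a
function-like macro with exactly the right number of parameters that expands
to the very same call, so a call with the wrong argument count already fails
at preprocessing time while a correct call is left untouched.

  #define curl_share_setopt(share,opt,param) curl_share_setopt(share,opt,param)
  #define curl_multi_setopt(handle,opt,param) curl_multi_setopt(handle,opt,param)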
|
|
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it were a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality have been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with, and since it doesn't work, hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
|
|
|
|
use of the "is_in_pipeline" struct field.
|
|
pipelining. A broken connection is not restored and we get into an infinite
loop. It happens because of wrong is_in_pipeline values.
|
|
|
|
that it is bad anyway. Starting now, removing a handle that is in use in a
pipeline will break the pipeline - it'll be set back up again, but still...
|
|
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
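For reference, the typical curl_multi_fdset()/select() wait step looks
roughly like this; when a state forgets to report its socket, select() has
nothing to wake up on and the transfer appears to hang (helper name made
up):

  #include <sys/select.h>
  #include <curl/curl.h>

  /* Sketch of the wait step in a curl_multi_fdset()-driven loop. */
  static void wait_for_activity(CURLM *multi)
  {
    fd_set rd, wr, ex;
    int maxfd = -1;
    struct timeval tv = { 1, 0 }; /* cap the wait at one second */

    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&ex);
    curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
    if(maxfd >= 0)
      select(maxfd + 1, &rd, &wr, &ex, &tv);
  }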
|
|
|
|
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline nearly as nicely
as one would expect... Test case 530 was also updated to take the improved
functionality into account.
|
|
and the write callbacks that now can make a connection's reading and/or
writing get paused.
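A minimal sketch of the mechanism: the write callback returns
CURL_WRITEFUNC_PAUSE to pause delivery of received data, and the application
later resumes with curl_easy_pause() (the 'please_pause' flag is an
application detail made up here):

  #include <curl/curl.h>

  /* Sketch: pause a transfer from within the write callback. */
  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    int *please_pause = (int *)userp;

    (void)ptr;
    if(*please_pause)
      return CURL_WRITEFUNC_PAUSE; /* nothing consumed, transfer pauses */
    /* ... otherwise consume size * nmemb bytes at ptr ... */
    return size * nmemb;
  }

  /* later, from the application, to resume:
     curl_easy_pause(easy_handle, CURLPAUSE_CONT); */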
|
|
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
|
|
do_init() and do_complete() which are now called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
|
|
consistency
|
|
|
|
variables to avoid shadowing global declarations.
|
|
|
|
cleaning up after an OOM condition in curl_multi_add_handle
|
|
|
|
hash function for different hashes, and also expanded the default size for
the socket hash table used in multi handles to greatly enhance speed when
very many connections are added and the socket API is used.
|
|
HTTP CONNECT over a proxy
|
|
|