|
"if(a)" is our style, not "if( a )"
|
|
By the use of the new lib/checksrc.pl script that checks that our
basic source style rules are followed.
|
|
asyn-ares.c and asyn-thread.c are two separate backends that implement
the same (internal) async resolver API for libcurl to use. The backend
is selected at build time.
The internal API for asynchronous resolvers is defined in asyn.h.
|
|
Found with codespell.
|
|
We keep them shorter than 80 columns.
|
|
Within multi_socket, when 'conn' was used as a shorthand, 'data' could
be changed and multi_runsingle could then modify which connectdata
struct it dealt with. This bug has not been included in a public
release.
Using 'conn' like that turned out to be ugly. This change is a partial
revert of commit f1c6cd42f474df59.
Reported by: Miroslav Spousta
Bug: http://curl.haxx.se/bug/view.cgi?id=3265485
|
|
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL from single defines in a central place out
to each protocol's definition. It also makes us stop abusing the
protocol field for other info than the protocol, and we can start
cleaning up other protocol-specific things by adding flags bits to set
in the handler struct.
The "protocol" field in the connectdata struct was removed as well and
the code now refers directly to the conn->handler->protocol field
instead. To make things work properly, the code now always stores a
conn->given pointer that points to the original handler struct so that
the code can learn details from the original protocol even if
conn->handler is modified along the way - for example when switching to
go over an HTTP proxy.
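To illustrate, a minimal sketch of the idea; the struct layout and the
PROTOPT_* names here are illustrative stand-ins, not the exact libcurl
definitions:

    /* illustrative flag bits for protocol characteristics */
    #define PROTOPT_CLOSEACTION (1 << 0) /* must act before socket close */
    #define PROTOPT_DUAL        (1 << 1) /* uses a second (data) channel */

    struct handler_sketch {
      const char *scheme;    /* URL scheme, e.g. "ftp" */
      unsigned int protocol; /* the protocol bit itself */
      unsigned int flags;    /* PROTOPT_* characteristics */
    };

    /* FTP sends QUIT on close and uses a separate data connection */
    static const struct handler_sketch ftp_handler = {
      "ftp",
      (1 << 2),              /* hypothetical protocol bit */
      PROTOPT_CLOSEACTION | PROTOPT_DUAL
    };

    /* callers now ask the handler instead of a central define:
       if(conn->handler->flags & PROTOPT_CLOSEACTION) ... */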
|
|
Some protocols have to call the underlying functions without regard to
what exact state the socket signals. For example even if the socket says
"readable", the send function might need to be called while uploading,
or vice versa. This is the case for the libssh2-based protocols SCP
and SFTP, so we now introduce a define to mark such protocols and make
the multi interface code aware of this concept.
This is another fix to make test 582 run properly.
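A hedged sketch of what the multi code can do with such a define; the
flag name PROTOPT_DIRLESS and the helper below are invented for
illustration:

    #define SOCK_READABLE   (1 << 0)
    #define SOCK_WRITABLE   (1 << 1)
    #define PROTOPT_DIRLESS (1 << 2) /* hypothetical: socket state is
                                        not authoritative */

    /* decide which directions to drive for this connection */
    static int directions_to_run(unsigned int handler_flags, int sock_state)
    {
      if(handler_flags & PROTOPT_DIRLESS)
        /* libssh2-style protocols: drive both the send and the recv
           paths no matter what the socket signalled */
        return SOCK_READABLE | SOCK_WRITABLE;
      return sock_state;
    }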
|
|
After a request times out, the connection wasn't properly closed and
prevented from being re-used, so subsequent transfers could still
mistakenly get to use the previously aborted connection.
|
|
When failing to connect the protocol during the CURLM_STATE_PROTOCONNECT
state, Curl_done() has to be called with the premature flag set TRUE,
as this can be important for the pingpong protocols.
When Curl_done() is called with premature == TRUE, it needs to call
Curl_disconnect() with its 'dead_connection' argument set to TRUE as
well so that any protocol handler's disconnect function won't attempt to
use the (control) connection for anything.
This problem caused the pingpong protocols to fail to disconnect when
STARTTLS failed.
Reported by: Alona Rossen
Bug: http://curl.haxx.se/mail/lib-2011-02/0195.html
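In code form the call chain amounts to roughly this; a simplified
sketch with stand-in types, not the real Curl_done()/Curl_disconnect()
bodies:

    #include <stdbool.h>

    typedef int CURLcode;   /* stand-in for the real type */
    struct connectdata;     /* opaque here */

    static CURLcode disconnect_sketch(struct connectdata *conn, bool dead)
    {
      (void)conn;
      if(dead) {
        /* a dead control connection: the protocol handler's disconnect
           function must not try to send anything (e.g. QUIT) on it */
      }
      return 0;
    }

    static CURLcode done_sketch(struct connectdata *conn, bool premature)
    {
      /* premature == TRUE: the request never completed (e.g. STARTTLS
         failed), so pass the connection on as dead */
      return disconnect_sketch(conn, premature);
    }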
|
|
The code in the toofast state needs to first recalculate the values
before it uses them again, since a while may have passed since they
were last calculated by the time it reaches this point.
|
|
As the function doesn't really use the connectdata struct but only the
SessionHandle struct, I modified what argument it takes.
|
|
The info messages about pipe status and cleared expire times are
clearly debug-related and not anything mere mortals will or should care
about, so they are now ifdef'ed DEBUGBUILD.
|
|
The generic timeout code must not check easy handles that are already
completed. Moving a handle to the completed state (again) in there
risked decreasing the number of alive handles once more, and thus the
counter could go negative.
This regression bug was added in 7.21.2 in commit ca10e28f06f1
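Schematically, with simplified state and struct names:

    enum state_sketch { STATE_RUNNING, STATE_COMPLETED };
    struct easy_sketch {
      enum state_sketch state;
      struct easy_sketch *next;
    };

    /* walk the handles for timeout checks, skipping completed ones so
       the number of alive handles is only decreased once per handle */
    static void check_timeouts_sketch(struct easy_sketch *head)
    {
      struct easy_sketch *e;
      for(e = head; e; e = e->next) {
        if(e->state == STATE_COMPLETED)
          continue; /* counting this down again would make the alive
                       counter go negative */
        /* ... timeout check; may move e to STATE_COMPLETED and then
           decrease the alive counter, exactly once ... */
      }
    }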
|
|
It helps to prevent a hangup with some FTP servers in case the idle
session timeout has been exceeded. It may also be useful for other
protocols that send a quit message on disconnect. Currently used by
FTP, POP3, IMAP and SMTP.
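As a sketch, with invented helper names standing in for the real
command machinery:

    struct conn_sketch { int sockfd; };

    static void send_command_sketch(struct conn_sketch *c, const char *cmd);
    static void close_socket_sketch(struct conn_sketch *c);

    /* say goodbye on disconnect, but only when the connection is still
       considered alive (FTP: QUIT, IMAP: LOGOUT, etc.) */
    static void ftp_disconnect_sketch(struct conn_sketch *conn, int dead)
    {
      if(!dead)
        send_command_sketch(conn, "QUIT");
      close_socket_sketch(conn);
    }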
|
|
The functions Curl_disconnect() and Curl_done() are both used within the
scope of a single request, so they cannot be allowed to use
Curl_expire(... 0) to kill all timeouts, since some timeouts are set
before a request and are supposed to remain until the request is done.
The timeouts are now instead cleared at curl_easy_cleanup() and when the
multi state machine changes a handle to the complete state.
|
|
With the latest changes to fix the timeout handling with the multi
interface we lost the timeout error messages. This patch brings them
back.
|
|
Add a timeout check for handles in the state machine so that they will
time out in all states, regardless of what actions may or may not
happen.
Fixed a bug in socket_action introduced recently when looping over timed
out handles: it wouldn't assign the 'data' variable and thus it wouldn't
properly take care of handles.
In the update_timer function, the code now checks if the timeout has
been removed and then it tells the application. Previously it would
always let the remaining timeout(s) just linger to expire later on.
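A sketch of that update_timer logic; the struct and helper are
simplified models (with the multi_socket API the callback is the one
set via CURLMOPT_TIMERFUNCTION, where -1 asks the app to delete its
timer):

    struct multi_sketch {
      long last_timeout_ms;    /* last value reported to the app */
      void (*timer_cb)(long ms, void *userp);
      void *timer_userp;
    };

    /* returns the closest expiry in ms, or -1 when none remains */
    static long next_timeout_ms(struct multi_sketch *m);

    static void update_timer_sketch(struct multi_sketch *m)
    {
      long ms = next_timeout_ms(m);
      if(ms == m->last_timeout_ms)
        return;               /* unchanged, nothing to tell the app */
      m->last_timeout_ms = ms;
      m->timer_cb(ms, m->timer_userp); /* -1 now reaches the app too,
                                          instead of letting removed
                                          timeouts linger */
    }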
|
|
Each easy handle has a list of timeouts, so as soon as the main timeout
for a handle expires, we must make sure to get the next entry from the
list and re-add the handle to the splay tree.
This was attempted previously but was done poorly in my commit
232ad6549a68450.
|
|
I fell over this bug report that mentioned that libcurl could wrongly
send more than one complete message at the end of a transfer. Reading
the code confirmed this, so I've added a new multi state to make it not
happen. The mentioned bug report was made by Brad Jorsch but is (oddly
enough) filed in Debian's bug tracker for the "wmweather+" tool.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=593390
|
|
When detecting that the send or recv speed exceeds the limit, the multi
interface changes state to TOOFAST, and previously there was no timeout
set that would force a recheck; it would rely on the application to
somehow call libcurl anyway. This now sets a timeout for a suitable
future time to check again whether the average transfer speed is then
below the threshold.
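The recheck time can be derived from how far ahead of the cap the
transfer is; a sketch of the arithmetic, not the exact libcurl code:

    /* how long to wait until the average speed falls under the cap */
    static long toofast_wait_ms(long long total_bytes, long elapsed_ms,
                                long long limit_bytes_per_sec)
    {
      /* bytes we were allowed to move so far at the capped rate */
      long long allowed = limit_bytes_per_sec * elapsed_ms / 1000;
      if(total_bytes <= allowed)
        return 0;             /* not too fast, no need to wait */
      /* time for the excess to be absorbed by the cap */
      return (long)((total_bytes - allowed) * 1000 / limit_bytes_per_sec);
    }

    /* the multi code then sets an expire timer for now +
       toofast_wait_ms(), so TOOFAST gets rechecked without the app */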
|
|
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
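A simplified model of the mechanism; the node and handle structs below
are illustrative, not libcurl's actual types:

    #include <stdlib.h>

    struct timeout_node {
      long expire_ms;              /* absolute expiry time */
      struct timeout_node *next;   /* kept sorted, soonest first */
    };

    struct handle_sketch {
      struct timeout_node *timeouts; /* head doubles as the splay key */
    };

    /* when the head expires, promote the next entry to be the new
       primary timeout and re-insert the handle into the splay tree */
    static void on_primary_expired(struct handle_sketch *h)
    {
      struct timeout_node *old = h->timeouts;
      h->timeouts = old->next;
      free(old);
      if(h->timeouts) {
        /* splay_insert(h->timeouts->expire_ms, h); -- hypothetical */
      }
    }

    /* Curl_expire(data, 0) clears the whole list, instead of later
       timeouts being silently ignored as before */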
|
|
Instead of looping over all attached easy handles, this now keeps a list
of messages in the multi handle. It allows curl_multi_info_read() to
perform in O(1) no matter how many easy handles are in use. This is
of importance since this function may be polled very frequently by apps
using the multi interface.
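A sketch of the O(1) scheme with illustrative struct names:

    struct msg_node {
      int msg_kind;              /* e.g. CURLMSG_DONE in the real API */
      struct msg_node *next;
    };

    struct multi_msgs_sketch {
      struct msg_node *msglist;  /* pending messages, oldest first */
    };

    /* curl_multi_info_read() style pop: O(1) no matter how many easy
       handles are attached to the multi handle */
    static struct msg_node *info_read_sketch(struct multi_msgs_sketch *m)
    {
      struct msg_node *msg = m->msglist;
      if(msg)
        m->msglist = msg->next;
      return msg;
    }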
|
|
The struct used for storing the message for a completed transfer is
now no longer allocated separately but is kept within the main struct
held for each easy handle, so that we avoid one malloc (and the
subsequent free).
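In struct form the change is roughly this, with invented names:

    struct msg_sketch { int kind; struct msg_sketch *next; };

    struct easy_handle_sketch {
      /* ... other per-transfer state ... */
      struct msg_sketch msg;  /* embedded: filled in on completion and
                                 linked into the multi's message list,
                                 so no separate malloc/free is needed */
    };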
|
|
curl_multi perform has two phases: run through every easy handle calling
multi_runsingle and remove expired timers (timer removal).
If a small timer (e.g. 1-10ms) is set during multi_runsingle, then it's
possible that the timer has already passed by the time the timer removal
runs. The
timer which was just added is then removed. This will potentially cause
the timer list to be empty and cause the next call to curl_multi_timeout
to return -1. Ideally, curl_multi_timeout should return 0 in this case.
One way to fix this is to move the struct timeval now = Curl_tvnow(); to
the top of curl_multi_perform. The change does that.
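In outline, with invented phase helpers (only Curl_tvnow() is a real
name here):

    /* inside curl_multi_perform(), in outline: */
    struct timeval now = Curl_tvnow();  /* the fix: sample 'now' first */

    run_all_handles(multi);             /* phase 1: multi_runsingle per
                                           handle; may add tiny timers */
    remove_expired_timers(multi, &now); /* phase 2: compares against the
                                           earlier 'now', so a timer
                                           added in phase 1 survives and
                                           curl_multi_timeout() won't
                                           return -1 spuriously */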
|
|
When curl_multi_remove_handle() is called and an easy handle is
returned to the connection cache held in the multi handle, we cannot
allow CURLINFO_LASTSOCKET to extract it, since that would more or less
encourage the user to use the socket while it can get used by libcurl
again.
Without this fix, we'd get a segfault in Curl_getconnectinfo() trying to
dereference the NULL pointer in 'data->state.connc'.
Bug: http://curl.haxx.se/bug/view.cgi?id=3023840
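The guard amounts to something like this sketch (CURL_SOCKET_BAD
standing in for the "no socket" answer):

    /* sketch of the check in Curl_getconnectinfo() */
    if(!data->state.connc) {
      /* the handle was removed from the multi handle and no longer
         owns a connection cache entry: report "no socket" instead of
         dereferencing the NULL pointer */
      return CURL_SOCKET_BAD;
    }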
|
|
My additional call to Curl_pgrsUpdate() would sometimes get
called even though there's no connection (left) so a NULL pointer
would get passed, causing a segfault.
|
|
1) no need to call the progress function twice when in the
CURLM_STATE_TOOFAST state.
2) Make sure that the progress callback's return code is
acknowledged when used
|
|
As long as no error is reported, the progress function can get
called. This may be a little TOO often so we should keep an eye
on this and possibly make this conditional somehow.
|
|
Igor Novoseltsev reported a problem with the multi socket API and
using timeouts and timers. It boiled down to a problem with
libcurl's use of GetTickCount() internally to figure out the
current time, while Igor's own application code used another
function call.
It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. It seems
GetTickCount() in general often has no better resolution than
16ms and switching to the alternative function
QueryPerformanceCounter has its share of problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
We address this problem by simply having libcurl treat timers that
have already occurred or will occur within 40ms as subject for
treatment. I'm confident that there are other implementations and
operating systems with similarly inaccurate timer functions, so it
makes sense to apply this generically, and I don't believe we sacrifice
much by adding a 40ms inaccuracy to these timeouts.
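The resulting check is tiny; a sketch:

    #define TIMER_SLACK_MS 40 /* tolerate coarse clocks, e.g. 16 ms
                                 GetTickCount() granularity */

    /* treat a timer as due if it expires now or within the slack */
    static int timer_is_due(long expire_ms, long now_ms)
    {
      return (expire_ms - now_ms) <= TIMER_SLACK_MS;
    }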
|
|
An update had added a couple of lines with DOS line endings,
and some compilers will choke on that (e.g. the Tru64 compiler).
|
|
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
first did one FTP transfer and, after its completion, used a second
easy handle to do another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - only for the purpose of having
an easy handle when it later disconnects it. The code assumed
that when the easy handle was removed and an internal reference
was made, that version could be killed later on when a new easy
handle came using the same connection. This was wrong as Hauke's
example showed that the removed handle wasn't killed for real
until later. This caused a double close attempt => segfault.
|
|
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one. The new fix now
removes the easy handle from all connections, not just the first one.
|
|
possible loss of data
|
|
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the
added benefit that it goes in line with my long-term wish to get rid of
CURLM_CALL_MULTI_PERFORM altogether from the public API.
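Schematically, the internal loop looks like this (multi_runsingle and
the return code are the names used elsewhere in these messages):

    /* loop inside libcurl while the state machine asks to be called
       again, so CURLM_CALL_MULTI_PERFORM never needs to reach the
       application */
    CURLMcode result;
    do {
      result = multi_runsingle(multi, easy);
    } while(result == CURLM_CALL_MULTI_PERFORM);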
|
|
state, we return CURLM_CALL_MULTI_PERFORM unconditionally so that we
can act faster, as in the case where the protocol-specific connect
doesn't block on anything and we can just pursue the next action
immediately. It also avoids a case where curl_multi_fdset() would
return -1.
|
|