- to a multi stack will cause CURLM_BAD_EASY_HANDLE to get returned.
- and while doing so it became apparent that the current timeout system for
  the socket API really was a bit awkward, since it became quite some work to
  be sure we have the correct timeout set. Jeff then provided the new
  CURLMOPT_TIMERFUNCTION, which is yet another callback the app can set to
  get to know when the general timeout time changes, and thus for an
  application like hiperfifo.c it makes everything a lot easier and nicer.
  There's a CURLMOPT_TIMERDATA option too, of course, in good old libcurl
  tradition.
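  A minimal sketch of how an application might hook up this callback pair;
  the app_ctx struct and the choice to merely record the timeout are
  illustrative only, not taken from hiperfifo.c:

    #include <curl/curl.h>

    /* application state: where we remember the timeout libcurl asked for
       (hypothetical struct for this sketch) */
    struct app_ctx {
      long timeout_ms;   /* -1 means: no timer should be running */
    };

    /* CURLMOPT_TIMERFUNCTION callback: libcurl calls this whenever the
       timeout it wants the application to wait for changes */
    static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      struct app_ctx *ctx = (struct app_ctx *)userp;
      (void)multi;
      ctx->timeout_ms = timeout_ms; /* a real app would (re)arm its event
                                       loop timer here and, on expiry, call
                                       the multi_socket API with
                                       CURL_SOCKET_TIMEOUT */
      return 0;
    }

    int main(void)
    {
      struct app_ctx ctx = { -1 };
      CURLM *multi;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      multi = curl_multi_init();

      curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
      curl_multi_setopt(multi, CURLMOPT_TIMERDATA, &ctx);

      /* ... add easy handles and drive them with the multi_socket API ... */

      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }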
- anymore
- case 535 and it now runs fine. Again a problem with the pipelining code not
  taking all possible (error) conditions into account.
- now runs fine.
- but that worked nicely in 7.15.5. I converted it into test case 532 and
  fixed the problem.
- would crash if a bad function sequence was used when shutting down after
  using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
  precautions have been added to make sure it doesn't any more - test case
  529 was added to verify.
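  For reference, a minimal sketch of the shutdown order the multi interface
  expects (remove the easy handle, clean it up, then clean up the multi
  handle); the reversed easy_cleanup-after-multi_cleanup order is exactly the
  misuse the added precautions guard against:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;
      CURLM *multi;
      int running = 0;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      easy = curl_easy_init();
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

      multi = curl_multi_init();
      curl_multi_add_handle(multi, easy);

      /* ... drive the transfer with curl_multi_perform(multi, &running) ... */

      /* intended teardown order: */
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);    /* the easy handle first ... */
      curl_multi_cleanup(multi);  /* ... then the multi handle */
      curl_global_cleanup();
      return 0;
    }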
- it basically was that we didn't remove the current connection from the pipe
  list when following a redirect. Also in this commit: several cases of
  additional debug code for debug builds, helping to check and track down
  some signs of run-time trouble.
- currently fits in the cache, to make the cache work better especially for
  pipelining cases but also for "mere" (persistent) connection re-use.
- handle that is part of a multi handle first removes the handle from the
  stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
  session-ID re-use on demand, since there obviously are broken servers out
  there that misbehave when session-IDs are used.
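  A small sketch of how an application could switch the session-ID cache off
  for a handle that talks to such a broken server (the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      easy = curl_easy_init();

      curl_easy_setopt(easy, CURLOPT_URL, "https://broken.example/");
      /* 0L disables SSL session-ID re-use for this handle;
         the default (1L) keeps the cache enabled */
      curl_easy_setopt(easy, CURLOPT_SSL_SESSIONID_CACHE, 0L);

      curl_easy_perform(easy);

      curl_easy_cleanup(easy);
      curl_global_cleanup();
      return 0;
    }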
- problem with it (SIGSEGV-style). It clearly showed that the existing
  socket-state and state-difference function wasn't good enough, so I rewrote
  it and could then re-run Jeff's program without any crash. The previous
  version clearly could fail to tell the application when a handle changed
  from using one socket to using another.
  While I was at it (as I could use this as a means to track this problem
  down), I've now added a 'magic' number to the easy handle struct that is
  initialized at curl_easy_init() time and cleared at curl_easy_cleanup()
  time, which we can use internally to detect that an easy handle seems to be
  fine, or at least not closed or freed (freeing in debug builds fills the
  area with 0x13 bytes, but in normal builds we can of course not assume any
  particular data in the freed areas).
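  The general pattern is roughly the following; the struct, constant and
  function names below are illustrative only, not libcurl's actual
  definitions:

    #include <stdlib.h>

    #define EASY_MAGIC 0xc0dedbadU   /* illustrative value */

    struct easy_handle {
      unsigned int magic;            /* set while the handle is alive */
      /* ... the rest of the handle ... */
    };

    struct easy_handle *handle_init(void)
    {
      struct easy_handle *h = calloc(1, sizeof(*h));
      if(h)
        h->magic = EASY_MAGIC;
      return h;
    }

    void handle_cleanup(struct easy_handle *h)
    {
      h->magic = 0;                  /* cleared so later use can be detected */
      free(h);
    }

    /* internal sanity check before trusting a pointer from the caller */
    int handle_ok(const struct easy_handle *h)
    {
      return h && h->magic == EASY_MAGIC;
    }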
- cache within the multi handle.
- name resolves. It could get stuck in the wrong state.
- properly if you removed a "live" handle from a multi handle with
  curl_multi_remove_handle().
- to multi_runsingle() without it being really necessary or good
- timeout'ed connections and possibly deal with them too
- used anymore since multi->num_alive was introduced
- multi_runsingle(). This is done here by returning multi->num_alive in the
  running_handles parameter, even though the functions that call
  multi_runsingle() at this moment overwrite the returned value with the one
  that is valid once those functions, curl_multi_perform() and
  multi_socket(), have removed expired timers from the splay tree. Most
  probably, the 'running_handles' parameter of multi_runsingle() should
  simply be removed.
- *multi_socket*() can't return the proper number
- both now provide the number of running handles back to the calling
  function.
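  In today's libcurl, the socket-based entry point that reports this count is
  curl_multi_socket_action(); a rough sketch of reading it out (the function
  name on_socket_event is hypothetical):

    #include <curl/curl.h>

    /* called after the event loop reports activity on 'sockfd' (or a
       timeout, in which case sockfd is CURL_SOCKET_TIMEOUT) */
    void on_socket_event(CURLM *multi, curl_socket_t sockfd, int ev_bitmask)
    {
      int running_handles = 0;

      curl_multi_socket_action(multi, sockfd, ev_bitmask, &running_handles);

      if(running_handles == 0) {
        /* all transfers are done; pick up the results with
           curl_multi_info_read() */
      }
    }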
- set a private pointer added to the internal libcurl hash table for the
  particular socket passed in to this function.
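  This description matches libcurl's curl_multi_assign(); a socket callback
  typically uses it to attach its own per-socket state, roughly like this
  (the sock_info struct is hypothetical, and CURLMOPT_SOCKETDATA is assumed
  to be the multi handle):

    #include <curl/curl.h>
    #include <stdlib.h>

    /* hypothetical per-socket bookkeeping kept by the application */
    struct sock_info {
      curl_socket_t fd;
      int watched_events;
    };

    /* CURLMOPT_SOCKETFUNCTION callback */
    static int socket_cb(CURL *easy, curl_socket_t s, int what,
                         void *userp, void *socketp)
    {
      CURLM *multi = (CURLM *)userp;
      struct sock_info *info = (struct sock_info *)socketp;
      (void)easy;

      if(what == CURL_POLL_REMOVE) {
        free(info);
        curl_multi_assign(multi, s, NULL);
      }
      else if(!info) {
        info = calloc(1, sizeof(*info));
        if(info) {
          info->fd = s;
          info->watched_events = what;
          /* hand the pointer to libcurl; it comes back as 'socketp'
             on later callbacks for this socket */
          curl_multi_assign(multi, s, info);
        }
      }
      else {
        info->watched_events = what;
      }
      return 0;
    }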
- we did wrong and patched it: we didn't properly remove a node from the
  splay tree when an easy handle was removed from a multi stack, and thus we
  could wrongly leave a node in the splay tree pointing to (bad) memory.
- anyone in curl_multi_add_handle.
- the crash was that libcurl internally was a bit confused about who owned
  the DNS cache at all times, so if you created an easy handle that uses a
  shared DNS cache and added that to a multi handle it would crash. Now we
  keep more careful internal track of exactly what kind of DNS cache each
  easy handle uses: None, Private (allocated for and used only by this single
  handle), Shared (points to a cache held by a shared object), Global (points
  to the global cache) or Multi (points to the cache within the multi handle
  that is automatically shared between all easy handles that are added with
  private caches).
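  A sketch of the combination that used to crash: an easy handle wired to a
  shared DNS cache via a share object and then added to a multi handle (the
  URL is a placeholder, and a real threaded program must also supply
  CURLSHOPT_LOCKFUNC/CURLSHOPT_UNLOCKFUNC callbacks):

    #include <curl/curl.h>

    int main(void)
    {
      CURLSH *share;
      CURL *easy;
      CURLM *multi;
      int running = 0;

      curl_global_init(CURL_GLOBAL_DEFAULT);

      share = curl_share_init();
      curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

      easy = curl_easy_init();
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(easy, CURLOPT_SHARE, share);  /* shared DNS cache */

      multi = curl_multi_init();
      curl_multi_add_handle(multi, easy);  /* the once-crashing combination */

      /* ... drive with curl_multi_perform(multi, &running) ... */

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      curl_share_cleanup(share);
      curl_global_cleanup();
      return 0;
    }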
- curl_socket_t is unsigned (like Windows) that could cause it to wrongly
  return a max fd of -1.
- CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate libcurl is allowed
  to send or receive data. This kind of adds the command line tool's option
  --limit-rate to the library.
  The rate limiting logic in the curl app is now removed and is instead
  provided by libcurl itself. Transfer rate limiting will now also work for
  -d and -F, which it didn't before.
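  A small sketch of using the receive-side cap (the send-side counterpart is
  set the same way; the 10000 bytes/second figure is just an example value);
  note that the option takes a curl_off_t:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;
      curl_off_t max_recv_rate = 10000;   /* bytes per second, example value */

      curl_global_init(CURL_GLOBAL_DEFAULT);
      easy = curl_easy_init();

      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/bigfile");
      curl_easy_setopt(easy, CURLOPT_MAX_RECV_SPEED_LARGE, max_recv_rate);

      curl_easy_perform(easy);

      curl_easy_cleanup(easy);
      curl_global_cleanup();
      return 0;
    }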
- multi stack and that easy handle had already been used to do one or more
  easy interface transfers, as then the code threw away the previously used
  DNS cache without properly freeing it.
- #ifdef around WSAEDISCON in strerror.c.
- can and will use more than one socket
- code rearrange to fit the future better.
- (http://curl.haxx.se/bug/view.cgi?id=1431750) helped me identify and fix
  two different but related bugs:
  1) Removing an easy handle from a multi handle before the transfer is done
     could leave a connection in the connection cache for that handle that is
     in a state that isn't suitable for re-use. A subsequent re-use could
     then read from a NULL pointer and segfault.
  2) When an easy handle was removed from the multi handle, there could be an
     outstanding c-ares DNS name resolve request. When the response arrived,
     it caused havoc since the connection struct it "belonged" to could've
     been freed already.
  Now Curl_done() is called when an easy handle is removed from a multi
  handle prematurely (that is, before the transfer was completed). Curl_done()
  also makes sure to cancel all (if any) outstanding c-ares requests.
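  The user-visible pattern these fixes make safe is simply pulling a handle
  out of the multi stack mid-transfer; a minimal sketch (the helper name is
  hypothetical):

    #include <curl/curl.h>

    /* abort one transfer while others keep running; with the fixes above,
       the library tears down the handle's connection and any pending c-ares
       resolve cleanly at this point */
    void abort_transfer(CURLM *multi, CURL *easy)
    {
      curl_multi_remove_handle(multi, easy);  /* may happen mid-transfer */
      curl_easy_cleanup(easy);
    }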
- an app can use to let libcurl only connect to a remote host and then
  extract the socket from libcurl. libcurl will then not attempt to do any
  transfer at all after the connect is done.
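  The behaviour described matches libcurl's connect-only mode, roughly used
  like this (CURLOPT_CONNECT_ONLY to request it, CURLINFO_LASTSOCKET to
  extract the socket; the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;
      long sockfd = -1;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      easy = curl_easy_init();

      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(easy, CURLOPT_CONNECT_ONLY, 1L);

      /* performs only the connect (including any proxy/TLS handshake),
         no transfer */
      if(curl_easy_perform(easy) == CURLE_OK)
        curl_easy_getinfo(easy, CURLINFO_LASTSOCKET, &sockfd);

      /* the app can now send and receive on 'sockfd' itself */

      curl_easy_cleanup(easy);
      curl_global_cleanup();
      return 0;
    }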