- resolving is completed, so we must make sure to survive it and mark the
  connection as async (i.e. not yet completely connected).
- ever actually are used
- could very well cause a negative number to get passed in and thus cause reading
  outside of the array usually used for this purpose.
  We avoid this by using the uppercase macro versions introduced just now, which
  do some extra crazy typecasts to keep byte codes > 127 from becoming negative
  int values.
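
  A minimal sketch of the kind of macro wrappers described above (the
  ISDIGIT/ISSPACE names here are illustrative, not necessarily the exact ones
  introduced): the key detail is casting through unsigned char before handing
  the value to the <ctype.h> functions, so a byte value above 127 can never be
  sign-extended into a negative int.

    #include <ctype.h>

    /* Cast through unsigned char first so that a plain 'char' holding a byte
       value > 127 cannot become a negative int, which would index outside the
       lookup table the ctype functions typically use. */
    #define ISDIGIT(x)  (isdigit((int)((unsigned char)(x))))
    #define ISSPACE(x)  (isspace((int)((unsigned char)(x))))
    #define ISXDIGIT(x) (isxdigit((int)((unsigned char)(x))))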
- case 535 and it now runs fine. Again, a problem with the pipelining code not
  taking all possible (error) conditions into account.
- would crash if a bad function sequence was used when shutting down after
  using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
  precautions have been added to make sure it doesn't any more - test case 529
  was added to verify.
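
  For reference, a minimal sketch (with a placeholder URL) of the teardown
  order the multi interface expects - the entry above is about libcurl now
  surviving the reversed easy/multi cleanup order instead of crashing:

    #include <curl/curl.h>

    int main(void)
    {
      CURLM *multi = curl_multi_init();
      CURL *easy = curl_easy_init();

      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_multi_add_handle(multi, easy);

      /* ... drive the transfer with curl_multi_perform() ... */

      /* expected order: remove and clean up the easy handle first */
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      return 0;
    }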
- it will now read the full data sent from servers. The SOCKS-related code was
  also moved to the new lib/socks.c source file.
- empty password or no password at all. Test cases 278 and 279 were added to
  verify.
- it basically was that we didn't remove the current connection from the pipe
  list when following a redirect. Also in this commit: several cases of
  additional debug code for debug builds, helping to check and track down some
  signs of run-time trouble.
- currently fits in the cache, to make the cache work better especially for
  pipelining cases but also for "mere" (persistent) connection re-use.
- Curl_signalPipeClose is now signalPipeClose().
- we certainly MUST NOT kill an active connection... Problem tracked down thanks
  to Michael Wallner's excellent test program.
- handle that is part of a multi handle first removes the handle from the
  stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
  session-ID re-use on demand, since there obviously are broken servers out
  there that misbehave when session-IDs are used.
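
  A small sketch of using the new option from an application (the URL is a
  placeholder): CURLOPT_SSL_SESSIONID_CACHE takes a long, and setting it to 0
  turns the SSL session-ID cache off for that handle.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
        /* work around servers that misbehave when session-IDs are re-used */
        curl_easy_setopt(easy, CURLOPT_SSL_SESSIONID_CACHE, 0L);
        curl_easy_perform(easy);
        curl_easy_cleanup(easy);
      }
      return 0;
    }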
- problem with it (SIGSEGV-style). It clearly showed that the existing
  socket-state and state-difference function wasn't good enough, so I rewrote
  it and could then re-run Jeff's program without any crash. The previous
  version clearly could fail to tell the application when a handle changed
  from using one socket to using another.
  While I was at it (as I could use this as a means to track this problem
  down), I've now added a 'magic' number to the easy handle struct that is
  inited at curl_easy_init() time and cleared at curl_easy_cleanup() time,
  which we can use internally to detect that an easy handle seems to be fine,
  or at least not closed or freed (freeing in debug builds fills the area with
  0x13 bytes, but in normal builds we can of course not assume any particular
  data in the freed areas).
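
  A sketch of the 'magic number' pattern described above, with illustrative
  struct, field and constant names rather than libcurl's actual internals:
  store a known value when the handle is created, clear it when the handle is
  destroyed, and check it before trusting a handle pointer.

    #include <stdlib.h>

    #define EASY_MAGIC 0xc0ffee01u   /* arbitrary marker value (illustrative) */

    struct easy_handle {
      unsigned int magic;            /* holds EASY_MAGIC while the handle is valid */
      /* ... the rest of the handle's fields ... */
    };

    static struct easy_handle *handle_create(void)
    {
      struct easy_handle *h = calloc(1, sizeof(*h));
      if(h)
        h->magic = EASY_MAGIC;       /* set at init time */
      return h;
    }

    static void handle_destroy(struct easy_handle *h)
    {
      if(h && h->magic == EASY_MAGIC) {
        h->magic = 0;                /* cleared at cleanup time */
        free(h);
      }
    }

    /* internal sanity check: does this look like a live, valid handle? */
    static int handle_looks_valid(const struct easy_handle *h)
    {
      return h && (h->magic == EASY_MAGIC);
    }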
- cache within the multi handle.
- while not fixing things very nicely, it does make the SOCKS5 proxy
  connection slightly better, as it now acknowledges the connection timeout
  and it no longer segfaults when SOCKS requires authentication and you did
  not specify username:password.
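
  For context, a minimal sketch of going through a SOCKS5 proxy that requires
  authentication, using libcurl's public options (proxy host, credentials and
  URL are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(easy, CURLOPT_PROXY, "proxy.example.com:1080");
        curl_easy_setopt(easy, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);
        /* credentials for a SOCKS5 proxy that requires authentication */
        curl_easy_setopt(easy, CURLOPT_PROXYUSERPWD, "user:secret");
        curl_easy_setopt(easy, CURLOPT_CONNECTTIMEOUT, 30L);
        curl_easy_perform(easy);
        curl_easy_cleanup(easy);
      }
      return 0;
    }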
- "!defined(__GNUC__) || defined(__MINGW32__)" implies Cygwin.
- allow applications to set their own socket options.
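
  Assuming this entry refers to the sockopt callback (CURLOPT_SOCKOPTFUNCTION),
  a minimal sketch of how an application can adjust the socket before libcurl
  uses it; the POSIX setsockopt() call and the URL are placeholders for
  whatever the application actually needs:

    #include <curl/curl.h>
    #include <sys/socket.h>

    /* called by libcurl right after it has created the socket */
    static int sockopt_cb(void *clientp, curl_socket_t curlfd, curlsocktype purpose)
    {
      int keepalive = 1;
      (void)clientp;
      (void)purpose;
      setsockopt(curlfd, SOL_SOCKET, SO_KEEPALIVE,
                 (const void *)&keepalive, sizeof(keepalive));
      return 0; /* zero tells libcurl to go on */
    }

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(easy, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);
        curl_easy_perform(easy);
        curl_easy_cleanup(easy);
      }
      return 0;
    }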
- command on subsequent requests on a re-used connection unless it has to.
- tool option named --ftp-alternative-to-user. It provides a means to send a
  particular command if the normal USER/PASS approach fails.
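
  The library side of this is the CURLOPT_FTP_ALTERNATIVE_TO_USER option, a
  string with the command to send when the regular USER/PASS login fails; a
  small sketch with placeholder URL, credentials and command:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_easy_setopt(easy, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
        curl_easy_setopt(easy, CURLOPT_USERPWD, "user:secret");
        /* sent instead if the normal USER/PASS login is rejected */
        curl_easy_setopt(easy, CURLOPT_FTP_ALTERNATIVE_TO_USER, "USER anonymous");
        curl_easy_perform(easy);
        curl_easy_cleanup(easy);
      }
      return 0;
    }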
- for FTP ASCII transfers.
- the crash was that libcurl internally was a bit confused about who owned the
  DNS cache at all times, so if you created an easy handle that uses a shared
  DNS cache and added that to a multi handle it would crash. Now we keep more
  careful internal track of exactly what kind of DNS cache each easy handle
  uses: None, Private (allocated for and used only by this single handle),
  Shared (points to a cache held by a shared object), Global (points to the
  global cache) or Multi (points to the cache within the multi handle that is
  automatically shared between all easy handles that are added with private
  caches).
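
  The ownership states listed above map naturally onto a small tag kept with
  each easy handle; a sketch of that bookkeeping idea, using illustrative names
  rather than libcurl's actual internals:

    /* who owns the DNS cache this easy handle currently points at? */
    enum dnscache_owner {
      DNSCACHE_NONE,     /* no cache attached yet */
      DNSCACHE_PRIVATE,  /* allocated for and used only by this handle */
      DNSCACHE_SHARED,   /* held by a share object */
      DNSCACHE_GLOBAL,   /* the global cache */
      DNSCACHE_MULTI     /* held by the multi handle this easy handle was added to */
    };

    struct dns_state {
      enum dnscache_owner owner; /* which of the above applies right now */
      void *cache;               /* the cache actually in use */
    };

    /* only a privately owned cache may be freed when the handle is cleaned up;
       shared, global and multi-owned caches outlive the easy handle */
    static void dns_state_cleanup(struct dns_state *s)
    {
      if(s->owner == DNSCACHE_PRIVATE) {
        /* destroy s->cache with whatever destructor the cache type uses */
      }
      s->cache = 0;
      s->owner = DNSCACHE_NONE;
    }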
- CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate libcurl is allowed
  to send or receive data. This basically adds the command line tool's
  --limit-rate option to the library.
  The rate limiting logic in the curl app is now removed and is instead
  provided by libcurl itself. Transfer rate limiting will now also work for -d
  and -F, which it didn't before.
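
  A minimal sketch of capping the download rate with the new option (the URL
  is a placeholder; the value is a curl_off_t in bytes per second):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_off_t max_rate = 10000; /* bytes per second */
        curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/big.file");
        curl_easy_setopt(easy, CURLOPT_MAX_RECV_SPEED_LARGE, max_rate);
        curl_easy_perform(easy);
        curl_easy_cleanup(easy);
      }
      return 0;
    }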
- CURLOPT_COOKIELIST, which then makes all session cookies get cleared.
  (Slightly edited by me, and the re-indent in cookie.c was also done by me.)
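
  Assuming this entry is about the special "SESS" string accepted by
  CURLOPT_COOKIELIST, a small sketch of wiping the session cookies an easy
  handle holds in memory (the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      if(easy) {
        curl_easy_setopt(easy, CURLOPT_COOKIEFILE, ""); /* enable the cookie engine */
        curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
        curl_easy_perform(easy);

        /* clear all session cookies (cookies without an expiry date) */
        curl_easy_setopt(easy, CURLOPT_COOKIELIST, "SESS");

        curl_easy_cleanup(easy);
      }
      return 0;
    }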