Age | Commit message | Author |
|
curl_socket_t is unsigned (like on Windows), which could cause it to wrongly
return a max fd of -1.
|
|
CURLOPT_MAX_RECV_SPEED_LARGE that limits the maximum rate libcurl is allowed
to send or receive data. This essentially adds the command line tool's
--limit-rate option to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
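A minimal sketch of using the new options (the URL and the 10240 bytes/second
cap are placeholder values; CURLOPT_MAX_SEND_SPEED_LARGE is the companion
option for the sending direction, and both take curl_off_t arguments):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* cap upload and download speed at roughly 10 KB/second each */
    curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, (curl_off_t)10240);
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)10240);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }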
|
|
|
|
|
|
|
|
|
|
|
|
|
|
fail. When using md5-sess, the result was not MD5 hashed and Base64 encoded.
|
|
CURLOPT_COOKIELIST, which then makes all session cookies get cleared. (slightly
edited by me, and the re-indent in cookie.c was also done by me)
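For illustration, this is how an application would use it, assuming an already
initialized easy handle with a cookie engine enabled ("SESS" is the special
string documented for this purpose):

  /* wipe all session cookies held by this handle */
  curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS");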
|
|
checks on the to-be-returned socket to make sure it truly seems to be alive
and well. For SSL connections it (only) uses OpenSSL functions.
|
|
2 - properly escape certain letters within a DICT word to comply with RFC 2229
|
|
autotools project, which optionally (default=yes) uses libcurl, on a system
without a (usable) libcurl installation and without specifying
`--without-libcurl', configure correctly determines that no libcurl is
available, yet the LIBCURL variable still gets expanded to `LIBCURL = -lcurl'
in the resulting Makefiles.
David Shaw fixed the flaw.
|
|
|
|
multi stack and that easy handle had already been used to do one or more
easy interface transfers, as then the code threw away the previously used
DNS cache without properly freeing it.
|
|
thus works reliably on more platforms.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1481217), with follow-ups by Michele Bini
and David Byron. libcurl previously wrongly used GetLastError() on Windows to
get error details after socket-related function calls, when it really should
use WSAGetLastError() instead.
As part of this change, the former function Curl_ourerrno() is now called
Curl_sockerrno(), since it must only be used to get the error code from
socket-related functions; otherwise it won't work as intended on Windows.
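A hedged sketch of the general pattern this implies for portable socket code
(the sock_errno() name here is illustrative, not the actual Curl_sockerrno()
implementation):

  #ifdef _WIN32
  #include <winsock2.h>
  #else
  #include <errno.h>
  #endif

  /* fetch the error code from the most recent socket call */
  static int sock_errno(void)
  {
  #ifdef _WIN32
    return (int)WSAGetLastError();  /* not GetLastError() */
  #else
    return errno;
  #endif
  }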
|
|
(http://curl.haxx.se/bug/view.cgi?id=1480821) He found and identified a
problem with how libcurl dealt with GnuTLS and a case where gnutls returned
GNUTLS_E_AGAIN indicating it would block. It would then return an unexpected
return code, making Curl_ssl_send() confuse the upper layer - causing 28
random bytes of trash data to get inserted in the transferred stream.
The proper fix was to make the Curl_gtls_send() function return the proper
return codes that the callers would expect. The Curl_ossl_send() function
already did this.
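A rough sketch of the kind of return-code mapping described, assuming a
gnutls_session_t named session (simplified, not the exact Curl_gtls_send()
code):

  ssize_t rc = gnutls_record_send(session, buf, len);
  if(rc == GNUTLS_E_AGAIN || rc == GNUTLS_E_INTERRUPTED)
    return 0;    /* would block: report "nothing sent", not an error */
  if(rc < 0)
    return -1;   /* a real error for the caller to act on */
  return rc;     /* number of bytes actually sent */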
|
|
|
|
transfers. They are done on non-Windows systems and translate CRLF to LF.
|
|
the stream (wrongly) lacks a proper zlib header. This seems to be the case on
too many actual server implementations.
|
|
|
|
the control connection when using FTP, for example when you remove an easy
handle from a multi stack.
|
|
|
|
typecast in the curl tool leading to a crash with (64-bit?) VS2005 (at least),
since the struct timeval field tv_sec is an int while time_t is 64-bit.
|
|
|
|
(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to run fine with these changes.
Due to the fact that we now properly include the host name in the Type-2
message, the test cases now only compare parts of that chunk.
|
|
occurred when asking libcurl to follow HTTP redirects and the original URL had
more than one question mark (?). Added test case 276 to verify.
|
|
--enable-debug, as then curl used free() on memory allocated both with
normal malloc() and with libcurl-provided functions, when the latter MUST be
freed with curl_free() in debug builds.
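The rule in question, sketched with curl_easy_escape() as just one example of
a libcurl allocation (assumes an initialized easy handle named curl):

  char *enc = curl_easy_escape(curl, "name with spaces", 0);
  if(enc) {
    /* ... use the escaped string ... */
    curl_free(enc);  /* not plain free() */
  }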
|
|
called bind() with a too-large argument in the 3rd parameter, and at least
Tru64, AIX and IRIX seem to be very picky about it.
|
|
|
|
(when using OpenSSL).
|
|
|
|
reacts properly according to the CURLOPT_FTP_SSL setting.
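For reference, a sketch of how an application picks that setting
(CURLFTPSSL_ALL is just one of the available levels):

  /* require SSL/TLS for both the control and the data connection */
  curl_easy_setopt(curl, CURLOPT_FTP_SSL, CURLFTPSSL_ALL);
  /* other levels: CURLFTPSSL_NONE, CURLFTPSSL_TRY, CURLFTPSSL_CONTROL */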
|
|
|
|
|
|
|
|
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection, which it was not designed for, and it thus tried to call free()
on an already freed memory area!
|
|
is used properly.
|
|
callback" error message so I've now made the setting of that callback not be
as critical as before. The function is only used for additional loggging/
trace anyway so a failure just means slightly less data. It should still be
able to proceed and connect fine to the server.
|
|
|
|
|
|
|
|
|
|
|
|
(http://curl.haxx.se/bug/view.cgi?id=1431750) helped me identify and fix two
different but related bugs:
1) Removing an easy handle from a multi handle before the transfer is done
could leave a connection in the connection cache for that handle that is
in a state that isn't suitable for re-use. A subsequent re-use could then
read from a NULL pointer and segfault.
2) When an easy handle was removed from the multi handle, there could be an
outstanding c-ares DNS name resolve request. When the response arrived,
it caused havoc since the connection struct it "belonged" to could've
been freed already.
Now Curl_done() is called when an easy handle is removed from a multi handle
prematurely (that is, before the transfer has completed). Curl_done() also
makes sure to cancel all (if any) outstanding c-ares requests.
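A minimal sketch of the premature-removal scenario that now triggers
Curl_done() internally (error checking omitted, URL is a placeholder):

  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();
  int running = 0;

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_multi_add_handle(multi, easy);
  curl_multi_perform(multi, &running);   /* the transfer has started... */

  /* ...but the application gives up before it completes */
  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);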
|
|
type to the already provided type CURLPROXY_SOCKS4.
I added a --socks4 option that works like the current --socks5 option but
instead uses the SOCKS4 protocol.
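In libcurl terms, the described proxy type is selected like this (the proxy
host and port are placeholders):

  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);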
|
|
content when libcurl didn't honor the internal ignorebody flag.
|
|
code. It should however not be the cause of any troubles. He also fixed a
few similar problems in the HTTP test server code.
|
|
as previously it could be holding on to old cached entries longer than
requested.
|