when logging in), the operation will fail since libcurl does detect this and
thus fails to issue the correct command:
http://curl.haxx.se/bug/view.cgi?id=1693337
|
|
one over to KNOWN_BUGS
|
|
test, fixing KNOWN_BUGS #11. Fixed some tests to more accurately specify
their required servers and features.
|
|
|
|
different server behaviour
|
|
5).
|
|
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-use of connections wasn't properly checking whether the connection
was marked for closure.
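A minimal sketch of the multi interface pattern this fix concerns, using only
documented libcurl calls (the URL is a placeholder and error checking is left
out for brevity):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy;
    CURLM *multi;
    int still_running = 0;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    easy = curl_easy_init();
    multi = curl_multi_init();

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    curl_multi_add_handle(multi, easy);

    /* drive the transfer; a real application would wait on the sockets
       between calls instead of spinning like this */
    do {
      curl_multi_perform(multi, &still_running);
    } while(still_running);

    /* removing the handle is the step that previously could disturb
       connection data shared with other handles */
    curl_multi_remove_handle(multi, easy);

    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }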
|
|
|
|
platforms.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that's when they were introduced; before that
the limiting logic was done in the application only and not in the library).
It was actually also broken on select()-based systems (as opposed to poll())
but we haven't had any such reports. We now use select(), Sleep() or delay()
properly to sleep a while without waiting for any input or output when the
rate limiting is activated with the easy interface.
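The select()-based waiting mentioned above can be sketched roughly like this
(an illustration of the technique only, not the actual libcurl code; the
helper name is made up):

  #include <sys/select.h>  /* POSIX; the Windows build uses Sleep() instead */

  /* Sleep for the given number of milliseconds without monitoring any
     sockets, by calling select() with empty descriptor sets. */
  static void rate_limit_wait(long milliseconds)
  {
    struct timeval timeout;
    timeout.tv_sec = milliseconds / 1000;
    timeout.tv_usec = (milliseconds % 1000) * 1000;
    select(0, NULL, NULL, NULL, &timeout);
  }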
|
|
authentication (which performs multiple "passes" and authenticates a
connection rather than an HTTP request), and particularly when using the
multi interface, there's a risk that libcurl will re-use a wrong connection
when doing the different passes in the NTLM negotiation and thus fail to
negotiate (in seemingly mysterious ways); see the sketch after item 36 below.
36. --limit-rate (CURLOPT_MAX_SEND_SPEED_LARGE and
CURLOPT_MAX_RECV_SPEED_LARGE) are broken on Windows (since 7.16.0, but
that's when they were introduced; before that the limiting logic was done in
the application only and not in the library). This problem is easily
repeated but it takes a Windows person to fire up his/her debugger in order
to fix it. http://curl.haxx.se/bug/view.cgi?id=1603712
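As a reference for the NTLM item above, this is roughly how an application
requests NTLM on an easy handle (a sketch only; the URL and credentials are
placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      /* NTLM authenticates the connection rather than the individual
         request, which is why connection re-use matters here */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }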
|
|
KNOWN_BUGS #25, which happens when a proxy closes the connection after
libcurl has sent CONNECT as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
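For context, the affected scenario is an authenticated CONNECT tunnel through
an HTTP proxy, set up roughly like this (a sketch; the proxy address and
credentials are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
      /* force a CONNECT tunnel through the proxy */
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      /* let libcurl pick whichever authentication the proxy offers */
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }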
|
|
|
|
supported
|
|
|
|
|
|
while not fixing things very nicely, it does make the SOCKS5 proxy
connection slightly better as it now honours the connection timeout, and it
no longer segfaults when SOCKS requires authentication and you did not
specify username:password.
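A sketch of the SOCKS5 setup this change touches, with credentials and a
connect timeout (the proxy address and credentials are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
      /* credentials for a SOCKS5 server that requires authentication */
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
      /* give up the connection attempt after 30 seconds */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }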
|
|
CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate at which libcurl is
allowed to send or receive data. This kind of adds the command line tool's
option --limit-rate to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
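A sketch of how an application would use the two new options; both take a
curl_off_t counting bytes per second (the URL and limits are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/bigfile");
      /* cap both directions at roughly 10000 bytes per second */
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, (curl_off_t)10000);
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)10000);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }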
|
|
|
|
thus works reliably on more platforms.
|
|
transfers. They are done on non-Windows systems and translate CRLF to LF.
|
|
This happens because the multi-pass code abuses the redirect-following code
for doing multiple requests, and when following redirects to an absolute
URL we must use the newly specified port and not the one specified in the
original URL. A proper fix to this would need to separate the negotiation
"redirect" from an actual redirect.
|
|
|
|
|
|
|
|
|
|
|
|
server configured in the system, the ares_init() call fails and thus
curl_easy_init() fails as well. This causes weird effects for people who use
numerical IP addresses only.
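Since curl_easy_init() can fail in a situation like this, applications should
check its return value; a minimal sketch (the URL is a placeholder):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;

    curl_global_init(CURL_GLOBAL_DEFAULT);

    curl = curl_easy_init();
    if(!curl) {
      /* this is what an application sees when the resolver backend
         (such as c-ares) cannot be initialized */
      fprintf(stderr, "curl_easy_init() failed\n");
      curl_global_cleanup();
      return 1;
    }

    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1/");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
  }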
|
|
run that might be needed only for building libcurl.
|
|
|
|
|
|
issues in the code.
|
|
|
|
Curl_ossl_connect(), the Curl_gtls_connect() function does not send the user
certificate to the peer. In fact, it ignores the conn->data->set.cert field
completely and always uses the anonymous credentials. See
http://curl.haxx.se/bug/view.cgi?id=1348930
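For reference, this is the application-level setup the GnuTLS code path was
ignoring; a sketch with placeholder file names:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* client certificate and private key that should be presented to
         the server during the TLS handshake */
      curl_easy_setopt(curl, CURLOPT_SSLCERT, "client-cert.pem");
      curl_easy_setopt(curl, CURLOPT_SSLKEY, "client-key.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }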
|
|
correct (and overly complicated) SourceForge bug tracker URL given the bug
report ID number.
|
|
|
|
|
|
The CONNECT problem is now bug #25, planned to get fixed in the next release.
|
|
fixed after 7.14.1
|
|
we have a fancier valgrind error parser these days and it seems to work rather
well
|
|
|
|
|
|
least it should no longer cause a compiler error. However, it does not have
AI_NUMERICHOST so we cannot getaddrinfo() any numerical addresses with it (we
use that for FTP PORT/EPRT)! So, I modified the configure check that verifies
a working getaddrinfo() to use AI_NUMERICHOST, since that makes it fail on
AIX 4.3 and the build will then automatically have IPv6 support disabled.
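The kind of test program such a configure check compiles and runs could look
roughly like this (a sketch only, not the actual macro in curl's configure):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <string.h>

  /* exit code 0 means getaddrinfo() resolves a numerical address with
     AI_NUMERICHOST; a non-zero exit marks the function as unusable */
  int main(void)
  {
    struct addrinfo hints;
    struct addrinfo *result = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;

    if(getaddrinfo("127.0.0.1", "80", &hints, &result) != 0)
      return 1;

    freeaddrinfo(result);
    return 0;
  }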
|
|
|
|
these days...!
|
|
the correct dynamic library names by default, and provides override switches
--with-ldap-lib, --with-lber-lib and --without-lber-lib. Added
CURL_DISABLE_LDAP to platform-specific config files to disable LDAP
support on those platforms that probably don't have dynamic OpenLDAP
libraries available, to avoid compile errors.
|
|
many third-party libs and tools have problems. When curl is built without
--disable-shared, the testing is done with a front-end script, which makes
the valgrind testing include (ba)sh as well and that often causes valgrind
errors. Either we improve the valgrind error scanner a lot to better identify
(lib)curl errors only, or we disable valgrind checking by default.
|
|
|
|
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
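Later libcurl releases expose a stored peer address to applications through
CURLINFO_PRIMARY_IP (added well after this change, and whether it reads this
exact string is an assumption); it illustrates the kind of per-connection
address data being kept:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char *ip = NULL;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
        printf("connected to %s\n", ip);

      curl_easy_cleanup(curl);
    }
    return 0;
  }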
|
|
|