Disabling pipelining on a multi handle with in-progress pipelined requests
leads to heap corruption and a crash
|
Use Unix when writing generically about Unix-based systems, as UNIX is
the trademark and should only be used in a particular product's name.
|
Problem: if CURLOPT_FORBID_REUSE is set, requests using NTLM fail, since
NTLM requires multiple requests that re-use the same connection for the
authentication to work.
Solution: ignore the forbid-reuse flag while the NTLM authentication
handshake is in progress, according to the NTLM state flag.
Fixes known bug #77.
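A minimal sketch of the kind of transfer this affects (the URL and
credentials are placeholders, not from the original report); with the fix,
the connection stays alive for the duration of the NTLM handshake even
though reuse is otherwise forbidden:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* placeholder URL and credentials, for illustration only */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

      /* NTLM needs several requests on the very same connection */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);

      /* close the connection when the transfer is done; during the NTLM
         handshake this flag is now ignored so the handshake messages
         travel on one connection */
      curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }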
|
Reflect the recent changes in the SPNEGO and GSS-API code in the docs.
Update them with the appropriate names and remove the remaining mentions
of GSS-Negotiate.
|
...also added as KNOWN_BUG #87 with reference to bug #1294
|
added 85. Wrong STARTTRANSFER timer accounting for POST requests
|
CURLINFO_SSL_VERIFYRESULT is only implemented for the OpenSSL and NSS
backends and not for any of the others!
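For reference, a minimal sketch of how the value is queried (with other
backends the result is simply not filled in meaningfully):

  #include <curl/curl.h>
  #include <stdio.h>

  /* query the certificate verification result after a transfer;
     only the OpenSSL and NSS backends provide it */
  static void print_verifyresult(CURL *curl)
  {
    long verify = 0;
    if(curl_easy_getinfo(curl, CURLINFO_SSL_VERIFYRESULT, &verify) == CURLE_OK)
      printf("SSL verify result: %ld\n", verify);
  }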
|
The old numbers would still redirect but who knows for how long...
|
Bug: http://curl.haxx.se/bug/view.cgi?id=1169
|
The test would hang and get aborted with an "ABORTING TEST, since it
seems that it would have run forever." until I prevented that from
happening.
I also fixed the data file, which got broken CRLF line endings when I
sucked down the patch from Joe's repo == my fault.
Removed #37 from KNOWN_BUGS as this fix and test case verify exactly
this.
|
Bug #75 updated with additional info; it still remains for builds with
other backends.
|
http://curl.haxx.se/bug/view.cgi?id=3438362
|
Multiple auths in the same WWW-Authenticate header
Fixed in commit 7d81e3f7193b8c
|
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the one closest in time is used as the primary timeout for
the handle and as its key in the splay tree (which sorts and lists all
handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in the
list is moved to the main timeout position and used as the key to splay
with. This way, all timeouts set with Curl_expire() internally end up as
proper timeouts. Previously, any Curl_expire() call that set a _later_
timeout than what was already set was just silently ignored and thus
missed.
Setting Curl_expire() with a timeout of 0 (zero) cancels all previously
added timeouts.
Corrects known bug #62.
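A rough sketch of the idea, not the actual libcurl code (names and types
here are made up for illustration): each easy handle keeps a sorted list
of expire times, the head of the list is what goes into the splay tree,
and when the head fires the next entry is promoted.

  #include <stdlib.h>

  /* illustrative only - does not match the real internal structs */
  struct timenode {
    long expire_ms;              /* absolute expire time in ms */
    struct timenode *next;
  };

  struct easy_timeouts {
    struct timenode *head;       /* earliest timeout; the splay-tree key */
  };

  /* insert keeping the list sorted; a zero timeout cancels everything */
  static void expire_add(struct easy_timeouts *t, long expire_ms)
  {
    struct timenode **pp = &t->head;
    struct timenode *n;

    if(expire_ms == 0) {         /* cancel all previously added timeouts */
      while((n = t->head)) {
        t->head = n->next;
        free(n);
      }
      return;
    }

    while(*pp && (*pp)->expire_ms <= expire_ms)
      pp = &(*pp)->next;         /* find the sorted position */

    n = malloc(sizeof(*n));
    if(!n)
      return;
    n->expire_ms = expire_ms;
    n->next = *pp;
    *pp = n;
  }

  /* when the head timeout has triggered/expired: drop it and return the
     next one, which becomes the new key to splay with for this handle */
  static long expire_fired(struct easy_timeouts *t)
  {
    struct timenode *old = t->head;
    if(old) {
      t->head = old->next;
      free(old);
    }
    return t->head ? t->head->expire_ms : 0;
  }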
|
The SOCKET type on Win64 is 64 bits (and thus so is curl_socket_t on
that platform), while long is only 32 bits. That makes it impossible for
curl_easy_getinfo() to return the socket properly with the
CURLINFO_LASTSOCKET option, as it does for all other operating systems.
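A sketch of where it breaks: the info travels through a long, so on
Win64 the upper half of the SOCKET value is lost.

  #include <curl/curl.h>

  /* CURLINFO_LASTSOCKET hands the socket back in a 'long'; on Win64 a
     SOCKET is 64 bits while long is 32 bits, so the value can be
     truncated and therefore unusable */
  static curl_socket_t get_last_socket(CURL *curl)
  {
    long sock = -1;
    if(curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sock) != CURLE_OK)
      return CURL_SOCKET_BAD;
    return (curl_socket_t)sock;  /* lossy on Win64 */
  }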
|
http://curl.haxx.se/mail/lib-2009-10/0024.html
http://curl.haxx.se/bug/view.cgi?id=2944325
|
instead of being repeated several times. This also includes the
Authenticate: and Proxy-Authenticate: headers, and while this hardly ever
happens in real life it will confuse libcurl, which does not properly
support it for all headers - like those Authenticate headers.
|
have been fixed!
|
sends the 220 response or otherwise is dead slow, libcurl will not
acknowledge the connection timeout during that phase but only the "real"
timeout - which may surprise users, as that phase is probably considered
part of the connect phase by most people. Brought up (and is being
misunderstood) in:
http://curl.haxx.se/bug/view.cgi?id=2844077
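For context, a sketch of the two timeouts people mix up here (the values
are arbitrary): only the overall timeout caps a server that is dead slow
to send its 220 greeting.

  #include <curl/curl.h>

  static void set_ftp_timeouts(CURL *curl)
  {
    /* covers the TCP connect only - not the wait for the 220 greeting */
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);

    /* caps the whole operation, which is what actually limits a server
       that never sends its 220 response */
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);
  }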
|
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
|
bugs we know about that will appear in the next release (too)
|
something beyond ASCII, but currently libcurl will only pass on the
verbatim string the app provides. There are several browsers that already
do this encoding. The key seems to be the updated draft to RFC2231:
http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
|
http://curl.haxx.se/bug/view.cgi?id=2818950
|
not in the mood to fight this now.
65. When doing FTP over a SOCKS proxy or CONNECT through an HTTP proxy and
the multi interface is used, libcurl will fail if the (passive) TCP
connection for the data transfer isn't more or less instant, as the code
does not properly wait for the connect to be confirmed. See test case 564
for a first shot at a test case.
|
report failed to mention that a proxy must be used to reproduce it.
|
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A", the ";type=A" part is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug, so I hereby consider it not a bug anymore!
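Roughly the combination test case 562 exercises (host and port are
placeholders):

  #include <curl/curl.h>

  /* a custom port together with a ";type=A" suffix on an FTP URL; the
     suffix must not get stripped when CURLOPT_PORT is set */
  static void setup(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file;type=A");
    curl_easy_setopt(curl, CURLOPT_PORT, 2121L);
  }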
|
for any further requests or transfers. The work-around is then to close
that handle with curl_easy_cleanup() and create a new one. Some more
details:
http://curl.haxx.se/mail/lib-2009-04/0300.html
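The work-around, sketched (the caller would re-apply its options to the
fresh handle):

  #include <curl/curl.h>

  /* discard the handle that got into the broken state and start over */
  static CURL *recreate_handle(CURL *old)
  {
    curl_easy_cleanup(old);
    return curl_easy_init();
  }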
|
getaddrinfo() sorts the response list
This isn't a libcurl bug since this is how getaddrinfo() is *supposed* to work!
Apparently you deal with this using the /etc/gai.conf file.
|
later)
|
multi_socket interfaces. The work-around for apps is to simply remove the
easy handle once the time is up. See also:
http://curl.haxx.se/bug/view.cgi?id=2501457
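The suggested work-around in multi terms, roughly (deadline_passed()
stands in for whatever deadline check the application keeps on its own):

  #include <curl/curl.h>

  /* hypothetical application-side deadline check */
  extern int deadline_passed(CURL *easy);

  /* enforce the timeout ourselves: pull the easy handle out of the
     multi handle once its time is up */
  static void enforce_deadline(CURLM *multi, CURL *easy)
  {
    if(deadline_passed(easy)) {
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
    }
  }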
|