fail since they used "1 feb 2007"...
- Manfred Schwarb reported that socks5 support was broken and helped us
  pinpoint the problem. The code now tries harder to use httproxy and proxy
  where appropriate, as not all proxies are HTTP...
|
|
header, you got _two_ User-Agent headers in the CONNECT request...! Added
test case 287 to verify the fix.
|
|
ordinary curl command line, and you will get libcurl-using source code
written to the file that performs the equivalent of what your command line
does!
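For illustration, here is a rough hand-written sketch of the kind of C source
the --libcurl option emits; the exact option list and the file name are
assumptions, not verbatim tool output.

  /* roughly what "curl --libcurl example.c http://example.com/" might
     generate; a hand-written approximation for illustration only */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode ret = CURLE_FAILED_INIT;

    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_USERAGENT, "curl/7.16.1");
      ret = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return (int)ret;
  }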
|
|
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for all
protocols.
|
|
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we can add
it without the extra stuff.
|
|
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
makes libcurl shut down the SSL/TLS layer after the authentication is done on
an FTP-SSL operation.
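A minimal sketch of enabling this from libcurl; the server URL is a
placeholder, and the pre-7.17 option names (CURLOPT_FTP_SSL,
CURLFTPSSL_CONTROL) plus the plain 1L value for CURLOPT_FTP_SSL_CCC reflect
the 7.16.x-era interface and may differ in later releases.

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    /* require SSL/TLS for the control connection (pre-7.17 option name) */
    curl_easy_setopt(curl, CURLOPT_FTP_SSL, CURLFTPSSL_CONTROL);
    /* ask libcurl to send CCC and shut down SSL/TLS after authentication */
    curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }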
|
|
non-ASCII platforms.
|
|
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That is now only activated for pipelined transfers. In
Matt's case, it showed as a considerable performance difference.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that is when they were introduced; prior to
that the limiting logic was done in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()) but we had not received any such reports. We now use select(),
Sleep() or delay() properly to sleep a while without waiting for any input
or output when the rate limiting is activated with the easy interface.
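A minimal easy-interface sketch of activating the rate limits mentioned
above; the 10000 bytes-per-second figure and the URL are arbitrary
illustration values.

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t limit = 10000; /* bytes per second, arbitrary example value */

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big-file");
    /* cap both directions; these options take curl_off_t values */
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, limit);
    curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, limit);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }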
|
|
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
|
|
'curl [URL]' with a URL without a protocol prefix, curl would not send a
correct request as it failed to add the protocol prefix.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
|
|
during certain conditions when GnuTLS is used.
|
|
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface: it didn't work as documented,
so I reworked it somewhat and now it works in my tests.
|
|
passing a curl_off_t argument to the Curl_read_rewind() function which takes
a size_t argument. Curl_read_rewind() also had debug code left in it, and it
was placed in a different source file for no good reason even though it is
only used from one single spot.
|
|
no code present in the library that receives the option. Since it was not
possible to use, we know that no current users exist and thus we simply
removed it from the docs and made the code always take the default path.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
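For context, a minimal sketch of setting that option on an easy handle (the
value 10 is just an illustration); with the fix, a value of 0 is clamped to
one cache entry internally instead of crashing.

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    /* size of the connection cache; 0 no longer crashes, libcurl keeps
       at least one entry internally */
    curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 10L);
    curl_easy_cleanup(curl);
  }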
|
|
Curl_done()
|
|
(http://curl.haxx.se/bug/view.cgi?id=1230118) curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen on some TZ time
zones using DST. Luckily, Martin also provided a fix.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem here is that
the multi interface state machine has a state during which it can wait for
the data connection to connect, but the active connection is not made in the
same step of the sequence as the passive one, so it doesn't quite work for
active mode. The active FTP code still uses a blocking function to allow the
remote server to connect.
The fix (work-around is a better word) for this problem is to prematurely set
the boolean that marks the data connection as completed, so that the "wait
for connect" phase ends at once.
|
|
HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR or
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
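The fix amounts to also treating POLLHUP as an actionable event. A generic
illustration (not the actual Curl_select() code) of checking POLLHUP
alongside POLLERR when interpreting poll() results; the helper name and
parameters are hypothetical.

  #include <poll.h>

  /* illustrative only: report a hangup as a condition to act on so the
     caller does not spin when poll() reports POLLHUP alone */
  int socket_ready(int sockfd, int timeout_ms)
  {
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLIN | POLLOUT;
    pfd.revents = 0;

    if(poll(&pfd, 1, timeout_ms) <= 0)
      return 0;
    if(pfd.revents & (POLLERR | POLLHUP))
      return -1; /* surface the condition instead of ignoring it */
    return pfd.revents & (POLLIN | POLLOUT);
  }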
|
|
header in a third format, not supported by libcurl, and we agreed that we
could make the parser more forgiving so it accepts all three variations
found.
|
|
responded with a single status line and no headers nor body. Starting now, an
HTTP response on a persistent connection (i.e. not set to be closed after the
response has been taken care of) must have Content-Length or chunked
encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test HTTP server (sws) responded badly
when the test system queried it to verify that it is the test system. So
although the actual fix for the problem was tiny, going through all the newly
failing test cases got really painful and boring.
|
|
out a stack overwrite (and the corresponding fix) on 64bit Windows when
dealing with HTTP chunked encoding.
|
|
Versions, not to ./Versions, and indentation improvements
|
|
2006. It turned out we wrongly assumed that the connection cache was present
when tearing down a connection.
|
|
multi interface, but I could also repeat it doing multiple sequential ones
with the easy interface. Using Ciprian's test case, I could fix it.
|
|
CURLOPT_VERBOSE set to non-zero, you still got a few debug messages from the
SSL handshake. This is now stopped.
|
|
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
|
|
case when 401 or 407 are returned, *IF* no auth credentials have been given.
The CURLOPT_FAILONERROR option is not possible to make fool-proof for the 401
and 407 cases when auth credentials are given, but we've now covered this
somewhat more.
You might get some amount of headers transferred before this situation is
detected, such as when a "100-continue" is received as a response to a
POST/PUT and a 401 or 407 is received immediately afterwards.
Added test 281 to verify this change.
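A minimal sketch of the option in question; with no credentials set, a 401 or
407 should now make curl_easy_perform() fail with CURLE_HTTP_RETURNED_ERROR
(the URL is an arbitrary example).

  #include <stdio.h>
  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode rc;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
    /* fail on HTTP codes >= 400 instead of delivering the error body */
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
    rc = curl_easy_perform(curl);
    if(rc == CURLE_HTTP_RETURNED_ERROR)
      fprintf(stderr, "server returned an HTTP error (e.g. 401/407)\n");
    curl_easy_cleanup(curl);
  }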
|
|
reading the (local) CA cert file, to let users more easily pinpoint the
actual problem. CURLE_SSL_CACERT_BADFILE (77) is the new libcurl error code.
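A small sketch of how an application could tell this new error apart from
other SSL failures; the URL and CA bundle path are placeholders.

  #include <stdio.h>
  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode rc;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");
    rc = curl_easy_perform(curl);
    if(rc == CURLE_SSL_CACERT_BADFILE)
      fprintf(stderr, "could not read the CA cert file\n");
    curl_easy_cleanup(curl);
  }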
|