- It was pointed out that the cookie parser would leak memory when it parses
  cookies that are received with domain, path, etc. set multiple times in the
  same header. While such a cookie is questionable, such cookies occur in the
  wild and libcurl no longer leaks memory for them. I added such a header to
  test case 8.
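  As an illustration only (the exact header added to test case 8 is not
  shown here), such a questionable cookie could arrive like this:

      Set-Cookie: test=value; domain=example.com; path=/; domain=example.com; path=/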
- Avoid test failures on machines that have broken DNS configurations (such
  as those configured to use OpenDNS).
- Fixed a problem with no_proxy which made it not skip the proxy if the URL
  entered contained a user name. I added test case 1101 to verify.
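  A minimal libcurl sketch of the scenario this fix covers; the proxy and
  host names are placeholders, and the original report may well have used
  the no_proxy environment variable rather than the CURLOPT_NOPROXY option
  shown here:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
          /* hosts listed here must bypass the proxy, even when the URL
             below carries a user name */
          curl_easy_setopt(curl, CURLOPT_NOPROXY, "example.com");
          curl_easy_setopt(curl, CURLOPT_URL, "http://user@example.com/");
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }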
- … the multi interface, which currently doesn't work because the data
  connection does not wait for the connect to complete before it tries to do
  its proxy magic.
- Fixed a hang, as reported by Ebenezer Ikonne (on curl-users) and Laurent
  Rabret (on curl-library): the transfer was mistakenly marked as having more
  data to send, but since it did not actually have any, it just hung there...
redirect" doesn't work, seems to repeat what Ebenezer Ikonne (on curl-users)
and Laurent Rabret (on curl-library) have reported. Disabled for now.
- … headers and reading from cookie-jar.
- … the report failed to mention that a proxy must be used to reproduce it.
- A report claimed that if the CURLOPT_PORT option is used on an FTP URL
  like "ftp://example.com/file;type=A", the ";type=A" part is stripped off.
  I added test case 562 to verify, only to find out that I couldn't repeat
  this bug, so I hereby consider it not a bug anymore!
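  For reference, a minimal sketch of the reported (and, as it turned out,
  non-reproducible) scenario; the host name and port are placeholders, not
  taken from test case 562:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          /* the ";type=A" suffix requests an ASCII transfer and should be
             preserved even when CURLOPT_PORT overrides the port */
          curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file;type=A");
          curl_easy_setopt(curl, CURLOPT_PORT, 2121L);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }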
- I've now made TFTP "connections" not be kept for re-use within libcurl.
  TFTP is UDP-based, so the benefit was really low (if it existed at all) to
  begin with, so instead of tracking down and fixing this problem we removed
  the re-use. I also enabled test case 1099, which I wrote a few days ago, to
  verify that this change fixes the reported problem.
- … as things don't work right here!
- … through a proxy: libcurl would wrongly close the connection after each
  request. In the reporter's case this had the weird side-effect that it
  killed NTLM auth for the proxy, causing an infinite loop!
  I added test case 1098 to verify this fix. The test case does however not
  properly verify that the transfers are done persistently, as I couldn't
  think of a clever way to achieve that right now, so you need to read the
  stderr output after a test run to see that it truly did the right thing.
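  A rough sketch of the kind of connection re-use this fix restores,
  assuming an NTLM proxy; the host names and credentials are placeholders
  and this is not the code of test case 1098:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
          curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
          curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");

          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
          curl_easy_perform(curl);

          /* re-using the handle should now re-use the proxy connection
             instead of wrongly closing it after the first request */
          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
          curl_easy_perform(curl);

          curl_easy_cleanup(curl);
        }
        return 0;
      }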
- … on curl-users; it is also added to DISABLED since I don't have time to
  work on it further right now.
- … connection is kept alive afterwards.
- CONNECT requests done to HTTP proxies now use HTTP version 1.1 instead of
  1.0 like before. This change also introduces the new proxy type for
  libcurl called 'CURLPROXY_HTTP_1_0' that allows apps to switch (back) to
  CONNECT 1.0 requests. The curl tool also got a --proxy1.0 option that
  works exactly like --proxy but sets CURLPROXY_HTTP_1_0. I updated all test
  cases that use CONNECT, and made some use --proxy1.0 and some do CONNECT
  1.1, so that both versions get exercised.
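  A minimal libcurl sketch of switching back to CONNECT 1.0 from an
  application (the curl tool equivalent is the --proxy1.0 option mentioned
  above); the proxy address and URL are placeholders:

      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          /* an https:// URL makes libcurl tunnel through the proxy with a
             CONNECT request */
          curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
          curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
          /* issue the CONNECT as HTTP/1.0 instead of the new 1.1 default */
          curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);
          curl_easy_perform(curl);
          curl_easy_cleanup(curl);
        }
        return 0;
      }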
- A bug report (http://curl.haxx.se/bug/view.cgi?id=2535504) pointed out
  that realms with quoted quotation marks in HTTP Digest headers didn't
  work. I've now added test case 1095 that verifies my fix.
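  An illustrative header of the kind that previously failed (not
  necessarily the exact realm used in test case 1095):

      WWW-Authenticate: Digest realm="a \"quoted\" realm", nonce="1053604145"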
- … for clarity. This does fix one problem that causes ;type=i FTP URLs to
  fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is used (test
  case 561). Added tests 561 and 1092 through 1094 to test various
  combinations of ;type= and ;mode= URLs that could potentially fail in the
  Turkish locale.
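  The underlying issue is that locale-aware tolower()/toupper() in the
  Turkish locale map 'i' and 'I' to dotted/dotless variants, which breaks
  naive case-insensitive matching of strings such as ";type=i". A sketch of
  the locale-independent, ASCII-only comparison technique; the helper names
  here are hypothetical and not curl's internal API:

      #include <stdio.h>

      /* upper-case ASCII only, so 'i' always maps to 'I' whatever the
         locale says */
      static char raw_toupper(char in)
      {
        if(in >= 'a' && in <= 'z')
          return (char)(in - ('a' - 'A'));
        return in;
      }

      /* case-insensitive string comparison that ignores the locale */
      static int raw_equal(const char *a, const char *b)
      {
        while(*a && *b) {
          if(raw_toupper(*a) != raw_toupper(*b))
            return 0;
          a++;
          b++;
        }
        return raw_toupper(*a) == raw_toupper(*b);
      }

      int main(void)
      {
        /* prints 1 in any locale, including tr_TR */
        printf("%d\n", raw_equal("TYPE=I", "type=i"));
        return 0;
      }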
- … avoid DNS redirections.
- … test a report that the size didn't work, but these test cases pass.
- … researching it, it turned out he got a 550 response back from a SIZE
  command, and then I fell over this text in RFC 3659:

      The presence of the 550 error response to a SIZE command MUST NOT be
      taken by the client as an indication that the file cannot be
      transferred in the current MODE and TYPE.

  In other words: the change I did on September 30th 2008, which has been
  included in the last two releases, was a regression and a bad idea. We
  MUST NOT take a 550 response from SIZE as a hint that the file doesn't
  exist.
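  An illustrative (made-up) exchange showing the point: the 550 reply to
  SIZE says nothing about whether a following RETR can succeed:

      SIZE file.txt
      550 file.txt: not a plain file.
      RETR file.txt
      150 Opening BINARY mode data connection for file.txt
      226 Transfer complete.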
- … with and without --location-trusted.
- … used. It has been used since forever, but it was never a good idea to
  use unless explicitly asked for.
- … while still causing a timeout during the data phase.
- This test was added after the HTTPS-using-multi-interface-with-OpenSSL
  regression of 7.19.1, to hopefully prevent this embarrassing mistake from
  appearing again... Unfortunately the bug wasn't triggered by this test,
  presumably because the connect to a local server is too fast/different
  compared to the real/distant servers we saw the bug happen with.
- Test #559 tests internal hash create/add/destroy.
- … now to minimally exercise some internal hash routines.
- … use the startup of the IPv6 test server as a substitute check for this.
- All but the first test cause an infinite loop or other failure and so are
  added to DISABLED.
- … fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
  any subsequent transfer with a redirect leaks memory, potentially
  eventually crashing the process, and any subsequent transfer WITHOUT a
  redirect causes the most recent redirect that DID occur on some previous
  transfer to still be reported.
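  A minimal sketch of how an application reads CURLINFO_REDIRECT_URL and of
  the behaviour the fix restores; the URLs are placeholders:

      #include <stdio.h>
      #include <curl/curl.h>

      int main(void)
      {
        CURL *curl = curl_easy_init();
        if(curl) {
          char *redirect = NULL;

          /* CURLOPT_FOLLOWLOCATION is left disabled, so a 3xx response is
             only reported, not followed */
          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirected");
          curl_easy_perform(curl);
          curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect);
          if(redirect)
            printf("would redirect to: %s\n", redirect);

          /* a second transfer on the same handle that does not redirect
             must report NULL here, not the stale URL from the previous
             transfer */
          curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/plain");
          curl_easy_perform(curl);
          curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect);
          printf("redirect now: %s\n", redirect ? redirect : "(none)");

          curl_easy_cleanup(curl);
        }
        return 0;
      }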