- 2 - store the time it took to verify it and allow that time to be used as
  %FTPTIME[23] in command lines, so we can adjust better to slow hosts.
  Test 190 failed on my slow Solaris machine just because it hadn't had
  time to get as far as the test assumed all machines would before the
  time-out elapsed.
- transfers. They are done on non-Windows systems and translate CRLF to LF.
- read one byte at a time...
- hosts
- (http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
  function in addition to the LM one, and making some other adjustments in
  the order the different parts of the data block are sent in the Type-2
  reply. Inspiration for this work was taken from the Firefox NTLM
  implementation. I edited the existing 21(!) NTLM test cases to run fine
  with these changes. Since we now properly include the host name in the
  Type-2 message, the test cases now only compare parts of that chunk.
- occurred when asking libcurl to follow HTTP redirects when the original
  URL had more than one question mark (?). Added test case 276 to verify.
- even after EPSV returned a positive response code, if libcurl failed to
  connect to the port number given in the EPSV response. Obviously some
  people are going through protocol-sensitive firewalls (or similar) that
  don't understand EPSV and then don't allow the second connection unless
  PASV is used. This also called for a minor fix of test case 238.
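  As a purely illustrative sketch (the host name is a placeholder), an
  application hitting such a firewall can skip EPSV entirely with the
  existing CURLOPT_FTP_USE_EPSV option and go straight to PASV:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
        /* 0 disables EPSV so libcurl sends PASV directly, helping with
           firewalls that block the EPSV data connection */
        curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }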
- (CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
  "native" IP, while it works fine for ipv6-disabled builds! In the process
  of fixing this, I removed the support for LPRT since I can't think of
  many reasons to keep doing it, and asking on the mailing list didn't
  reveal anyone else who could either. The code that sends EPRT and PORT is
  now also a lot simpler than before (IMHO).
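  A hedged sketch of what an active-mode (EPRT/PORT) transfer using
  CURLOPT_FTPPORT looks like; the URL is a placeholder:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
        /* "-" lets libcurl pick a suitable local address itself; an
           interface name, host name or IP address works here too */
        curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }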
- same server
- (http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
  feature request, but anyway. It pointed out that --max-redirs did not
  allow being set to 0, which would then return an error code on the first
  Location: found. Based on Nis' patch, libcurl now supports
  CURLOPT_MAXREDIRS set to 0, or -1 for infinity. Added test case 274 to
  verify.
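  A minimal snippet showing the new zero value in use (the URL is a
  placeholder); with this set, the first Location: makes the transfer fail
  instead of being followed:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* 0 = return an error on the first redirect, -1 = no limit */
        curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }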
- (wrongly) sends *two* WWW-Authenticate headers for Digest. While this
  should never happen in a sane world, libcurl previously got into an
  infinite loop when this occurred. Dave added test 273 to verify this.
- to the remote one
- from the command line tool with --ignore-content-length. This will make
  it easier to download files from Apache 1.x (and similar) servers that
  still have problems serving files larger than 2 or 4 GB. When this option
  is enabled, curl will simply have to wait for the server to close the
  connection to signal the end of the transfer. I wrote test case 269,
  which runs a simple check that this works.
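  The libcurl equivalent is CURLOPT_IGNORE_CONTENT_LENGTH; a hedged sketch
  with a placeholder URL:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/huge-file");
        /* ignore the bogus Content-Length and read until the server
           closes the connection */
        curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }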
- trailer is then sent to the normal header callback/stream.
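  A minimal illustrative receiver, assuming the standard header callback;
  trailer lines from a chunked response arrive in the same callback as the
  ordinary headers:

    #include <stdio.h>
    #include <curl/curl.h>

    /* invoked once per header line, including any chunked trailers */
    static size_t header_cb(char *line, size_t size, size_t nmemb, void *p)
    {
      (void)p;
      fwrite(line, size, nmemb, stderr);
      return size * nmemb;
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/chunked");
        curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }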
- fix the CONNECT authentication code with multi-pass auth methods (such as
  NTLM), as it previously didn't properly ignore response bodies - in fact
  it stopped reading after all response headers had been received. This
  could lead to libcurl sending the next request and reading the body from
  the first request as the response to the second request. (I also renamed
  the function, which wasn't strictly necessary, but...)
  The best fix would be to once and for all make the CONNECT code use the
  ordinary request sending/receiving code, treating it as any ordinary
  request instead of the special-purpose function we have now. It should
  make things work better with the multi interface too, and possibly lead
  to less code...
  Added test case 265 for this. It doesn't work as a _really_ good test
  case since the test proxy is too stupid, but the test case helps when
  running the debugger to verify.
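  To exercise this code path from an application, a hedged sketch of a
  CONNECT tunnel authenticated with multi-pass NTLM (proxy address and
  credentials are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
        /* tunnel through the proxy with CONNECT... */
        curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
        /* ...and authenticate the tunnel with NTLM */
        curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
        curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }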
- with CURLOPT_PROXY can use an http:// prefix and user + password. The
  user and password fields are now also URL-decoded properly. Test case
  264 added to verify.
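  A small sketch of the now-accepted proxy string format, with placeholder
  host and credentials; note the URL-encoded user name:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* http:// prefix plus user and password; "f%40o" is URL-decoded
           into the user name "f@o" */
        curl_easy_setopt(curl, CURLOPT_PROXY,
                         "http://f%40o:secret@proxy.example.com:3128");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }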
- address was not possible to use. It is now, but requires it to be written
  RFC 2732-style, within brackets - which incidentally is how you enter
  numerical IPv6 addresses in URLs. Test case 263 added to verify.
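  As a generic illustration of the bracketed RFC 2732 form in a URL (the
  address here is just the IPv6 loopback):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* numerical IPv6 addresses are written within brackets */
        curl_easy_setopt(curl, CURLOPT_URL, "http://[::1]:8080/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }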
- binary zeroes within the headers. They confused libcurl into acting
  wrongly, so the downloaded headers became incomplete. The fix is now
  verified with test case 262.
- to the docs.
- "http://somehost?data" as it added one slash too many in the request
  ("GET /?data/"...). Added test case 260 to verify.
- or the test is skipped. Ideally, we should let this test case go over a
  few frequently used IPv6 localhost aliases...
- A) Normal non-proxy HTTP:
     - no more "Pragma: no-cache" (this only makes sense to proxies)
  B) Non-CONNECT HTTP request over proxy:
     - "Pragma: no-cache" is used (like before)
     - "Proxy-Connection: Keep-alive" (for older style 1.0 proxies)
  C) CONNECT HTTP request over proxy:
     - "Host: [name]:[port]"
     - "Proxy-Connection: Keep-alive"
     (a sketch of the resulting CONNECT request follows below)
- HTTP test server is a bit limited though, as it never responds to the
  POST request until all data has been sent (and received)...
- the runtests.pl to check this differently on operating systems that
  differentiate on this.
- .netrc, and when following a Location: the subsequent requests didn't
  properly use the auth found in the netrc file. Added test case 257 to
  verify my fix.
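  An illustrative pairing of netrc lookups with redirect following,
  assuming the stock CURLOPT_NETRC option (the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
        /* read user name and password from ~/.netrc when available */
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
        /* the netrc-derived auth must now survive Location: follows */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }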
- used the default port. He was right. I fixed the problem and added test
  cases 521, 522 and 523 to verify the fix.
- libcurl didn't properly send an Expect: 100-continue header. It does now.