Most of the test data files are now valid XML; a handful still are not, due mainly to the lack of support for XML character entities (e.g. &amp; => &). This will make it easier to validate test files using tools like xmllint, as well as to edit and view them using XML tools.
something went wrong, like getting a bad response code back from the server, libcurl would leak memory. Added test case 538 to verify the fix.

I also noted that the connection would get cached in that case, which doesn't make sense since it cannot be re-used once the authentication has failed. I fixed that issue too at the same time, and also stopped the path from being "remembered" in vain for cases where the connection was about to get closed.
responded with a single status line and no headers or body. Starting now, an HTTP response on a persistent connection (i.e. one not set to be closed after the response has been taken care of) must have Content-Length or chunked encoding set, or libcurl will simply assume that there is no body.

To my horror I learned that we had no less than 57(!) test cases with bad HTTP responses like this, and even the test HTTP server (sws) responded badly when queried by the test system to verify that it is the test server. So although the actual fix for the problem was tiny, going through all the newly failing test cases got really painful and boring.
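
To illustrate the new requirement, here is a minimal sketch (the response text itself is made up) of a body-carrying response on a persistent connection; without the Content-Length header or chunked encoding, libcurl now assumes there is no body:

    /* a well-formed response for a persistent connection: the
       Content-Length header tells libcurl how much body to expect */
    static const char response_ok[] =
      "HTTP/1.1 200 OK\r\n"
      "Content-Type: text/plain\r\n"
      "Content-Length: 5\r\n"
      "\r\n"
      "hello";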
when more than FD_SETSIZE file descriptors are open. This means that if for any reason we are not able to open more than FD_SETSIZE file descriptors, then test 518 should not be run.

Test 537 is all about testing libcurl functionality when the system has nearly exhausted the number of free file descriptors, so it will try to run with very few free file descriptors.
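
As a sketch of the precondition for test 518 (this is not the actual test code; the probe and its exit handling are assumptions), a runner could check the descriptor limit like this:

    #include <stdio.h>
    #include <sys/select.h>    /* FD_SETSIZE */
    #include <sys/resource.h>  /* getrlimit */

    int main(void)
    {
      struct rlimit rl;
      if(getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
      }
      /* test 518 needs more than FD_SETSIZE descriptors to be available */
      if(rl.rlim_cur <= FD_SETSIZE) {
        printf("cannot open more than FD_SETSIZE (%d) descriptors\n",
               FD_SETSIZE);
        return 1;
      }
      return 0;
    }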
case when 401 or 407 are returned, *IF* no auth credentials have been given. The CURLOPT_FAILONERROR option is not possible to make fool-proof for the 401 and 407 cases when auth credentials are given, but we've now covered this somewhat more.

Some amount of headers might still get transferred before this situation is detected, for example when a "100-continue" is received as a response to a POST/PUT and a 401 or 407 arrives immediately afterwards.

Added test 281 to verify this change.
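
A minimal sketch of what the change means for an application (URL and error handling here are illustrative only):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode res = CURLE_OK;
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
        /* no credentials set, so a 401/407 now fails the transfer */
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
        res = curl_easy_perform(curl);
        if(res == CURLE_HTTP_RETURNED_ERROR)
          fprintf(stderr, "HTTP error response, body not delivered\n");
        curl_easy_cleanup(curl);
      }
      return (int)res;
    }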
reading the (local) CA cert file, to let users more easily pinpoint the actual problem. CURLE_SSL_CACERT_BADFILE (77) is the new libcurl error code.
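
A sketch of how an application can act on the new code (the CA file path is made up):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode res = CURLE_OK;
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/cacert.pem");
        res = curl_easy_perform(curl);
        /* the dedicated code makes this failure easy to tell apart */
        if(res == CURLE_SSL_CACERT_BADFILE)
          fprintf(stderr, "could not read or parse the CA cert file\n");
        curl_easy_cleanup(curl);
      }
      return (int)res;
    }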
with multi interface and pipelining. This test just works and did not repeat
the problem his test code showed, but could still serve as a useful test.
case 535 and it now runs fine. Again, this was a problem with the pipelining code not taking all possible (error) conditions into account.
now runs fine.
but that worked nicely in 7.15.5. I converted it into test case 532 and
fixed the problem.
would crash if a bad function sequence was used when shutting down after using the multi interface (i.e. calling easy_cleanup after multi_cleanup), so precautions have been added to make sure it no longer does. Test case 529 was added to verify.
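
For reference, a sketch of the intended teardown order, where easy handles are removed and cleaned up before the multi handle:

    #include <curl/curl.h>

    /* correct sequence: remove the easy handle from the multi handle,
       clean up the easy handle, and only then the multi handle */
    static void shutdown_nicely(CURLM *multi, CURL *easy)
    {
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
    }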
jar has died and we now instead point to our own version of it.
(http://curl.haxx.se/bug/view.cgi?id=1561470) that is said to crash when an FTP upload fails with the multi interface. It did not crash for me, but I made a failed upload still assume the control connection to be fine.
empty password or no password at all. Test cases 278 and 279 were added to verify.
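
The two URL forms in question, sketched with a made-up scheme and host and assuming an initialized easy handle 'curl':

    /* empty password: user name followed by a colon and nothing */
    curl_easy_setopt(curl, CURLOPT_URL, "http://user:@example.com/");
    /* no password at all: just a user name before the @ */
    curl_easy_setopt(curl, CURLOPT_URL, "http://user@example.com/");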
FTP 3rd party transfers to that file for now until I have them sorted out.
cache within the multi handle.
command on subsequent requests on a re-used connection unless it has to.
send the whole request at once, even though the Expect: header was disabled by the application. An effect of this change is also that small (< 1024 bytes) POSTs are now always sent without the Expect: header, since we deem it more costly to bother with that than to risk sending the data in vain.
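
For context, this is how an application disables the header: passing "Expect:" with no value via CURLOPT_HTTPHEADER removes it. A minimal sketch (URL and post data are made up):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_slist *hdrs = NULL;
      if(curl) {
        hdrs = curl_slist_append(hdrs, "Expect:"); /* no value: removed */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "moo=moomoo");
        curl_easy_perform(curl); /* request now goes out in one go */
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
      }
      return 0;
    }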
CURLSHOPT_UNLOCKFUNC and CURLSHOPT_USERDATA, so we now also have to check them here.
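
For context, a sketch of the share interface options involved; the callback names and the choice of shared data are made up for illustration:

    #include <curl/curl.h>

    static void my_lock(CURL *handle, curl_lock_data data,
                        curl_lock_access locktype, void *userptr)
    {
      /* grab the application's mutex here */
      (void)handle; (void)data; (void)locktype; (void)userptr;
    }

    static void my_unlock(CURL *handle, curl_lock_data data, void *userptr)
    {
      /* release the application's mutex here */
      (void)handle; (void)data; (void)userptr;
    }

    static CURLSH *setup_share(void *mutexdata)
    {
      CURLSH *share = curl_share_init();
      if(share) {
        curl_share_setopt(share, CURLSHOPT_LOCKFUNC, my_lock);
        curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, my_unlock);
        curl_share_setopt(share, CURLSHOPT_USERDATA, mutexdata);
        curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
      }
      return share;
    }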
string comparisons on the path, which is incorrect, and provided a patch that fixes this. I edited test case 8 to include details that test for this.
fine
2 - store the time it took to verify it and allow that time to be used as %FTPTIME[23] in command lines, to let us adjust better to slow hosts, since test 190 failed on my slow Solaris machine just because it hadn't had time to get as far as the test assumed all machines would get before the time-out elapsed.
transfers. They are done on non-Windows systems and translate CRLF to LF.
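
A sketch of requesting an ASCII (type A) FTP transfer, where this CRLF-to-LF translation applies; it assumes an initialized easy handle 'curl' and a made-up URL:

    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
    /* type A (ASCII) instead of the default type I (binary) */
    curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, 1L);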
read one byte at a time...
hosts
(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash function in addition to the LM one and making some other adjustments in the order the different parts of the data block are sent in the Type-2 reply. Inspiration for this work was taken from the Firefox NTLM implementation.

I edited the existing 21(!) NTLM test cases to run fine with these changes. Since we now properly include the host name in the Type-2 message, the test cases only compare parts of that chunk.
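
For context, a minimal sketch of asking libcurl to use NTLM, assuming an initialized easy handle 'curl' (host and credentials are made up):

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/secret");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");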
occurred when asking libcurl to follow HTTP redirects and the original URL had
more than one question mark (?). Added test case 276 to verify.
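
A sketch of the kind of transfer that triggered the bug, assuming an initialized easy handle 'curl' (URL is made up):

    /* original URL with more than one '?', followed through redirects */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redir?a=1?b=2");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);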
even after EPSV returned a positive response code, if libcurl failed to connect to the port number given in the EPSV response. Obviously some people go through protocol-sensitive firewalls (or similar) that don't understand EPSV and then refuse the second connection unless PASV was used. This also called for a minor fix of test case 238.
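
Applications that know they sit behind such a firewall can also skip EPSV up front; a sketch assuming an initialized easy handle 'curl' and a made-up URL:

    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
    /* go straight to PASV instead of trying EPSV first */
    curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 0L);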