Age | Commit message | Author
|
|
|
It makes no difference from curl's point of view but
makes it more convenient to use the tests with a
lws-normalizing proxy between curl and the test server.
|
|
They currently only work for 127.0.0.1, which is hardcoded and
can't easily be changed.
|
|
.. and add a precheck to skip the test otherwise.
|
|
This makes it easier to skip it automatically when
the test suite is used with external proxies.
|
|
|
|
|
|
Trailing spaces were left unmodified, assuming they were intentional.
|
|
Consistently use CRLF instead. The mixed endings weren't
documented, so I assume they were unintentional.
This change doesn't matter for curl itself but makes using
the tests with a proxy between curl and the test server
more convenient.
Tests that consistently use no carriage returns were
left unmodified as one can easily work around this.
|
|
DNS cache entries populated with CURLOPT_RESOLVE were not properly freed
again after use when the multi interface was used.
Test case 1502 added to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=3575448
Reported by: Alex Gruz
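A minimal sketch of the usage pattern this fix covers (not test 1502
itself; the host name and address are made up): CURLOPT_RESOLVE entries
are put in the DNS cache up front, and libcurl must release them when
the handles are cleaned up.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    struct curl_slist *resolve = NULL;
    int running = 1;

    /* "host:port:address" - preloads the DNS cache before the transfer */
    resolve = curl_slist_append(resolve, "example.com:80:127.0.0.1");
    curl_easy_setopt(easy, CURLOPT_RESOLVE, resolve);
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

    curl_multi_add_handle(multi, easy);
    while(running) {
      curl_multi_perform(multi, &running);
      if(running)
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
    }

    /* the cache entries created from the list above must be freed by
       libcurl during cleanup - the leak this change fixes */
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_slist_free_all(resolve);
    return 0;
  }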
|
|
If we use memory functions (malloc, free, strdup etc) in C sources in
libcurl and we fail to include curl_memory.h or memdebug.h we either
fail to properly support user-provided memory callbacks or the memory
leak system of the test suite fails.
After Ajit's report of a failure in the first category in http_proxy.c,
I spotted a few in the second category as well. These problems are now
tested for by test 1132, which runs a perl program that scans the
source code and checks that the correct include files are used wherever
a memory related function appears.
Reported by: Ajit Dhumale
Bug: http://curl.haxx.se/mail/lib-2012-11/0125.html
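A hedged illustration of the include convention the scanner presumably
enforces; the function name is made up and "setup.h" reflects the
header naming of that era.

  #include "setup.h"

  #include <string.h>

  #include "curl_memory.h"
  /* The last #include file should be: */
  #include "memdebug.h"

  /* With the two headers above in place, this strdup() is routed through
     the user-provided memory callbacks and tracked by the test suite's
     leak checker; without them it calls the plain libc function. */
  static char *dup_token(const char *in)
  {
    return strdup(in);
  }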
|
|
They broke the NTLM tests from 2023 to 2031.
|
|
|
|
|
|
|
|
The bug report claimed it didn't work. This problem was probably fixed
in 473003fbdf.
Bug: http://curl.haxx.se/bug/view.cgi?id=3581898
|
|
The existing logic only cut off the fragment from the separate 'path'
buffer, which is used when sending HTTP to hosts. The buffer that holds
the full URL used for proxies was not dealt with. It is now.
Test case 5 was updated to use a fragment on a URL over a proxy.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579813
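A rough sketch of the idea, not libcurl's actual code: the fragment part
has to be removed from the full-URL buffer used for proxy requests, just
as it already was from the 'path' buffer.

  #include <string.h>

  /* "http://example.com/path#top" becomes "http://example.com/path" */
  static void strip_fragment(char *url)
  {
    char *frag = strchr(url, '#');
    if(frag)
      *frag = '\0';
  }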
|
|
With the reversion of ce8311c7e49eca and the new clear logic, this flaw
is present and we allow it.
|
|
This test case verifies that bug 3582718 is fixed.
Bug: http://curl.haxx.se/bug/view.cgi?id=3582718
Reported by: Nick Zitzmann (originally)
|
|
Since automake 1.12.4, warnings like this are issued when running automake:
warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
Avoid INCLUDES and roll these flags into AM_CPPFLAGS.
Compile tested on:
Ubuntu 10.04 (automake 1:1.11.1-1)
Ubuntu 12.04 (automake 1:1.11.3-1ubuntu2)
Arch Linux (automake 1.12.4)
|
|
|
|
|
|
As pointed out in Bug report #3579064, curl_multi_perform() would
wrongly use a blocking mechanism internally for some commands, which
could lead to, for example, a very long block if the LIST response
never arrived.
The solution was to make sure to properly continue to use the multi
interface non-blocking state machine.
The new test 1501 verifies the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579064
Reported by: Guido Berhoerster
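A minimal sketch of the non-blocking pattern the fix restores, assuming
an illustrative FTP URL and helper name: curl_multi_perform() must
return between protocol steps so the application keeps control even if
the LIST response never arrives.

  #include <sys/select.h>
  #include <curl/curl.h>

  static void list_dir(const char *url)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 1;

    curl_easy_setopt(easy, CURLOPT_URL, url); /* e.g. "ftp://example.com/" */
    curl_multi_add_handle(multi, easy);

    while(running) {
      fd_set rd, wr, ex;
      int maxfd = -1;
      struct timeval tv = {1, 0};

      curl_multi_perform(multi, &running);  /* returns without blocking */

      FD_ZERO(&rd);
      FD_ZERO(&wr);
      FD_ZERO(&ex);
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      if(maxfd >= 0)              /* nothing to wait on yet otherwise */
        select(maxfd + 1, &rd, &wr, &ex, &tv);
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
  }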
|
|
Output changed in commit a34197ef77cb
|
|
|
|
Minor change to the recently introduced function. It breaks backwards
compatibility, but since curl_multi_wait() doesn't exist in any release
yet, that should be fine.
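A small sketch of calling curl_multi_wait() with its final signature;
the extra descriptor is a made-up application socket, only there to
show the struct curl_waitfd plumbing.

  #include <curl/curl.h>

  static void wait_step(CURLM *multi, curl_socket_t my_fd)
  {
    struct curl_waitfd extra;
    int numfds = 0;

    extra.fd = my_fd;
    extra.events = CURL_WAIT_POLLIN;  /* wake up when my_fd is readable */
    extra.revents = 0;

    /* waits on libcurl's sockets plus the extra one, at most 1000 ms,
       and reports how many descriptors were active */
    curl_multi_wait(multi, &extra, 1, 1000, &numfds);
  }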
|
|
|
|
Fixed tests/libtest/libntlmconnect.c:52: warning: call to
'_curl_easy_getinfo_err_long' declared with attribute warning:
curl_easy_getinfo expects a pointer to long for this info
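A short illustration of what the typecheck warning guards against, with
a hypothetical helper: info values documented as long must be read into
a long variable.

  #include <curl/curl.h>

  static long response_code(CURL *curl)
  {
    long code = 0;  /* must be long - an int here triggers the warning */
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
    return code;
  }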
|
|
|
|
Windows does not use -1 to represent invalid sockets and the
SOCKET type is unsigned.
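A tiny sketch of the portable check using libcurl's own abstractions
(the helper name is invented): comparing against CURL_SOCKET_BAD instead
of -1 works whether SOCKET is unsigned (Windows) or a plain int (POSIX).

  #include <curl/curl.h>

  static int socket_is_bad(curl_socket_t sock)
  {
    /* never "sock < 0": on Windows SOCKET is unsigned and the invalid
       value is INVALID_SOCKET, which CURL_SOCKET_BAD maps to */
    return sock == CURL_SOCKET_BAD;
  }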
|
|
|
|
|
|
... and specify that SIZE is supported. 250 is the "correct" response
code according to RFC 2821
|
|
|
|
|
|
The test would hang and get aborted with an "ABORTING TEST, since it
seems that it would have run forever." message until I prevented that
from happening.
I also fixed the data file, which got broken CRLF line endings when I
sucked down the patch from Joe's repo == my fault.
Removed #37 from KNOWN_BUGS as this fix and test case verify exactly
this.
|
|
Add test2032 to test that NTLM does not switch connections in the middle
of the handshake
|
|
I suspect this is a regression introduced in commit 207cf150, included
since 7.24.0.
Avoid showing '(nil)' as hostname in verbose output by making sure the
hostname fixup function is called early enough to set the pointers that
are used for this. The name data is set again for each request, even for
re-used connections, to handle multiple hostnames over the same
connection (like with a proxy) or the case where the casing etc of the
host name changes between requests (which has proven to be important at
least once in the past).
Test 1011 was modified to use a redirect with a re-used connection since
it then showed the bug and now no longer does. There's currently no easy
way to have the test suite detect 'nil' texts in verbose output, so no
test will detect if this problem gets reintroduced.
Bug: http://curl.haxx.se/mail/lib-2012-07/0111.html
Reported by: Gisle Vanem
|
|
|
|
Fix a bug where closed sockets (fd -1) were left in the all_sockets
list because of missing parens in a pointer arithmetic expression.
Re-enable the tests that were locking up due to this bug.
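An illustration of the kind of parenthesization slip described above,
with made-up names rather than the server's actual code:

  #include <stddef.h>

  /* find the first closed entry (fd -1) in the list */
  static int find_closed(const int *all_sockets, size_t count)
  {
    size_t i;
    for(i = 0; i < count; i++) {
      /* "*(all_sockets + i)" reads element i; the unparenthesized
         "*all_sockets + i" would read element 0 and add i, so closed
         sockets further down the list would never be spotted */
      if(*(all_sockets + i) == -1)
        return (int)i;
    }
    return -1;
  }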
|
|
|
|
|
|
|
|
|
|
The tests 2025, 2028 and 2031 don't work for me, so I'll have them
disabled for now until we solve the problem.
|
|
|
|
the O_NONBLOCK and SO_KEEPALIVE flags to all sockets.
Note that several loops which used to continue on a return value of 0
(theoretical, since 0 would never be returned without O_NONBLOCK) now
break on 0, so that they won't continue reading until after poll is
called again.
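A hedged sketch of the socket setup being described (POSIX calls only;
the helper name is invented):

  #include <fcntl.h>
  #include <sys/socket.h>

  /* make the socket non-blocking and enable keep-alive probes */
  static int setup_socket(int sock)
  {
    int one = 1;
    int flags = fcntl(sock, F_GETFL, 0);

    if(flags < 0 || fcntl(sock, F_SETFL, flags | O_NONBLOCK) < 0)
      return -1;
    if(setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one)) < 0)
      return -1;
    return 0;
  }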
|
|
service_connection to add a return code for non-blocking sockets: now
-1 means error or connection finished, 1 means data was read, and 0
means there is no data available right now, so we need to wait for poll
(new return value).
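A sketch of how a caller could act on that convention; the prototype is
simplified and does not match the server's real signature.

  int service_connection(int fd);  /* hypothetical, simplified prototype */

  static void handle_readable(int fd)
  {
    for(;;) {
      int rc = service_connection(fd);
      if(rc < 0)    /* error or connection finished: stop serving it */
        break;
      if(rc == 0)   /* no data available now: wait for the next poll() */
        break;
      /* rc == 1: data was read, try again in case more is buffered */
    }
  }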
|
|
when a request is half-finished.
Note that the req struct used to be re-initialized AFTER reading
pipeline data, so now that we initialize it from the caller we must be
careful not to overwrite the pipeline data.
Also, we now need to handle the case where the buffer is already full
when get_request is called - previously this never happened as it was
always called with an empty buffer and looped until done.
Now get_request is called in a loop, so the next step is to run the loop
on a socket only when poll signals it is readable.
|
|
easier refactoring later.
The next step will be to call the correct function after a poll, rather than looping unconditionally
|