Age | Commit message | Author |
|
|
|
Test command 'time curl http://localhost/80GB -so /dev/null' on Debian
Linux.
Before (middle-performing run out of 9):
real 0m28.078s
user 0m11.240s
sys 0m12.876s
After (middle-performing run out of 9):
real 0m26.356s (93.9%)
user 0m5.324s (47.4%)
sys 0m8.368s (65.0%)
Also, doing SFTP over a 200 millisecond latency link is now about 6 times
faster.
Closes #1446
|
|
The 'list element' struct now has to be within the data that is being
added to the list. This removes 16.6% of the (tiny) mallocs from a
simple HTTP transfer (96 => 80).
Also removed return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
Closes #1435
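A minimal sketch of the idea, with hypothetical struct names rather than
curl's real curl_llist API: when the link node lives inside the payload,
appending needs no allocation and therefore cannot fail.

    #include <stddef.h>

    struct node {                /* link node, embedded in the payload */
      struct node *prev;
      struct node *next;
    };

    struct list {
      struct node *head;
      struct node *tail;
    };

    struct payload {
      struct node link;          /* the 'list element' lives inside the data */
      int value;
    };

    /* append cannot fail: no malloc happens here, so no return code needed */
    static void list_append(struct list *l, struct node *n)
    {
      n->next = NULL;
      n->prev = l->tail;
      if(l->tail)
        l->tail->next = n;
      else
        l->head = n;
      l->tail = n;
    }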
|
|
|
|
A few random typos, and minor whitespace cleanups, found in comments
while reading code.
Closes #1423
|
|
This makes test 1135 pass with CRLF checkouts.
Ref: https://github.com/curl/curl/pull/1344#issuecomment-289243166
Closes https://github.com/curl/curl/pull/1422
|
|
MinGW-w64 complains:
warning: conversion to 'long int' from 'time_t {aka long long int}' may
alter its value [-Wconversion]
Fix this by using the correct type.
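A minimal sketch of the kind of fix meant here, assuming the variable
previously used 'long':

    #include <time.h>

    static void example(void)
    {
      /* before: time_t (long long on MinGW-w64) narrowed into a long,
         which is what -Wconversion complains about:
         long stamp = time(NULL); */

      /* after: keep the value in the matching type */
      time_t stamp = time(NULL);
      (void)stamp;
    }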
|
|
Ref: https://github.com/curl/curl/issues/1408
Closes https://github.com/curl/curl/pull/1412
|
|
Follow-up to aa573c3c55cda72ec5ef677d87f6f46a53385f0c
Ref: https://github.com/curl/curl/pull/1406
|
|
If the existing timer is still in there but has expired, the new timer
should be added.
Reported-by: Rainer Canavan
Bug: https://curl.haxx.se/mail/lib-2017-04/0030.html
Closes #1407
|
|
|
|
... the sizes and the formatting strings are what's really important
here, and this avoids problems with int64_t vs "long long".
Bug: https://curl.haxx.se/mail/lib-2017-04/0019.html
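A generic illustration of the portability point (not curl's actual code),
assuming <inttypes.h> is available: pair the exact-width type with its
matching format macro instead of guessing between "%ld" and "%lld".

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
      int64_t size = 9000000000;  /* larger than 32 bits */

      /* PRId64 expands to the right length modifier for this platform,
         whether int64_t is 'long' or 'long long' */
      printf("size: %" PRId64 "\n", size);
      return 0;
    }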
|
|
This checks the new behavior of Curl_splaygetbest, so that the smallest
node not larger than the key is removed, and FIFO behavior is kept even
when there are multiple nodes with the same key.
Closes #1358
|
|
Multi handles repeatedly invert the queue of pending easy handles when
used with CURLMOPT_MAX_TOTAL_CONNECTIONS. This is caused by a multistep
process involving Curl_splaygetbest and violates the FIFO property of
the multi handle.
This patch fixes this issue by redefining the "best" node in the
context of timeouts as the "smallest not larger than now", and
implementing the necessary data structure modifications to do this
effectively, namely:
- splay nodes with the same key are now stored in a doubly-linked
circular list instead of a non-circular one to enable O(1)
insertion to the tail of the list
- Curl_splayinsert inserts nodes with the same key to the tail of
the same list
- in case of multiple nodes with the same key, the one on the head of
the list gets selected
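A rough sketch of the list part of this change (illustrative names, not
the actual splay code): with a circular list the tail is always
head->prev, so appending a node that shares an existing key is O(1) and
FIFO order falls out naturally.

    struct samekey {
      struct samekey *prev;
      struct samekey *next;
    };

    /* 'head' is the first node inserted with this key (a single node is
       linked to itself: head->prev == head->next == head); new nodes with
       the same key go to the tail, i.e. just before 'head' in the circle */
    static void samekey_append(struct samekey *head, struct samekey *node)
    {
      struct samekey *tail = head->prev;   /* O(1) tail access */
      node->prev = tail;
      node->next = head;
      tail->next = node;
      head->prev = node;
    }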
|
|
No longer allocate the curl_llist head struct for lists separately.
Removes 17 (15%) tiny allocations in a normal "curl localhost" invocation.
Closes #1381
|
|
system.h is meant to replace curlbuild.h at a later point in time, once
we feel confident that system.h works sufficiently well.
curl/system.h is currently used in parallel with curl/curlbuild.h.
curl/system.h determines the data sizes, data types and include file
status based on available preprocessor defines instead of getting
generated at build time. This avoids relying on a build-time generated
file that makes it complicated to do 32 and 64 bit builds from the same
installed set of headers.
Test 1541 verifies that system.h comes to the same conclusions as
curlbuild.h.
Closes #1373
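A simplified sketch of the approach (illustrative macro names, not the
real curl/system.h contents): fixed per-compiler choices driven by
predefined macros, with nothing generated at build time.

    #if defined(_MSC_VER)
    #  define EXAMPLE_TYPEOF_CURL_OFF_T  __int64
    #  define EXAMPLE_FORMAT_CURL_OFF_T  "I64d"
    #elif defined(__LP64__)
    #  define EXAMPLE_TYPEOF_CURL_OFF_T  long
    #  define EXAMPLE_FORMAT_CURL_OFF_T  "ld"
    #else
    #  define EXAMPLE_TYPEOF_CURL_OFF_T  long long
    #  define EXAMPLE_FORMAT_CURL_OFF_T  "lld"
    #endif

    typedef EXAMPLE_TYPEOF_CURL_OFF_T example_off_t;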
|
|
In ancient MinGW versions, in6addr_any was declared as extern, but not
defined. Because of that, 22a0c57746ae12506b1ba0f0fafffd26c1907d6a added
definitions for in6addr_any when compiling with MinGW. The bug was fixed in
w32api version 3.6 from 2006, so this workaround is not needed anymore for
recent versions.
This fixes the following MinGW-w64 warnings because the MinGW-w64 version of
IN6ADDR_ANY_INIT has the two additional braces inside the macro:
util.c:59:14: warning: braces around scalar initializer
util.c:59:40: warning: excess elements in scalar initializer
Ref: https://sourceforge.net/p/mingw/mingw-org-wsl/ci/e4803e0da25c57ae1ad0fa75ae2b7182ff7fa339/tree/w32api/ChangeLog
Closes https://github.com/curl/curl/pull/1379
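For reference, a minimal sketch of the initializer the warnings point at:
once the platform headers provide IN6ADDR_ANY_INIT with its own nested
braces, wrapping the macro in extra braces (as the old workaround
apparently did) triggers the warnings above.

    #include <netinet/in.h>   /* <ws2tcpip.h> on Windows */

    /* correct: let the macro supply all the braces it needs */
    static const struct in6_addr example_any = IN6ADDR_ANY_INIT;

    /* warns on MinGW-w64, because IN6ADDR_ANY_INIT already contains
       nested braces:
       static const struct in6_addr bad = { IN6ADDR_ANY_INIT };       */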
|
|
When receiving chunked-encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header data to
store and resend on unpause. Previously libcurl returned an error for
that case.
Added test case 1540 to verify.
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
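A minimal sketch of an application-side write callback that triggers the
paused state (hypothetical flag, not test 1540's actual code); the
transfer is later resumed with curl_easy_pause(handle, CURLPAUSE_CONT).

    #include <curl/curl.h>

    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
      int *paused = userdata;      /* application-owned flag */
      (void)ptr;
      if(!*paused) {
        *paused = 1;
        /* libcurl must stash this data (body and, with chunked trailers,
           header data too) and redeliver it on unpause */
        return CURL_WRITEFUNC_PAUSE;
      }
      return size * nmemb;         /* consume the data normally */
    }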
|
|
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers, since
basic-auth does not authenticate the connection itself (the way NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since refactoring of ConnectionExists()
as part of: cb4e2be7c6d42ca0780f8e0a747cecf9ba45f151
Fix the above by removing the username and password compare
when re-using proxy connection at proxy_info_matches().
However, this fix brings back another bug that would make curl
re-print the old Proxy-Authorization header of a previous proxy
basic-auth connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before the aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect the digest auth header to be resent when
the connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
|
|
Closes #1356
|
|
Reported-by: Brian Carpenter
Added test 1442 to verify
|
|
curl must be built before building the tests.
Closes https://github.com/curl/curl/pull/1352
|
|
Signed-off-by: Anders Roxell <anders.roxell@gmail.com>
Closes #1342
|
|
Running this in the root build dir will invoke the test suite to only
run tests not marked as 'flaky'.
|
|
|
|
|
|
|
|
These tests use an HTTP proxy so require that curl be built with HTTP
support.
|
|
The CURLOPT_USERAGENT and CURLOPT_MAXREDIRS options are only set if HTTP
support is available, so ignore them in tests where HTTP is not
guaranteed.
|
|
|
|
Depend on the known behaviour of URLs for nonexistent files rather than
the undefined behaviour of URLs for directories (which fails on Windows).
The test isn't about file: URLs at all, so the URL used doesn't really
matter.
|
|
Otherwise, the contents will end up in the output and fail the
verification.
|
|
|
|
If a % ended the --write-out format string, the trailing NUL would be skipped
and memory past the end of the buffer would be accessed and potentially
displayed as part of the --write-out output. Added tests 1440 and 1441
to check for this kind of condition.
Reported-by: Brian Carpenter
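A sketch of the boundary check in question (illustrative, not the actual
--write-out parser): never step past the terminating NUL when the last
byte of the format string is '%'.

    static void scan_writeout(const char *fmt)
    {
      const char *p;
      for(p = fmt; *p; p++) {
        if(*p == '%') {
          if(!p[1])
            break;       /* trailing '%': stop, don't read past the NUL */
          p++;           /* otherwise look at what follows the '%' */
          /* ... handle %{variable} or %% here ... */
        }
      }
    }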
|
|
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
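A usage sketch of the new option (placeholder URLs): tunnel through a
proxy while keeping the CONNECT response headers out of the header and
body callbacks.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://localhost:8181");
        curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
        /* the CONNECT response headers no longer reach
           CURLOPT_HEADERFUNCTION/CURLOPT_WRITEFUNCTION */
        curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }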
|
|
A client MUST ignore any Content-Length or Transfer-Encoding header
fields received in a successful response to CONNECT.
"Successful" described as: 2xx (Successful). RFC 7231 4.3.6
Prior to this change such a case would cause an error.
In some ways this bug appears to be a regression since c50b878. Prior to
that libcurl may have appeared to function correctly in such cases by
acting on those headers instead of causing an error. But that behavior
was also incorrect.
Bug: https://github.com/curl/curl/issues/1317
Reported-by: mkzero@users.noreply.github.com
|
|
Do not call curl_easy_reset() between the requests, because the
auth state must be preserved for these tests.
Follow-up to 0afbcfd
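A sketch of the test pattern described above (placeholder URL and
credentials): two requests on the same easy handle, without
curl_easy_reset() in between, so the negotiated auth state survives into
the second request.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/protected");
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANY);
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
        curl_easy_perform(curl);   /* first request negotiates auth */
        curl_easy_perform(curl);   /* reuses the preserved auth state */
        curl_easy_cleanup(curl);
      }
      return 0;
    }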
|
|
Test 1903 is doing HTTP pipelining, and that is a timing- and
ordering-sensitive operation which fails far too often on Travis CI,
leading to people more or less ignoring test failures there. Not good.
The end of pipelining is probably coming sooner rather than later
anyway...
|
|
|
|
Ignore man page dist files generated by scripts/updatemanpages.pl
|
|
|
|
|
|
... because it causes confusion with users. Example URLs:
"http://[127.0.0.1]:11211:80" which a lot of languages' URL parsers will
parse and claim uses port number 80, while libcurl would use port number
11211.
"http://user@example.com:80@localhost" which by the WHATWG URL spec will
be treated to contain user name 'user@example.com' but according to
RFC3986 is user name 'user' for the host 'example.com' and then port 80
is followed by "@localhost"
Both these formats are now rejected, and verified so in test 1260.
Reported-by: Orange Tsai
|
|
|
|
|
|
This is likely to be the case when building from a tarball release
package which includes a prebuilt man page. In that case, test the
packaged man page instead. This only makes a difference when building
out-of-tree (in-tree, the location in both cases is identical).
|
|
The character set in POSIX is set by the locale defined by (in
decreasing order of precedence) the LC_ALL, LC_CTYPE and LANG
environment variables (CHARSET was used by libidn but not libidn2).
LC_ALL is cleared to ensure that LC_CTYPE takes effect, but LC_ALL is
not used to set the locale to ensure that other parts of the locale
aren't overridden. Since there doesn't seem to be a cross-platform way
of specifying a UTF-8 locale, and not all systems may support UTF-8, a
<precheck> is used to skip the test if UTF-8 can't be verified to be
available. Test 1035 was also converted to UTF-8 for consistency, as
the actual character set used there is irrelevant to the test.
This patch uses a different UTF-8 locale than the last attempt, namely
en_US.UTF-8. This one has been verified on 7 different Linux and BSD
distributions and is more complete and usable than the locale UTF-8 (on
at least some systems).
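A minimal sketch of the precheck idea in C (the test harness's actual
check is separate from this): ask for the assumed en_US.UTF-8 locale and
skip if the platform can't provide it.

    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
      if(!setlocale(LC_CTYPE, "en_US.UTF-8")) {
        puts("en_US.UTF-8 locale not available, skipping test");
        return 1;
      }
      return 0;
    }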
|
|
|
|
This reverts commit ecd1d020abdae3c3ce3643ddab3106501e62e7c0.
That commit caused test failures on my Debian Linux machine for all
changed test cases. We need to reconsider how that should get done.
|
|
The character set in POSIX is set by the locale defined (in decreasing
order of precedence) by the LC_ALL, LC_CTYPE and LANG environment
variables (I believe CHARSET is only historic). LC_ALL is cleared to
ensure that LC_CTYPE takes effect, but LC_ALL is not used to set the
locale, to ensure that other parts of the locale aren't overridden, if
set. Since
there doesn't seem to be a cross-platform way of specifying a UTF-8
locale, and not all systems may support UTF-8, a <precheck> is used
(where relevant) to skip the test if UTF-8 isn't in use. Test 1035 was
also converted to UTF-8 for consistency, as the actual character set
used there is irrelevant to the test.
|