Age | Commit message | Author |
|
There are some bugs in how timers are managed for a single easy handle
that cause the wrong "next timeout" value to be reported to the
application when a new minimum needs to be recomputed and that new
minimum should be an existing timer that isn't currently set for the
easy handle. When the application drives a set of easy handles via the
`curl_multi_socket_action()` API (for example), it gets told to wait the
wrong amount of time before the next call, which causes requests to
linger for a long time (or, I suspect, possibly forever).
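For context, a sketch of how the application learns that "next timeout"
value through the multi_socket API; the callback signature is libcurl's,
while start_or_update_app_timer() is a hypothetical application helper:

    /* the application registers a timer callback and libcurl reports
     * the next timeout through it */
    static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      (void)multi; (void)userp;
      /* if timeout_ms is wrong, the app waits too long before its next
       * curl_multi_socket_action() call and transfers linger */
      start_or_update_app_timer(timeout_ms); /* hypothetical helper */
      return 0;
    }

    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);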
Bug: https://curl.haxx.se/mail/lib-2017-07/0033.html
|
|
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms, which makes it cumbersome to use everywhere.
Ref: #1652
Closes #1693
|
|
Reported-by: ovidiu-benea@users.noreply.github.com
Closes #1675
Closes #1683
|
|
Mentioned as a problem since 2007 (8f87c15bdac63) and of course it
existed even before that.
Closes #1547
|
|
With the introduction of expire IDs and the fact that existing timers
can be removed now and thus never expire, the concept of adding a
"latest" timer no longer works, as it risks never expiring at all.
So, to be certain the timers actually are in line and will expire, the
plain Curl_expire() needs to be used. The _latest() function was added
as a sort of shortcut in the past that quite simply isn't necessary
anymore.
Follow-up to 31b39c40cf90
Reported-by: Paul Harris
Closes #1555
|
|
Test 1261 added to verify.
Reported-by: Lloyd Fournier
Fixes #1489
Closes #1497
|
|
... since the total amount is low, this is faster, easier, and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
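A minimal sketch of that call, with the signature and expire ID assumed
from the description above (not the verbatim libcurl declaration):

    /* mark one specific timer as already handled so it never fires */
    Curl_expire_done(data, EXPIRE_DNS_PER_NAME); /* assumed expire ID */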
Closes #1472
|
|
A) reduces the timeout lists drastically
B) prevents a lot of superfluous loops for timers that expire "in vain"
when they have actually already been extended to fire later on
|
|
`if(nfds || extra_nfds) {` is followed by `malloc(nfds * ...)`.
If `extra_nfds` could be non-zero when `nfds` was zero, then we would
have `malloc(0)`, which is allowed to return `NULL`. But malloc
returning NULL can be confusing: in this code, the next line would treat
the NULL as an allocation failure.
It turns out that if `nfds` is zero then `extra_nfds` must also be zero,
since the final value of `nfds` includes `extra_nfds`. So the test for
`extra_nfds` is redundant. It can only confuse the reader.
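A sketch of the simplification under those assumptions (names mirror the
description above, not the verbatim libcurl source):

    struct pollfd *ufds = NULL;

    /* nfds already includes extra_nfds, so testing extra_nfds as well
     * was redundant */
    if(nfds) {
      ufds = malloc(nfds * sizeof(struct pollfd));
      if(!ufds)
        return CURLM_OUT_OF_MEMORY;
    }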
Closes #1439
|
|
The 'list element' struct now has to be within the data that is being
added to the list. Removes 16.6% of the (tiny) mallocs from a simple
HTTP transfer (96 => 80).
Also removed the return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
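A minimal sketch of the intrusive-list layout this describes
(illustrative struct names, not the actual libcurl definitions):

    /* the list element is embedded in the stored data, so inserting a
     * node needs no separate allocation */
    struct llist_node {
      struct llist_node *prev;
      struct llist_node *next;
    };

    struct timeout_entry {
      long timeout_ms;
      struct llist_node node; /* lives inside the data itself */
    };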
Closes #1435
|
|
If the existing timer is still in there but has expired, the new timer
should be added.
Reported-by: Rainer Canavan
Bug: https://curl.haxx.se/mail/lib-2017-04/0030.html
Closes #1407
|
|
No longer allocate the curl_llist head struct for lists separately.
Removes 17 (15%) tiny allocations in a normal "curl localhost"
invocation.
Closes #1381
|
|
When only a few additional file descriptors are used, avoid the malloc.
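A sketch of the small-array optimization, assuming a stack buffer and
threshold like the ones described (the names are illustrative):

    #define SMALL_NFDS 10
    struct pollfd on_stack[SMALL_NFDS];
    struct pollfd *ufds = on_stack;

    /* only fall back to malloc when the stack buffer is too small */
    if(nfds > SMALL_NFDS) {
      ufds = malloc(nfds * sizeof(struct pollfd));
      if(!ufds)
        return CURLM_OUT_OF_MEMORY;
    }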
Closes #1377
|
|
When receiving chunked encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header data to
store and resend on unpause. Previously libcurl returned an error for
that case.
Added test case 1540 to verify.
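For context, a sketch of a write callback that triggers this path
(should_pause is a hypothetical application flag; the PAUSE return value
is libcurl's):

    static int should_pause; /* hypothetical application flag */

    static size_t write_cb(char *ptr, size_t size, size_t nmemb,
                           void *userp)
    {
      (void)ptr; (void)userp;
      if(should_pause)
        return CURL_WRITEFUNC_PAUSE; /* both body and trailer header
                                        data may now need buffering */
      return size * nmemb;
    }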
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
|
|
error: conversion to 'long int' from 'time_t {aka long long int}' may alter
its value [-Werror=conversion]
|
|
The code would refer to the wrong data pointer. Only debug builds do
this, for verbosity.
Reported-by: zelinchen@users.noreply.github.com
Fixes #1329
|
|
... by removing the else branch after a return, break or continue.
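An illustration of the cleanup (made-up condition, same shape as the
changed code):

    /* before */
    if(result)
      return result;
    else
      result = do_more(); /* hypothetical follow-up call */

    /* after: no else needed following a return */
    if(result)
      return result;
    result = do_more();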
Closes #1310
|
|
Follow-up to 4b86113
Fixes https://github.com/curl/curl/issues/793
Fixes https://github.com/curl/curl/issues/942
|
|
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
|
|
|
|
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports a %{proxy_ssl_verify_result} --write-out
variable, similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
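A minimal libcurl sketch of the nested-TLS setup described above, using
the new CURLOPT_PROXY_* options (the URLs and file name are
illustrative):

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://origin.example.com/");
      /* the https:// scheme selects an HTTPS proxy; TLS to the proxy is
       * the "outer" session, TLS to the origin the "inner" one */
      curl_easy_setopt(curl, CURLOPT_PROXY,
                       "https://proxy.example.com:3128");
      curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }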
|
|
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
|
|
Several independent reports of infinite loops hanging in the
close_all_connections() function when closing a multi handle can be
fixed by first marking the connection to get closed before calling
Curl_disconnect.
This is fixing the symptom rather than the underlying problem, though.
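A sketch of the ordering this implies; using connclose() as the marking
mechanism is an assumption about the internals, not stated above:

    /* mark the connection for closure first ... */
    connclose(conn, "multi handle closed");
    /* ... then disconnect, so the close loop cannot pick it up again */
    Curl_disconnect(conn, FALSE);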
Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
|
|
In short, the easy handle needs to be disconnected from its connection
at this point, since the connection is still serving other easy handles.
In our app we can reliably reproduce a crash in our http2 stress test
that is fixed by this change. I can't easily reproduce the same test in
a small example.
This is the gdb/asan output:
==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
#0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
freed by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
#1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
#2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
#3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
#4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
previously allocated by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
#1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
#2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
#3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
#4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
Shadow bytes around the buggy address:
0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==11785==ABORTING
Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
[Switching to Thread 0xf27bfb40 (LWP 12324)]
0xf7fd8be9 in __kernel_vsyscall ()
(gdb) bt
#0 0xf7fd8be9 in __kernel_vsyscall ()
#1 0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#2 0xf4c803e7 in __GI_abort () at abort.c:89
#3 0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
#4 0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
#5 0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
#6 0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
#7 0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
#8 0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
#9 0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
#10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
#11 0x09c6de3f in ...
#12 0x09c378c5 in ...
#13 0x09c48133 in ...
#14 0x09c4d092 in ...
#15 0x0a2be6b6 in ...
#16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
#18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
Fixes #1083
|
|
The closure handle only ever has default timeouts set. To improve the
state somewhat we clone the timeouts from each added handle so that the
closure handle always has the same timeouts as the most recently added
easy handle.
Fixes #739
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer. While
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we reset this
starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of sudden speed bursts).
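A minimal sketch of the moving-reference computation, with assumed names
(not the actual libcurl helper):

    /* how long to wait so the speed measured from the sliding reference
     * point stays at or under the limit */
    static long sleep_ms_for_limit(curl_off_t bytes_since_ref,
                                   time_t ref_time, time_t now,
                                   curl_off_t limit_bytes_per_sec)
    {
      time_t min_elapsed;
      time_t actual = now - ref_time;
      if(limit_bytes_per_sec <= 0)
        return 0; /* no limit configured */
      min_elapsed = (time_t)(bytes_since_ref / limit_bytes_per_sec);
      return (actual < min_elapsed)
        ? (long)((min_elapsed - actual) * 1000)
        : 0; /* 0 also means: slide the reference point forward */
    }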
Closes #971
|
|
With HTTP/2, each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream and not the
connection.
Fixes #941
|
|
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to
set and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
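A sketch of the resulting split, with the call shapes assumed from the
description above:

    Curl_expire(data, 0);    /* a real timer now: fire ASAP */
    Curl_expire_clear(data); /* explicitly remove all the handle's
                                timers */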
|
|
Regression added in 790d6de48515. That limit was added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
|
|
CVE-2016-5421
Bug: https://curl.haxx.se/docs/adv_20160803C.html
Reported-by: Marcelo Echeverria and Fernando Muñoz
|
|
... and save the typedef'ed names for headers and external APIs.
|
|
The FTP wildcard init is now more properly done in Curl_pretransfer()
and the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal
pointer NULL when this feature was used with the multi_socket() API, as
it was done within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes #800
|
|
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these the last three
headers, always kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers; they all do funny #defines.
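For illustration, the bottom of a source file's include list under this
convention (the system header is just an example):

    #include <sys/types.h>   /* system headers come first */

    /* always the last three, in this order */
    #include "curl_printf.h" /* #defines printf to curl_mprintf etc. */
    #include "curl_memory.h"
    #include "memdebug.h"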
Reported-by: David Benjamin
Fixes #743
|
|
Regression introduced in 09b5a998
Bug: https://curl.haxx.se/mail/lib-2016-04/0084.html
Reported-by: BoBo
|
|
Makes curl connect to the given host+port instead of the host+port found
in the URL.
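A sketch of the libcurl side, assuming it is exposed as
CURLOPT_CONNECT_TO (host names and ports are examples):

    /* connect to localhost:8080 whenever the URL says example.com:80 */
    struct curl_slist *connect_to = NULL;
    connect_to = curl_slist_append(connect_to,
                                   "example.com:80:localhost:8080");
    curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);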
|
|
Previously, when a stream was closed with other than NGHTTP2_NO_ERROR
by RST_STREAM, the underlying TCP connection was dropped. This is
undesirable since there may be other streams multiplexed on it that are
perfectly fine. This change introduces the new error code
CURLE_HTTP2_STREAM, which indicates a stream error that only affects the
relevant stream; the connection is kept open. The existing CURLE_HTTP2
means a connection error in general.
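A sketch of how an application can tell the two apart (the handling
comments are illustrative):

    CURLcode rc = curl_easy_perform(curl);
    if(rc == CURLE_HTTP2_STREAM) {
      /* only this transfer's stream failed; the connection survives and
       * other multiplexed transfers are unaffected */
    }
    else if(rc == CURLE_HTTP2) {
      /* a connection-level HTTP/2 error */
    }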
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
... as it now is used by multi.c only.
|
|
now a file local function in multi.c
|
|
... called multi_do and multi_do_done as they're file local now.
|
|
Use the local, reasonably updated, 'now' value when creating the message
string to output for the timeout condition.
Fixes #619
|
|
Made possible since sh_getentry() now checks for invalid sockets itself,
and by narrowing the scope of the remove_sock_from_hash variable.
|
|
Simplify the code by using a single entry point that looks up a socket
in the socket hash. As indicated in #712, the code looked for
CURL_SOCKET_BAD at some point, which is ineffective/wrong, and this
makes it easier to avoid that.
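A sketch of what such a single lookup helper could look like (the name
sh_getentry comes from the neighboring commit message; Curl_hash_pick is
assumed to be the underlying hash lookup):

    static struct Curl_sh_entry *sh_getentry(struct curl_hash *sh,
                                             curl_socket_t s)
    {
      if(s == CURL_SOCKET_BAD)
        return NULL; /* an invalid socket is never in the hash */
      return Curl_hash_pick(sh, (char *)&s, sizeof(curl_socket_t));
    }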
|
|
Closes #712
|
|
Closes #703
|
|
Such a return value isn't documented but could still happen, and the
curl tool code checks for it. It would happen when the underlying
Curl_poll() function returns an error. Starting now we mask that error
as a user of curl_multi_wait() would have no way to handle it anyway.
Reported-by: Jay Satiro
Closes #707
|
|
The internal Curl_done() function uses Curl_expire() at times and that
uses the timeout list. Better to clean up the list once we're done using
it; not doing so caused a segfault.
Reported-by: 蔡文凱
Bug: https://curl.haxx.se/mail/lib-2016-02/0097.html
|