|
Curl_setup_transfer() can be called to set up a new individual transfer
over a multiplexed connection, so it shouldn't unset writesockfd.
Bug: #2520
Closes #2549
|
|
Commit 3c630f9b0af097663a64e5c875c580aa9808a92b partially reverted the
changes from commit dd7521bcc1b7a6fcb53c31f9bd1192fcc884bd56 because of
the problem that strcpy_url() was modified unilaterally without also
modifying strlen_url(). As a consequence, strcpy_url() again depended
on ASCII encoding.
This change fixes strlen_url() and strcpy_url() in parallel to use a
common, host-encoding-independent criterion for deciding whether a URL
character must be %-escaped.
Closes #2535
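As an illustration (hypothetical names and character set, not the
actual patch), such a shared criterion could have both functions
consult one predicate that tests membership in an explicit set instead
of relying on ASCII code-point ranges:

    #include <string.h>

    /* sketch: one escaping criterion shared by strlen_url() and
       strcpy_url(); an explicit allowlist avoids range tests that
       assume ASCII ordering */
    static int urlchar_needs_escaping(int c)
    {
      static const char allowed[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789-._~:/?#[]@!$&'()*+,;=%";
      return c == '\0' || strchr(allowed, c) == NULL;
    }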
|
|
Detected by OSS-Fuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8000
Broken by commit dd7521bcc1b7.
|
|
With commit 4272a0b0fc49a1ac0ceab5c4a365c9f6ab8bf8e2, curl-specific
character classification macros and functions were introduced in
curl_ctype.[ch] to avoid dependencies on the locale. This broke curl on
non-ASCII (e.g. EBCDIC) platforms. This change restores the previous set
of character classification macros when CURL_DOES_CONVERSIONS is
defined.
Closes #2494
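For illustration only (a sketch of the arrangement, not the actual
curl_ctype.[ch] contents), the restored setup amounts to something
like:

    #include <ctype.h>

    /* sketch: with CURL_DOES_CONVERSIONS (e.g. on EBCDIC platforms),
       fall back to the platform's classification functions; keep the
       locale-independent macros elsewhere */
    #ifdef CURL_DOES_CONVERSIONS
    #define ISDIGIT(x) (isdigit((int)((unsigned char)(x))))
    #else
    #define ISDIGIT(x) ((x) >= '0' && (x) <= '9')
    #endif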
|
|
When receiving REFUSED_STREAM, mark the connection for close and retry
streams accordingly on another/fresh connection.
Reported-by: Terry Wu
Fixes #2416
Fixes #1618
Closes #2510
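Sketched in terms of nghttp2's stream-close callback (placement and
field names assumed here, not the exact patch):

    /* sketch: on REFUSED_STREAM, poison the connection for reuse and
       let the transfer be retried on a fresh one */
    if(error_code == NGHTTP2_REFUSED_STREAM) {
      connclose(conn, "REFUSED_STREAM");  /* mark for close */
      stream->error = error_code;         /* picked up by retry logic */
    }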
|
|
This is what "HTTP/0.9" basically looks like.
Reported on IRC
Closes #2382
|
|
CVE-2018-1000122
Bug: https://curl.haxx.se/docs/adv_2018-b047.html
Detected by OSS-Fuzz
|
|
Reported-by: Michael Kaufmann
Fixes #2357
Closes #2362
|
|
Closes #2302
|
|
This is implemented as an output streaming stack of unencoders, with
the last one calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
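Conceptually (a hypothetical sketch of the shape, not the shipped
struct), each unencoder decodes its own layer and hands the output to
the stage below it:

    #include <curl/curl.h>

    /* sketch of one stage in the unencoder stack */
    struct unencoder {
      CURLcode (*write)(struct unencoder *self, const char *buf,
                        size_t len);   /* decode, then pass downstream */
      struct unencoder *downstream;    /* next stage; the bottom stage
                                          calls the client write proc */
    };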
|
|
- When uploading via chunked-encoding, don't compare file size to bytes
sent to determine whether the upload has finished.
Chunked-encoding adds its own overhead, which is why the bytes sent do
not equal the file size. Prior to this change, if a file was uploaded
with chunked-encoding and its size was known, it was possible for the
upload to end prematurely without sending the final few chunks. That
would result in a server hang waiting for the remaining data, likely
followed by a disconnect.
The scope of this bug is limited to some arbitrary file sizes which have
not been determined. One size that triggers the bug is 475020.
Bug: https://github.com/curl/curl/issues/2001
Reported-by: moohoorama@users.noreply.github.com
Closes https://github.com/curl/curl/pull/2010
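Roughly, the corrected completion test looks like this sketch
(hypothetical names):

    /* only equate bytes-sent with file size when NOT using chunked
       encoding, since chunked framing adds overhead */
    if(!upload_chunked && expected_size != -1 &&
       bytes_sent == expected_size)
      upload_done = TRUE;
    /* with chunked encoding, the terminating zero-length chunk
       ("0\r\n\r\n") marks the end of the upload instead */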
|
|
Fixes timeouts in the fuzzing tests for non-FTP protocols.
Closes #2016
|
|
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also cleaned up the Curl_timediff*() functions to avoid typecasts and
improved the descriptive comments.
Closes #2011
|
|
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them, in either direction, for
both 32-bit and 64-bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
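The overflow protection amounts to clamping before the multiply,
roughly like this sketch (constant names assumed):

    timediff_t Curl_timediff(struct curltime newer, struct curltime older)
    {
      timediff_t diff = (timediff_t)newer.tv_sec - older.tv_sec;
      if(diff >= (TIMEDIFF_T_MAX / 1000))
        return TIMEDIFF_T_MAX;     /* clamp instead of overflowing */
      if(diff <= (TIMEDIFF_T_MIN / 1000))
        return TIMEDIFF_T_MIN;
      return diff * 1000 + (newer.tv_usec - older.tv_usec) / 1000;
    }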
|
|
Closes #1878
|
|
... not only HTTP uses this now.
Closes #1875
|
|
... and slightly edited to follow our code style better.
|
|
Available in HTTP, SMTP and IMAP.
Deprecates the FORM API.
See CURLOPT_MIMEPOST.
Lib code and associated documentation.
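A minimal usage sketch of the new API (error checking omitted; the URL
and field contents are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      curl_mime *mime = curl_mime_init(curl);        /* the whole form */
      curl_mimepart *part = curl_mime_addpart(mime); /* one field */
      curl_mime_name(part, "name");
      curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
      curl_easy_perform(curl);
      curl_mime_free(mime);          /* free only after the transfer */
      curl_easy_cleanup(curl);
      return 0;
    }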
|
|
Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
`t_pretransfer`, and `t_starttransfer` to track the total times for
these activities when a redirect is followed. Previously, only the times
for the most recent request would be tracked.
Related changes:
- Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes`
now that the function only resets transfer sizes and no longer
modifies any of the progress timers.
- Add a bool to the `Progress` struct that is used to prevent
double-counting `t_starttransfer` times.
Added test case 1399.
Fixes #522 and Known Bug 1.8
Closes #1602
Reported-by: joshhe on github
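Schematically (field and helper names assumed), the change is from
overwriting to accumulating when a redirect starts a new request:

    /* sketch: add this request's name-lookup time to the running
       total instead of replacing the previous request's value */
    data->progress.t_nslookup +=
      Curl_timediff_us(lookup_done, data->progress.t_startsingle);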
|
|
This fixes redirects to IDN URLs
Fixes #1441
Closes #1762
Reported-by: David Lord
|
|
... since CURLOPT_URL should follow the same rules as other options:
they remain set until changed or cleared.
Added test 1551 to verify.
Fixes #1631
Closes #1632
Reported-by: Pavel Rochnyak
|
|
... with a strlen() if no size was set, and do this in the pretransfer
function so that the info is set early. Otherwise, the default strlen()
done on the POSTFIELDS data never sets state.infilesize.
Reported-by: Vincas Razma
Bug: #1294
|
|
Test 1261 added to verify.
Reported-by: Lloyd Fournier
Fixes #1489
Closes #1497
|
|
... since the total amount is low, this is faster and easier, and it
reduces memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
|
|
A) reduces the timeout lists drastically
B) prevents a lot of superfluous loops for timers that expire "in vain"
when they have actually already been extended to fire later on
|
|
... to properly use the dynamically set buffer size!
|
|
The data->req.uploadbuf struct member served no good purpose; instead
we use ->state.uploadbuffer directly. It makes it clearer in the code
which buffer is being used.
Removed the 'SingleRequest *' argument from the readwrite_upload()
prototype as it can be derived from the Curl_easy struct. Also made the
code in the readwrite_upload() function use the 'k->' shortcut for all
references to struct fields in 'data->req', which previously used a mix
of both styles.
|
|
No longer allocate the curl_llist head struct for lists separately.
Removes 17 (15%) tiny allocations in a normal "curl localhost" invoke.
Closes #1381
|
|
Closes #1356
|
|
... by removing the else branch after a return, break or continue.
Closes #1310
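I.e. the pattern changed like this sketch (hypothetical names):

    /* before: the else only adds nesting */
    if(!ptr)
      return CURLE_OUT_OF_MEMORY;
    else
      result = use(ptr);

    /* after: same behavior, one level less indentation */
    if(!ptr)
      return CURLE_OUT_OF_MEMORY;
    result = use(ptr);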
|
|
Using sftp to delete a file with CURLOPT_NOBODY set on a reused
connection would fail, as curl expected to get some data. It would thus
retry the command, which failed since the file had already been
deleted.
Fixes #1243
|
|
In order to make the code style more uniform everywhere
|
|
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports a %{proxy_ssl_verify_result} --write-out
variable, similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
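For example (hypothetical host and file names), a transfer through an
HTTPS proxy verified against its own CA bundle could be set up with the
corresponding library options like this:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      /* TLS to the proxy itself; its certificate is verified against
         the dedicated proxy CA bundle */
      curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:3128");
      curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
      /* the "inner" TLS session to the origin nests inside it */
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }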
|
|
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
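That is, sketches like the following (hypothetical variable names):

    /* keep intermediate math in time_t ... */
    time_t elapsed = t_now - t_start;
    /* ... and cast explicitly only at the public, long-typed boundary */
    long api_value = (long)elapsed;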
|
|
We had some confusion about when each function was used. We should not
act differently depending on locale anyway.
|
|
... to make it less likely that we forget that the function actually
does case-insensitive compares. Also replaced several invocations of
the function with a plain strcmp when case sensitivity is not an issue
(like comparing with "-").
|
|
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. Being uppercase,
they are also more visibly macros.
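The shortcut macros are, roughly (argument order assumed in this
sketch):

    /* single-socket shortcuts on top of the generic checker */
    #define SOCKET_READABLE(x,z) \
      Curl_select_check((x), CURL_SOCKET_BAD, CURL_SOCKET_BAD, (z))
    #define SOCKET_WRITABLE(x,z) \
      Curl_select_check(CURL_SOCKET_BAD, CURL_SOCKET_BAD, (x), (z))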
|
|
... like when an HTTP/0.9 response comes back without any headers at
all, just a body. This now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes #973
Assisted-by: Ray Satiro
|
|
Fixes #982
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits
with the cumulative average speed of the entire transfer. While this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as described in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively capping
bandwidth use (at least for quite some time).
So instead, we now use a "moving starting point" as reference: every
time at least a limit's worth of data has been transferred, the
starting point is reset to the current position. This gives a good
limiting effect that applies to the "current speed" with instant
reactivity (in case of a sudden speed burst).
Closes #971
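In sketch form (hypothetical names), the wait time is whatever keeps
the bytes moved since the reference point at or under the limit, and
the reference advances once at least a limit's worth of data has been
transferred:

    curl_off_t sent = cursize - refsize;  /* bytes since reference */
    if(limit > 0 && sent > 0) {
      /* minimum time in ms this much data may take at the limit */
      curl_off_t minimum = sent * 1000 / limit;
      curl_off_t actual = (curl_off_t)Curl_timediff(now, reftime);
      if(actual < minimum)
        wait_ms = minimum - actual;       /* pause to honor the limit */
    }
    if(sent >= limit) {                   /* advance the reference */
      refsize = cursize;
      reftime = now;
    }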
|
|
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code, and
I spotted that we don't retry failed HEAD requests, which seems totally
inconsistent; I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
|
|
Regression added in 790d6de48515. That change was made to avoid one
particular transfer starving out others. But when aborting due to
reaching the maxcount, the connection must be marked to be read from
again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
|
|
- Return value type must match function type.
s/CURLM_OUT_OF_MEMORY/CURLE_OUT_OF_MEMORY/
Caught by Travis CI
|
|
The FTP wildcard init is now more properly done in Curl_pretransfer()
and the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal
pointer NULL when this feature was used with the multi_socket() API,
since it ran inside the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes #800
|