|
The data->req.uploadbuf struct member served no good purpose; instead
we use ->state.uploadbuffer directly. That makes it clearer in the code
which buffer is being used.
Removed the 'SingleRequest *' argument from the readwrite_upload()
proto as it can be derived from the Curl_easy struct. Also made the
code in the readwrite_upload() function use the 'k->' shortcut for all
references to struct fields in 'data->req', which were previously done
with a mix of both styles.
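A rough sketch of the resulting pattern (simplified, not the full
function):

    static CURLcode readwrite_upload(struct Curl_easy *data, int *didwhat)
    {
      /* derive the request state from the easy handle rather than
         passing it as a separate argument */
      struct SingleRequest *k = &data->req;

      if(k->keepon & KEEP_SEND) {
        /* ... fill from data->state.uploadbuffer and send ... */
      }
      return CURLE_OK;
    }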
|
|
No longer allocate the curl_llist head struct for lists separately.
This removes 17 (15%) tiny allocations in a normal "curl localhost"
invocation.
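A sketch of the difference (function names approximate):

    /* before: the list head was a separate tiny allocation */
    struct curl_llist *list = Curl_llist_alloc(dtor);   /* can fail */

    /* after: the head lives inside the owning struct */
    struct curl_llist list;
    Curl_llist_init(&list, dtor);                       /* cannot fail */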
Closes #1381
|
|
Closes #1356
|
|
... by removing the else branch after a return, break or continue.
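For example (illustrative code):

    /* before */
    if(!result)
      return CURLE_OK;
    else
      result = do_more(conn);

    /* after: the else is redundant once the if-branch returns */
    if(!result)
      return CURLE_OK;
    result = do_more(conn);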
Closes #1310
|
|
Using sftp to delete a file with CURLOPT_NOBODY set on a reused
connection would fail, as curl expected to get some data. It would
therefore retry the command, which then fails since the file has
already been deleted.
Fixes #1243
|
|
In order to make the code style more uniform everywhere
|
|
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO;
a libcurl usage sketch follows at the end of this entry):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent of their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out
variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
|
|
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
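For example (illustrative only):

    time_t timeleft = deadline - time(NULL);  /* keep time_t internally */

    /* cast explicitly only at the public boundary that is typed long */
    long api_value = (long)timeleft;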
Closes #1131
|
|
We had some confusion about when each function should be used. We
should not act differently depending on locale anyway.
|
|
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of
the function with a plain strcmp when case sensitivity is not an issue
(like comparing with "-").
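For example (variable names illustrative):

    if(strcasecompare(type, "PEM"))   /* keyword match, case irrelevant */
      cert_type = CERT_PEM;

    if(!strcmp(filename, "-"))        /* exact match, plain strcmp */
      use_stdin = TRUE;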
|
|
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. They're also more
visibly macros.
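A sketch of what such shortcut macros look like (the actual definitions
may differ):

    #define SOCKET_READABLE(x,z) \
      Curl_select_check((x), CURL_SOCKET_BAD, CURL_SOCKET_BAD, (z))
    #define SOCKET_WRITABLE(x,z) \
      Curl_select_check(CURL_SOCKET_BAD, CURL_SOCKET_BAD, (x), (z))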
|
|
... like when an HTTP/0.9 response comes back without any headers at
all, just a body. This now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes #973
Assisted-by: Ray Satiro
|
|
Fixes #982
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer. While
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we reset this
starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
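A sketch of the moving-reference idea (hypothetical variable names, not
the actual curl code):

    /* bytes moved and time spent since the current reference point */
    curl_off_t moved = total_bytes - start_bytes;
    curl_off_t elapsed_ms = now_ms - start_ms;
    long wait_ms = 0;

    if(moved >= limit_bytes_per_sec) {
      /* a full limit's worth since the reference: slide it forward */
      start_bytes = total_bytes;
      start_ms = now_ms;
    }
    else if(moved * 1000 > limit_bytes_per_sec * elapsed_ms)
      /* ahead of schedule within this window: wait out the difference */
      wait_ms = (long)(moved * 1000 / limit_bytes_per_sec - elapsed_ms);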
Closes #971
|
|
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code and I
spotted that we don't retry failed HEAD requests, which seems totally
inconsistent; I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
|
|
Regression added in 790d6de48515. That change was made to avoid one
particular transfer from starving out others, but when aborting due to
reaching the maxcount, the connection must be marked to be read from
again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
|
|
- Return value type must match function type.
s/CURLM_OUT_OF_MEMORY/CURLE_OUT_OF_MEMORY/
Caught by Travis CI
|
|
The FTP wildcard init is now properly done in Curl_pretransfer() and
the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal
pointer NULL when this feature was used with the multi_socket() API,
since that code only ran within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes #800
|
|
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid conflicts with system includes, we include curl_printf.h
after any system headers. That makes these the last three headers,
always kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
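So the tail of a source file's include section looks like:

    /* ... all system and dependency headers above ... */

    /* the last three includes, always in this order: */
    #include "curl_printf.h"
    #include "curl_memory.h"
    #include "memdebug.h"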
Reported-by: David Benjamin
Fixes #743
|
|
When an upload is done, there are two places where that can be detected
and only one of them would rewind the input stream - which sometimes is
necessary for example when doing NTLM HTTP POSTs and more.
This could then end up with libcurl hanging.
Figured-out-by: Isaac Boukris
Reported-by: Anatol Belski
Fixes #741
|
|
now a file local function in multi.c
|
|
It would also seem that share.h is not required here either as there
are no references to the Curl_share structure or functions.
|
|
Previously, when HTTP/2 was enabled and used, and a stream had a known
content length, Curl_read was not called once there were no bytes left
to read. Because of this, we could not make sure that
http2_handle_stream_close was called for every stream. Since we use
http2_handle_stream_close to emit trailer fields, they were effectively
ignored. This commit changes the code so that Curl_read is called even
if no bytes are left to read, to ensure that http2_handle_stream_close
is called for every stream.
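A sketch of the change in control flow (hypothetical helper name):

    /* before: skipped the read entirely once the content length was
       consumed, so the stream-close handling never ran */
    if(bytes_remaining(conn))
      result = Curl_read(conn, conn->sockfd, buf, bytestoread, &nread);

    /* after: always call Curl_read(); a zero-byte read still lets the
       HTTP/2 layer run http2_handle_stream_close() and emit trailers */
    result = Curl_read(conn, conn->sockfd, buf, bytestoread, &nread);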
Discussed in https://github.com/bagder/curl/pull/564
|
|
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
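A sketch of the escaping pass (hypothetical buffer names, not curl's
actual implementation; 'location' holds the Location: header value):

    char out[1024];   /* illustrative fixed-size output buffer */
    const char *p;
    size_t o = 0;
    for(p = location; *p; p++) {
      unsigned char c = (unsigned char)*p;
      if((c < 0x20) || (c >= 0x80))   /* control or non-ASCII byte */
        o += (size_t)snprintf(&out[o], 4, "%%%02x", c);
      else
        out[o++] = (char)c;           /* pass printable ASCII through */
    }
    out[o] = '\0';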
Closes #473
|
|
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and more
importantly B) makes it impossible to accidentally leave the wrong
value in place when the handle is re-used etc.
This introduces state-init functionality in multi.c, so that we can set
a specific function to get called when we enter a state.
Curl_init_CONNECT is thus called when switching to the CONNECT state.
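A sketch of the state-init idea (simplified):

    /* called when the handle switches to the CONNECT state */
    static void Curl_init_CONNECT(struct Curl_easy *data)
    {
      /* copy from the 'set' struct into the transfer state */
      data->state.fread_func = data->set.fread_func_set;
      data->state.in = data->set.in_set;
    }

    /* multistate() looks up and runs the init function, if any,
       for the state being entered */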
Bug: https://github.com/bagder/curl/issues/346
Closes #346
|
|
... as otherwise a really fast pipe can "lock" one transfer for some
protocols, like with HTTP/2.
|
|
... don't try to increase the supposed file size on newlines if we
don't know what the file size is!
Patch-by: lzsiga
|
|
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
|
|
... to properly support options being set on the handle after it has
been added to the multi handle.
Bug: http://curl.haxx.se/mail/lib-2015-06/0122.html
Reported-by: Stefan Bühler
|
|
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
|
|
... also make the __func__ replacement in multi.c.
Prior to this change, debug builds would fail to build if the compiler
was pre-C99 and didn't support __func__.
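The usual fallback looks something like this (a sketch, not necessarily
the exact macro used):

    #if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)
    /* C99 and later compilers provide __func__ */
    #elif defined(__GNUC__)
    #define __func__ __FUNCTION__
    #else
    #define __func__ "unknown"
    #endif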
|
|
... which is necessary since the socket won't be readable but there is
data waiting in the buffer.
|
|
The factor of 8 is a bytes-to-bits conversion factor, but pkt_size and
rate_bps are both in bytes. When using the rate limiting option, curl
waits 8 times too long, and then transfers very quickly until the
average rate reaches the limit. The average rate follows the limit over
time, but the actual traffic is bursty.
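With both values in bytes, the expected wait is simply (sketch):

    /* sending pkt_size bytes at rate_bps bytes/second takes this long;
       the buggy version multiplied by 8, as if rate_bps were bits/sec */
    long wait_ms = (long)(pkt_size * 1000 / rate_bps);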
Thanks-to: Benjamin Gilbert
|
|
This header file must be included after all header files except
memdebug.h, as it does similar memory function redefinitions and can be
similarly affected by conflicting definitions in system or dependent
library headers.
|
|
... and as a consequence, introduce curl_printf.h with that re-define
magic instead and make all libcurl code use that instead.
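The re-define magic amounts to something like this (simplified):

    /* curl_printf.h, simplified: route all *printf calls through the
       curl_m*printf implementations */
    #define printf curl_mprintf
    #define fprintf curl_mfprintf
    #define snprintf curl_msnprintf
    #define vprintf curl_mvprintf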
|
|
Reported-by: Mohammad AlSaleh
Bug: http://curl.haxx.se/mail/lib-2015-01/0065.html
|
|
Prefer ! rather than comparing against NULL in if statements; added
comments and updated function spacing, argument spacing and line
spacing to be more readable.
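For example:

    /* preferred */
    if(!ptr)
      return CURLE_OUT_OF_MEMORY;

    /* rather than */
    if(ptr == NULL)
      return CURLE_OUT_OF_MEMORY;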
|
|
If the scratch buffer already existed when the CRLF conversion was
performed then the buffer pointer would be checked twice for NULL. This
second check is only necessary if the first check resulted in the call
to malloc().
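A sketch of the tightened logic (simplified from the CRLF conversion
code):

    if(!data->state.scratch) {
      data->state.scratch = malloc(2 * data->set.buffer_size);
      if(!data->state.scratch)
        return CURLE_OUT_OF_MEMORY;   /* only place a check is needed */
    }
    /* when the buffer already existed, no second NULL check required */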
|
|
Whilst I had moved the dot stuffing code from being performed before
the CRLF conversion to after it, in commit 4bd860a001, I had also moved
it outside the 'when something was read' block of code, which meant it
could perform the dot stuffing twice on a partial send if nread
happened to contain the right values. It also meant the function could
potentially read past the end of the buffer. This was highlighted by
the following warning:
warning: `nread' might be used uninitialized in this function
|
|
Added support for the automatic conversion of Unix newlines to CRLF
during mail uploads.
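The conversion itself is a simple expansion pass; a sketch with
hypothetical buffer names ('nread' bytes in 'buffer', output to a
'scratch' buffer at least twice the input size):

    size_t i, si;
    for(i = 0, si = 0; i < nread; i++) {
      if((buffer[i] == '\n') && (!i || (buffer[i - 1] != '\r')))
        scratch[si++] = '\r';   /* insert the missing CR before LF */
      scratch[si++] = buffer[i];
    }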
Feature: http://curl.haxx.se/bug/view.cgi?id=1456
|