* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications, unlike HTTP proxies, which receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out
variable, similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
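For illustration, a minimal libcurl sketch of a transfer through an HTTPS
proxy. The library-side names used here (CURLOPT_PROXY with an https://
scheme, CURLOPT_PROXY_CAINFO, CURLINFO_PROXY_SSL_VERIFYRESULT) are assumed
counterparts of the tool flags listed above:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        long verify = 0;
        /* the https:// scheme selects an HTTPS proxy; CONNECT requests are
           then sent inside the TLS session established with the proxy */
        curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:3128");
        /* CA bundle used to verify the proxy certificate (cf. --proxy-cacert) */
        curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
        curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          /* cf. the %{proxy_ssl_verify_result} --write-out variable */
          curl_easy_getinfo(curl, CURLINFO_PROXY_SSL_VERIFYRESULT, &verify);
          printf("proxy certificate verify result: %ld\n", verify);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }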
|
|
Exit with an error on the first transfer error instead of continuing to
do the rest of the URLs.
Discussion: https://curl.haxx.se/mail/archive-2016-11/0038.html
|
|
to consider ECONNREFUSED as a transient error.
Closes #1064
|
|
Fully implemented with the NSS backend only for now.
Reviewed-by: Ray Satiro
|
|
Fixes #1107
Reported-by: Adam Piggott
|
|
Suggested-by: Dan Jacobson
Issue: https://github.com/curl/curl/issues/1097
|
|
.. and add that --proto-redir and CURLOPT_REDIR_PROTOCOLS do not
override protocols denied by --proto and CURLOPT_PROTOCOLS.
- Add a test to enforce: --proto deny must override --proto-redir allow
Closes https://github.com/curl/curl/pull/1031
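A sketch of the libcurl side of this rule, per the clarification above
(protocol bitmask values as in curl/curl.h):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* only HTTP and HTTPS are allowed for this transfer at all */
        curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                         (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
        /* redirects are limited further to HTTPS; listing a protocol here
           that CURLOPT_PROTOCOLS denies would NOT re-enable it */
        curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, (long)CURLPROTO_HTTPS);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }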
|
|
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse" error, introduce an alias without FTP
in the name.
Closes https://github.com/curl/curl/pull/975
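A short sketch of checking for the new code; the FTP-less spelling
CURLE_WEIRD_SERVER_REPLY is assumed here, sharing its value with the old
CURLE_FTP_WEIRD_SERVER_REPLY:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode res;
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "pop3://mail.example.com/");
        res = curl_easy_perform(curl);
        /* generic "could not parse the server reply" failure */
        if(res == CURLE_WEIRD_SERVER_REPLY)
          fprintf(stderr, "server sent a reply we could not parse\n");
        curl_easy_cleanup(curl);
      }
      return 0;
    }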
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as described in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we reset this
starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of sudden speed bursts).
Closes #971
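A rough sketch of the moving starting point idea (not the actual libcurl
code; the struct, function name and seconds-based bookkeeping are made up
for illustration):

    #include <time.h>
    #include <curl/curl.h>   /* for curl_off_t */

    struct ratelimit {
      curl_off_t limit;       /* allowed bytes per second */
      curl_off_t start_size;  /* bytes transferred at the reference point */
      time_t start_time;      /* when the reference point was taken */
    };

    /* returns how many seconds the caller should wait before continuing */
    long rate_sleep(struct ratelimit *r, curl_off_t total, time_t now)
    {
      curl_off_t since_start = total - r->start_size;
      time_t elapsed = now - r->start_time;
      time_t minimum;

      if(r->limit <= 0)
        return 0;

      /* minimum time that since_start bytes should have taken at the limit */
      minimum = (time_t)(since_start / r->limit);

      /* once at least one "limit" worth of data has passed the reference
         point, move it forward so that earlier slow periods no longer
         inflate the average and mask the limit */
      if(since_start >= r->limit) {
        r->start_size = total;
        r->start_time = now;
      }

      /* ahead of schedule relative to the reference point: wait it out */
      if(minimum > elapsed)
        return (long)(minimum - elapsed);
      return 0;
    }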
|
|
for backward compatibility only
In other news, I changed one other reference to "Mac OS X" in the documentation (that I previously wrote) to say "macOS" instead.
|
|
Closes #883
|
|
Adds access to the HTTP version that was effectively used, to both
libcurl and curl.
Closes #799
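Assuming the libcurl side of this is CURLINFO_HTTP_VERSION (with
%{http_version} as the tool's --write-out counterpart), a minimal sketch:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        long http_version = 0;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          /* reports a CURL_HTTP_VERSION_* value for the version that was
             actually used on the connection */
          curl_easy_getinfo(curl, CURLINFO_HTTP_VERSION, &http_version);
          printf("HTTP version used: %ld\n", http_version);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }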
|
|
Reported-by: mgendre
Closes #784
|
|
Even though it is deprecated, document it so that people can find it, as
old scripts may still use it.
|
|
Makes curl connect to the given host+port instead of the host+port found
in the URL.
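The entry does not name the option, so as an assumption only: if this
refers to the connect-to mechanism, the libcurl equivalent would look
roughly like this (CURLOPT_CONNECT_TO and its HOST:PORT:HOST:PORT string
format are the assumed names here):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* connect to staging.example.com:8443 while the URL, Host: header
           and TLS SNI keep referring to example.com:443 */
        struct curl_slist *connect_to =
          curl_slist_append(NULL, "example.com:443:staging.example.com:8443");
        curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_perform(curl);
        curl_slist_free_all(connect_to);
        curl_easy_cleanup(curl);
      }
      return 0;
    }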
|
|
Make (most) example snippets use the example.com domain instead of the
random ones picked and used before. Some of those were probably
legitimate sites and some not. example.com is designed for this purpose.
|
|
It's a bad idea to send your passwords anywhere, especially over HTTP.
Modified example to send a picture instead.
Fixes #752
|
|
We never made a 7.25.1 release
|
|
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Support a new version setting to "force" the use of HTTP/2 over clear TCP
- Add the command line parameter "--http2-prior-knowledge" to the curl
  command line tool.
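Assuming the libcurl-side knob is the CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE
value for CURLOPT_HTTP_VERSION, a minimal sketch:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* skip the Upgrade handshake and speak HTTP/2 immediately on the
           cleartext connection (the server must expect h2c) */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                         (long)CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }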
|
|
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
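A minimal sketch of the libcurl option itself (the TFTP server address is
a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "tftp://192.168.0.1/file.bin");
        /* do not send TFTP options (such as blksize) in the request, for
           servers that cannot cope with them */
        curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }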
|
|
Bug: https://github.com/curl/curl/issues/666
Reported-by: baumanj@users.noreply.github.com
|
|
The behavior has been clarified in CURLOPT_FTP_USE_{EPRT,EPSV}.3 man
pages since curl-7_12_3~131. This patch makes it clear in the curl.1
man page, too.
Bug: https://bugzilla.redhat.com/1305970
|
|
.. also warn about letting the server pick the filename.
|
|
This is the new command line option to set the value for the existing
libcurl option CURLOPT_EXPECT_100_TIMEOUT_MS
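The new tool option is not named in this entry; a minimal sketch of the
existing libcurl option it exposes:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "payload");
        /* wait at most 1000 ms for a "100 Continue" reply before sending
           the request body anyway */
        curl_easy_setopt(curl, CURLOPT_EXPECT_100_TIMEOUT_MS, 1000L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }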
|
|
... it is just weird to include by default even if it still works.
|
|
... as the certificate is strictly speaking not private.
Reported-by: John Levon
|
|
- Warn that cookies without a domain are sent to any domain:
CURLOPT_COOKIELIST, CURLOPT_COOKIEFILE, --cookie
- Note that imported Set-Cookie cookies without a domain are no longer
exported:
CURLINFO_COOKIELIST, CURLOPT_COOKIEJAR, --cookie-jar
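A small sketch of the first point, using CURLOPT_COOKIELIST with a
Set-Cookie line (the cookie name/value is made up):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* no domain attribute: per the warning above, such a cookie is
           matched against (and sent to) any host */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                         "Set-Cookie: session=abc123");
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }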
|
|
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs, libcurl will behave in this way:
When the option is used, libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
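A minimal sketch of the new option with a schemeless URL:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* schemeless URL: without CURLOPT_DEFAULT_PROTOCOL libcurl would
           guess from the hostname and fall back to 'http' */
        curl_easy_setopt(curl, CURLOPT_URL, "example.com/index.html");
        /* with the option set, the supplied default is used instead */
        curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }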
|
|
- Clarify that FILE and SCP are disabled by default since 7.19.4
- Add that SMB and SMBS are disabled by default since 7.40.0
- Add CURLPROTO_SMBS to the list of protocols
|
|
Acknowledge that SSLv3 is also widely considered to be insecure.
Also, provide references for people who want to know more about why it's
insecure.
|
|
closes #376
|