In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
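A minimal libcurl sketch of how the new option might be used; the socket name and URL below are illustrative placeholders:
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* connect through the abstract socket name "mysocket" instead of TCP */
      curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "mysocket");
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }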
|
|
Reported in #1190
Reported-by: Dan Jacobson
|
|
curl.1 is regenerated
Fixes #1190
|
|
... as the former ones always go stale!
|
|
... and regenerated curl.1
|
|
|
|
and regenerated curl.1
Reported-by: Gisle Vanem
|
|
|
|
Fixed trailing whitespace and numerous formatting glitches
|
|
This is the first time we replace the manually edited curl.1 with the
generated one created by gen.pl and the individual option documentation
pages.
Do not edit this file, edit the individual pages and regenerate this
output.
This file will be generated by the build system soon and then removed
from git.
|
|
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
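A minimal libcurl sketch of the two-hop setup; both proxy addresses are placeholders:
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* first hop: the SOCKS proxy contacted before anything else */
      curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://socks.example.com:1080");
      /* second hop: the HTTP(S) proxy reached through the SOCKS proxy */
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }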
|
|
There's most likely no need to allow setting the SSLv2/3 version for an HTTPS
proxy. Those protocols are insecure by design and deprecated.
|
|
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes #1137
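A sketch of reading the new info from libcurl, assuming the string form is exposed as CURLINFO_SCHEME and the numeric CURLPROTO_* form as CURLINFO_PROTOCOL:
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      if(curl_easy_perform(curl) == CURLE_OK) {
        char *scheme = NULL;
        long proto = 0;
        /* string form, uppercase internal name such as "HTTPS" */
        curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);
        /* numeric form, one of the CURLPROTO_* bits */
        curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto);
        printf("scheme %s, protocol bit %ld\n", scheme, proto);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }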
|
|
|
|
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO;
a short libcurl sketch also follows at the end of this entry):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent of their --foo counterparts,
except --proxy-crlfile, which defaults to --crlfile, and --proxy-capath,
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
|
|
|
|
|
|
Exit with an error on the first transfer error instead of continuing to
do the rest of the URLs.
Discussion: https://curl.haxx.se/mail/archive-2016-11/0038.html
|
|
to consider ECONNREFUSED as a transient error.
Closes #1064
|
|
Fully implemented with the NSS backend only for now.
Reviewed-by: Ray Satiro
|
|
Fixes #1107
Reported-by: Adam Piggott
|
|
|
|
Suggested-by: Dan Jacobson
Issue: https://github.com/curl/curl/issues/1097
|
|
|
|
.. and add that --proto-redir and CURLOPT_REDIR_PROTOCOLS do not
override protocols denied by --proto and CURLOPT_PROTOCOLS.
- Add a test to enforce: --proto deny must override --proto-redir allow
Closes https://github.com/curl/curl/pull/1031
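A small libcurl sketch of the precedence rule: a protocol denied by CURLOPT_PROTOCOLS stays denied on redirects even if CURLOPT_REDIR_PROTOCOLS would allow it (values below are illustrative):
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* only HTTPS is allowed for the initial request... */
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS, (long)CURLPROTO_HTTPS);
      /* ...and HTTP stays denied on redirects even though it is listed here */
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                       (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }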
|
|
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse" error, introduce an alias without FTP
in the name.
Closes https://github.com/curl/curl/pull/975
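A brief sketch of checking for the condition from protocol-agnostic code, assuming the FTP-free alias is named CURLE_WEIRD_SERVER_REPLY (the same numeric value as the old name):
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/1");
      res = curl_easy_perform(curl);
      /* covers unparsable replies from imap/pop3/smtp as well as ftp */
      if(res == CURLE_WEIRD_SERVER_REPLY)
        fprintf(stderr, "server sent a reply we could not parse\n");
      curl_easy_cleanup(curl);
    }
    return 0;
  }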
|
|
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as described in the man page).
Consider a download that goes much slower than the limit for some time
(because bandwidth is used elsewhere, the server is slow, whatever the
reason); once things get better, curl would then simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much data as the limit has been transferred, we reset
this starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
Closes #971
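An illustrative sketch (not curl's actual code) of the moving-reference idea: the speed is measured since a reference point, and the reference point advances once at least a limit's worth of bytes has passed:
  struct ratelimit {
    long long ref_bytes; /* byte counter at the reference point */
    double ref_time;     /* time in seconds at the reference point */
  };

  /* Return how many seconds to pause so that the speed measured since the
     reference point does not exceed 'limit' bytes per second. */
  static double ratelimit_wait(struct ratelimit *rl, long long total_bytes,
                               double now, long long limit)
  {
    long long since_ref = total_bytes - rl->ref_bytes;
    double elapsed = now - rl->ref_time;
    double minimum;

    if(limit <= 0 || since_ref <= 0)
      return 0.0;

    /* how long moving 'since_ref' bytes should have taken at 'limit' */
    minimum = (double)since_ref / (double)limit;

    /* once a limit's worth of data has been moved since the reference
       point, advance the reference so the limit tracks the current speed
       rather than the whole-transfer average */
    if(since_ref >= limit) {
      rl->ref_bytes = total_bytes;
      rl->ref_time = now;
    }

    return (elapsed < minimum) ? (minimum - elapsed) : 0.0;
  }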
|
|
for backward compatibility only
In other news, I changed one other reference to "Mac OS X" in the documentation (that I previously wrote) to say "macOS" instead.
|
|
Closes #883
|
|
|
|
|
|
Adds access to the effectively used http version to both libcurl and
curl.
Closes #799
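A sketch of reading the value from libcurl, assuming it is exposed as CURLINFO_HTTP_VERSION returning one of the CURL_HTTP_VERSION_* values:
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      if(curl_easy_perform(curl) == CURLE_OK) {
        long httpver = 0;
        /* the HTTP version actually used for the transfer */
        curl_easy_getinfo(curl, CURLINFO_HTTP_VERSION, &httpver);
        printf("used HTTP version enum: %ld\n", httpver);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }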
|
|
|
|
Reported-by: mgendre
Closes #784
|
|
|
|
|
|
|
|
|
|
Even if deprecated, document it so that people will find it, as old
scripts may still use it.
|
|
|
|
|
|
Makes curl connect to the given host+port instead of the host+port found
in the URL.
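If this is the connect-to mechanism, a libcurl sketch might look as follows; the option name CURLOPT_CONNECT_TO and its "HOST:PORT:CONNECT-TO-HOST:CONNECT-TO-PORT" list format are assumptions here, and the host names are placeholders:
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *connect_to = NULL;
      /* for example.com:443, actually connect to backend.example.net:8443 */
      connect_to = curl_slist_append(connect_to,
                       "example.com:443:backend.example.net:8443");
      curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_slist_free_all(connect_to);
      curl_easy_cleanup(curl);
    }
    return 0;
  }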
|
|
Make (most) example snippets use the example.com domain instead of the
random ones picked and used before. Some of those were probably
legitimate sites and some not. example.com is designed for this purpose.
|
|
It's a bad idea to send your passwords anywhere, especially over HTTP.
Modified example to send a picture instead.
Fixes #752
|
|
We never made a 7.25.1 release
|
|
|
|
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new version value to "force" the use of HTTP/2 over clear TCP
- Add the command line parameter "--http2-prior-knowledge" to the Curl
  command line tool.
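A short libcurl sketch of forcing HTTP/2 over clear TCP, assuming the new version value is CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE:
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* skip the h2c Upgrade handshake and talk HTTP/2 from the start */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                       (long)CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }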
|
|
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
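A minimal libcurl sketch of the option; the TFTP URL is a placeholder:
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* do not send TFTP option requests to the server */
      curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/boot.img");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }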
|
|
Bug: https://github.com/curl/curl/issues/666
Reported-by: baumanj@users.noreply.github.com
|
|
|