When a username and password were provided in the URL, they were wrongly
removed from the stored URL, so subsequent uses of the same URL
wouldn't find the credentials. This made HTTP auth that uses multiple
connections (like Digest) misbehave.
Regression from 46e164069d1a5230 (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes #4228
Closes #4229
|
|
|
|
RFC 7838 section 5:
When using an alternative service, clients SHOULD include an Alt-Used
header field in all requests.
Removed CURLALTSVC_ALTUSED again (the feature is still EXPERIMENTAL, so
this is deemed OK).
You can disable sending this header just like you disable any other HTTP
header in libcurl.
Closes #4199
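A minimal sketch of how an application could opt out of sending it, assuming
the usual CURLOPT_HTTPHEADER convention (a header name followed by a colon and
no value disables an internally generated header) applies to Alt-Used like it
does to other headers; the URL is a placeholder:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* "Alt-Used:" with no value asks libcurl not to send that header */
      struct curl_slist *hdrs = curl_slist_append(NULL, "Alt-Used:");

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_perform(curl);

      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }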
|
|
Even though it cannot fall back to a lower HTTP version automatically, the
safer way to upgrade remains via CURLOPT_ALTSVC.
CURLOPT_H3 no longer has any bits that do anything and might be removed
before we remove the experimental label.
Updated the curl tool accordingly to use "--http3".
Closes #4197
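A minimal sketch of the Alt-Svc based upgrade path mentioned above, using the
existing CURLOPT_ALTSVC and CURLOPT_ALTSVC_CTRL options; the cache file name
and URL are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* enable the alt-svc cache and allow a switch to HTTP/3 when the
         server advertises it in an Alt-Svc: response header */
      curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
      curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL, (long)CURLALTSVC_H3);

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

With the tool, the direct (no-fallback) route is invoked with "curl --http3 <URL>".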
|
|
This is only the libcurl part that provides the information. There's no
user of the parsed value. This change includes three new tests for the
parser.
Ref: #3794
|
|
|
|
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes #3653
Closes #3790
NOTE: This commit was cherry-picked and is part of a series of commits
that added the authzid feature for upcoming 7.66.0. The series was
temporarily reverted in db8ec1f so that it would not ship in a 7.65.x
patch release.
Closes https://github.com/curl/curl/pull/4186
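A minimal sketch of the option in use, for an IMAP transfer with SASL PLAIN;
the host and identities are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "imap://mail.example.com/INBOX");

      /* authentication identity (authcid) and its password */
      curl_easy_setopt(curl, CURLOPT_USERNAME, "admin");
      curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

      /* authorisation identity (authzid): the identity to act as */
      curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "shared-mailbox");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

The command-line equivalent is the --sasl-authzid option added in the same
series.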
|
|
... to make it hold microseconds too.
Fixes #4165
Closes #4168
|
|
It was intended to pass in the size of the 'socks' array that is
also passed to these functions, but it was rarely actually checked or used,
and the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes #4169
|
|
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP/3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes #3500
|
|
Since more than one socket can be used by each transfer at any given time,
each sockhash entry now has its own hash table with the transfers using that
socket.
In addition, a sockhash entry can now be marked 'blocked = TRUE',
which makes the delete function just set 'removed = TRUE' instead
of removing it "for real", as a way to not pull the rug out from under
a parent function that iterates over the transfers of that same
sockhash entry.
Reported-by: Tom van der Woerdt
Fixes #3961
Fixes #3986
Fixes #3995
Fixes #4004
Closes #3997
|
|
These are for features that used to be openssl-only but were expanded
over time to support other SSL backends.
Closes #3985
|
|
Responses with status codes 1xx, 204 or 304 don't have a response body. For
such responses, don't parse these headers:
- Content-Encoding
- Content-Length
- Content-Range
- Last-Modified
- Transfer-Encoding
This change ensures that HTTP/2 upgrades work even if a
"Content-Length: 0" or a "Transfer-Encoding: chunked" header is present.
Co-authored-by: Daniel Stenberg
Closes #3702
Fixes #3968
Closes #3977
|
|
They need to be removed from the socket hash linked list with more care.
When sh_delentry() is called to remove a sockethash entry, remove all
individual transfers from the list first. To enable this, each Curl_easy struct
now stores a pointer to the sockethash entry to know how to remove itself.
Reported-by: Tom van der Woerdt and Kunal Ekawde
Fixes #3952
Fixes #3904
Closes #3953
|
|
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change, CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be feature release 7.66.0.
However, the next release will instead be a patch release, 7.65.1, and
will not contain any new features.
After the patch release, the reverted commits can be restored by
cherry-picking them:
git cherry-pick a14d72c a9499ff 8c1cc36 c2a8d52 0edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae37e2020722e6f61229d8ec64095b0a.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a1356a722ff9f4aeb983cd4eaf80ef221.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0c7163c6dcc91fd38edfea1f509ae75.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136d89987af885e2d7dff0a066a3e5817.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2fec5d4eb5a043936e4f7ce08015c177.
|
|
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes #3653
Closes #3790
|
|
|
|
Given that this member variable is not used by the SASL-based protocols,
there is no need to have it here.
Closes #3882
|
|
Given that this member variable is not used by the SASL-based protocols,
there is no need to have it here.
|
|
|
|
|
|
|
|
|
|
Closes #3846
|
|
With transfers being queued up, we only move one at a time back to the
CONNECT state, but now we mark moved transfers so that when a moved
transfer is confirmed "successful" (it connected), it will trigger the
move of another pending transfer. Previously, it would instead wait
until the transfer was done before doing this. This makes queued-up
pending transfers get processed (much) faster.
|
|
This limits all accepted input strings passed to libcurl to be less than
CURL_MAX_INPUT_LENGTH (8000000) bytes, for these API calls:
curl_easy_setopt() and curl_url_set().
The 8000000 number is arbitrarily picked and is meant to detect mistakes
or abuse, not to limit actual practical use cases. By limiting the
acceptable string lengths we also reduce the risk of integer overflows
all over.
NOTE: This does not apply to `CURLOPT_POSTFIELDS`.
Test 1559 verifies.
Closes #3805
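A small sketch of what the limit means for callers; the exact error codes are
not asserted here, only that over-long strings are rejected (8000001 stands in
for "one byte more than the limit"):

  #include <curl/curl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
    /* build a string longer than the 8000000-byte limit */
    size_t len = 8000001;
    char *huge = malloc(len + 1);
    if(!huge)
      return 1;
    memset(huge, 'a', len);
    huge[len] = '\0';

    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode rc = curl_easy_setopt(curl, CURLOPT_URL, huge);
      if(rc != CURLE_OK)
        printf("setopt rejected the string: %s\n", curl_easy_strerror(rc));
      curl_easy_cleanup(curl);
    }

    CURLU *u = curl_url();
    if(u) {
      CURLUcode uc = curl_url_set(u, CURLUPART_URL, huge, 0);
      if(uc)
        printf("curl_url_set rejected the string (code %d)\n", (int)uc);
      curl_url_cleanup(u);
    }

    free(huge);
    return 0;
  }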
|
|
... and disconnect ones that are too old instead of trying to reuse them.
The default max age is set to 118 seconds.
Ref: #3722
Closes #3782
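For reference, a minimal sketch of the application-visible knob for this age
limit, assuming the related CURLOPT_MAXAGE_CONN option from the same
timeframe; the URL is a placeholder:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

      /* only reuse cached connections younger than 30 seconds;
         older ones get closed (the library default is 118 seconds) */
      curl_easy_setopt(curl, CURLOPT_MAXAGE_CONN, 30L);

      curl_easy_perform(curl);  /* creates the connection */
      curl_easy_perform(curl);  /* may reuse it if still young enough */

      curl_easy_cleanup(curl);
    }
    return 0;
  }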
|
|
Remove the code too. The functionality has been disabled in code since
7.62.0. Setting this option is from now on simply ignored and has no
effect.
Closes #3654
|
|
As previously planned and documented in DEPRECATE.md, all pipelining
code is removed.
Closes #3651
|
|
Closes #3699
|
|
* Adjusted unit tests 2056, 2057
* do not generally close connections with CURLAUTH_NEGOTIATE after every request
* moved negotiatedata from UrlState to connectdata
* Added stream rewind logic for CURLAUTH_NEGOTIATE
* introduced negotiatedata::GSS_AUTHDONE and negotiatedata::GSS_AUTHSUCC
* Consider authproblem state for CURLAUTH_NEGOTIATE
* Consider reuse_forbid for CURLAUTH_NEGOTIATE
* moved and adjusted negotiate authentication state handling from
output_auth_headers into Curl_output_negotiate
* Curl_output_negotiate: ensure auth done is always set
* Curl_output_negotiate: Set auth done also if result code is
GSS_S_CONTINUE_NEEDED/SEC_I_CONTINUE_NEEDED as this result code may
also indicate the last challenge request (only works with disabled
Expect: 100-continue and CURLOPT_KEEP_SENDING_ON_ERROR -> 1)
* Consider "Persistent-Auth" header, detect if not present;
Reset/Cleanup negotiate after authentication if no persistent
authentication
* apply changes introduced with #2546 for negotiate rewind logic
Fixes #1261
Closes #1975
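For context, a minimal sketch of how an application turns on the Negotiate
authentication that this rework affects; the URL is a placeholder and the
empty ":" credential string is the conventional way to let the GSS-API/SSPI
layer supply the current credentials:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://intranet.example.com/");

      /* request SPNEGO/Negotiate authentication */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
      curl_easy_setopt(curl, CURLOPT_USERPWD, ":");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }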
|
|
|
|
- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
Closes #3627
|
|
Introduced in 8b6314ccfb, but not used anymore in current code. Unclear
since when.
Closes #3626
|
|
This allows the compiler to pack and align the structs better in
memory. For a rather feature-complete build on x86_64 Linux, gcc 8.1.2
makes the Curl_easy struct 4.9% smaller. From 6312 bytes to 6000.
Removed an unused struct field.
No functionality changes.
Closes #3610
|
|
Instead of using a fixed 256 byte buffer in the connectdata struct.
In my build, this reduces the size of the connectdata struct by 11.8%,
from 2160 to 1904 bytes with no functionality or performance loss.
This also fixes a bug in schannel's Curl_verify_certificate where it
called Curl_sspi_strerror when it should have called Curl_strerror for
a string from GetLastError. The only effect would have been no text or the
wrong text being shown for the error.
Co-authored-by: Jay Satiro
Closes #3612
|
|
and make CONNECT_ONLY connections never reuse any existing ones either.
Reported-by: Pavel Löbl
Bug: https://curl.haxx.se/mail/lib-2019-02/0064.html
Closes #3586
|
|
Heimdal includes on FreeBSD spewed out lots of them. Less so now.
Closes #3566
|
|
... since that data won't be used in the request anyway.
Fixes #3548
Reported-by: Renaud Allard
Close #3549
|
|
Attempt to add support for Secure Channel binding when negotiate
authentication is used. The problem to solve is that by default IIS
accepts channel bindings and curl doesn't utilise them. The result was a
401 response. The scope affects only the Schannel (winssl)-SSPI combination.
Fixes https://github.com/curl/curl/issues/3503
Closes https://github.com/curl/curl/pull/3509
|
|
Windows Extended Protection (aka SSL channel binding) is required
to log in to an NTLM IIS endpoint, otherwise the server returns 401
responses.
Fixes #3280
Closes #3321
|
|
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) to "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes #3442
|
|
Fixes #3436
Closes #3448
Problem 1
After LOTS of scratching my head, I eventually realized that even when doing
10 uploads in parallel, sometimes the socket callback to the application (the
one that tells it what to wait for on the socket) looked like it reflected the
status of just the single transfer that had changed state.
Digging into the code revealed that this was indeed the truth. When multiple
transfers are using the same connection, the application did not correctly get
the *combined* flags for all transfers which then could make it switch to READ
(only) when in fact most transfers wanted to get told when the socket was
WRITEABLE.
Problem 1b
A separate but related regression had also been introduced by me when I
cleaned up the connection/transfer association a while ago: the logic could
no longer find the connection and check whether it was marked as used by more
transfers, so it would prematurely remove the socket from the socket
hash table even when other transfers were still using it!
Fix 1
Make sure that each socket stored in the socket hash has a "combined" action
field of what to ask the application to wait for, that is potentially the ORed
action of multiple parallel transfers. And remove that socket hash entry only
if there are no transfers left using it.
Problem 2
The socket hash entry stored an association to a single transfer using that
socket - and when curl_multi_socket_action() was called to tell libcurl about
activities on that specific socket only that transfer was "handled".
This was WRONG, as a single socket/connection can be used by numerous parallel
transfers and not necessarily a single one.
Fix 2
We now store a list of handles in the socket hashtable entry and when libcurl
is told there's traffic for a particular socket, it now iterates over all
known transfers using that single socket.
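The application-facing side of this is the CURLMOPT_SOCKETFUNCTION callback;
a minimal sketch, where watch_socket()/unwatch_socket() are hypothetical
stand-ins for a real event loop, and where 'what' is now the combined action
of every transfer sharing the socket:

  #include <stdio.h>
  #include <curl/curl.h>

  /* hypothetical stand-ins for a real event loop (epoll, libevent, ...) */
  static void watch_socket(curl_socket_t s, int rd, int wr)
  {
    printf("watch fd %d read=%d write=%d\n", (int)s, rd, wr);
  }
  static void unwatch_socket(curl_socket_t s)
  {
    printf("stop watching fd %d\n", (int)s);
  }

  /* 'what' reflects all transfers currently using this socket,
     not just the one that last changed state */
  static int socket_cb(CURL *easy, curl_socket_t s, int what,
                       void *userp, void *socketp)
  {
    (void)easy; (void)userp; (void)socketp;
    if(what == CURL_POLL_REMOVE)
      unwatch_socket(s);
    else
      watch_socket(s, (what & CURL_POLL_IN) != 0,
                   (what & CURL_POLL_OUT) != 0);
    return 0;
  }

  int main(void)
  {
    CURLM *multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
    /* add easy handles and drive them with curl_multi_socket_action() */
    curl_multi_cleanup(multi);
    return 0;
  }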
|
|
This adds support for wildcard hosts in CURLOPT_RESOLVE. These are
tried last, so any non-wildcard entry is resolved first. If specified,
any host not matched by another CURLOPT_RESOLVE entry will use this
as a fallback.
Example: send a.com to 10.0.0.1 and everything else to 10.0.0.2:
curl --resolve *:443:10.0.0.2 --resolve a.com:443:10.0.0.1 \
https://a.com https://b.com
This is probably quite similar to using:
--connect-to a.com:443:10.0.0.1:443 --connect-to :443:10.0.0.2:443
Closes #3406
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
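The libcurl equivalent of the command line above, as a minimal sketch
(hosts and addresses taken from the example):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *resolve = NULL;

      /* a.com:443 goes to 10.0.0.1; every other host on port 443 falls
         back to the wildcard entry and goes to 10.0.0.2 */
      resolve = curl_slist_append(resolve, "a.com:443:10.0.0.1");
      resolve = curl_slist_append(resolve, "*:443:10.0.0.2");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);

      curl_easy_setopt(curl, CURLOPT_URL, "https://a.com/");
      curl_easy_perform(curl);

      curl_easy_setopt(curl, CURLOPT_URL, "https://b.com/");
      curl_easy_perform(curl);

      curl_slist_free_all(resolve);
      curl_easy_cleanup(curl);
    }
    return 0;
  }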
|
|
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them ahead of that switch.
Fixes #2873
Closes #3383
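A minimal sketch of the library option; the URL is a placeholder for a server
that still answers with headerless HTTP/0.9 responses:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://legacy.example.com/");

      /* explicitly accept HTTP/0.9 (body-only) responses; the plan in
         docs/DEPRECATE.md is to refuse them by default later on */
      curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

The command-line equivalent is "curl --http0.9 <URL>".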
|
|
Previously it was 30 minutes
|
|
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback-based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to detect the end of transfer when
trailing headers are present.
Test 1591 checks that trailing headers can be sent using libcurl.
Closes #3350
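A minimal sketch of the callback approach; the trailer header name/value, URL
and upload source are placeholders (with CURLOPT_UPLOAD and no read callback,
the body is read from stdin):

  #include <curl/curl.h>

  /* called once the chunked request body has been sent, so the
     application can append the trailing headers it wants */
  static int trailer_cb(struct curl_slist **trailers, void *userdata)
  {
    (void)userdata;
    *trailers = curl_slist_append(*trailers, "My-Checksum: deadbeef");
    return CURL_TRAILERFUNC_OK;  /* or CURL_TRAILERFUNC_ABORT */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* trailers require a chunked upload */
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Transfer-Encoding: chunked");

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

      curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
      curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);

      curl_easy_perform(curl);

      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }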
|
|
Delays stripping of trailing dots to after resolving the hostname.
Fixes #3022
Closes #3222
|
|
Allows an application to pass in a pre-parsed URL via a URL handle.
Closes #3227
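A minimal sketch of the new option; the URL and query are placeholders. Note
that the CURLU handle must be kept alive for as long as the transfer uses it:

  #include <curl/curl.h>

  int main(void)
  {
    CURLU *url = curl_url();
    CURL *curl = curl_easy_init();

    if(url && curl) {
      /* parse and tweak the URL up front with the URL API... */
      curl_url_set(url, CURLUPART_URL, "https://example.com/index.html", 0);
      curl_url_set(url, CURLUPART_QUERY, "page=2", 0);

      /* ...then hand the pre-parsed handle to the transfer instead
         of a string */
      curl_easy_setopt(curl, CURLOPT_CURLU, url);
      curl_easy_perform(curl);
    }

    if(curl)
      curl_easy_cleanup(curl);
    if(url)
      curl_url_cleanup(url);
    return 0;
  }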
|
|
The "connecting" function is used by multiple protocols, not only FTP
|