Age | Commit message | Author |
|
Closes #4184
|
|
Repeatedly we see problems where using curl_multi_wait() is difficult or
just awkward because if it has no file descriptor to wait for
internally, it returns immediately and leaves it to the caller to wait
for a small amount of time in order to avoid occasional busy-looping.
This is often missed or misunderstood, leading to underperforming
applications.
This change introduces curl_multi_poll() as a drop-in replacement
function that accepts the exact same set of arguments. This function
works identically to curl_multi_wait() - EXCEPT - for the case when
there's nothing to wait for internally, as then this function will by
itself wait for a "suitable" short time before it returns. This
effectively avoids all risks of busy-looping and should also make it less
likely that apps "over-wait".
This also changes the curl tool to use this function internally when
doing parallel transfers and changes curl_easy_perform() to use it
internally.
Closes #4163
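A minimal sketch (not part of the commit) of the intended migration, with a
placeholder 1000 ms timeout; curl_multi_poll() takes the exact same
arguments as curl_multi_wait():

#include <curl/curl.h>

/* Drive a multi handle with curl_multi_poll(): unlike curl_multi_wait(),
   it waits a short while internally even when there is no file descriptor
   to wait for, so no extra caller-side sleep is needed. */
static int drive(CURLM *multi)
{
  int still_running = 0;
  do {
    CURLMcode mc = curl_multi_perform(multi, &still_running);
    if(!mc && still_running)
      mc = curl_multi_poll(multi, NULL, 0, 1000, NULL);
    if(mc)
      return 1; /* bail out on any multi error */
  } while(still_running);
  return 0;
}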
|
|
... and remove some verbose messages we don't need. Made transfers from
facebook.com work better.
|
|
|
|
|
|
- enable debug log
- fix use of quiche API
- use download buffer
- separate header/body
Closes #4193
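For context only (not part of these fixes): against an HTTP/3-capable
libcurl build, for instance one using quiche, an application would
typically ask for HTTP/3 and turn on the debug log roughly like this;
CURL_HTTP_VERSION_3 is assumed to be available in such a build and the URL
is a placeholder:

#include <curl/curl.h>

/* Sketch: request HTTP/3 on an easy handle; assumes an HTTP/3-enabled
   libcurl (for instance built with quiche). */
static CURLcode fetch_h3(const char *url)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* debug log */
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return res;
}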
|
|
Following the plan laid out in DEPRECATED. Update docs accordingly and
verify in test 1174. The option now has to be set to allow HTTP/0.9
responses.
Closes #4191
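As a hedged illustration (not from the commit): an application that still
needs HTTP/0.9 responses would opt back in explicitly, for example with
CURLOPT_HTTP09_ALLOWED; the command-line tool has a corresponding --http0.9
switch.

#include <curl/curl.h>

/* Sketch: HTTP/0.9 responses are refused by default, so accepting them
   now requires an explicit opt-in on the easy handle. */
static void allow_http09(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
}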
|
|
|
|
|
|
Closes #4192
|
|
|
|
Closes #3780
|
|
|
|
As the NTLM code no longer calls any of the TLS libraries' specific MD4
functions, there is no need to call this function for each #ifdef.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Follow-up to 3af0e76 which added experimental H3 support.
Closes https://github.com/curl/curl/pull/4185
|
|
|
|
|
|
Closes #4183
|
|
Allow pretty much anything to be part of the ALPN identifier. In
particular the minus character, which is used in "h3-20" (in-progress
HTTP/3 versions) etc.
Updated test 356.
Closes #4182
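For illustration (not taken from the commit or the test): such an ALPN
identifier typically appears in an Alt-Svc response header advertising an
in-progress HTTP/3 draft, for example:

Alt-Svc: h3-20=":443"; ma=86400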
|
|
|
|
To aid debugging
Closes #4181
|
|
|
|
If HTTPAUTH_GSSNEGOTIATE was used for a POST request and
gss_init_sec_context() failed, the POST request was sent
with an empty body. This commit also restores the original
behavior of `curl --fail --negotiate`, which was changed
by commit 6c6035532383e300c712e4c1cd9fdd749ed5cf59.
Add regression tests 2077 and 2078 to cover this.
Fixes #3992
Closes #4171
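A minimal sketch of the kind of request affected (placeholder URL and
body): a POST with Negotiate auth and fail-on-error, corresponding to
`curl --fail --negotiate -u :` on the command line:

#include <curl/curl.h>

/* Sketch: POST with HTTP Negotiate (SPNEGO) auth plus --fail semantics. */
static CURLcode post_negotiate(void)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
    curl_easy_setopt(curl, CURLOPT_USERPWD, ":"); /* like -u : */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "payload=1");
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); /* like --fail */
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return res;
}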
|
|
Evgeny Grin, Peter Pih, Anton Malov and Marquis de Muesli
|
|
|
|
Regression from 5cf5d57ab9 (7.64.1)
Fixed-by: Lance Ware
Fixes #4176
Closes #4177
|
|
|
|
... to make it hold microseconds too.
Fixes #4165
Closes #4168
|
|
|
|
Reported-by: Michal Čaplygin
Fixes #4174
Closes #4175
|
|
Closes #3701
|
|
|
|
Closes #4167
|
|
Turned bad with commit b8894085000
Reported-by: niallor on github
Fixes #4172
Closes #4173
|
|
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but it was rarely actually checked or
used, and the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE
entries which should be used instead.
Closes #4169
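A hedged sketch of the pattern described; the function names below are
illustrative, not curl's actual internals, and the array size value is an
assumption:

#include <curl/curl.h>

/* Illustrative only: drop the separate size argument and rely on the
   array always having MAX_SOCKSPEREASYHANDLE entries. */
#define MAX_SOCKSPEREASYHANDLE 16

/* before: size passed along but rarely checked */
int getsock_old(curl_socket_t *socks, int numsocks);

/* after: callers always provide MAX_SOCKSPEREASYHANDLE entries */
int getsock_new(curl_socket_t socks[MAX_SOCKSPEREASYHANDLE]);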
|
|
Regression, broken in commit 65eb65fde64bd5f (curl 7.64.1)
Reported-by: Jonathan Cardoso Machado
Assisted-by: Jay Satiro
Fixes #4136
Closes #4162
|
|
|
|
Follow-up to eb9a604f. Mistake caused by me when I edited the commit
before push...
|
|
|
|
Closes #4157
|
|
... to avoid integer overflows later when multiplying by 1000 to
convert seconds to milliseconds.
Added test 1269 to verify.
Reported-by: Jason Lee
Closes #4166
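A small sketch of the hazard, with illustrative names: if a value parsed as
seconds is later multiplied by 1000 to get milliseconds, a large enough
input overflows, so the parsed value needs an upper bound before the
conversion:

#include <limits.h>

/* Sketch: clamp a seconds value so a later *1000 conversion to
   milliseconds cannot overflow a long. */
static long clamp_secs(long secs)
{
  if(secs > LONG_MAX/1000)
    secs = LONG_MAX/1000;
  return secs;
}

/* usage: long timeout_ms = clamp_secs(parsed_secs) * 1000; */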
|
|
... to make CURLOPT_MAX_RECV_SPEED_LARGE and
CURLOPT_MAX_SEND_SPEED_LARGE work correctly on subsequent transfers that
reuse the same handle.
Fixed-by: Ironbars13 on github
Fixes #4084
Closes #4161
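Sketch of the affected usage pattern (placeholder URLs and limit): the same
easy handle is reused for a second transfer, and the previously set speed
cap should still apply:

#include <curl/curl.h>

/* Sketch: two transfers on one reused handle with a receive-speed cap. */
static void two_limited_transfers(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                     (curl_off_t)100000); /* ~100 KB/s */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/first");
    curl_easy_perform(curl);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/second");
    curl_easy_perform(curl); /* reused handle, same speed cap applies */
    curl_easy_cleanup(curl);
  }
}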
|
|
... so that end-of-stream is detected properly.
Reported-by: Tom van der Woerdt
Fixes #4043
Closes #4160
|
|
When curl_multi_wait() returns OK without file descriptors to wait for,
it might already have done a long timeout.
Closes #4159
|