Age | Commit message | Author |
|
Closes #5335
|
|
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seems to be
roughly the same as before, possibly a few fewer.
See docs/DYNBUF.md for a description of the API.
Closes #5300
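A minimal usage sketch of the API, assuming the Curl_dyn_* names from
lib/dynbuf.h as described in docs/DYNBUF.md; this is internal libcurl
code (it only builds inside the curl source tree), and MY_MAX_SIZE plus
build_request_line() are hypothetical:

  #include "curl_setup.h"
  #include "dynbuf.h"

  #define MY_MAX_SIZE (1024*1024)  /* hypothetical cap for this buffer */

  static CURLcode build_request_line(struct dynbuf *buf, const char *path)
  {
    CURLcode result;
    Curl_dyn_init(buf, MY_MAX_SIZE);     /* set the allowed maximum size */
    result = Curl_dyn_add(buf, "GET ");  /* append a zero-terminated string */
    if(!result)
      result = Curl_dyn_addf(buf, "%s HTTP/1.1\r\n", path); /* printf-style */
    if(result) {
      Curl_dyn_free(buf);  /* safe to call even after a failed append */
      return result;
    }
    /* Curl_dyn_ptr(buf) and Curl_dyn_len(buf) expose the accumulated data */
    return CURLE_OK;
  }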
|
|
Follow-up to 7ff9222ced8c
|
|
Mentioned: https://curl.haxx.se/mail/lib-2020-01/0050.html
Closes #4814
|
|
Fixes #4525
Closes #4603
|
|
Assisted-by: Javier Blazquez
Ref #4525 (partial fix)
Closes #4600
|
|
An unknown content-encoding would get reported as CURLE_WRITE_ERROR if
the response is chunked-encoded.
Reported-by: Ilya Kosarev
Fixes #4310
Closes #4449
|
|
Reviewed-by: Jay Satiro
Reported-by: Thomas Vegas
Closes #4307
|
|
and remove 'header_recvbuf', not used for anything
Reported-by: Jeremy Lainé
Closes #4257
|
|
Closes #4256
|
|
Closes #4217
|
|
Closes #4204
|
|
- enable debug log
- fix use of quiche API
- use download buffer
- separate header/body
Closes #4193
|
|
With CURLOPT_TIMECONDITION set, a header is automatically added (e.g.
If-Modified-Since). Allow this to be replaced or suppressed with
CURLOPT_HTTPHEADER.
Fixes #4103
Closes #4109
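A minimal sketch of the new behavior: libcurl generates If-Modified-Since
from CURLOPT_TIMECONDITION/CURLOPT_TIMEVALUE, and a custom
CURLOPT_HTTPHEADER entry can now replace it, or suppress it by passing
the header name with no value. The URL and timestamps are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *hdrs = NULL;

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* ask libcurl to add If-Modified-Since automatically */
      curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                       (long)CURL_TIMECOND_IFMODSINCE);
      curl_easy_setopt(curl, CURLOPT_TIMEVALUE, 1577836800L); /* 2020-01-01 */

      /* replace the generated header with our own... */
      hdrs = curl_slist_append(hdrs,
                 "If-Modified-Since: Sat, 01 Feb 2020 00:00:00 GMT");
      /* ...or suppress it entirely with a valueless entry:
         hdrs = curl_slist_append(hdrs, "If-Modified-Since:"); */
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }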
|
|
There were a few leftover prototypes of Curl_ functions that we used to
export but no longer do. This removes those prototypes and cleans up any
comments still referring to them.
Curl_write32_le(), Curl_strcpy_url(), Curl_strlen_url(), Curl_up_free()
Curl_concat_url(), Curl_detach_connnection(), Curl_http_setup_conn()
were made static in 05b100aee247bb9bec8e9a1b0166496aa4248d1c.
Curl_http_perhapsrewind() made static in 574aecee208f79d391f10d57520b3.
For the remainder, I didn't trawl the Git logs hard enough to capture
their exact time of deletion, but they were all gone: Curl_splayprint(),
Curl_http2_send_request(), Curl_global_host_cache_dtor(),
Curl_scan_cache_used(), Curl_hostcache_destroy(), Curl_second_connect(),
Curl_http_auth_stage() and Curl_close_connections().
Closes #4096
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
|
|
- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
Closes #3627
|
|
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to detect the end of the transfer when
trailing headers are present.
Test 1591 checks that trailing headers can be sent using libcurl.
Closes #3350
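A minimal sketch of the callback approach using the
CURLOPT_TRAILERFUNCTION/CURLOPT_TRAILERDATA options named above; the
URL, trailer name and upload payload are placeholders, and trailers are
only sent for chunked uploads (here: an upload with unknown size):

  #include <string.h>
  #include <curl/curl.h>

  /* called by libcurl just before the final chunk: append our trailers */
  static int trailer_cb(struct curl_slist **trailers, void *userdata)
  {
    (void)userdata;
    *trailers = curl_slist_append(*trailers, "My-Trailer: some-value");
    return CURL_TRAILERFUNC_OK; /* CURL_TRAILERFUNC_ABORT stops the transfer */
  }

  struct upload { const char *data; size_t len; size_t sent; };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    struct upload *u = (struct upload *)userdata;
    size_t room = size * nitems;
    size_t left = u->len - u->sent;
    size_t n = left < room ? left : room;
    memcpy(buf, u->data + u->sent, n);
    u->sent += n;
    return n;  /* returning 0 ends the chunked upload and triggers trailers */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct upload u = { "chunked body", 12, 0 };
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* unknown size => chunked */
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &u);
      curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
      curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }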
|
|
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- clean up a few indentations/line continuations in manual examples
Closes https://github.com/curl/curl/pull/3037
|
|
... so that they can clear the original pointer on failure, which makes
the error-paths and their cleanups easier.
Closes #2992
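A hypothetical sketch of that pointer-to-pointer pattern (the struct and
function names are illustrative, not curl's): on failure the helper frees
the buffer and clears the caller's pointer, so the caller's error path
cannot double-free or use a dangling pointer:

  #include <stdlib.h>
  #include <string.h>

  struct buffer { char *mem; size_t len; };

  static int buffer_add(struct buffer **bufp, const char *data, size_t len)
  {
    struct buffer *buf = *bufp;
    char *newmem = realloc(buf->mem, buf->len + len);
    if(!newmem) {
      free(buf->mem);
      free(buf);
      *bufp = NULL;  /* the caller's pointer is cleared on failure */
      return -1;
    }
    memcpy(newmem + buf->len, data, len);
    buf->mem = newmem;
    buf->len += len;
    return 0;
  }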
|
|
Fixes gcc-8 picky compiler warnings
Reported-by: Rikard Falkeborn
Bug: #2560
Closes #2568
|
|
Found via `codespell`
Closes #2389
|
|
... don't consider it an error!
Assisted-by: Jay Satiro
Reported-by: Łukasz Domeradzki
Fixes #2365
Closes #2375
|
|
... not only HTTP uses this now.
Closes #1875
|
|
... and slightly edited to follow our code style better.
|
|
Available in HTTP, SMTP and IMAP.
Deprecates the FORM API.
See CURLOPT_MIMEPOST.
Lib code and associated documentation.
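A minimal sketch of the new API from the application side, assuming the
documented curl_mime_* calls and CURLOPT_MIMEPOST; the URL, field names
and file name are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_mime *mime = curl_mime_init(curl);
      curl_mimepart *part = curl_mime_addpart(mime);

      /* a simple name/value field, like an HTML form field */
      curl_mime_name(part, "greeting");
      curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);

      /* a file upload part */
      part = curl_mime_addpart(mime);
      curl_mime_name(part, "upload");
      curl_mime_filedata(part, "report.txt"); /* placeholder file name */

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/submit");
      curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime); /* replaces FORM API */
      curl_easy_perform(curl);

      curl_mime_free(mime);
      curl_easy_cleanup(curl);
    }
    return 0;
  }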
|
|
Make the name reflect its use better, and add a short comment describing
what it's for.
|
|
Ref: https://github.com/curl/curl/pull/1160
|
|
Fixes #986
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
This commit ensures that streams which were closed in the
on_stream_close callback get passed to http2_handle_stream_close.
Previously, this might not happen. To achieve this, we increment the
drain property to forcibly call the recv function for that stream.
To more accurately check that we have no pending events before shutting
down the HTTP/2 session, we sum up the drain properties into
http_conn.drain_total. We only shut down the session if that value is 0.
With this commit, when a stream is closed before the response header
fields have been read, the error code CURLE_HTTP2_STREAM is returned
even if the HTTP/2 level error is NO_ERROR. This signals the upper layer
that the stream was closed by error, just like a TCP connection close in
HTTP/1.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
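A hypothetical sketch of the bookkeeping described above; the struct and
field names are illustrative, not curl's actual internals. Each stream
closed in the close callback bumps a per-stream drain counter, the
counters feed a per-connection total, and the session is only shut down
once that total drops to zero:

  #include <stddef.h>

  struct stream_ctx {
    int drain;   /* >0: force another recv pass for this stream */
    int closed;
  };

  struct conn_ctx {
    size_t drain_total;  /* sum of all per-stream drain counters */
  };

  /* called when the HTTP/2 layer reports a stream close */
  static void on_stream_closed(struct conn_ctx *conn, struct stream_ctx *stream)
  {
    stream->closed = 1;
    stream->drain++;      /* make sure the stream gets polled once more */
    conn->drain_total++;  /* keep the connection-wide total in sync */
  }

  /* called after the recv path has drained a stream */
  static void stream_drained(struct conn_ctx *conn, struct stream_ctx *stream)
  {
    if(stream->drain) {
      stream->drain--;
      conn->drain_total--;
    }
  }

  /* only shut the session down when no stream still has pending events */
  static int may_shutdown_session(const struct conn_ctx *conn)
  {
    return conn->drain_total == 0;
  }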
|
|
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked, but trailer fields are now always
available.
Since trailer fields are relatively rare these days (though gRPC uses
them extensively), the buffer for trailer fields is only allocated when
we detect that a HEADERS frame containing trailer fields has started. We
use the Curl_add_buffer_* functions to buffer all trailers, just like we
do for regular header fields, and then deliver them when the stream is
closed. We have to be careful here so that all data is delivered to the
upper layer before the trailers are sent to the application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic. I have not tested it, but it could add more overhead.
Closes #564
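A rough sketch of how trailer HEADERS can be recognized in an nghttp2
header callback; the callback signature, NGHTTP2_HEADERS and
NGHTTP2_HCAT_HEADERS are nghttp2's, while buffer_trailer() stands in for
the Curl_add_buffer_* bookkeeping:

  #include <stdio.h>
  #include <nghttp2/nghttp2.h>

  /* hypothetical helper standing in for the internal trailer buffering */
  static void buffer_trailer(const uint8_t *name, size_t namelen,
                             const uint8_t *value, size_t valuelen)
  {
    printf("trailer %.*s: %.*s\n", (int)namelen, (const char *)name,
           (int)valuelen, (const char *)value);
  }

  static int on_header(nghttp2_session *session, const nghttp2_frame *frame,
                       const uint8_t *name, size_t namelen,
                       const uint8_t *value, size_t valuelen,
                       uint8_t flags, void *user_data)
  {
    (void)session; (void)flags; (void)user_data;
    if(frame->hd.type == NGHTTP2_HEADERS &&
       frame->headers.cat == NGHTTP2_HCAT_HEADERS) {
      /* a HEADERS frame after the response headers carries trailer fields:
         buffer them here and deliver them when the stream closes */
      buffer_trailer(name, namelen, value, valuelen);
    }
    return 0;
  }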
|
|
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
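A short sketch of the stream-user-data approach; the two
nghttp2_session_*_stream_user_data calls are nghttp2's API, while
struct easy_handle is a placeholder for the per-transfer SessionHandle:

  #include <nghttp2/nghttp2.h>

  struct easy_handle;  /* placeholder for the per-transfer handle */

  /* when the request for a stream is submitted, attach its handle */
  static void attach_handle(nghttp2_session *h2, int32_t stream_id,
                            struct easy_handle *easy)
  {
    nghttp2_session_set_stream_user_data(h2, stream_id, easy);
  }

  /* in a frame callback, map the stream id back to its handle; a NULL
     result means the stream is already gone, so ignore the frame and
     return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE */
  static struct easy_handle *lookup_handle(nghttp2_session *h2,
                                           int32_t stream_id)
  {
    return (struct easy_handle *)
           nghttp2_session_get_stream_user_data(h2, stream_id);
  }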
|
|
Use a curl_off_t for upload left
|
|
Previously, when we paused because we ran out of buffer, we just threw
away unread data in the connection buffer. This broke protocol framing,
and I saw occasional FRAME_SIZE_ERROR. This commit fixes the issue by
remembering how much data has been read; in the next iteration, we
process the remaining data.
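A hypothetical sketch of that idea (names are illustrative, not curl's):
when the receiver pauses after consuming only part of the data, keep the
unread tail and hand it back first on the next pass:

  #include <stdlib.h>
  #include <string.h>

  struct pausebuf {
    unsigned char *mem;
    size_t len;  /* bytes still waiting to be consumed */
  };

  /* only 'used' of 'len' bytes could be consumed before pausing: keep the
     rest instead of throwing it away (which broke the HTTP/2 framing) */
  static int save_leftover(struct pausebuf *pb, const unsigned char *data,
                           size_t used, size_t len)
  {
    size_t left = len - used;
    unsigned char *copy = malloc(left ? left : 1);
    if(!copy)
      return -1;
    memcpy(copy, data + used, left);
    free(pb->mem);
    pb->mem = copy;
    pb->len = left;
    return 0;
  }

  /* on the next iteration, drain the remembered data before reading more */
  static size_t take_leftover(struct pausebuf *pb, unsigned char *dst,
                              size_t max)
  {
    size_t n = pb->len < max ? pb->len : max;
    if(!n)
      return 0;
    memcpy(dst, pb->mem, n);
    memmove(pb->mem, pb->mem + n, pb->len - n);
    pb->len -= n;
    return n;
  }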
|
|
... which is necessary since the socket won't be readable but there is
data waiting in the buffer.
|
|
... from the connection struct. The stream one is the 'struct HTTP'
which is kept in the SessionHandle struct (easy handle).
Look up streams for incoming frames in the stream hash; hashing is based
on the stream id and we get the SessionHandle for the incoming stream
that way.
|
|