Available in HTTP, SMTP and IMAP.
Deprecates the FORM API.
See CURLOPT_MIMEPOST.
Lib code and associated documentation.
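A minimal sketch of how an application might use the new MIME API in place of
the deprecated curl_formadd()/CURLOPT_HTTPPOST pair (the URL and field name
here are made up for illustration):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* build a MIME structure with one simple name/value part */
      curl_mime *mime = curl_mime_init(curl);
      curl_mimepart *part = curl_mime_addpart(mime);
      curl_mime_name(part, "greeting");                    /* hypothetical field */
      curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
      curl_easy_perform(curl);

      curl_mime_free(mime);      /* free only after the transfer is done */
      curl_easy_cleanup(curl);
    }
    return 0;
  }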
|
|
Ref #1012
Figured-out-by: Tatsuhiro Tsujikawa
|
|
Add a connection check function to HTTP/2, modeled on the RTSP one. This
causes PINGs to be handled the next time the connection is reused.
Closes #1521
|
|
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
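A rough sketch of the shape of such a callback; the struct, flag and function
names below are simplified stand-ins, not the actual Curl_handler definitions:

  /* stand-ins for curl internals, for illustration only */
  struct connection;                  /* opaque connection object */

  #define CHECK_ISDEAD   (1 << 0)     /* hypothetical: "is the connection dead?" */
  #define RESULT_DEAD    (1 << 0)     /* hypothetical result bit */

  struct proto_handler {
    const char *scheme;
    /* the new per-protocol callback: run protocol-specific checks on an
       existing connection, e.g. RTSP health checks or HTTP/2 PING handling */
    unsigned int (*connection_check)(struct connection *conn,
                                     unsigned int checks_to_perform);
  };

  static unsigned int rtsp_conncheck(struct connection *conn,
                                     unsigned int checks_to_perform)
  {
    unsigned int result = 0;
    (void)conn;
    if(checks_to_perform & CHECK_ISDEAD) {
      /* a protocol-specific liveness probe would go here; report
         RESULT_DEAD when the connection should not be reused */
    }
    return result;
  }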
|
|
Torture mode with test 1021 found it.
|
|
mk-lib1521.pl generates a test program (lib1521.c) that calls
curl_easy_setopt() for every known option with a few typical values to
make sure they work (ignoring the return codes).
Some small changes were necessary to avoid asserts and NULL accesses
when doing this.
The perl script needs to be manually rerun when we add new options.
Closes #1543
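The generated file is essentially a long list of calls of this shape, with
return codes deliberately ignored; a hand-written, abbreviated imitation (the
options chosen here are just representative):

  #include <curl/curl.h>

  static void exercise_options(CURL *curl)
  {
    CURLcode res;
    res = curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);                 (void)res;
    res = curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); (void)res;
    res = curl_easy_setopt(curl, CURLOPT_WRITEDATA, NULL);             (void)res;
    res = curl_easy_setopt(curl, CURLOPT_TIMEOUT, 0L);                 (void)res;
    /* ...one such block per known option, with a few typical values each... */
  }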
|
|
... as it does extra checks to actually work.
Reported-by: jonrumsey at github
Fixes #1504
|
|
... since the total amount is low, this is faster, easier and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
|
|
A) reduces the timeout lists drastically
B) prevents a lot of superfluous loops for timers that expire "in vain"
when they have actually already been extended to fire later on
|
|
This fixes the following clang warnings:
http2.c:184:27: error: no previous extern declaration for non-static
variable 'Curl_handler_http2' [-Werror,-Wmissing-variable-declarations]
http2.c:204:27: error: no previous extern declaration for non-static
variable 'Curl_handler_http2_ssl'
[-Werror,-Wmissing-variable-declarations]
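The usual fix for -Wmissing-variable-declarations is to give the variable a
previous extern declaration in a header that the defining file includes; a
generic illustration (not the actual curl header):

  /* handler.h (hypothetical) */
  struct proto_handler {
    const char *scheme;
  };
  extern const struct proto_handler my_handler;   /* the "previous declaration" */

  /* handler.c (hypothetical) */
  #include "handler.h"
  const struct proto_handler my_handler = { "https" };   /* no warning now */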
|
|
Add missing newhandle free call in push_promise().
Closes #1416
|
|
In release mode, MinGW complains:
error: unused parameter 'lib_error_code' [-Werror=unused-parameter]
|
|
Reported-by: zelinchen@users.noreply.github.com
Fixes #1229
|
|
When removing an easy handle from a multi before it completed its
transfer, and it had pushed streams, it would segfault due to the pushed
counter not being cleared.
Fixed-by: zelinchen@users.noreply.github.com
Fixes #1249
|
|
Ref: https://github.com/curl/curl/pull/1160
|
|
... when checking for a too large request.
|
|
The function only exists since nghttp2 1.12.0.
Bug: https://github.com/curl/curl/commit/a4d8888#commitcomment-19985676
Reported-by: Michael Kaufmann
|
|
Closes #1125
|
|
- Improve performance by using a huge HTTP/2 window size.
Bug: https://github.com/curl/curl/issues/1102
Reported-by: afrind@users.noreply.github.com
Assisted-by: Tatsuhiro Tsujikawa
|
|
- In Curl_http2_switched don't call memcpy when src is NULL.
Curl_http2_switched can be called like:
Curl_http2_switched(conn, NULL, 0);
.. and prior to this change memcpy was then called like:
memcpy(dest, NULL, 0)
.. causing address sanitizer to warn:
http2.c:2057:3: runtime error: null pointer passed as argument 2, which
is declared to never be null
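Passing NULL as the source pointer of memcpy() is undefined behavior even when
the length is zero, so the usual fix is to guard the call; a minimal sketch of
the pattern (not the exact curl code):

  #include <string.h>

  static void stash_bytes(char *dest, const char *src, size_t len)
  {
    /* memcpy(dest, NULL, 0) is undefined behavior: only copy when
       there is really something to copy */
    if(src && len)
      memcpy(dest, src, len);
  }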
|
|
Discussed: https://curl.haxx.se/mail/lib-2016-11/0087.html
|
|
Previously, we just ignored the "Connection" header field, but the HTTP/2
specification actually prohibits a few more header fields. This commit
ignores all of them so that we don't send these bad header fields.
Bug: https://curl.haxx.se/mail/archive-2016-10/0033.html
Reported-by: Ricki Hirner
Closes https://github.com/curl/curl/pull/1092
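The connection-specific fields that HTTP/2 forbids (RFC 7540, section 8.1.2.2)
can be filtered with a simple case-insensitive name check; a sketch of the
idea, with a made-up helper name:

  #include <stddef.h>
  #include <strings.h>   /* strcasecmp() */

  /* hypothetical helper: return 1 if this header field must not be sent
     over HTTP/2 (connection-specific fields per RFC 7540, 8.1.2.2) */
  static int h2_banned_header(const char *name)
  {
    static const char *const banned[] = {
      "Connection", "Keep-Alive", "Proxy-Connection",
      "Transfer-Encoding", "Upgrade"
    };
    size_t i;
    for(i = 0; i < sizeof(banned)/sizeof(banned[0]); i++)
      if(!strcasecmp(name, banned[i]))
        return 1;
    return 0;   /* "TE" is a special case: only "TE: trailers" is allowed */
  }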
|
|
We had some confusion about when each function was used. We should not
act differently in different locales anyway.
|
|
... by making sure we don't count down the "upload left" counter when the
upload size is unknown, so that such a transfer is allowed to continue
forever.
Fixes #996
|
|
Fixes #982
|
|
Follow-up to c3e906e9cd0f, seems like a more appropriate error code
Suggested-by: Jay Satiro
|
|
Fixes #986
|
|
With HTTP/2, each transfer is made in an individual logical stream over the
connection, so most errors that previously caused the connection to get
force-closed now instead just kill the stream and not the connection.
Fixes #941
|
|
.. also remove same from scp
|
|
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
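Conceptually this means always polling the socket for readability, not just
for writability while uploading; a generic poll()-style sketch of the idea
(not curl's internal GETSOCK code):

  #include <poll.h>

  /* which events to wait for on an HTTP/2 connection: the server may send
     frames (PING, WINDOW_UPDATE, GOAWAY, ...) at any time, so POLLIN is
     always included, even in the middle of an upload */
  static short h2_poll_events(int uploading)
  {
    short events = POLLIN;
    if(uploading)
      events |= POLLOUT;
    return events;
  }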
|
|
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
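For an application that wants the old behavior back, disabling the new default
is a one-liner; a minimal sketch:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* TCP_NODELAY is now on by default; pass 0L to turn it off again
         (the command line equivalent is --no-tcp-nodelay) */
      curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }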
|
|
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to set
and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
|
|
... and save the typedef'ed names for headers and external APIs.
|
|
... when generating them, not "2.0", as the protocol is called just
HTTP/2 and nothing else.
|
|
curl's representation of HTTP/2 responses involves transforming the
response to a format that is similar to HTTP/1.1. Prior to this change,
curl would do this by separating header names and values with only a
colon, without introducing a space after the colon.
While this is technically a valid way to represent an HTTP/1.1 header
block, it is much more common to see a space following the colon. This
change introduces that space, to ensure that incautious tools are safely
able to parse the header block.
This also ensures that the difference between the HTTP/1.1 and HTTP/2
response layout is as minimal as possible.
Bug: https://github.com/curl/curl/issues/797
Closes #798
Fixes #797
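In practice the change is just in the format string used when curl reassembles
an HTTP/1.1-style header line from an HTTP/2 name/value pair; a simplified
sketch (not the exact curl code):

  #include <stdio.h>

  /* turn an HTTP/2 header name/value pair back into HTTP/1.1 style */
  static int write_header_line(char *buf, size_t len,
                               const char *name, const char *value)
  {
    /* previously "%s:%s\r\n"; the space after the colon is the new part */
    return snprintf(buf, len, "%s: %s\r\n", name, value);
  }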
|
|
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these the last three
headers, always kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
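The tail of a typical lib/*.c file then ends with these includes, placed after
every system header (the comment wording below is illustrative):

  /* The last #include files, kept in this order so the printf and memory
     redefinitions cannot interfere with system or external headers: */
  #include "curl_printf.h"
  #include "curl_memory.h"
  #include "memdebug.h"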
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
- Error if a header line is larger than supported.
- Warn if cumulative header line length may be larger than supported.
- Allow spaces when parsing the path component.
- Make sure each header line ends in \r\n. This fixes an out-of-bounds access.
- Disallow header continuation lines until we decide what to do.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Since we write the header field to a temporary location, not to the memory
that the upper layer provides, incrementing drain should not happen.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
This commit ensures that streams which were closed in the on_stream_close
callback get passed to http2_handle_stream_close. Previously, this might
not happen. To achieve this, we increment the drain property to forcibly
call the recv function for that stream.
To more accurately check that we have no pending events before shutting
down the HTTP/2 session, we sum up the drain properties into
http_conn.drain_total. We only shut down the session if that value is 0.
With this commit, when a stream is closed before the response header
fields have been read, the error code CURLE_HTTP2_STREAM is returned even
if the HTTP/2 level error is NO_ERROR. This signals to the upper layer
that the stream was closed by error, just like a TCP connection close in
HTTP/1.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
This commit ensures that data from the network is processed before the
HTTP/2 session is terminated. This is achieved by pausing nghttp2 whenever
a stream other than the one for the current easy handle receives data.
This commit also fixes a bug where processing would sometimes hang when
multiple HTTP/2 streams were multiplexed.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
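With nghttp2 the pausing is done by returning NGHTTP2_ERR_PAUSE from a
callback, which makes nghttp2_session_mem_recv() stop and report how many
bytes it consumed so the remainder can be processed later; a sketch of the
pattern, with simplified and hypothetical per-connection bookkeeping:

  #include <nghttp2/nghttp2.h>

  /* hypothetical per-connection state */
  struct conn_ctx {
    int32_t current_stream_id;   /* stream the current easy handle is reading */
  };

  static int on_data_chunk_recv(nghttp2_session *session, uint8_t flags,
                                int32_t stream_id, const uint8_t *data,
                                size_t len, void *user_data)
  {
    struct conn_ctx *ctx = (struct conn_ctx *)user_data;
    (void)session; (void)flags; (void)data; (void)len;

    if(stream_id != ctx->current_stream_id) {
      /* data for another multiplexed stream: buffer it for that stream,
         then pause nghttp2 so no input is dropped before it is consumed */
      return NGHTTP2_ERR_PAUSE;
    }
    return 0;   /* data for the current stream: handled directly */
  }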
|