|
... also remove the same from scp
|
|
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
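A hedged sketch of the idea in curl's getsock-callback shape; the function
name and the still_uploading() helper are illustrative, not the actual patch:

  static int h2_perform_getsock(const struct connectdata *conn,
                                curl_socket_t *sock, int numsocks)
  {
    int bitmap = GETSOCK_BLANK;
    if(numsocks < 1)
      return bitmap;
    sock[0] = conn->sock[FIRSTSOCKET];
    /* always ask for read-readiness so server-initiated frames
       (PING, SETTINGS, WINDOW_UPDATE, ...) are acted on promptly */
    bitmap |= GETSOCK_READSOCK(FIRSTSOCKET);
    if(still_uploading(conn))   /* hypothetical helper */
      bitmap |= GETSOCK_WRITESOCK(FIRSTSOCKET);
    return bitmap;
  }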
|
|
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
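For applications, a minimal usage sketch of switching the now default-on
option back off with the existing CURLOPT_TCP_NODELAY option (the URL is
just a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    if(!easy)
      return 1;
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    /* TCP_NODELAY is now on by default; pass 0L to switch it back off */
    curl_easy_setopt(easy, CURLOPT_TCP_NODELAY, 0L);
    curl_easy_perform(easy);
    curl_easy_cleanup(easy);
    return 0;
  }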
|
|
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, so a 0 timeout is fine to set and
will trigger a timeout ASAP.
This will help remove short delays, particularly noticeable when doing
HTTP/2.
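A short sketch of the resulting internal convention, using the function
names from the text above ('data' stands for the transfer handle):

  Curl_expire(data, 0);      /* a zero timeout now means "expire ASAP" */
  Curl_expire_clear(data);   /* clearing all timeouts has its own call */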
|
|
... and save the typedef'ed names for headers and external APIs.
|
|
|
|
... when generating them, not "2.0", as the protocol is called just
HTTP/2 and nothing else.
|
|
curl's representation of HTTP/2 responses involves transforming the
response to a format that is similar to HTTP/1.1. Prior to this change,
curl would do this by separating header names and values with only a
colon, without introducing a space after the colon.
While this is technically a valid way to represent an HTTP/1.1 header
block, it is much more common to see a space following the colon. This
change introduces that space, to ensure that incautious tools are safely
able to parse the header block.
This also ensures that the difference between the HTTP/1.1 and HTTP/2
response layout is as minimal as possible.
Bug: https://github.com/curl/curl/issues/797
Closes #798
Fixes #797
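A standalone sketch of the formatting difference; the header name and value
here are made up for illustration:

  #include <stdio.h>

  int main(void)
  {
    const char *name = "content-type";
    const char *value = "text/html";
    char line[256];
    /* "name: value" (with a space) instead of "name:value" */
    snprintf(line, sizeof(line), "%s: %s\r\n", name, value);
    fputs(line, stdout);
    return 0;
  }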
|
|
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these three always the
last headers, and we keep them in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers; they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
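As an illustration, the tail of a typical source file's include list then
looks like this (the system headers shown are just examples):

  #include "curl_setup.h"

  #include <stdio.h>      /* system headers first */
  #include <string.h>

  /* the last three includes, always in this order */
  #include "curl_printf.h"
  #include "curl_memory.h"
  #include "memdebug.h"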
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
- Error if a header line is larger than supported.
- Warn if cumulative header line length may be larger than supported.
- Allow spaces when parsing the path component.
- Make sure each header line ends in \r\n. This fixes an out-of-bounds access.
- Disallow header continuation lines until we decide what to do.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Since we write the header field to a temporary location, not to the memory
that the upper layer provides, incrementing drain should not happen.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
This commit ensures that streams which were closed in the on_stream_close
callback get passed to http2_handle_stream_close. Previously, this
might not happen. To achieve this, we increment the drain property to
forcibly call the recv function for that stream.
To more accurately check that we have no pending events before shutting
down the HTTP/2 session, we sum up the drain properties into
http_conn.drain_total. We only shut down the session if that value is 0.
With this commit, when a stream is closed before the response header
fields have been read, the error code CURLE_HTTP2_STREAM is returned even
if the HTTP/2-level error is NO_ERROR. This signals the upper layer that
the stream was closed by an error, just like a TCP connection close in HTTP/1.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
This commit ensures that data from the network is processed before the HTTP/2
session is terminated. This is achieved by pausing nghttp2 whenever a
stream other than the one belonging to the current easy handle receives data.
This commit also fixes a bug where processing sometimes hangs when
multiple HTTP/2 streams are multiplexed.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
|
|
Previously, when a stream was closed by RST_STREAM with an error other than
NGHTTP2_NO_ERROR, the underlying TCP connection was dropped. This is
undesirable since there may be other multiplexed streams that are
perfectly fine. This change introduces a new error code,
CURLE_HTTP2_STREAM, which indicates a stream error that only affects the
relevant stream; the connection is kept open. The existing
CURLE_HTTP2 means a connection error in general.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
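A hedged application-side sketch of what the distinction enables; the URL
and the handling are placeholders:

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLcode rc;
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    rc = curl_easy_perform(easy);
    if(rc == CURLE_HTTP2_STREAM)
      fprintf(stderr, "this stream failed; the connection and other "
              "multiplexed streams may still be fine\n");
    else if(rc == CURLE_HTTP2)
      fprintf(stderr, "HTTP/2 connection-level error\n");
    curl_easy_cleanup(easy);
    return 0;
  }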
|
|
... but ignore EAGAIN if the stream has ended so that we don't end up in
a loop. This is a follow-up to c8ab613 in order to avoid the problem
d261652 was made to fix.
Reported-by: Jay Satiro
Clues-provided-by: Tatsuhiro Tsujikawa
Discussed in #750
|
|
The space character after the status code is mandatory, even if the
reason phrase is empty (see RFC 7230 section 3.1.2).
Closes #755
|
|
It turns out the Google GFE HTTP/2 servers send a PING frame immediately
after a stream ends and its last DATA has been received by curl. So if
we don't drain that from the socket, it makes the socket readable in
subsequent checks and libcurl then (wrongly) assumes the connection is
dead when trying to reuse the connection.
Reported-by: Joonas Kuorilehto
Discussed in #750
|
|
It offers extra info from nghttp2 in certain error cases, like for
example when trying prior-knowledge http2 on a server that doesn't speak
http2 at all. The error message is passed on as a verbose message to
libcurl.
Discussed in #722
The error callback was added in nghttp2 1.9.0.
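A minimal sketch of hooking that callback up (assumes nghttp2 1.9.0 or
later; logging to stderr stands in for passing the message on verbosely):

  #include <nghttp2/nghttp2.h>
  #include <stdio.h>

  static int error_cb(nghttp2_session *session, const char *msg, size_t len,
                      void *userp)
  {
    (void)session;
    (void)userp;
    fprintf(stderr, "nghttp2 error: %.*s\n", (int)len, msg);
    return 0;
  }

  /* registration, done once when the callbacks struct is set up */
  static void setup_callbacks(nghttp2_session_callbacks *cbs)
  {
    nghttp2_session_callbacks_set_error_callback(cbs, error_cb);
  }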
|
|
Since commit a5aec58, the handler schemes need to match for connections to
be reused, and for HTTP/2 multiplexing to work, reusing connections is
very important!
Closes #736
|
|
|
|
This regression landed in 5778e6f5 and made libcurl not act on received
settings and instead stay with its internal defaults.
Bug: http://curl.haxx.se/mail/lib-2016-01/0031.html
Reported-by: Bankde
|
|
Discussed in https://github.com/bagder/curl/pull/564
|
|
Check that the trailer buffer exists before attempting a client write
for trailers on stream close.
Refer to comments in https://github.com/bagder/curl/pull/564
|
|
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked, but trailer fields are now
always available.
Since trailer fields are relatively rare these days (although gRPC uses
them extensively), a buffer for trailer fields is only allocated when we
detect that a HEADERS frame containing trailer fields has started. We
use the Curl_add_buffer_* functions to buffer all trailers, just like we
do for regular header fields, and then deliver them when the stream is
closed. We have to be careful here so that all data is delivered to the
upper layer before the trailers are sent to the application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic. I have not tested it, but it could add more overhead.
Closes #564
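A sketch of where trailers are recognized in nghttp2 terms; buffer_trailer()
and deliver_header() are hypothetical placeholders for the
Curl_add_buffer_*-style buffering described above:

  #include <nghttp2/nghttp2.h>

  static int on_header(nghttp2_session *session, const nghttp2_frame *frame,
                       const uint8_t *name, size_t namelen,
                       const uint8_t *value, size_t valuelen,
                       uint8_t flags, void *userp)
  {
    (void)session;
    (void)flags;
    if(frame->hd.type == NGHTTP2_HEADERS &&
       frame->headers.cat == NGHTTP2_HCAT_HEADERS) {
      /* a trailer field: buffer it until the stream closes and all body
         data has been delivered to the upper layer */
      buffer_trailer(userp, name, namelen, value, valuelen);  /* hypothetical */
      return 0;
    }
    /* a regular response header field */
    deliver_header(userp, name, namelen, value, valuelen);    /* hypothetical */
    return 0;
  }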
|
|
When NGHTTP2_ERR_PAUSE is returned from data_source_read_callback, we
might not process the DATA frame fully. Calling nghttp2_session_mem_recv()
again will continue to process the DATA frame, but if there are no incoming
frames, we have to call it again with 0-length data. Without this, the
on_stream_close callback will not be called and the stream could hang.
Bug: http://curl.haxx.se/mail/lib-2015-11/0103.html
Reported-by: Francisco Moraes
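A fragmentary sketch of the workaround in nghttp2 API terms; 'h2' stands for
the established nghttp2_session and the error handling is only illustrative:

  /* After NGHTTP2_ERR_PAUSE, if no new bytes arrived from the network, poke
     the session with a zero-length feed so the buffered DATA frame finishes
     processing and on_stream_close can fire. */
  ssize_t rv = nghttp2_session_mem_recv(h2, NULL, 0);
  if(rv < 0)
    fprintf(stderr, "nghttp2: %s\n", nghttp2_strerror((int)rv));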
|
|
|
|
- set the correct stream_id for pushed streams
- init maxdownload and size properly
|
|
Give the new stream the old one's stream_weight internally, to avoid
sending a PRIORITY frame unless one is asked for.
|
|
This reverts commit 64e959ffe37c436503f9fed1ce2d6ee6ae50bd9a.
Feedback-by: Dan Fandrich
URL: http://curl.haxx.se/mail/lib-2015-11/0062.html
|
|
|
|
They tend to never get updated, so they're frequently inaccurate and we
never go back to revisit them anyway. We document issues to work
on properly in KNOWN_BUGS and TODO instead.
|
|
We need 1.0.0 or later. Also verified by configure.
|
|
|
|
Removed wrong assert()s
The 'conn' passed in as userdata can be used, and there can be other
SessionHandles ('data') than the single one this checked for.
|
|
CURLOPT_STREAM_DEPENDS
CURLOPT_STREAM_DEPENDS_E
CURLOPT_STREAM_PRIORITY
|
|
Bug introduced by 18691642931e5c7ac8af83ac3a84fbcb36000f96.
Closes #493
|
|
If the underlying recv called by http2_recv returns -1 then that is the
value http2_recv returns to the caller.
|
|
For a single-stream download from localhost, we managed to increase
transfer speed from 1.6MB/sec to around 400MB/sec, mostly because of
this single fix.
|
|
... only call it when there is data arriving for a handle other than the
one that is currently driving it.
Improves single-stream download performance quite a lot.
Thanks-to: Tatsuhiro Tsujikawa
Bug: http://curl.haxx.se/mail/lib-2015-09/0097.html
|
|
|
|
RFC 7540 section 8.1.2.2 states: "An endpoint MUST NOT generate an
HTTP/2 message containing connection-specific header fields; any message
containing connection-specific header fields MUST be treated as
malformed"
Closes #401
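A hedged sketch, not curl's actual code, of rejecting the common
connection-specific fields before building the HTTP/2 header list; the TE
field is a special case (allowed only with the value "trailers") and is not
handled here:

  #include <stddef.h>
  #include <strings.h>

  /* returns nonzero for header fields that must not appear in HTTP/2 */
  static int connection_specific(const char *name)
  {
    static const char *const banned[] = {
      "Connection", "Keep-Alive", "Proxy-Connection",
      "Transfer-Encoding", "Upgrade"
    };
    size_t i;
    for(i = 0; i < sizeof(banned)/sizeof(banned[0]); i++)
      if(strcasecmp(name, banned[i]) == 0)
        return 1;
    return 0;
  }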
|
|
Leftovers from when we removed the private socket hash.
Coverity CID 1317365, "Logically dead code"
|
|
"Explicit null dereferenced (FORWARD_NULL)"
Coverity CID 1317366
|
|
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
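A sketch of the lookup-and-tolerate pattern described above, with the
callback body trimmed to the relevant part:

  #include <nghttp2/nghttp2.h>

  static int on_frame_recv(nghttp2_session *session,
                           const nghttp2_frame *frame, void *userp)
  {
    /* find the transfer tied to this stream via nghttp2's per-stream
       user data instead of a separate hash */
    void *transfer =
      nghttp2_session_get_stream_user_data(session, frame->hd.stream_id);
    (void)userp;
    if(!transfer)
      return 0;  /* a stream we are done with: ignore it rather than fail
                    the whole connection with NGHTTP2_ERR_CALLBACK_FAILURE */
    /* ... handle the frame for 'transfer' ... */
    return 0;
  }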
|
|
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with HTTP/2 support, even though the HTTP/2 protocol was not
actually used. This commit makes it work again, although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
|
|
Detected by Coverity.
Error: NULL_RETURNS:
lib/http2.c:1301: returned_null: "strchr" returns null (checked 103 out of 109 times).
lib/http2.c:1301: var_assigned: Assigning: "hdbuf" = null return value from "strchr".
lib/http2.c:1302: dereference: Incrementing a pointer which might be null: "hdbuf".
1300|
1301| hdbuf = strchr(hdbuf, 0x0a);
1302|-> ++hdbuf;
1303|
1304| authority_idx = 0;
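An illustrative fix for the report; the 'fail' label is a placeholder for
whatever error path the surrounding function uses:

  hdbuf = strchr(hdbuf, 0x0a);
  if(!hdbuf)
    goto fail;   /* malformed request buffer: bail out instead of ++NULL */
  ++hdbuf;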
|
|
|