Age | Commit message | Author |
|
|
Previously, when we paused because we ran out of buffer space, we just
threw away unread data in the connection buffer. That broke the
protocol framing, and I saw occasional FRAME_SIZE_ERROR. This commit
fixes the issue by remembering how much data has been read, so that the
remaining data is processed in the next iteration.
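A minimal sketch of the idea, with hypothetical names rather than the
actual curl/nghttp2 code: keep a consumed-bytes offset next to the
connection buffer so a pause leaves the unread tail in place for the
next pass instead of discarding it.

    #include <stddef.h>
    #include <sys/types.h>

    /* Hypothetical connection buffer: 'consumed' remembers how much of
       the buffered data has already been fed to the frame parser. */
    struct connbuf {
      unsigned char data[16384];
      size_t len;       /* bytes currently held */
      size_t consumed;  /* bytes already processed */
    };

    /* Feed what the parser will take; when it pauses (returns 0), keep
       the leftover bytes so the protocol framing stays intact. */
    static int process_pending(struct connbuf *b,
                               ssize_t (*parse)(const unsigned char *, size_t))
    {
      while(b->consumed < b->len) {
        ssize_t used = parse(b->data + b->consumed, b->len - b->consumed);
        if(used < 0)
          return -1;                /* parser error */
        if(used == 0)
          return 1;                 /* paused: resume from b->consumed later */
        b->consumed += (size_t)used;
      }
      b->len = b->consumed = 0;     /* all data handled, buffer reusable */
      return 0;
    }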
|
|
This commit fixes a bug where streams get stuck if a stream receives
some DATA and stream->closed becomes true at the same time. Previously,
in this condition, after we processed the DATA we would try to read
more data from the underlying transport, get EAGAIN because there is
nothing there, and then never reach any code path that evaluates
stream->closed.
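A hedged sketch of the ordering this fix implies (illustrative types,
not the real curl functions): consult the closed flag once buffered
DATA is gone, before concluding that we must wait on the transport.

    #include <stdbool.h>
    #include <stddef.h>

    struct h2_stream {
      bool closed;     /* peer has closed the stream */
      size_t pending;  /* buffered DATA not yet handed to the application */
    };

    enum recv_state { RECV_DATA, RECV_EOF, RECV_AGAIN };

    /* Deliver buffered DATA first; only report "try again" when the
       stream is neither holding data nor already closed. */
    static enum recv_state stream_recv(const struct h2_stream *s)
    {
      if(s->pending)
        return RECV_DATA;
      if(s->closed)
        return RECV_EOF;  /* evaluated even when the transport gives EAGAIN */
      return RECV_AGAIN;
    }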
|
|
... as it was only used from there.
|
|
With the "drained" functionality we can get here slightly asynchronously
so the stream have have been closed but there is pending data left to
read.
|
|
... as it does for pipelining when we're multiplexing, since we need
separate buffers to store incoming data correctly for all streams.
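A small illustration of the layout this refers to, with made-up field
names: each multiplexed stream owns its receive buffer, so concurrently
arriving frames for different streams never share storage.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Made-up per-stream state: DATA for stream 5 lands in stream 5's
       own buffer instead of a single shared connection-wide buffer. */
    struct stream_data {
      int32_t id;              /* HTTP/2 stream identifier */
      unsigned char *recvbuf;  /* this stream's incoming-data storage */
      size_t recvlen;          /* bytes waiting for its easy handle */
    };

    /* Append a DATA frame payload to the buffer of its own stream. */
    static int stream_append(struct stream_data *s,
                             const unsigned char *payload, size_t len)
    {
      unsigned char *p = realloc(s->recvbuf, s->recvlen + len);
      if(!p)
        return -1;
      memcpy(p + s->recvlen, payload, len);
      s->recvbuf = p;
      s->recvlen += len;
      return 0;
    }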
|
|
No need to wait for our "spot" like for pipelining
|
|
... which is necessary since the socket won't be readable but there is
data waiting in the buffer.
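A tiny hedged sketch of the check being described, with invented names:
socket readability alone is not a sufficient signal once a layer
buffers data, so the buffered amount has to be consulted as well.

    #include <stdbool.h>
    #include <stddef.h>

    struct bufconn {
      size_t buffered;  /* bytes pulled off the wire, not yet consumed */
    };

    /* Stay in "receive" state if either the socket is readable or our
       own buffer still holds data the socket will never signal for. */
    static bool want_recv(const struct bufconn *c, bool socket_readable)
    {
      return socket_readable || c->buffered > 0;
    }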
|
|
... from the connection struct. The stream-specific one is the
'struct HTTP', which is kept in the SessionHandle struct (the easy
handle). Streams for incoming frames are looked up in the stream hash;
hashing is based on the stream id, and that is how we get the
SessionHandle for the incoming stream.
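A simplified sketch of such a lookup (a plain chained hash with
invented names, not curl's hash implementation): the incoming frame's
stream id is the key, and the entry leads back to the easy handle that
owns the stream.

    #include <stdint.h>
    #include <stddef.h>

    #define NBUCKETS 64

    struct easy;                   /* stands in for the easy handle */

    struct stream_entry {
      int32_t id;                  /* HTTP/2 stream id, the hash key */
      struct easy *owner;          /* easy handle driving this stream */
      struct stream_entry *next;   /* chain for colliding ids */
    };

    /* Map a frame's stream id to the easy handle it belongs to. */
    static struct easy *find_owner(struct stream_entry *buckets[NBUCKETS],
                                   int32_t stream_id)
    {
      struct stream_entry *e = buckets[(uint32_t)stream_id % NBUCKETS];
      while(e) {
        if(e->id == stream_id)
          return e->owner;
        e = e->next;
      }
      return NULL;                 /* no such stream known */
    }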
|
|
Once we know we are HTTP/2 enabled, we know the server can multiplex.
|
|
... and do not blacklist any.
|
|
All the details mentioned here are better documented in the man pages.
|
|
This file was removed in commit fd137786.
|
|
Previously we counted all connections to a specific host name, and that
count was used for example in the CURLMOPT_MAX_HOST_CONNECTIONS check,
even though servers on different port numbers are normally considered
different "origins" on the web and should thus be treated as different
hosts.
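A hedged illustration of the principle (the helper name is invented):
include the port in the bundle key, so example.com:80 and
example.com:443 count as different hosts for limits like
CURLMOPT_MAX_HOST_CONNECTIONS.

    #include <stdio.h>

    /* Build a "hostname:port" key instead of just "hostname", so
       connections to different ports on the same name end up in
       different bundles. */
    static int make_bundle_key(char *key, size_t keylen,
                               const char *hostname, int port)
    {
      return snprintf(key, keylen, "%s:%d", hostname, port);
    }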
|
|
All the existing Curl_bundle* functions were only ever used from within
the conncache.c file, so I moved them over and made them static (and
removed the Curl_ prefix).
|
|
This avoids unnecessary dynamic allocs, and since this also removed the
last users of *hash_alloc() and *hash_destroy(), those two functions
are now removed.
|
|
Avoids an extra dynamic allocation.
|
|
... by using plain structs instead of pointers for the connection cache,
we can avoid several dynamic allocations that weren't necessary.
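A before/after sketch of the general technique, with illustrative type
names: embedding the member as a plain struct removes a malloc/free
pair and the corresponding allocation-failure path.

    struct conncache {
      int dummy;                /* placeholder for the real cache fields */
    };

    /* before: the cache lives behind a pointer and needs its own alloc */
    struct multi_before {
      struct conncache *cache;  /* malloc()ed at init, free()d at cleanup */
    };

    /* after: the cache is embedded and initialized in place */
    struct multi_after {
      struct conncache cache;
    };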
|
|
This ensures that an alternate address is not used.
It does not apply to proxy tunnels.
|
|
Use text mode when running under Cygwin to eliminate trailing carriage
returns.
Bug: https://github.com/bagder/curl/pull/258
|
|
Also print the revocation reason if appropriate.
|
|
The symbol is fairly new.
Reported-by: Kamil Dudka
|
|
The OpenSSL trace callback is wonderfully undocumented, but after a
journey through the source code it seems the cases where ssl_ver is
zero don't follow the same pattern and thus turned out confusing and
misleading. For now, we skip doing any CURLINFO_TEXT logging on those
but keep sending them as CURLINFO_SSL_DATA_OUT/IN.
Also, I added the direction to the text info and edited some functions
slightly.
Bug: https://github.com/bagder/curl/issues/219
Reported-by: Jay Satiro, Ashish Shukla
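For context, a minimal hedged example of the callback interface
involved (a generic SSL_CTX_set_msg_callback() user, not curl's own
trace function): records reported with a zero version value are skipped
for human-readable text output and only handled as raw data.

    #include <stdio.h>
    #include <openssl/ssl.h>

    static void trace_cb(int write_p, int version, int content_type,
                         const void *buf, size_t len, SSL *ssl, void *arg)
    {
      (void)content_type; (void)buf; (void)ssl; (void)arg;
      if(version == 0)
        return;  /* odd pseudo-records: no text logging, raw data only */
      fprintf(stderr, "TLS %s record, version 0x%x, %zu bytes\n",
              write_p ? "sent" : "received", version, len);
    }

    /* installed once per SSL_CTX: SSL_CTX_set_msg_callback(ctx, trace_cb); */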
|
|
https://github.com/bagder/curl/issues/244
Commit 145c263 changed the behavior when Curl_read_plain returns
CURLE_AGAIN. We now handle CURLE_AGAIN and SEC_I_CONTEXT_EXPIRED
correctly.
|
|
Commit: https://github.com/bagder/curl/commit/926cb9f
Reported-by: Ray Satiro
|
|
- update default versions of dependencies (except for rare/old platforms)
- update URLs
- sync examples makefiles with main ones
- remove trailing space at line ends
|
|
Bug introduced in commit 9a91e80, made several days ago.
Bug: http://curl.haxx.se/mail/lib-2015-04/0199.html
Reported-by: Brian Chrisman
|
|
This fixes building a curl .dll for the non-default target with a
multi-target MinGW distribution (mirroring the same patch already
present in src/makefile.m32).
|
|
Make the HTTP headers separated by default for improved security and a
reduced risk of information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
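A hedged usage example with standard libcurl options (the URLs and
header values are made up): with CURLHEADER_SEPARATE, which this change
makes the default behavior, CURLOPT_HTTPHEADER headers go to the server
only, and anything meant for the proxy has to be set explicitly with
CURLOPT_PROXYHEADER.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_slist *hdrs = NULL, *proxyhdrs = NULL;
      if(!curl)
        return 1;

      /* this header is only sent to the server, not to the proxy */
      hdrs = curl_slist_append(hdrs, "X-Secret-Token: only-for-the-server");
      /* headers for the proxy are listed separately */
      proxyhdrs = curl_slist_append(proxyhdrs, "Proxy-Thing: for-the-proxy");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
      curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxyhdrs);

      curl_easy_perform(curl);

      curl_slist_free_all(hdrs);
      curl_slist_free_all(proxyhdrs);
      curl_easy_cleanup(curl);
      return 0;
    }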
|