This allows for better memory debugging and torture tests.
|
|
This fixes tests that were added after 113f04e664b as the tests would
fail otherwise.
We bring back "Proxy-Connection: Keep-Alive", now sent unconditionally, to
fix regressions with old and stupid proxies, but we could possibly switch
to sending it only for CONNECT or only for NTLM in the future if we want
to gradually phase it out.
Fixes #954
Reported-by: János Fekete
|
|
This reverts commit 113f04e664b16b944e64498a73a4dab990fe9a68.
|
|
Follow-up to a96319ebb9 (document the new behavior)
|
|
Follow-up to a96319ebb93
|
|
I discovered some people have been using "https://example.com" style
strings as the proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl speaks HTTPS to the proxy
when it doesn't. Now curl rejects proxy strings using unsupported schemes
instead of just ignoring them and defaulting to HTTP.
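A minimal sketch of the behavior change from an application's point of
view; the proxy URL and host below are made up for illustration and this
is not part of the patch itself:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* illustration only: an unsupported proxy scheme was previously
       treated silently as plain HTTP, now the transfer fails instead of
       quietly "working" */
    curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.org:3128");
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}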
|
|
Third commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
Second commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
First commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
The server developer.netscape.com does not resolve to any IP address and
can be removed.
|
|
Undo the change introduced in d4643d6 which caused the iPAddress match to
be ignored if a dNSName was present but did not match.
Also, if an iPAddress is present but does not match, and no dNSName is
present, fail as a non-match. Prior to this change the CN would be checked
for a match in such a case.
Bug: https://github.com/curl/curl/issues/959
Reported-by: wmsch@users.noreply.github.com
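A rough sketch of the resulting matching order, written as a small C
function for illustration; none of these helpers or flags exist in curl,
they only name the checks, and this is not the actual verification code:

/* Hypothetical sketch of the decision order after this change. */
static int cert_matches(int connecting_by_ip,
                        int has_ip_sans, int ip_san_matches,
                        int has_dns_sans, int dns_san_matches,
                        int cn_matches)
{
  if(connecting_by_ip && has_ip_sans)
    return ip_san_matches;    /* dNSName presence no longer hides this,
                                 and a mismatch is final: no CN fallback */
  if(has_dns_sans)
    return dns_san_matches;   /* a mismatch is final: no CN fallback */
  return cn_matches;          /* the CN is only used when no usable SAN
                                 of the relevant type exists */
}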
|
|
Closes #956
|
|
Follow-up to e577c43bb to fix test case 569 breakage: stop the parser at
whitespace as well.
Help-by: Erik Janssen
|
|
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code and I
spotted that we don't retry failed HEAD requests, which seems totally
inconsistent, and I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
|
|
Makes libcurl work when communicating with GStreamer-based RTSP
servers. The original code validated the session id to be in accordance
with the RFC. I think it is better not to do that:
- For curl the actual content doesn't matter.
- The clarity of the RFC is debatable: whether $ is allowed on its own or
  only as \$ is imho not clear.
- GStreamer seems to URL-encode the session id, but % is not allowed by
  the RFC.
- It is less code.
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
Bug: https://curl.haxx.se/mail/lib-2016-08/0076.html
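As a hedged illustration of the more permissive approach (not the actual
patch), the session id can simply be taken as everything up to the first
';' or whitespace:

#include <stddef.h>

/* Illustration only: extract the session id without validating its
   characters against the RFC grammar. Returns the length and sets
   'start' to its first byte. */
static size_t session_id_span(const char *after_colon, const char **start)
{
  const char *p = after_colon;
  size_t len = 0;
  while(*p == ' ' || *p == '\t')
    p++;                          /* skip leading whitespace */
  while(p[len] && p[len] != ';' && p[len] != ' ' && p[len] != '\t' &&
        p[len] != '\r' && p[len] != '\n')
    len++;                        /* stop at ';', whitespace or line end */
  *start = p;
  return len;
}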
|
|
Added in 5fce88aa8c12564
|
|
This makes it possible to use a specific compiler or a compiler cache.
Sample use for clcache:
set CC=clcache.bat
nmake /f Makefile.vc DEBUG=no MODE=static VC=14 GEN_PDB=no
|
|
In the old line number 290, CC and CURL_CC had the same value. After
that, /DCURL_STATICLIB was added to CC but not to CURL_CC (intended?).
This gets rid of the CC variable entirely. It is a first step towards
making it possible to manually set a CC variable in order to change the
compiler.
|
|
$(CURL_CC) is always used with $(CURL_CFLAGS) appended, so before this
change, all arguments in CURL_CFLAGS were added twice.
|
|
- Turn on USE_THREADS_WIN32 in Windows if ares isn't enabled
This change is similar to what we already do in the autotools build.
|
|
All compilers used by cmake in Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
|
|
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
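In event-loop terms the change boils down to always polling for
readability; a hedged sketch of the idea, not the actual libcurl
internals:

#include <poll.h>

/* Illustration only: even while uploading, keep POLLIN set so that
   server-initiated HTTP/2 frames (SETTINGS, PING, GOAWAY, ...) are
   noticed as soon as they arrive. */
static int wait_on_socket(int sockfd, int uploading, int timeout_ms)
{
  struct pollfd pfd;
  pfd.fd = sockfd;
  pfd.events = POLLIN;            /* always watch for incoming frames */
  pfd.revents = 0;
  if(uploading)
    pfd.events |= POLLOUT;        /* additionally wait for write readiness */
  return poll(&pfd, 1, timeout_ms);
}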
|
|
In order to make MBEDTLS_DEBUG work, the debug threshold must be set to a
non-zero value. This patch also adds a comment on how mbedTLS must be
compiled in order to make debugging work, and explains the possible debug
levels.
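For reference, a minimal sketch of wiring up the mbedTLS debug output; it
assumes an mbedTLS build with MBEDTLS_DEBUG_C defined, and the callback
name is made up:

#include <stdio.h>
#include <mbedtls/ssl.h>
#include <mbedtls/debug.h>

/* made-up callback that prints mbedTLS debug lines to stderr */
static void my_debug(void *ctx, int level, const char *file, int line,
                     const char *str)
{
  (void)ctx;
  fprintf(stderr, "%s:%d (level %d) %s", file, line, level, str);
}

static void enable_mbedtls_debug(mbedtls_ssl_config *conf)
{
  /* 0 = no debug, 1 = errors, 2 = state changes, 3 = informational,
     4 = verbose; it must be non-zero for any output to appear */
  mbedtls_debug_set_threshold(4);
  mbedtls_ssl_conf_dbg(conf, my_debug, NULL);
}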
|
|
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
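For libcurl applications that prefer the old behavior, a minimal example
of turning Nagle's algorithm back on (i.e. disabling TCP_NODELAY); the
handle setup is assumed:

#include <curl/curl.h>

/* assumes 'curl' is an already-initialized easy handle */
static void prefer_nagle(CURL *curl)
{
  /* TCP_NODELAY is now on by default; setting the option to 0 restores
     the previous default */
  curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
}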
|
|
When curl's input stream is stdin and it is not a file but generated by a
script, curl can truncate the transfer at an arbitrary size, since a
partial packet is treated as the end of the transfer by TFTP.
Fixes #857
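The root cause is that TFTP treats a data block shorter than the block
size as the end of the transfer, so a short read from a pipe must not be
sent as-is. A hedged sketch of the "fill the block before sending" idea,
not the actual patch:

#include <stdio.h>

/* Illustration only: keep reading until 'blksize' bytes are buffered or
   the stream really ends, so a short read from a pipe is not mistaken
   for end of file. */
static size_t fill_block(char *buf, size_t blksize, FILE *in)
{
  size_t total = 0;
  while(total < blksize) {
    size_t n = fread(buf + total, 1, blksize - total, in);
    if(n == 0)
      break;                      /* genuine EOF or error: short final block */
    total += n;
  }
  return total;
}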
|
|
Makes the script pass on comments holding metadata to the output file,
like fingerprints, issuer, date ranges etc.
Closes #937
|