Calling sscanf is not required since the raw IPv4 address is
available and the protocol can be detected using ai_family.
|
|
Made by Visual Studio's auto-correct feature and missed by me in my own
code reviews!
|
|
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c, existing functionality is maintained for now.
|
|
|
|
Hooked up the SASL authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
For now existing functionality is maintained.
|
|
|
|
As Windows SSPI authentication calls fail when a particular mechanism
isn't available, introduced these functions for DIGEST, NTLM, Kerberos 5
and Negotiate to allow both HTTP and SASL authentication the opportunity
to query support for a mechanism before selecting it.
For now each function returns TRUE to maintain compatibility with the
existing code when called.
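A minimal sketch of the kind of stub this describes, assuming a plain C
helper; the function name below is illustrative, not the one actually
added:

    #include <stdbool.h>

    /* Hypothetical stub: report whether the NTLM mechanism can be used.
       For now it unconditionally returns true so existing callers keep
       selecting NTLM as before; a later change can probe SSPI and
       return false when the package is unavailable. */
    static bool auth_is_ntlm_supported(void)
    {
      return true;
    }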
|
|
|
|
This allows for better memory debugging and torture tests.
|
|
This reverts commit 113f04e664b16b944e64498a73a4dab990fe9a68.
|
|
Follow-up to a96319ebb93
|
|
I discovered some people have been using "https://example.com" style
strings as a proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring them and defaulting to HTTP.
|
|
Third commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
Second commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
First commit to fix issue #944 regarding SOCKS5 error handling.
Reported-by: David Kalnischkies
|
|
Undo change introduced in d4643d6 which caused iPAddress match to be
ignored if dNSName was present but did not match.
Also, if iPAddress is present but does not match, and dNSName is not
present, fail as no-match. Prior to this change, in such a case the CN
would be checked for a match.
Bug: https://github.com/curl/curl/issues/959
Reported-by: wmsch@users.noreply.github.com
|
|
Follow-up to e577c43bb to fix test case 569 breakage: stop the parser at
whitespace as well.
Help-by: Erik Janssen
|
|
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code and I
spotted that we don't retry failed HEAD requests, which seems totally
inconsistent, and I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
|
|
Makes libcurl work in communication with gstreamer-based RTSP
servers. The original code validates the session id to be in accordance
with the RFC. I think it is better not to do that:
- For curl, the actual content is a don't-care.
- The clarity of the RFC is debatable: is $ allowed, or only as \$? That
  is imho not clear.
- Gstreamer seems to URL-encode the session id, but % is not allowed by
  the RFC.
- Less code.
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
Bug: https://curl.haxx.se/mail/lib-2016-08/0076.html
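A rough sketch of the more tolerant parsing, assuming the id is simply
copied up to the first ';' or whitespace; the names below are
illustrative, not curl's internal ones:

    #include <ctype.h>
    #include <stddef.h>

    /* Copy the session id verbatim, stopping at the ';' that starts
       parameters (e.g. "timeout=60") or at whitespace, without checking
       the characters against the RFC grammar. */
    static void copy_session_id(const char *value, char *out, size_t outlen)
    {
      size_t i;
      for(i = 0; value[i] && value[i] != ';' &&
            !isspace((unsigned char)value[i]) && i + 1 < outlen; i++)
        out[i] = value[i];
      out[i] = '\0';  /* yields "biTN4Kc.8%2B1w-AF." for the line above */
    }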
|
|
- Turn on USE_THREADS_WIN32 in Windows if ares isn't on
This change is similar to what we already do in the autotools build.
|
|
All compilers used by cmake in Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
|
|
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
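A hedged illustration of the idea (not the actual multi code): build the
wait mask so an HTTP/2 connection is always polled for reading too.

    #include <poll.h>

    /* Even when we only have request body data to send, an HTTP/2
       connection must also be watched for readability so that
       server-initiated frames (WINDOW_UPDATE, PING, GOAWAY, ...) are
       handled promptly. */
    static short transfer_poll_events(int sending, int is_http2)
    {
      short events = 0;
      if(sending)
        events |= POLLOUT;
      if(!sending || is_http2)
        events |= POLLIN;
      return events;
    }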
|
|
In order to make MBEDTLS_DEBUG work, the debug threshold must be
nonzero. This patch also adds a comment on how mbedtls must be compiled
in order to make debugging work, and explains the possible debug levels.
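As a concrete example of what that comment covers (assuming the stock
mbedtls API): the library has to be built with MBEDTLS_DEBUG_C, and the
threshold raised above 0.

    #include <mbedtls/debug.h>

    static void enable_mbedtls_debugging(void)
    {
      /* Only has an effect if mbedtls was compiled with MBEDTLS_DEBUG_C.
         Threshold levels: 0 = no debug, 1 = error, 2 = state change,
         3 = informational, 4 = verbose. */
      mbedtls_debug_set_threshold(4);
    }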
|
|
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
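A minimal libcurl sketch of how an application can restore the old
behaviour; only the CURLOPT_TCP_NODELAY line is the point here, the rest
of the setup is assumed boilerplate:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* TCP_NODELAY is now enabled by default; setting the option to 0
           switches Nagle's algorithm back on for this handle. */
        curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
        /* ... set CURLOPT_URL etc. and call curl_easy_perform() ... */
        curl_easy_cleanup(curl);
      }
      return 0;
    }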
|
|
When the input stream for curl is stdin and the input is not a file but
is generated by a script, curl can truncate the data transfer at an
arbitrary size, since a partial packet is treated as end of transfer by
TFTP.
Fixes #857
|
|
Makes the script pass on comments holding metadata to the output
file, like fingerprints, issuer, date ranges, etc.
Closes #937
|
|
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to
set and will trigger a timeout ASAP.
This will help remove short delays, particularly notable when doing
HTTP/2.
|
|
Regression added in 790d6de48515. That limit was added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
|
|
|
|
CVE-2016-5420
Bug: https://curl.haxx.se/docs/adv_20160803B.html
|
|
CVE-2016-5419
Bug: https://curl.haxx.se/docs/adv_20160803A.html
Reported-by: Bru Rom
Contributions-by: Eric Rescorla and Ray Satiro
|
|
CVE-2016-5421
Bug: https://curl.haxx.se/docs/adv_20160803C.html
Reported-by: Marcelo Echeverria and Fernando Muñoz
|
|
This patch is necessary so that curl compiles if MBEDTLS_DEBUG is
defined.
Bug: https://curl.haxx.se/mail/lib-2016-08/0001.html
|
|
If a call to GetSystemDirectory fails, the `path` pointer that was
previously allocated would be leaked. This makes sure that `path` is
always freed.
Closes #938
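A simplified sketch of the fixed pattern; the helper name and buffer
handling are illustrative, not the exact code in the file touched here:

    #include <windows.h>
    #include <stdlib.h>

    static char *get_system_dir(void)
    {
      char *path = malloc(MAX_PATH);
      if(!path)
        return NULL;

      if(!GetSystemDirectoryA(path, MAX_PATH)) {
        free(path);  /* previously leaked when the call failed */
        return NULL;
      }
      return path;
    }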
|
|
As SPNEGO is only defined when these pre-processor variables are
defined, there is no need to query them explicitly.
|
|
Typo introduced in commit ad5e9bfd5d.
|
|
This is a follow-up to the parent commit dcdd4be, which fixes one leak
but creates another by failing to free the credentials handle if out of
memory. Also, there's a second location a few lines down where we fail
to do the same. This commit fixes both of those issues.
|
|
This patch allocates memory for "output_token" only when it is required
so that memory is not leaked if the function returns early.
|
|
- Linux TFO + TLS is not implemented yet.
Bug: https://github.com/curl/curl/issues/907
|
|
- Curl_ipv6works() is not thread-safe until after the first call, so
call it once during global init to avoid a possible race condition.
Bug: https://github.com/curl/curl/issues/915
PR: https://github.com/curl/curl/pull/918
|
|
Closes https://github.com/curl/curl/pull/913
|
|
Closes https://github.com/curl/curl/pull/911
|
|
Reported-by: Gou Lingfeng
Bug: https://curl.haxx.se/mail/lib-2016-06/0139.html
|
|
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes #889
|
|
... as otherwise we could get a 0 which would count as no error and we'd
wrongly continue and could end up segfaulting.
Bug: https://curl.haxx.se/mail/lib-2016-06/0052.html
Reported-by: 暖和的和暖
|
|
Necessary since 6cabd78531f
Fixes #853
|
|
Broken since 6cabd785, which adds use of the Curl_extract_certinfo
function from the x509asn1.c file.
|
|
... and save the typedef'ed names for headers and external APIs.
|
|
|
|
Prior to this change we called Curl_ssl_getsessionid and
Curl_ssl_addsessionid regardless of whether session ID reusing was
enabled. According to the comments, that was done in case session ID
reuse was disabled but then later enabled.
The old way was not intuitive and probably not something users expected.
When a user disables session ID caching, I'd guess they don't expect the
session ID to be cached anyway in case the caching is later enabled.
|