Age | Commit message | Author |
|
...as otherwise it might use a different decimal sign.
Bug: #2436
Reported-by: Oumph on github
|
|
This patch adds CURLOPT_DNS_SHUFFLE_ADDRESSES to explicitly request
shuffling of IP addresses returned for a hostname when there is more
than one. This is useful when the application knows that a round robin
approach is appropriate and is willing to accept the consequences of
potentially discarding some preference order returned by the system's
implementation.
Closes #1694
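A minimal usage sketch (not from the patch itself; assumes an ordinary
easy handle and the option being a 1L on/off switch):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask libcurl to shuffle the resolved addresses before using them */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }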
|
|
Add --haproxy-protocol for the command line tool
Closes #2162
|
|
Found via `codespell`
Closes #2389
|
|
This is what "HTTP/0.9" basically looks like.
Reported on IRC
Closes #2382
|
|
It fails roughly every 3rd to 10th Travis CI run
|
|
Refuse to operate when given path components featuring byte values lower
than 32.
Previously, inserting a %00 sequence early in the directory part when
using the 'singlecwd' ftp method could make curl write a zero byte
outside of the allocated buffer.
Test case 340 verifies.
CVE-2018-1000120
Reported-by: Duy Phan Thanh
Bug: https://curl.haxx.se/docs/adv_2018-9cd6.html
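A rough sketch of the kind of check described above (hypothetical helper,
not curl's actual code):

  #include <stddef.h>

  /* refuse path segments containing control bytes (< 32), such as an
     embedded %00/NUL */
  static int segment_is_safe(const char *seg, size_t len)
  {
    size_t i;
    for(i = 0; i < len; i++) {
      if((unsigned char)seg[i] < 32)
        return 0; /* refuse to operate */
    }
    return 1;
  }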
|
|
Reported-by: Michael Kaufmann
Fixes #2357
Closes #2362
|
|
Added test 1265 that verifies.
Reported-by: steelman on github
Fixes #2353
Closes #2355
|
|
- Add new option CURLOPT_RESOLVER_START_FUNCTION to set a callback that
will be called every time before a new resolve request is started
(i.e. before a host is resolved) with a pointer to backend-specific
resolver data. Currently this is only useful with the c-ares backend
(see the sketch below).
- Add new option CURLOPT_RESOLVER_START_DATA to set a user pointer to
pass to the resolver start callback.
Closes https://github.com/curl/curl/pull/2311
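A minimal sketch of wiring up the two options (callback prototype as
described for this change; handle setup omitted):

  /* called right before each new resolve is started; resolver_state
     points at backend-specific (c-ares) resolver data */
  static int resolver_start_cb(void *resolver_state, void *reserved,
                               void *userdata)
  {
    (void)resolver_state; (void)reserved; (void)userdata;
    return 0; /* returning non-zero aborts the resolve */
  }

  curl_easy_setopt(curl, CURLOPT_RESOLVER_START_FUNCTION, resolver_start_cb);
  curl_easy_setopt(curl, CURLOPT_RESOLVER_START_DATA, NULL); /* user pointer */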
|
|
This enables users to pre-resolve a host name but still take advantage of
happy eyeballs and have multiple addresses tried if some of them fail to
connect.
Ref: https://github.com/curl/curl/pull/2260
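A sketch of the idea, assuming the comma-separated address form of a
CURLOPT_RESOLVE entry (addresses here are illustrative):

  struct curl_slist *host = NULL;
  /* pre-resolve example.com:443 to two addresses; the connect phase can
     still fall back to the second if the first does not connect */
  host = curl_slist_append(host, "example.com:443:203.0.113.1,203.0.113.2");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
  /* ... perform the transfer, then: */
  curl_slist_free_all(host);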
|
|
Closes #2302
|
|
Test 319 checks proper raw mode data with non-chunked gzip
transfer-encoded server data.
Test 326 checks raw mode with chunked server data.
Bug: #2303
Closes #2308
|
|
RFC 5321 4.1.1.4 specifies the CRLF terminating the DATA command
should be taken into account when searching for the <CRLF>.<CRLF> end
marker. Thus a leading dot character in the data is also subject to
escaping.
Test 911 and the test server are adapted to this situation.
New tests 951 and 952 check proper handling of initial dot in data.
Closes #2304
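For reference, the rule being applied is plain dot-stuffing, roughly
(illustration only, with hypothetical send helpers, not curl's code):

  /* RFC 5321: a payload line starting with '.' gets an extra '.'
     prepended. Since the CRLF that ended the DATA command counts as a
     line start, this covers a dot as the very first payload byte too. */
  if(line[0] == '.')
    send_byte(conn, '.');  /* hypothetical helper */
  send_line(conn, line);   /* hypothetical helper */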
|
|
If CURL_DOES_CONVERSION is enabled, uploaded LFs are mapped to CRLFs,
giving a result that is different from what is expected.
This commit avoids using CURLOPT_TRANSFERTEXT and encodes the data to
upload directly in ASCII.
Bug: https://github.com/curl/curl/pull/1872
|
|
Make curl_getdate() handle dates before 1970 as well (returning negative
values).
Make test 517 test dates for 64-bit time_t.
This fixes bug (3) mentioned in #2238
Closes #2250
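A quick sketch of the new behavior (date string picked arbitrarily):

  #include <curl/curl.h>

  /* with a 64-bit time_t this now returns a negative value */
  time_t t = curl_getdate("Sun, 06 Nov 1966 08:49:37 GMT", NULL);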
|
|
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added tests 317 and 318 to verify.
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
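If an application really does want its custom Authorization header to
follow redirects to other hosts, the opt-in looks roughly like this
(sketch, token made up):

  struct curl_slist *hdrs = NULL;
  hdrs = curl_slist_append(hdrs, "Authorization: Bearer my-token");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* explicitly allow credentials to be sent to other hosts on redirect */
  curl_easy_setopt(curl, CURLOPT_UNRESTRICTED_AUTH, 1L);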
|
|
vtls.c:multissl_init() might do a curl_free() call, so strip that out to
make this work with more builds. We just want to verify that memory
tracking works, so skipping one line does no harm.
|
|
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to the handle: there is a pointer to the easy handle in
each item of the mime tree and following the parent pointer list
of mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, thus it needs to be cloned upon easy handle duplication.
There is no way for the caller to get the duplicated mime tree
handle: it is then set to be automatically destroyed upon freeing the
new easy handle.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
Closes #2235
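A sketch of the situation being handled (standard mime API calls; error
checking omitted): the mime tree is cloned when the handle is duplicated
and the clone is freed together with the duplicate.

  curl_mime *mime = curl_mime_init(curl);
  curl_mimepart *part = curl_mime_addpart(mime);
  curl_mime_name(part, "field");
  curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);

  CURL *dup = curl_easy_duphandle(curl); /* gets its own mime clone */
  /* ... perform transfers with both handles ... */
  curl_easy_cleanup(dup);   /* releases the clone automatically */
  curl_mime_free(mime);     /* the original stays owned by the caller */
  curl_easy_cleanup(curl);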
|
|
... and make the max filesize check trigger if the value is too big.
Updates test 178.
Reported-by: Brad Spencer
Fixes #2212
Closes #2223
|
|
The decoding loop implementation did not handle the case where all
received data is consumed by the Brotli decoder and the size of the
decoded data internally held by the Brotli decoder is greater than
CURL_MAX_WRITE_SIZE. For content with an unencoded length greater than
CURL_MAX_WRITE_SIZE, this could result in the loss of data at the end of
the content.
Closes #2194
|
|
Move the curl_mime_initpart() and curl_mime_cleanpart() calls to the
lower-level functions dealing with the UserDefined structure contents.
This avoids memory leaks on curl-generated mime part headers.
New test 2073 checks this using the CLI tool's --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
|
|
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output.
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
Closes #2068
|
|
This reverts commit 9ffad8eb1329bb35c8988115ac7ed85cf91ef955.
It was actually added rather recently in 8e8afa82cbb629 due to a crash
that would otherwise happen in the RTSP code. As I don't think we've
fixed that behavior yet, we had better keep this work-around until we
have a proper fix.
|
|
Prune the DNS cache immediately after the DNS entry is unlocked in
multi_done. Timed-out entries will then get discarded in a more orderly
fashion.
Test 506 is updated.
Reported-by: Oleg Pudeyev
Fixes #2169
Closes #2170
|
|
That data is only ever used by the CURLOPT_INTERLEAVEFUNCTION callback
and that option isn't set or used by the curl tool!
Updates the 9 tests that verify --libcurl
Closes #2167
|
|
If the lock is released before we are done dealing with the bundle, it
may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
|
|
For POP3/IMAP/SMTP, added test 891 to somewhat verify the POP3 case.
For this, I enhanced the pingpong test server to be able to send back
responses with LF-only instead of always using CRLF.
Closes #2150
|
|
This brings it in sync with the error code returned by the libssh
backend.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
|
|
That also updates tests to expect the right error code.
The libssh2 back-end returns a CURLE_SSH error if the remote file is not
found. Expect CURLE_REMOTE_FILE_NOT_FOUND instead, which is what the
libssh backend sends.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
|
|
The code would previously read beyond the end of the pattern string if
the match pattern ends with an open bracket when the default pattern
matching function is used.
Detected by OSS-Fuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=4161
CVE-2017-8817
Bug: https://curl.haxx.se/docs/adv_2017-ae72.html
|
|
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated tests 1034 and 1035 accordingly.
Fixes #2073
Closes #2092
|
|
... so that IPv6 addresses can be passed the same way they can for
connect-to and the way they are used in URLs.
Added test 1324 to verify.
Reported-by: Alex Malinovich
Fixes #2087
Closes #2091
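Roughly, mirroring the earlier CURLOPT_RESOLVE sketch but assuming the
bracketed IPv6 form now accepted (address is illustrative):

  struct curl_slist *host = NULL;
  host = curl_slist_append(host, "example.com:443:[2001:db8::1]");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, host);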
|
|
Follow-up to aadb7c7. Verified by new test 1263.
Closes #2072
|
|
This uses the brotli external library (https://github.com/google/brotli).
Brotli becomes a feature: an additional curl_version_info() bit and
structure fields are provided for it and CURLVERSION_NOW is bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accommodate the now configuration-dependent
parameters of the Accept-Encoding header.
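A short sketch of a runtime feature check (assuming the CURL_VERSION_BROTLI
bit and brotli_version field this introduces):

  #include <stdio.h>
  #include <curl/curl.h>

  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  if(info->features & CURL_VERSION_BROTLI)
    printf("brotli: %s\n", info->brotli_version);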
|
|
This is implemented as an output streaming stack of unencoders, with the
last one calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
|
|
By properly keeping track of the last entry in the list of URLs/uploads
to handle, curl now avoids many needless traversals of the list, which
speeds up many-URL handling *MASSIVELY* (several orders of magnitude on
100K URLs).
Added test 1291 to verify that it doesn't take ages - but we don't have
any detection of a "too slow" command in the test suite.
Reported-by: arainchik on github
Fixes #1959
Closes #2052
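The underlying change is the classic tail-pointer trick; a generic sketch
(not curl's actual structures) of appending in O(1) instead of walking the
list every time:

  #include <stddef.h>

  struct node { struct node *next; /* payload omitted */ };
  struct list { struct node *head, *tail; };

  static void list_append(struct list *l, struct node *n)
  {
    n->next = NULL;
    if(l->tail)
      l->tail->next = n;  /* remember the last entry: no re-traversal */
    else
      l->head = n;
    l->tail = n;
  }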
|
|
Assisted-by: Per Lundberg
Fixes #2044
Closes #2046
Closes #2048
|
|
Also upgrade test 1133 to cover this case and clarify the man page about
form data quoting.
Bug: https://github.com/curl/curl/issues/2022
Reported-By: omau on github
|