.... and avoid advancing the pointer, to prevent a read beyond the end
of the buffer.
Detected by OSS-Fuzz
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=5251
Assisted-by: Max Dymond
|
|
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to that handle: each item of the mime tree holds a
pointer to the easy handle, and following the parent pointers of the
mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, so it needs to be cloned upon easy handle duplication.
There is no way for the caller to get a handle on the duplicated mime
tree: it is therefore set to be destroyed automatically when the new
easy handle is freed.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
Closes #2235
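As an illustration of the binding described above, a sketch using the
public API (URL and field values are made up; this is not code from the
commit): the duplicated handle gets its own clone of the mime tree and
frees it automatically, while the original tree stays the caller's to
free.

  #include <curl/curl.h>

  int main(void)
  {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *easy = curl_easy_init();
    curl_mime *mime = curl_mime_init(easy);       /* tree bound to 'easy' */
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_name(part, "greeting");             /* made-up field name */
    curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);

    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(easy, CURLOPT_MIMEPOST, mime);

    /* The duplicate receives a private clone of the mime tree; the
       caller never gets a pointer to it, so it is freed together with
       'dup'. */
    CURL *dup = curl_easy_duphandle(easy);

    curl_easy_cleanup(dup);   /* also frees the cloned mime tree */
    curl_easy_cleanup(easy);
    curl_mime_free(mime);     /* the original tree is freed by the caller */
    curl_global_cleanup();
    return 0;
  }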
|
|
... and make the max filesize check trigger if the value is too big.
Updates test 178.
Reported-by: Brad Spencer
Fixes #2212
Closes #2223
|
|
- Enable execute permission (chmod +x)
- Change interpreter to /usr/bin/env perl
Closes https://github.com/curl/curl/pull/2222
|
|
.. because limits.h presence isn't optional; it is required by C89.
Ref: http://port70.net/~nsz/c/c89/c89-draft.html#2.2.4.2
Closes https://github.com/curl/curl/pull/2215
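Since C89 guarantees <limits.h>, it can be included unconditionally
rather than guarded by a configure-time check; a trivial illustration:

  #include <limits.h>   /* guaranteed by C89, no HAVE_LIMITS_H needed */
  #include <stdio.h>

  int main(void)
  {
    printf("INT_MAX on this platform: %d\n", INT_MAX);
    return 0;
  }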
|
|
The decoding loop implementation did not handle the case where all
received data is consumed by the Brotli decoder while the amount of
decoded data internally held by the decoder is greater than
CURL_MAX_WRITE_SIZE. For content with an unencoded length greater than
CURL_MAX_WRITE_SIZE this could result in the loss of data at the end of
the content.
Closes #2194
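A minimal sketch of the fixed pattern, assuming the brotli decoder API
and a hypothetical write_chunk() sink: the loop keeps draining the
decoder in CURL_MAX_WRITE_SIZE-sized chunks even after all received
input has been consumed, which is where data could previously be lost.

  #include <brotli/decode.h>
  #include <curl/curl.h>      /* for CURL_MAX_WRITE_SIZE */
  #include <stddef.h>
  #include <stdint.h>

  /* hypothetical helper passing one decoded chunk downstream */
  extern void write_chunk(const uint8_t *buf, size_t len);

  static int decode_all(BrotliDecoderState *st,
                        const uint8_t *in, size_t inlen)
  {
    uint8_t out[CURL_MAX_WRITE_SIZE];
    size_t avail_in = inlen;
    const uint8_t *next_in = in;
    BrotliDecoderResult r;

    do {
      size_t avail_out = sizeof(out);
      uint8_t *next_out = out;
      r = BrotliDecoderDecompressStream(st, &avail_in, &next_in,
                                        &avail_out, &next_out, NULL);
      if(r == BROTLI_DECODER_RESULT_ERROR)
        return -1;
      write_chunk(out, sizeof(out) - avail_out);
      /* keep looping while the decoder still holds pending output,
         even though avail_in may already be zero */
    } while(r == BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT ||
            BrotliDecoderHasMoreOutput(st));
    return 0;
  }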
|
|
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks of curl-generated mime part headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
|
|
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output (see the sketch after this list).
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
Closes #2068
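A hedged sketch of the Z_BLOCK idea for the raw-deflate case, assuming
zlib and a hypothetical write_chunk() sink; an illustration, not the
actual libcurl code.

  #include <zlib.h>
  #include <string.h>

  extern void write_chunk(const unsigned char *buf, size_t len);

  static int inflate_raw(const unsigned char *in, size_t inlen)
  {
    unsigned char out[16384];
    z_stream z;
    int rc;

    memset(&z, 0, sizeof(z));
    if(inflateInit2(&z, -MAX_WBITS) != Z_OK)  /* negative bits: raw deflate */
      return -1;

    z.next_in = (Bytef *)in;
    z.avail_in = (uInt)inlen;
    do {
      z.next_out = out;
      z.avail_out = sizeof(out);
      /* Z_BLOCK stops at block boundaries so as much verified output as
         possible is emitted before a potential error */
      rc = inflate(&z, Z_BLOCK);
      write_chunk(out, sizeof(out) - z.avail_out);
    } while(rc == Z_OK && (z.avail_in || !z.avail_out));
    inflateEnd(&z);
    return (rc == Z_STREAM_END) ? 0 : -1;
  }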
|
|
This reverts commit 9ffad8eb1329bb35c8988115ac7ed85cf91ef955.
It was actually added rather recently in 8e8afa82cbb629 due to a crash
that would otherwise happen in the RTSP code. As I don't think we've
fixed that behavior yet, we better keep this work-around until we have
fixed it better.
|
|
Prune the DNS cache immediately after the DNS entry is unlocked in
multi_done. Timed-out entries will then get discarded in a more orderly
fashion.
Test 506 is updated.
Reported-by: Oleg Pudeyev
Fixes #2169
Closes #2170
|
|
That data is only ever used by the CURLOPT_INTERLEAVEFUNCTION callback
and that option isn't set or used by the curl tool!
Updates the 9 tests that verify --libcurl
Closes #2167
|
|
If the lock is released before the handling of the bundle is finished,
another thread may change it in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
|
|
For pop3/imap/smtp, added test 891 to somewhat verify the pop3
case.
For this, I enhanced the pingpong test server to be able to send back
responses with LF-only instead of always using CRLF.
Closes #2150
|
|
This SFTP test fails with the libssh back-end due to a failure to
verify the peer. Disable peer verification in the test, as that seems
to be the intention of the test.
Note that the libssh back-end automatically verifies the peer's host
using the default known_hosts file.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
|
|
This brings it in sync with the error code returned by the libssh
back-end.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
|
|
This also updates tests to expect the right error code.
The libssh2 back-end returns a CURLE_SSH error if the remote file is
not found. Expect CURLE_REMOTE_FILE_NOT_FOUND instead, which is what
the libssh back-end returns.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
|
|
The code would previously read beyond the end of the pattern string if
the match pattern ends with an open bracket when the default pattern
matching function is used.
Detected by OSS-Fuzz:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=4161
CVE-2017-8817
Bug: https://curl.haxx.se/docs/adv_2017-ae72.html
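A generic, hedged sketch of the kind of guard involved: stop scanning a
bracket expression at the terminating NUL so the scan can never run
past the end of the pattern (an illustration, not the actual
curl_fnmatch fix).

  #include <stddef.h>

  /* Return the index just past the closing ']', or -1 if the bracket
     expression is unterminated; never reads beyond the NUL. */
  static long scan_bracket(const char *pattern, size_t pos)
  {
    if(pattern[pos] != '[')
      return -1;
    pos++;
    if(pattern[pos] == '!' || pattern[pos] == '^')  /* optional negation */
      pos++;
    if(pattern[pos] == ']')                         /* leading literal ']' */
      pos++;
    while(pattern[pos] && pattern[pos] != ']')      /* stop at NUL */
      pos++;
    return pattern[pos] ? (long)(pos + 1) : -1;
  }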
|
|
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated tests 1034 and 1035 accordingly.
Fixes #2073
Closes #2092
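A minimal, hedged illustration of such a sanity check (not the exact
libcurl code): reject the name before it ever reaches the resolver.

  #include <string.h>

  /* Return 0 for an acceptable host name, -1 if it contains whitespace
     such as "127.0.0.1 moo". */
  static int hostname_ok(const char *name)
  {
    return strpbrk(name, " \t") ? -1 : 0;
  }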
|
|
... so that IPv6 addresses can be passed within brackets, like they
can for --connect-to and as they appear in URLs.
Added test 1324 to verify.
Reported-by: Alex Malinovich
Fixes #2087
Closes #2091
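A hedged example of the bracketed form via the library's CURLOPT_RESOLVE
option, to which the tool forwards --resolve arguments; the host name
and IPv6 address are placeholders, and the bracketed address form is
assumed to be accepted by the option just as it is by --resolve.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *resolve = NULL;

    resolve = curl_slist_append(resolve, "example.com:443:[2001:db8::1]");

    curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);

    curl_slist_free_all(resolve);
    curl_easy_cleanup(curl);
    return 0;
  }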
|
|
Follow-up to aadb7c7. Verified by new test 1263.
Closes #2072
|
|
This uses the external brotli library (https://github.com/google/brotli).
Brotli becomes a feature: an additional curl_version_info() bit and
structure fields are provided for it and CURLVERSION_NOW is bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accommodate the now configuration-dependent
contents of the Accept-Encoding header.
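A hedged example of checking the new feature bit at run time, assuming
a libcurl recent enough to carry the brotli fields in
curl_version_info_data:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

    if(info->features & CURL_VERSION_BROTLI)
      printf("brotli unencoding available (%s)\n",
             info->brotli_version ? info->brotli_version : "unknown");
    else
      printf("built without brotli support\n");
    return 0;
  }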
|
|
This is implemented as an output streaming stack of unencoders, with
the last one calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
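A conceptual sketch of such a stack, not curl's internal API: each
stage decodes its layer and hands the result to the next one, with the
client write callback at the bottom.

  #include <stdio.h>
  #include <string.h>

  struct writer {
    int (*write)(struct writer *w, const char *buf, size_t len);
    struct writer *next;              /* downstream stage */
  };

  /* Bottom of the stack: stands in for the client write callback. */
  static int client_write(struct writer *w, const char *buf, size_t len)
  {
    (void)w;
    fwrite(buf, 1, len, stdout);
    return 0;
  }

  /* Stand-in "unencoder": a real stage would inflate/un-brotli here and
     then pass the decoded bytes on. */
  static int passthrough_write(struct writer *w, const char *buf, size_t len)
  {
    return w->next->write(w->next, buf, len);
  }

  int main(void)
  {
    struct writer client = { client_write, NULL };
    struct writer decoder = { passthrough_write, &client };
    const char *body = "decoded body\n";

    /* received data enters at the top of the stack */
    return decoder.write(&decoder, body, strlen(body));
  }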
|
|
By properly keeping track of the last entry in the list of URLs/uploads
to handle, curl now avoids many pointless traversals of the list, which
speeds up many-URL handling *MASSIVELY* (several orders of magnitude on
100K URLs).
Added test 1291 to verify that it doesn't take ages, but the test suite
has no way to detect a "too slow" command.
Reported-by: arainchik on github
Fixes #1959
Closes #2052
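A generic sketch of the idea, not the tool's actual data structures:
remembering the tail makes each append O(1) instead of walking the
whole list from the head for every added URL.

  #include <stdlib.h>
  #include <string.h>

  struct node {
    char *url;
    struct node *next;
  };

  struct url_list {
    struct node *head;
    struct node *tail;        /* last entry: appended to directly */
  };

  static int append_url(struct url_list *l, const char *url)
  {
    struct node *n = calloc(1, sizeof(*n));
    if(!n)
      return -1;
    n->url = strdup(url);
    if(!n->url) {
      free(n);
      return -1;
    }
    if(l->tail)
      l->tail->next = n;      /* no traversal from the head needed */
    else
      l->head = n;
    l->tail = n;
    return 0;
  }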
|
|
Assisted-by: Per Lundberg
Fixes #2044
Closes #2046
Closes #2048
|
|
Test cleanup after OOM wasn't being consistently performed.
|
|
... which is valid according to documentation. Regression since
f121575c0b5f.
Verified now in test 501.
Reported-by: cbartl on github
Fixes #2038
Closes #2039
|
|
Also upgrade test 1133 to cover this case and clarify the man page
about form data quoting.
Bug: https://github.com/curl/curl/issues/2022
Reported-By: omau on github
|
|
Updated docs to include support for RFC7616
Signed-off-by: Florin <petriuc.florin@gmail.com>
Closes #1934
|
|
... instead of doing an infinite loop!
Added test 1162 to verify.
Reported-by: Max Dymond
Fixes #2015
Closes #2017
|
|
... since the 'tv' stood for timeval and this function does not return
a timeval struct anymore.
Also cleaned up the Curl_timediff*() functions to avoid typecasts and
cleaned up the descriptive comments.
Closes #2011
|
|
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32-bit and 64-bit time_t.
- Reprefixed the curlx_time functions to use Curl_*.
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
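A hedged sketch of the clamping idea for a millisecond difference,
using an illustrative struct and int64_t rather than the real
timediff_t plumbing:

  #include <stdint.h>
  #include <time.h>

  struct mytime {             /* illustrative curltime-like value */
    time_t tv_sec;
    int tv_usec;
  };

  static int64_t diff_ms(struct mytime newer, struct mytime older)
  {
    int64_t secs = (int64_t)newer.tv_sec - (int64_t)older.tv_sec;

    /* clamp before multiplying so the conversion to milliseconds cannot
       overflow in either direction */
    if(secs >= INT64_MAX / 1000)
      return INT64_MAX;
    if(secs <= INT64_MIN / 1000)
      return INT64_MIN;
    return secs * 1000 + (newer.tv_usec - older.tv_usec) / 1000;
  }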
|
|
They use $(TESTUTIL) and thus should use $(TESTUTIL_LIBS) too.
This fixes build failures on Fedora 13.
Closes #2006
|
|
... by using range checks. Among other things, this avoids undefined
behavior from a left shift that could happen on negative or very large
values.
Closes #1997
Detected by OSS-fuzz: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3694
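A generic, hedged sketch of the range-check approach when accumulating
a parsed number, with no shifts that could hit undefined behavior:

  #include <limits.h>

  /* Parse an unsigned decimal number; return -1 on overflow or if no
     digits are present. */
  static int parse_num(const char *p, unsigned long *out)
  {
    unsigned long val = 0;
    int digits = 0;

    while(*p >= '0' && *p <= '9') {
      unsigned long digit = (unsigned long)(*p - '0');
      if(val > (ULONG_MAX - digit) / 10)   /* would the next step overflow? */
        return -1;
      val = val * 10 + digit;
      p++;
      digits++;
    }
    if(!digits)
      return -1;
    *out = val;
    return 0;
  }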
|
|
See issue #1999
|
|
Even if OpenSSL is enabled, it might not be the default backend when
multi-ssl is enabled, causing the test to fail.
|
|