|
available.
|
|
was a bit too quick and broke test case 1101 with that change. The order of
some of the setups is sensitive. I now changed it slightly again.
|
|
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to the setup order
mistake, proxies would not be used correctly for a few protocols when picked up
from '[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
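A minimal sketch of the restored behavior, in C with a made-up proxy host and
URL: when no CURLOPT_PROXY is given, libcurl is expected to pick the proxy up
from the matching '[protocol]_proxy' environment variable.

  #include <stdlib.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;

    /* made-up proxy host; no CURLOPT_PROXY is set below, so libcurl
       should fall back to this environment variable */
    setenv("ftp_proxy", "http://proxy.example.com:3128", 1);

    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/README");
      curl_easy_perform(curl); /* expected to go through the proxy */
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }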
|
|
|
|
on FTP errors in the transient 5xx range. Transient FTP errors are in the
4xx range. The code itself only tried on 5xx errors that occurred _at login_.
Now the retry code retries on all FTP transfer failures that ended with a
4xx response.
(http://curl.haxx.se/bug/view.cgi?id=2911279)
|
|
|
|
accessing already freed memory and thus crash when using HTTPS (with
OpenSSL), multi interface and the CURLOPT_DEBUGFUNCTION and a certain order
of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
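For reference, a hedged sketch of the kind of setup involved (placeholder URL,
not the reporter's exact program): an HTTPS transfer driven by the multi
interface with a CURLOPT_DEBUGFUNCTION callback installed, then cleaned up.

  #include <stddef.h>
  #include <curl/curl.h>

  /* trivial debug callback, see CURLOPT_DEBUGFUNCTION(3) */
  static int my_trace(CURL *handle, curl_infotype type,
                      char *data, size_t size, void *userp)
  {
    (void)handle; (void)type; (void)data; (void)size; (void)userp;
    return 0;
  }

  int main(void)
  {
    CURL *easy;
    CURLM *multi;
    int running = 1;

    curl_global_init(CURL_GLOBAL_ALL);
    easy = curl_easy_init();
    multi = curl_multi_init();

    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L);
    curl_easy_setopt(easy, CURLOPT_DEBUGFUNCTION, my_trace);

    curl_multi_add_handle(multi, easy);
    while(running)
      curl_multi_perform(multi, &running); /* simplified: no select() wait */

    /* one possible cleanup order */
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }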
|
|
with unknown size. Previously it was only used for posts with a known size
larger than 1024 bytes.
|
|
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
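A small illustration with placeholder URL and data: the "Expect:" header as
controlled from an application through CURLOPT_HTTPHEADER, which is what the
expect100header state needs to reflect. An empty value removes the header,
while "Expect: 100-continue" forces it.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;

    /* an empty value removes the header libcurl would otherwise add */
    hdrs = curl_slist_append(hdrs, "Expect:");

    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_slist_free_all(hdrs);
    return 0;
  }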
|
|
PEM format. Required by several stunnel versions used by our test harness.
|
|
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
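The libcurl side of this is CURLOPT_TFTP_BLKSIZE; a minimal sketch with a
made-up server name:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/boot.img");
      /* ask the server for 1024-byte blocks instead of the default 512 */
      curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1024L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }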
|
|
meter/callback during FTP command/response sequences. It turned out it was
really lame before and now the progress meter SHOULD get called at least
once per second.
|
|
finer granularity control when generating src and lib makefiles.
|
|
though it failed to write a very small download to disk (done in a single
fwrite call). It turned out to be because fwrite() returned success, but
there was insufficient error-checking for the fclose() call, which tricked
curl into believing things were fine.
|
|
closed by libcurl before the SSL lib was shut down, and it may then write to
its socket. Detected to happen at least with OpenSSL builds.
|
|
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
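A sketch of the scenario with placeholder proxy and URLs: two requests over
the same proxy on one handle, the first tunneled and the second not, where the
second must not pick the tunneled connection out of the cache.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");

      /* first request: CONNECT tunnel through the proxy */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one");
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      curl_easy_perform(curl);

      /* second request: plain proxy use; must not reuse the tunneled
         connection kept alive from the first transfer */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two");
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 0L);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }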
|
|
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
|
|
its pkg-config file. So -Wl stuff ended up in the .pc file, which is really
bad, and breaks if there are multiple -Wl in our LDFLAGS (which are in
PTXdist). bug #2893592 (http://curl.haxx.se/bug/view.cgi?id=2893592)
|
|
(and in particular the list of required libraries) even if a path is given
as argument to --with-ssl
|
|
arguments when curl was built
|
|
--with-nss is set but not "yes".
I think we can still improve that to check for pkg-config in that path etc,
but at least this patch brings back the same functionality we had before.
|
|
the client certificate. It also disables the key name test, as some engines
can select a private key/cert automatically (when there is only one key
and/or certificate on the hardware device used by the engine).
|
|
won't be reused unless the protection levels for peer and host verification match.
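Sketched with a placeholder URL, and assuming the levels in question are the
usual CURLOPT_SSL_VERIFYPEER/CURLOPT_SSL_VERIFYHOST settings: two transfers on
one handle with different settings should end up on separate connections.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* first transfer: full verification */
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/a");
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
      curl_easy_perform(curl);

      /* second transfer: verification relaxed, so the previous connection
         must not be picked from the connection cache */
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/b");
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }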
|
|
a broken TLS server. However, it does not happen if the SSL version is selected
manually. The approach was originally taken from PSM. Kaspar Brand helped me
to complete the patch. Original bug reports:
https://bugzilla.redhat.com/525496
https://bugzilla.redhat.com/527771
|
|
closed NSPR descriptor. The issue was hard to find, reported several times
before and always closed unresolved. More info at the RH bug:
https://bugzilla.redhat.com/534176
|
|
and GNU GSS installed due to a missing mutual exclusion of header files in
the Kerberos 5 code path. He also verified that my patch worked for him.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry
in the DNS cache would linger too long if the request that added it was in
use that long. He also provided the patch that now makes libcurl capable of
still doing a request while the DNS hash entry may get timed out.
|
|
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
|
|
in non-blocking mode.
|
|
curl.h, adjusting the auto-makefiles' include path, to enhance portability to
OSes without an orthogonal directory tree structure, such as OS/400.
|
|
wrong percentage for small files, most notably for <1000 bytes, and could
easily end up showing more than 100% at the end. It also didn't show any
percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
|
|
|
|
CURLINFO_SIZE_DOWNLOAD (the -w variable size_download) didn't work when
getting data from LDAP!
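From a program the same counter is read with curl_easy_getinfo() as below (the
LDAP URL is only an example):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double dlsize = 0.0;
      curl_easy_setopt(curl, CURLOPT_URL,
                       "ldap://ldap.example.com/dc=example,dc=com");
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &dlsize);
        printf("downloaded %.0f bytes\n", dlsize);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }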
|
|
download was 0 bytes, as libcurl would then return the size as unknown (-1)
and not 0. I wrote a fix and test case 566 to verify it.
|
|
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
|
|
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
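A condensed sketch of that combination, with placeholder URL and credentials:
a POST fed from a read callback, Digest auth and a forced chunked
transfer-encoding (the seek callback is only there so the request body can be
resent after the 401).

  #include <stdio.h>
  #include <string.h>
  #include <curl/curl.h>

  struct payload { const char *data; size_t pos; };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    struct payload *p = userp;
    size_t room = size * nitems;
    size_t left = strlen(p->data) - p->pos;
    size_t chunk = left < room ? left : room;
    memcpy(buf, p->data + p->pos, chunk);
    p->pos += chunk;
    return chunk;
  }

  /* Digest makes libcurl resend the request, so allow a rewind */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    struct payload *p = userp;
    if(origin != SEEK_SET)
      return CURL_SEEKFUNC_FAIL;
    p->pos = (size_t)offset;
    return CURL_SEEKFUNC_OK;
  }

  int main(void)
  {
    struct payload p = { "field=value", 0 };
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs =
      curl_slist_append(NULL, "Transfer-Encoding: chunked");

    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/post");
      curl_easy_setopt(curl, CURLOPT_POST, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &p);
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, &p);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_slist_free_all(hdrs);
    return 0;
  }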
|
|
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified tests 8 and 31 to verify this.
|
|
during configure.
|
|
|
|
by user 'koresh' introduced the --crlfile option to curl, which makes curl
tell libcurl about a file with CRL (certificate revocation list) data to
read.
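On the command line that is for instance 'curl --crlfile revoked.crl
https://example.com/'. In libcurl terms this is CURLOPT_CRLFILE; a small
sketch with made-up file names:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* file names below are only examples */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "ca-bundle.crt");
      curl_easy_setopt(curl, CURLOPT_CRLFILE, "revoked.crl");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }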
|
|
that now makes curl_getdate(3) actually handle RFC 822 formatted dates that
use the "single letter military timezones".
http://www.rfc-ref.org/RFC-TEXTS/822/chapter5.html has the details.
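A quick illustration, using a made-up date with the military zone letter 'Z'
(UTC):

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* "Z" is one of the single letter military time zones */
    time_t t = curl_getdate("Fri, 11 Dec 2009 14:30:00 Z", NULL);
    if(t != (time_t)-1)
      printf("parsed into %ld seconds since the epoch\n", (long)t);
    return 0;
  }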
|
|
data!
|
|
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem that
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
|
|
libcurl called NSS to close the SSL "session", it also closed the actual
socket.
|
|
|
|
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out exactly
why.
|
|
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
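A hedged sketch of that idea in plain socket calls (the function name and the
desired size are placeholders, not the actual patch): only call setsockopt()
when the current SO_SNDBUF is smaller than what is wanted.

  #include <sys/types.h>
  #include <sys/socket.h>

  /* grow the socket's send buffer to want_size, but never shrink it */
  static void grow_sndbuf(int sockfd, int want_size)
  {
    int cur = 0;
    socklen_t len = sizeof(cur);

    if(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &cur, &len) == 0 &&
       cur >= want_size)
      return; /* already large enough, leave it alone */

    setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &want_size, sizeof(want_size));
  }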
|
|
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow users to fairly easily rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server by mistake or willingly sends a header that is more or
less without an end. The limit is set to 100K.
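Since the define sits in the public curl.h, an application can rely on it; a
sketch of a header callback (placeholder URL) where each complete header it
gets handed is at most CURL_MAX_HTTP_HEADER bytes:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t header_cb(char *buffer, size_t size, size_t nitems,
                          void *userp)
  {
    size_t len = size * nitems;
    (void)buffer; (void)userp;
    /* each header passed here stays within the 100K limit */
    printf("header of %lu bytes (limit %d)\n",
           (unsigned long)len, CURL_MAX_HTTP_HEADER);
    return len;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }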
|
|
saving received cookies with no given path, if the path in the request had a
query part, that is, a question mark (?) and the characters to the right of
it. I wrote test case 1105 and fixed this problem.
|
|
transfer.c for blocking. It is currently used only by SCP and SFTP protocols.
This enhancement resolves an issue with 100% CPU usage during SFTP upload,
reported by Vourhey.
|