Age | Commit message | Author |
|
with threaded resolver enabled (Windows default configuration) that would
get triggered when a curl handle is closed while doing DNS resolution.
|
from hostip.h to setup.h in order to allow proper inclusion in any file.
This represents no functional change at all in which resolver is used;
everything still works as usual, and there is no difference in behavior
internally or externally.
|
s/RTPDATA/INTERLEAVEDATA/
|
libcurl options for controlling what to get and how to receive possibly
interleaved RTP data. Initial commit.
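A minimal sketch of a receiver built on these options, assuming the names as
they appear in the public libcurl API (CURLOPT_INTERLEAVEFUNCTION,
CURLOPT_INTERLEAVEDATA, CURLOPT_RTSP_REQUEST, CURL_RTSPREQ_RECEIVE); the URL
is made up and a real session would issue SETUP/PLAY requests first:

#include <curl/curl.h>

/* called once per interleaved RTP packet */
static size_t rtp_recv(void *packet, size_t size, size_t nmemb, void *userp)
{
  (void)packet; (void)userp;
  return size * nmemb; /* consume the packet */
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "rtsp://example.com/media");
    curl_easy_setopt(curl, CURLOPT_INTERLEAVEFUNCTION, rtp_recv);
    curl_easy_setopt(curl, CURLOPT_INTERLEAVEDATA, NULL); /* userp for the callback */
    /* with SETUP/PLAY already done on this session, drain interleaved data */
    curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_RECEIVE);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}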
|
the server anymore
|
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
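A minimal sketch of how the new option is meant to be used from a libcurl
program (the server name is made up); from the command line the equivalent is
curl --ftp-pret ftp://...:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/dir/");
    /* send PRET before PASV/EPSV; only servers such as drftpd accept it */
    curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}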
|
receivers, and made the command line tool support specifying the option
multiple times
|
really didn't belong there and had no real point.
|
struct, and instead use the already stored string in the handler struct.
|
was a bit too quick and broke test case 1101 with that change. The order of
some of the setups is sensitive. I now changed it slightly again.
|
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to the setup order
mistake proxies would not be used properly for a few protocols when picked up
from '[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
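A rough sketch of the behavior the fix restores, assuming a POSIX setenv()
and made-up proxy and URL: with no explicit proxy option set, libcurl falls
back to the '[protocol]_proxy' environment variables.

#include <stdlib.h>
#include <curl/curl.h>

int main(void)
{
  /* no CURLOPT_PROXY is set below, so libcurl should pick this one up */
  setenv("ftp_proxy", "http://proxy.example.com:3128", 1);

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}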
|
closed by libcurl before the SSL lib was shut down and it may write to its
socket. Detected to happen at least with OpenSSL builds.
|
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
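A sketch of the scenario described, with made-up proxy and URLs; after the
fix the second transfer must get its own connection instead of reusing the
tunneled one:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L); /* CONNECT tunnel */
    curl_easy_perform(curl);

    /* same handle and proxy, but no tunnel this time */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 0L);
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
  }
  return 0;
}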
|
end up with entries that wouldn't time out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
|
won't be reused unless protection level for peer and host verification match.
|
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
|
(http://curl.haxx.se/bug/view.cgi?id=2835196), fixing a few compiler
warnings when mixing ints and bools.
|
CURLOPT_NOPROXY to "*", or to a host that should not use a proxy, I actually
could still end up using a proxy if a proxy environment variable was set.
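A minimal sketch of the intended behavior (URL made up): CURLOPT_NOPROXY set
to "*", or to a list matching the host, must win over any proxy environment
variable.

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://internal.example.com/");
    /* "*" disables proxy use entirely, even if http_proxy is set in the
       environment; a host list such as "example.com,localhost" also works */
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "*");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}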
|
They introduce known_host support for SSH keys to libcurl. See docs for
details.
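The entry does not spell out the option names, so purely as an illustration
this sketch assumes the CURLOPT_SSH_KNOWNHOSTS option documented for this
feature (URL and path made up); a CURLOPT_SSH_KEYFUNCTION callback can
further accept or reject individual keys.

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
    /* check the server key against this known_hosts file */
    curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                     "/home/user/.ssh/known_hosts");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}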
|
provided in the url, add it there (squid needs this).
|
contributed a range of patches to fix them.
|
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
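Not taken from the curl sources; just a small sketch of how the two symbols
are meant to gate different code:

#include <stdio.h>

int main(void)
{
#ifdef CURLDEBUG
  /* only in --enable-curldebug builds: memory tracking specific code */
  puts("curl memory tracking enabled");
#endif
#ifdef DEBUGBUILD
  /* only in --enable-debug builds: general debug-only code */
  puts("debug build");
#endif
  return 0;
}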
|
the auth credentials back in 7.19.0 and earlier while now you have to set ""
to get the same effect. His patch brings back the ability to use NULL.
|
no_proxy which made it not skip the proxy if the URL entered contained a
user name. I added test case 1101 to verify.
|
unconditionally
|
to it in the actual connect call and not asynchronously.
|
'connected' truly is false when the socks connect fails.
Curl_done() fixed for the check-conn->bits.done-before-Curl_getoff_all_pipelines case
|
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem
connecting to SOCKS proxies when using the multi interface. It turned out to
almost not work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
|
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
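A sketch of the resulting behavior on a plain HTTP handle (URL made up),
where the method is now decided only when the request starts:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
    curl_easy_perform(curl);               /* sent as HEAD */

    curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
    curl_easy_perform(curl);               /* back to GET, not stuck on HEAD */

    curl_easy_cleanup(curl);
  }
  return 0;
}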
|