(when using OpenSSL).
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection; it was not designed for that and thus tried to call free() on
an already freed memory area!
(http://curl.haxx.se/bug/view.cgi?id=1431750) helped me identify and fix two
different but related bugs:
1) Removing an easy handle from a multi handle before the transfer is done
could leave a connection in the connection cache for that handle that is
in a state that isn't suitable for re-use. A subsequent re-use could then
read from a NULL pointer and segfault.
2) When an easy handle was removed from the multi handle, there could be an
outstanding c-ares DNS name resolve request. When the response arrived,
it caused havoc since the connection struct it "belonged" to could've
been freed already.
Now Curl_done() is called when an easy handle is removed from a multi handle
prematurely (that is, before the transfer has completed). Curl_done() also
makes sure to cancel any outstanding c-ares requests.
type to the already provided type CURLPROXY_SOCKS4.
I added a --socks4 option that works like the current --socks5 option but
instead uses the SOCKS4 protocol.
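A minimal libcurl sketch of what this maps to, assuming the standard
CURLOPT_PROXYTYPE option; the proxy address used here is made up:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* made-up proxy address, replace with a real SOCKS4 proxy */
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
      /* speak the SOCKS4 protocol to the proxy */
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

The command line equivalent: curl --socks4 proxy.example.com:1080
http://example.com/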
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
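The option name sits on the truncated line above, but this reads like the
connect-only mode; a sketch assuming the pair CURLOPT_CONNECT_ONLY and
CURLINFO_LASTSOCKET:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long sockfd = -1;
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* connect to the remote host, then do no transfer at all */
      curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
      if(curl_easy_perform(curl) == CURLE_OK)
        /* extract the connected socket for the app's own use */
        curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      curl_easy_cleanup(curl);
    }
    return 0;
  }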
curl tool with --local-port. Plain and simple: it sets the range of ports to
bind the local end of connections to. Implemented due to popular demand.
Not extensively tested; please let me know how it works.
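A minimal sketch, assuming the matching libcurl options CURLOPT_LOCALPORT
and CURLOPT_LOCALPORTRANGE:

  #include <curl/curl.h>

  /* bind the local end of the connection to a port in 4000-4200 */
  void set_local_ports(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 4000L);     /* first port */
    curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 200L); /* ports to try */
  }

From the command line: curl --local-port 4000-4200 http://example.com/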
(CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
"native" IP, while it worked fine for ipv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it, and asking on the mailing list didn't
reveal anyone else who could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
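A minimal sketch of requesting an active-mode transfer through
CURLOPT_FTPPORT (the URL is a made-up example):

  #include <curl/curl.h>

  void active_ftp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    /* "-" lets libcurl pick the address itself; an IP address or a
       host name works too */
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
  }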
not supporting it. It hasn't been functioning for years anyway, so this is
just finally stating what already was true. And a cleanup at the same time.
password of 127 bytes or less embedded in a URL, where the code actually
uses a 255-byte buffer for it! Now modified to use the full buffer size.
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which would then make curl return an error code on the first
Location: found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
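A minimal sketch of the new semantics, paired with CURLOPT_FOLLOWLOCATION:

  #include <curl/curl.h>

  void limit_redirects(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* 0 = fail on the first Location:, -1 = follow infinitely many */
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
  }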
it, it could then actually crash. Presumably, this concerns FTP
connections. http://curl.haxx.se/bug/view.cgi?id=1330310
now contain the word "proxy" if the hostname is in fact a proxy. This will
help users detect situations where they mistakenly use a proxy.
protocol sockets even if the resolved address may say otherwise
added. TODO: add them to the docs, add a TFTP server to the test suite, and
add TFTP to the list of protocols wherever those are mentioned.
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that still
have problems serving files larger than 2 or 4 GB. When this option is
enabled, curl will simply wait for the server to close the connection to
signal end of transfer. I wrote test case 269 that runs a simple check that
this works.
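A minimal sketch, assuming the corresponding libcurl option is
CURLOPT_IGNORE_CONTENT_LENGTH:

  #include <curl/curl.h>

  void ignore_content_length(CURL *curl)
  {
    /* don't trust the Content-Length: header; instead wait for the
       server to close the connection to signal end of transfer */
    curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
  }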
CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
write the result to a given cookie jar and then never actually call
curl_easy_perform() - the given file(s) to read were never read, but the
output file was written and thus it caused a "funny" result.
- While doing some tests for the bug above, I noticed that Firefox generates
large numbers (for the expire time) in the cookies.txt file and libcurl
didn't treat them properly. Now it does.
HTTP proxy if an FTP URL was given. libcurl now properly switches to pure HTTP
internally when an HTTP proxy is used, even for FTP URLs. The problem would
also occur with other multi-pass auth methods.
not!
modifies it
set to 1, CURLOPT_NOBODY will now automatically be set to 0.
simple interface to extracting and setting cookies in libcurl's internal
"cookie jar". See the new cookie_interface.c example code.
Pointed out by Bjorn Reese.
trailer is then sent to the normal header callback/stream.
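Assuming this refers to trailers of chunked HTTP responses being handed to
the ordinary header callback, a sketch:

  #include <stdio.h>
  #include <curl/curl.h>

  /* receives regular headers and, per this change, any trailers too */
  static size_t on_header(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)userp;
    fwrite(ptr, size, nmemb, stdout);
    return size * nmemb;
  }

  void install_header_cb(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
  }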
with CURLOPT_PROXY can use an http:// prefix and user + password. The user
and password fields are now also URL decoded properly.
Test case 264 added to verify.
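A minimal sketch; the proxy host is made up and the %65 illustrates the URL
decoding (it becomes an 'e' before use):

  #include <curl/curl.h>

  void set_proxy(CURL *curl)
  {
    /* scheme prefix plus URL-encoded user and password now work */
    curl_easy_setopt(curl, CURLOPT_PROXY,
                     "http://user:s%65cret@proxy.example.com:8080");
  }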
address was not possible to use. It is now, but requires it to be written
RFC2732-style, within brackets - which incidentally is how you enter
numerical IPv6 addresses in URLs. Test case 263 added to verify.
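The truncated first line names what actually takes the address, but the
bracket form itself can be sketched with a URL (2001:db8:: is a
documentation prefix):

  #include <curl/curl.h>

  void set_ipv6_url(CURL *curl)
  {
    /* RFC2732-style: the numerical IPv6 address goes within brackets */
    curl_easy_setopt(curl, CURLOPT_URL, "http://[2001:db8::1]:8080/");
  }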
"http://somehost?data" as it added a slash too much in the request ("GET
/?data/"...). Added test case 260 to verify.
warnings like:
'x' may be used uninitialized in this function.
VS2005.
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
compiler warning
used the default port. He was right. I fixed the problem and added the test
cases 521, 522 and 523 to verify the fix.
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, internals should not need to know which SSL layer is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls.
from url.c: Curl_disconnect().
for SSPI support. The contents of the file have been moved into the krb4.h
file.