Age | Commit message | Author |
|
systems
|
|
transfers. They are done on non-Windows systems and translate CRLF to LF.
|
|
code rearrangement to fit the future better.
|
|
attempt to silence picky compilers when assigning data pointers to a function
pointer variable
|
|
(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to run fine with these changes.
Since we now properly include the host name in the Type-2 message, the test
cases now only compare parts of that chunk.
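
For illustration only, here is a minimal sketch (not curl's actual code) of the
NT hash computation referred to above: an MD4 digest over the password encoded
as UTF-16LE. It assumes OpenSSL's MD4() and a plain-ASCII password.

  #include <string.h>
  #include <openssl/md4.h>

  /* Hypothetical helper: compute the NTLM "NT hash" of an ASCII password.
     Real code must convert the password to UTF-16LE properly. */
  static void nt_hash(const char *password, unsigned char out[16])
  {
    unsigned char blob[2 * 128];
    size_t i, len = strlen(password);
    if(len > 128)
      len = 128;
    for(i = 0; i < len; i++) {
      blob[2 * i] = (unsigned char)password[i]; /* naive widening to UTF-16LE */
      blob[2 * i + 1] = 0;
    }
    MD4(blob, 2 * len, out); /* the 16-byte NT hash */
  }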
|
|
(when using OpenSSL).
|
|
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection, which it was not designed for, and it thus tried to call free() on
an already freed memory area!
|
|
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
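
A minimal sketch of how an application could use this, assuming the option is
CURLOPT_CONNECT_ONLY and the socket is fetched with CURLINFO_LASTSOCKET; the
URL is made up:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long sockfd = -1;
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
      if(curl_easy_perform(curl) == CURLE_OK)
        /* hand the connected socket over to the application */
        curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      /* ... the app can now use sockfd with send()/recv()/select() ... */
      curl_easy_cleanup(curl);
    }
    return 0;
  }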
|
|
curl tool with --local-port. It plainly and simply sets the range of ports to
bind the local end of connections to. Implemented due to popular demand.
Not extensively tested. Please let me know how it works.
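
On the library side this presumably maps to options along the lines of
CURLOPT_LOCALPORT and CURLOPT_LOCALPORTRANGE; a rough sketch with made-up
values:

  #include <curl/curl.h>

  static void bind_local_ports(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 32000L);   /* first port to try */
    curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 16L); /* how many to try */
  }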
|
|
(CURLOPT_FTPPORT) didn't work for IPv6-enabled curls if the IP wasn't a
"native" IP, while it worked fine for IPv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it and asking on the mailing list didn't
reveal anyone else that could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
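
A small sketch of requesting active mode via CURLOPT_FTPPORT; "-" tells
libcurl to use the control connection's own local address, and the host name
is made up:

  #include <curl/curl.h>

  static CURLcode active_ftp_get(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-"); /* EPRT/PORT instead of PASV */
    return curl_easy_perform(curl);
  }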
|
|
not supporting it. It hasn't been functioning for years anyway, so this is
just finally stating what was already true. And a cleanup at the same time.
|
|
given subdirs, libcurl would still "remember" the full path as if it were the
directory libcurl is currently in, so that the next curl_easy_perform() would
get really confused if it tried the same path again - as it would not issue
any CWD commands at all, assuming it is already in the "proper" dir.
Starting now, a failed CWD command sets a flag that prevents the path from
being "remembered" after returning.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs did not allow
being set to 0, which should make curl return an error code on the first
Location: found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
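
A brief sketch of the new possibility; the URL is made up:

  #include <curl/curl.h>

  static CURLcode refuse_redirects(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirecting-page");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L); /* -1 would mean no limit */
    return curl_easy_perform(curl); /* now fails on the first Location: */
  }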
|
|
protocol sockets even if the resolved address may say otherwise
|
|
added. TODO: add them to the docs, add a TFTP server to the test suite, and
add TFTP to the list of protocols wherever those are mentioned.
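
Once built in, using it should look like any other protocol; a hypothetical
example with a made-up host and file name:

  #include <curl/curl.h>

  static CURLcode tftp_get(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/boot.img");
    return curl_easy_perform(curl);
  }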
|
|
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of the transfer. I wrote test case 269, which
runs a simple check that this works.
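
Assuming the corresponding library option is CURLOPT_IGNORE_CONTENT_LENGTH, a
sketch with a made-up URL:

  #include <curl/curl.h>

  static CURLcode get_from_old_apache(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/huge-file.iso");
    curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
    return curl_easy_perform(curl); /* ends when the server closes the socket */
  }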
|
|
trailer is then sent to the normal header callback/stream.
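
A rough sketch of where such trailer data ends up, assuming the regular header
callback is installed:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t on_header(char *data, size_t size, size_t nitems, void *userp)
  {
    (void)userp;
    fwrite(data, size, nitems, stderr); /* headers and any trailers land here */
    return size * nitems;
  }

  static void install_header_callback(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
  }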
|
|
.netrc, and when following a Location: header the subsequent requests didn't
properly use the auth found in the .netrc file. Added test case 257 to verify
my fix.
|
|
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, the internals should not need to know which SSL layer is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls.
|
|
up in several chunks when read.
|
|
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
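
A sketch of the latter, assuming an SSPI-enabled build where a lone ":" as
user:password asks libcurl to use the current user's credentials:

  #include <curl/curl.h>

  static void ntlm_as_current_user(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);
    curl_easy_setopt(curl, CURLOPT_USERPWD, ":"); /* current user + password */
  }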
|
|
The tag 'before_ftp_statemachine' was set just before this commit in case
of future need.
|
|
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
|
|
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an account
string after PASS has been sent. The client responds with "ACCT [account
string]".) Added test cases 228 and 229 to verify the functionality. Updated
the test FTP server to support ACCT somewhat.
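
A short sketch; the host and account string are made up:

  #include <curl/curl.h>

  static CURLcode ftp_with_account(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "my-account"); /* sent as ACCT */
    return curl_easy_perform(curl);
  }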
|
|
select() overhaul fix.
|
|
#1098843. In short, a shared DNS cache was set up for a multi handle and when
the shared cache was deleted before the individual easy handles, the latter
cleanups caused read/writes to already freed memory.
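
For reference, a sketch of sharing a DNS cache (shown with an easy handle for
brevity) and of the cleanup order that avoids the scenario above:

  #include <curl/curl.h>

  int main(void)
  {
    CURLSH *share = curl_share_init();
    CURL *curl = curl_easy_init();

    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
    curl_easy_setopt(curl, CURLOPT_SHARE, share);
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);   /* clean up the handles first... */
    curl_share_cleanup(share); /* ...and the shared cache last */
    return 0;
  }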
|
|
a precedence problem with the zlib header. See CHANGES for details.
|
|
ssluse.*: Added SSL_strerror(). Curl_SSL_engines_list() now returns a slist
which must be freed by the caller.
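
On the public side this is presumably reached through CURLINFO_SSL_ENGINES; a
sketch of listing and freeing the result:

  #include <stdio.h>
  #include <curl/curl.h>

  static void list_ssl_engines(CURL *curl)
  {
    struct curl_slist *engines = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_SSL_ENGINES, &engines) == CURLE_OK) {
      struct curl_slist *e;
      for(e = engines; e; e = e->next)
        puts(e->data);
      curl_slist_free_all(engines); /* the caller frees the returned list */
    }
  }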
|
|
UrlState sub-struct. Also made the engine_list exist in non-SSL builds so
that curl still builds.
|
|
Added Curl_SSL_engines_list() and cleaned up the SSL code in url.c
(no HAVE_OPENSSL_x etc.).
|
|
clash with djgpp ioctl() macro in setup.h.
|
|
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvement
in optimizing the CWD sending when the dirs are only slightly different.
Added tests 210, 211 and 212 to verify these changes. I had to improve the
test script too and add a new primitive to the test file format.
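
A sketch of a case that benefits: two files from the same directory fetched
over one handle, so the second request reuses the connection and skips the
redundant CWDs. Host and paths are made up.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/dir/one.txt");
      curl_easy_perform(curl);

      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/dir/two.txt");
      curl_easy_perform(curl); /* same dir: no long CWD series this time */

      curl_easy_cleanup(curl);
    }
    return 0;
  }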
|
|