|
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC 2616 when a 301 response is received
after a non-GET request is made. The default libcurl behaviour is to change
the method to GET in the subsequent request (like it does for response code
302 - because that's what many/most browsers do), but with this
CURLOPT_POST301 option enabled it will do what the spec says and issue the
next request using the same method again, i.e. keep POST after a 301.
The curl tool got this option as --post301.
Test cases 1011 and 1012 were added to verify this.
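A minimal sketch of how an application might enable this; the URL and the
POST data below are made up for illustration:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=value");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* keep POST as the method even after a 301 redirect */
      curl_easy_setopt(curl, CURLOPT_POST301, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }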
|
|
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
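A hypothetical use of that combination: asking an FTP server for a file's
size without downloading it. The host and file name are invented, and the
size is read back here via CURLINFO_CONTENT_LENGTH_DOWNLOAD:

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double size = 0.0;
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* no transfer, just info */
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
        printf("remote size: %.0f bytes\n", size);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }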
|
|
o It looks for the NSS database first in the environment variable SSL_DIR,
then in /etc/pki/nssdb, then it initializes with no database if neither of
those exist.
o If the NSS PKCS#11 libnsspem.so driver is available then PEM files may be
loaded, including the ca-bundle. If it is not available then only
certificates already in the NSS database are used.
o Tries to detect whether a file or nickname is being passed in so the right
thing is done
o Added a bit of code to make the output more like the OpenSSL module,
including displaying the certificate information when connecting in
verbose mode
o Improved handling of certificate errors (expired, untrusted, etc)
The libnsspem.so PKCS#11 module is currently only available in Fedora
8/rawhide. Work will be done soon to upstream it. The NSS module will work
with or without it, all that changes is the source of the certificates and
keys.
|
|
Renamed the curl_ftpssl enum and its enumerated constants to curl_usessl,
creating macros for backward compatibility.
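A sketch of what the rename means for application code, with the companion
option name CURLOPT_USE_SSL assumed; the old CURLFTPSSL_* names remain
available as compatibility macros:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      /* formerly CURLFTPSSL_ALL */
      curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }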
|
|
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
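An illustrative sketch of what the copying allows: the local buffer can be
reused or go out of scope as soon as curl_easy_setopt() returns.

  #include <curl/curl.h>
  #include <string.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char url[64];
      strcpy(url, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* harmless now: libcurl keeps its own private copy of the string */
      memset(url, 0, sizeof(url));
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }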
|
|
of a socket after it has been closed, when the FTP-SSL data connection is taken
down.
|
|
a few internal identifiers to avoid conflicts, which could be useful on
other platforms.
|
|
* Move scp:// into a state machine so it won't block in multi mode
* When available use the full directory entry from the sftp:// server
|
|
libcurl. This also makes the options change name to --krb (from --krb4) and
CURLOPT_KRBLEVEL (from CURLOPT_KRB4LEVEL), but the old names still work.
|
|
and CURLOPT_NEW_DIRECTORY_PERMS. These control the permissions for files
and directories created on the remote server. CURLOPT_NEW_FILE_PERMS
defaults to 0644 and CURLOPT_NEW_DIRECTORY_PERMS defaults to 0755.
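A hypothetical SFTP upload showing the two options in use; the host, path
and file names are made up:

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    FILE *src = fopen("local.txt", "rb");
    if(curl && src) {
      curl_easy_setopt(curl, CURLOPT_URL,
                       "sftp://user@example.com/upload/remote.txt");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READDATA, src);
      /* ask for tighter-than-default permissions on anything created */
      curl_easy_setopt(curl, CURLOPT_NEW_FILE_PERMS, 0600L);
      curl_easy_setopt(curl, CURLOPT_NEW_DIRECTORY_PERMS, 0700L);
      curl_easy_perform(curl);
    }
    if(curl)
      curl_easy_cleanup(curl);
    if(src)
      fclose(src);
    return 0;
  }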
|
|
allocated when needed
|
|
libssh2_sftp_shutdown() and libssh2_session_free() can now return
LIBSSH2_ERROR_EAGAIN.
* Fix the _send() and _recv() return values so non-blocking works
|
|
LIBSSH2_APINO >= 200706012030. More to come...
|
|
can/will be used as it then makes the common cases save 16KB of data for each
easy handle that isn't used for pipelining.
|
|
function that deprecates the curl_multi_socket() function. With the new
function, the application tells libcurl what action was detected on the
socket it passes in. This gives a significant performance boost as it
allows libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
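A small sketch of the difference, with 'multi', 'fd' and 'readable' assumed
to come from the application's own event loop:

  #include <curl/curl.h>

  static void socket_ready(CURLM *multi, curl_socket_t fd, int readable)
  {
    int running = 0;
    int mask = readable ? CURL_CSELECT_IN : CURL_CSELECT_OUT;

    /* old style: curl_multi_socket(multi, fd, &running);
       new style: also tell libcurl what happened on the socket */
    curl_multi_socket_action(multi, fd, mask, &running);
  }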
|
|
easy handles are added to a multi handle, by avoiding the looping over all
the handles to find which one to remove.
|
|
the multi interface. Note that it still does a part of the connection in a
blocking manner.
|
|
"case label value exceeds maximum value for type" and
"comparison is always false due to limited range of data type"
Both triggered when using a bool variable as the switch variable
in a switch statement and using enums for the case targets.
|
|
different server behaviour
|
|
the left side of @ to make it short(er).
|
|
SSL/TLS layer. http://www.mozilla.org/projects/security/pki/nss/
|
|
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING which, if set to zero, disable libcurl's
internal decoding of content-encoded or transfer-encoded data. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
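A sketch of the libcurl side, leaving both kinds of decoding to whoever
receives the data (the URL is made up); the command line equivalent is the
--raw option mentioned above:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/compressed");
      curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
      curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }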
|
|
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, set the
timeouts with millisecond resolution instead. The only restriction is the
alarm() call (sometimes) used to abort name resolves, as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, which also
means that timeouts set in seconds get multiplied by 1000. The effect of
this is that no timeout can be set to more than 2^31 milliseconds (on 32-bit
systems), which equals 24.86 days. We probably couldn't before either, since
the code already did *1000 on the timeout values in several places.
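A brief sketch with arbitrary millisecond values that the older,
second-based CURLOPT_TIMEOUT/CURLOPT_CONNECTTIMEOUT options could not
express:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);        /* 2.5 s  */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 750L);  /* 0.75 s */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }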
|
|
the problem. The code now tries harder to use httproxy and proxy where
appropriate, as not all proxies are HTTP...
|
|
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
|
|
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
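A sketch of the option in a program; the host is made up and the
CURLFTPSSL_CCC_PASSIVE mode (letting the server shut down TLS) is assumed:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_CONTROL);
      /* drop TLS on the control connection once authentication is done */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC,
                       (long)CURLFTPSSL_CCC_PASSIVE);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }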
|
|
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
|
|
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
|
|
include the protocol bits of such actions, which currently only means FTP
|
|
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface, and it didn't work as
documented, so I reworked it somewhat and now it works in my tests.
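For reference, a sketch of rate limiting through the easy interface, which
is the code path the fix concerns; the URL and the byte-per-second caps are
arbitrary, and CURLOPT_MAX_RECV_SPEED_LARGE/CURLOPT_MAX_SEND_SPEED_LARGE are
assumed to be the libcurl options backing --limit-rate:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.iso");
      /* cap both directions at roughly 10000 bytes per second */
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                       (curl_off_t)10000);
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE,
                       (curl_off_t)10000);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }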
|
|
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
|
|
re-use connections (for pipelining) before the name resolving is done.
|
|
(when the resolving isn't completed yet) and not confuse it with a simple
connection re-use (non-pipelined).
|
|
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
precautions have been added to make sure it no longer does - test case 529
was added to verify this.
|
|
currently fits in the cache, to make the cache work better especially for
pipelining cases but also for "mere" (persistent) connection re-use.
|
|
handle that is part of a multi handle first removes the handle from the
stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
session-ID re-use on demand, since there obviously are broken servers out
there that misbehave when session-IDs are used.
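A sketch of turning the cache off for a transfer against such a server (the
host name is invented):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://broken.example.com/");
      /* do not try to re-use SSL session-IDs for this handle */
      curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }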
|
|
problem with it (SIGSEGV-style). It clearly showed that the existing
socket-state and state-difference function wasn't good enough so I rewrote
it and could then re-run Jeff's program without any crash. The previous
version clearly could fail to tell the application when a handle changed
from using one socket to using another.
While I was at it (as I could use this as a means to track this problem
down), I've now added a 'magic' number to the easy handle struct that is
inited at curl_easy_init() time and cleared at curl_easy_cleanup() time that
we can use internally to detect that an easy handle seems to be fine, or at
least not closed or freed (freeing in debug builds fills the area with 0x13
bytes, but in normal builds we can of course not assume any particular data
in the freed areas).
|
|
cache within the multi handle.
|
|
allow applications to set their own socket options.
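The entry's first line is cut off here; libcurl's sockopt callback
(CURLOPT_SOCKOPTFUNCTION) is assumed to be what it refers to. A sketch of
such a callback, run right after socket creation and before the connect:

  #include <curl/curl.h>
  #include <sys/socket.h>

  static int set_options(void *clientp, curl_socket_t fd,
                         curlsocktype purpose)
  {
    int on = 1;
    (void)clientp;
    (void)purpose;
    /* example: enable TCP keep-alive probes on the new socket */
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    return 0; /* zero tells libcurl to go ahead with the connect */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, set_options);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }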
|