PS: Once again, sorry if the added files have executable perms on Linux.
PS: Sorry if the added file has executable perms on Linux; I didn't find anything related to it...
library name under Win32 (added "_imp" for the dynamically linked version).
and mine. These are far from being functional yet.
PS: Hello world :)
strdup() call failed.
NSS is used. These ciphers were added in NSS 3.4 and must be enabled
explicitly.
Curl_blockread_all(). It is needed in code inside USE_WINDOWS_SSPI.
library is found to support it.
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies was not NULL (where data is the source handle). In the case
of a newly initialized handle which just had cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt call only appends the value to
data->change.cookielist, so duplicating this handle would not have the
cookie engine switched on.
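
(A minimal sketch of the scenario this fixes; the libcurl calls are real
API, the wrapper function name is illustrative:)

  #include <curl/curl.h>

  static CURL *dup_with_cookies(void)
  {
    CURL *src = curl_easy_init();
    if(!src)
      return NULL;
    /* enable the cookie engine without reading any cookie file */
    curl_easy_setopt(src, CURLOPT_COOKIEFILE, "");
    /* before this fix, the duplicate could end up with the cookie
       engine still off, since src->cookies was still NULL */
    return curl_easy_duphandle(src);
  }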
We also concluded that the slist functionality would be suitable for
being put in its own module rather than simply hanging out in lib/sendf.c,
so I created lib/slist.[ch] for them.
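
(For reference, the public face of that functionality is the curl_slist
API; a minimal usage sketch, with an illustrative header value:)

  #include <curl/curl.h>

  static void set_headers(CURL *handle)
  {
    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "X-Example: 1");
    curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
    /* perform the transfer, then release the list with
       curl_slist_free_all(headers); */
  }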
allocation of the memory BIO was not being properly checked.
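
(A sketch of the kind of check involved, assuming the usual OpenSSL memory
BIO calls; the wrapper function is illustrative:)

  #include <curl/curl.h>
  #include <openssl/bio.h>

  static CURLcode make_mem_bio(BIO **out)
  {
    *out = BIO_new(BIO_s_mem());
    if(!*out)
      return CURLE_OUT_OF_MEMORY;  /* don't continue with a NULL BIO */
    return CURLE_OK;
  }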
in the gnutls code where we were checking for negative values for errors,
while the man pages state that GNUTLS_E_SUCCESS is returned on success and
other values indicate error conditions.
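
(A sketch of the corrected pattern; gnutls_global_init() is picked only as
an example call:)

  #include <gnutls/gnutls.h>

  static int init_tls(void)
  {
    int rc = gnutls_global_init();
    /* compare against GNUTLS_E_SUCCESS rather than only
       checking for negative values */
    return (rc == GNUTLS_E_SUCCESS) ? 0 : -1;
  }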
whenever you attempt to open a new connection.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior (see the sketch after the list):
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
redirect to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols, so you need to explicitly allow them
in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow
all protocols libcurl may have been built to support.
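
(A minimal sketch of using the two options together, with the CURLPROTO_*
bitmask values that ship with them:)

  #include <curl/curl.h>

  static void restrict_protocols(CURL *curl)
  {
    /* fetch the primary URL over HTTP(S) only... */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     CURLPROTO_HTTP | CURLPROTO_HTTPS);
    /* ...and follow redirects to HTTPS only; FILE and SCP stay blocked */
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTPS);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  }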
CURLOPT_LOCALPORT were used together (the local port bind failed), and
Markus Koetter provided the fix!
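
(A hedged sketch of the combination involved; the option truncated from
this entry is assumed to be CURLOPT_INTERFACE, and the interface name and
port number are purely illustrative:)

  #include <curl/curl.h>

  static void bind_local_end(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_INTERFACE, "eth0");  /* assumed option */
    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 33000L);  /* local bind port */
  }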
curl_global_init() function to properly keep the performing functions
thread-safe. We previously (28 April 2007) moved the init to a later time
just to avoid failing very early when libgcrypt dislikes the situation,
but that move was bad and the fix should rather be in libgcrypt or
elsewhere.
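
(The pattern this implies for applications, sketched minimally:)

  #include <curl/curl.h>

  int main(void)
  {
    /* once, from the main thread, before any other thread runs transfers */
    curl_global_init(CURL_GLOBAL_ALL);
    /* ... create handles and perform transfers ... */
    curl_global_cleanup();
    return 0;
  }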
It happened because the code used the struct for server-based auth all
the time, for both proxy and server auth, which of course was wrong.
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between actually zero and unknown.
to build a Mac OS X fat ppc/i386 or ppc64/x86_64 libcurl.framework
to the proper 'libcurl' as clearly this caused confusion.
FTP with the multi interface: when a transfer fails, like when aborted by
a write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this area,
as the FTP code now attempts to keep (better) track of the pending
responses that need to be read in ftp_done().
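
(For context, aborting from a write callback looks like this minimal
sketch; the callback name is illustrative:)

  #include <curl/curl.h>

  /* returning anything other than size*nmemb makes libcurl abort the
     transfer with CURLE_WRITE_ERROR */
  static size_t abort_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)ptr; (void)size; (void)nmemb; (void)userp;
    return 0;
  }
  /* installed with:
     curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, abort_cb); */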
libcurl did a superfluous 1000 ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then,
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (for up to 1000 ms) before
doing a libssh2 read. But in this case libssh2 had already read and
buffered the data, so we always ended up just waiting 1000 ms before
getting to work on the data!
CURLE_REMOTE_FILE_NOT_FOUND instead of CURLE_FTP_COULDNT_RETR_FILE.
leak like the one fixed on the 14th. When zlib returns failure, we need to
clean up properly before returning an error.
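
(A sketch of the pattern with the usual zlib calls; the wrapper and the
returned error code are illustrative:)

  #include <zlib.h>
  #include <curl/curl.h>

  static CURLcode inflate_chunk(z_stream *z)
  {
    int rc = inflate(z, Z_SYNC_FLUSH);
    if(rc != Z_OK && rc != Z_STREAM_END) {
      inflateEnd(z);  /* release the zlib state before bailing out */
      return CURLE_BAD_CONTENT_ENCODING;
    }
    return CURLE_OK;
  }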
plain FTP connections, and it will then allow MKD to fail once and retry
the CWD afterwards. This is especially useful if you're doing many
simultaneous connections against the same server and they all have this
option enabled, as then CWD may first fail but another connection then
does MKD before this one, so MKD fails here yet retrying CWD works! The
numbers can (should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have shown that if you're making an application that uploads a set
of files to an FTP server, you will get a noticeable gain in speed if
you're using multiple connections, and this option will then be very
useful.
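
(A minimal sketch of enabling the retry behavior with the new enum:)

  #include <curl/curl.h>

  static void setup_upload(CURL *curl)
  {
    /* create missing dirs; retry CWD once if our MKD loses the race */
    curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                     (long)CURLFTP_CREATE_DIR_RETRY);
  }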
when an 'int' is assigned to a 'time_t' variable. Hence redefine 'retry_time'
and 'retry_max' to 'time_t'.
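
(A compact sketch of the fix's effect; the variable names are taken from
the entry:)

  #include <time.h>

  /* declaring both as time_t avoids the int/time_t conversion warnings */
  time_t retry_time = 0;
  time_t retry_max = 0;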
copyright-update script thinks
code, which could happen on libz errors.
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION, and it was previously not
possible to reliably figure this out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
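
(A hedged sketch, assuming this entry refers to the CURLINFO_CONDITION_UNMET
info added around this time; 'timestamp' and the wrapper are illustrative:)

  #include <time.h>
  #include <curl/curl.h>

  static int was_unmet(CURL *curl, time_t timestamp)
  {
    long unmet = 0;
    curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                     (long)CURL_TIMECOND_IFMODSINCE);
    curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)timestamp);
    curl_easy_perform(curl);
    curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
    return (int)unmet;  /* 1: document not modified since 'timestamp' */
  }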
functions are.