Age | Commit message | Author |
|
by Daniel Johnson.
|
|
on curl-users, it is also added to DISABLED since I don't have time to work
on it further right now.
|
|
whenever you attempt to open a new connection.
|
|
|
|
binary it also removes the include/curl subdir!
|
|
definitions added to RPG binding
|
|
|
|
(http://curl.haxx.se/docs/adv_20090303.html, also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked into accessing an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
redirect to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols, so you need to explicitly allow them in
your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
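For illustration, a minimal usage sketch of the two new options, assuming a
plain easy-handle transfer (example.com is just a placeholder host):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* protocols acceptable for the URL given with CURLOPT_URL */
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);
      /* protocols acceptable when following a Location: redirect; FILE and
         SCP are excluded by default already, this narrows it down further */
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTPS);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }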
|
|
|
|
|
|
too close to release now
|
|
|
|
|
|
|
|
218 - Senthil Raja Velu's "CURLOPT_LOCALPORT option broken", patch by
Markus Koetter
Both are now committed
|
|
CURLOPT_LOCALPORT were used together (the local port bind failed), and
Markus Koetter provided the fix!
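For reference, a minimal sketch of how CURLOPT_LOCALPORT is meant to be used
(the URL and port numbers below are just placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* bind the local end of the connection to port 8000, and allow up
         to 10 additional ports to be tried if that one is busy */
      curl_easy_setopt(curl, CURLOPT_LOCALPORT, 8000L);
      curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 10L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }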
|
|
|
|
|
|
curl_global_init() function to keep the performing functions thread-safe. We
previously (28 April 2007) moved the init to a later time just to avoid
having it fail very early when libgcrypt dislikes the situation, but that
move was bad and the fix should rather be in libgcrypt or elsewhere.
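A minimal sketch of the call pattern this implies for threaded applications
(the actual thread creation is left out):

  #include <curl/curl.h>

  int main(void)
  {
    /* curl_global_init() is not thread-safe: call it once from the main
       thread before any other thread uses libcurl */
    curl_global_init(CURL_GLOBAL_ALL);

    /* ... start threads that call curl_easy_perform() etc here ... */

    /* likewise, call the cleanup from a single thread once all libcurl
       use is done */
    curl_global_cleanup();
    return 0;
  }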
|
|
|
|
without involving CVS:
diff -X diff-exclude -ru curl-old curl-patched
|
|
It happened because the code used the struct for server-based auth all the
time for both proxy and server auth, which of course was wrong.
|
|
|
|
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between an actual zero and unknown.
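A minimal sketch of checking the new return value, assuming 'curl' is an
easy handle that just completed a transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  static void print_download_size(CURL *curl)
  {
    double size;
    if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                         &size) == CURLE_OK) {
      if(size == -1.0)
        printf("download size unknown\n");  /* previously reported as 0 */
      else
        printf("download size: %.0f bytes\n", size);
    }
  }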
|
|
220 - Take advantage of libssh2_version(), added for the upcoming 1.1
release, to extract the run-time version number properly.
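A minimal sketch of such a run-time query, assuming a libssh2 new enough
(1.1 or later) to offer the function:

  #include <stdio.h>
  #include <libssh2.h>

  int main(void)
  {
    /* passing 0 means "no minimum version required", so a version string
       is always returned when the function exists */
    const char *version = libssh2_version(0);
    printf("running against libssh2 %s\n", version ? version : "(unknown)");
    return 0;
  }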
|
|
|
|
to build a Mac OS X fat ppc/i386 or ppc64/x86_64 libcurl.framework
|
|
|
|
to the proper 'libcurl' as clearly this caused confusion.
|
|
files
|
|
|
|
|
|
|
|
|
|
|
|
|
|
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this area,
as the FTP code now attempts to keep (better) track of the pending responses
that need to be read in ftp_done().
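For context, a minimal sketch of the kind of write callback abort involved
here; returning anything other than the given byte count makes libcurl fail
the transfer with CURLE_WRITE_ERROR (the 1 KB cap is just an example):

  #include <curl/curl.h>

  static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t *total = (size_t *)userp;
    (void)ptr;
    *total += size * nmemb;
    if(*total > 1024)
      return 0;             /* abort: with this fix the FTP control
                               connection stays around for re-use */
    return size * nmemb;
  }

The callback would be installed with CURLOPT_WRITEFUNCTION and
CURLOPT_WRITEDATA on the easy handle that is added to the multi handle.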
|
|
|
|
connection is kept alive afterwards
|
|
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then,
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (with a 1000ms timeout) before
doing a libssh2-read. But in this case libssh2 had already read and buffered
the data, so we always ended up just waiting 1000ms before getting to work
on the data!
|
|
|
|
CURLE_REMOTE_FILE_NOT_FOUND instead of CURLE_FTP_COULDNT_RETR_FILE.
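A minimal sketch of checking for the new error code, assuming 'curl' is a
fully prepared easy handle:

  #include <stdio.h>
  #include <curl/curl.h>

  static void run_transfer(CURL *curl)
  {
    CURLcode res = curl_easy_perform(curl);
    if(res == CURLE_REMOTE_FILE_NOT_FOUND)
      fprintf(stderr, "remote file not found\n");
    else if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
  }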
|
|
|
|
|
|
leak like that fixed on the 14th. When zlib returns failure, we need to
clean up properly before returning an error.
|
|
|
|
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums called CURLFTP_CREATE_DIR
and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
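A minimal sketch of enabling the retrying variant for an upload, assuming
'curl' is an easy handle with the read callback and upload data already set
up (the URL is a placeholder):

  #include <curl/curl.h>

  static void setup_upload(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/new/dir/file.dat");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    /* create missing dirs, but tolerate one failed MKD and retry the CWD,
       to cope with parallel connections racing to create the same dir */
    curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                     (long)CURLFTP_CREATE_DIR_RETRY);
  }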
|
|
to current state.
|
|
when an 'int' is assigned to a 'time_t' variable. Hence redefine 'retry_time'
and 'retry_max' to 'time_t'.
|
|
copyright-update script thinks
|