If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A", the ";type=A" part is reportedly stripped
off. I added test case 562 to verify, only to find out that I couldn't repeat
this bug, so I hereby consider it not a bug anymore!
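For illustration, a minimal sketch of the option combination in question,
roughly what test case 562 exercises; the URL and port number are made up
and not taken from the actual test:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* an FTP URL carrying a ";type=A" (ASCII transfer) suffix */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file;type=A");

      /* override the port; the ";type=A" part must survive this */
      curl_easy_setopt(curl, CURLOPT_PORT, 2121L);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }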
|
|
I've now made TFTP "connections" not be kept for re-use within libcurl.
TFTP is UDP-based, so the benefit was really low (if it existed at all) to
begin with, so rather than tracking down and fixing this problem we simply
removed the re-use. I also enabled test case 1099, which I wrote a few days
ago, to verify that this change fixes the reported problem.
|
|
(http://curl.haxx.se/bug/view.cgi?id=2783090) pointing out that on Windows
we need to grow the SO_SNDBUF buffer somewhat to get really good upload
speeds. http://support.microsoft.com/kb/823764 has the details. Friends
confirmed that simply adding 32 to CURL_MAX_WRITE_SIZE is enough.
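As a side illustration, a hedged sketch of the general SO_SNDBUF tuning the
KB article discusses, written as plain POSIX-style socket code; it is not
libcurl's actual internal change and the buffer size is made up:

  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 64 * 1024;   /* illustrative size */

    if(sockfd < 0)
      return 1;

    /* grow the socket send buffer; on Windows the value pointer is cast
       to (const char *) and the header is winsock2.h instead */
    if(setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                  &sndbuf, sizeof(sndbuf)) != 0)
      perror("setsockopt");

    close(sockfd);
    return 0;
  }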
|
|
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This led to the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to read that amount of data from the stream instead of
seeking, and that way curl can now upload with resume when data is read
from a stream!
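A minimal sketch of a seek callback that reports it cannot seek (as with a
pipe) and lets libcurl try to recover by reading instead; the callback name,
URL and resume offset are illustrative:

  #include <curl/curl.h>

  /* illustrative seek callback for a non-seekable input such as a pipe */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp;
    (void)offset;
    (void)origin;
    /* we cannot seek, but libcurl may still try to resolve the situation,
       e.g. by reading data from the stream instead */
    return CURL_SEEKFUNC_CANTSEEK;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/upload.txt");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)1000);
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, NULL);
      /* read callback feeding data from the pipe omitted for brevity */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }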
|
|
Wegener pointed out that CURLINFO_APPCONNECT_TIME didn't work with the multi
interface and provided a patch that fixed the problem!
|
|
|
|
Koenig pointed out that the man page didn't mention that the *_proxy
environment variables can be specified in lower case or UPPER CASE, and that
the lower case version takes precedence.
|
|
|
|
|
|
setup_once.h. Inclusion of each header file is based on the definition of
NEED_MALLOC_H and NEED_MEMORY_H respectively.
|
|
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return a different error code that
would then take precedence, and that new code confused existing logic that
only works for the first error code (CURLE_SEND_ERROR).
|
|
OBJECTPOINT options. Now we've introduced a new function - my_setopt_str -
within the app for setting plain string options to avoid the risk of this
mistake happening.
|
|
proxy. libcurl would then wrongly close the connection after each
request. In his case it had the weird side-effect that it killed NTLM auth
for the proxy, causing an infinite loop!
I added test case 1098 to verify this fix. The test case does, however, not
properly verify that the transfers are done persistently - as I couldn't
think of a clever way to achieve that right now - so you need to read the
stderr output after a test run to see that it truly did the right thing.
|
|
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1, the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
won't hit any existing users badly.
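A minimal sketch of the new behavior described above; the URL is
illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

      /* with CURLOPT_NOBODY set to 1 the request will be a HEAD */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
      curl_easy_perform(curl);

      /* switching CURLOPT_NOBODY back to 0 on a handle that was set to
         HEAD makes the next request a GET instead */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }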
|
|
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The issue
is found in Red Hat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other causes.
|
|
Improved Symbian support for SSL.
|
|
the steps required to build a Mac OS X four-way fat ppc/i386/ppc64/x86_64
libcurl.framework. A four-way fat framework requires the OS X 10.5 SDK or
later.
|
|
|
|
and 1 on fatal errors. Previously it only mentioned non-zero on fatal
errors. This is a slight change in meaning, but it follows what we've done
elsewhere before and it opens the door for lots of more useful return codes
whenever we can think of them...
|
|
non-configured libcurl. In this case the curl_off_t data type was tied
to the off_t data type, which depends on _FILE_OFFSET_BITS. This is
exactly the unwanted configuration for our curl_off_t data type, which
must not depend on such a setting. This breaks the ABI for libcurl
libraries built with Sun compilers without having run the configure
script, with _FILE_OFFSET_BITS different from 64 and using the ILP32
data model.
|
|
strdup() call failed.
|
|
|
|
NSS is used. These ciphers were added in NSS 3.4 and must be enabled
explicitly.
|
|
library is found to support it.
|
|
|
|
other libcurl function.
|
|
|
|
shipped in release archives but is only in CVS)
|
|
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies is not NULL (where data is the source handle). In the case of
a newly initialized handle which just had cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt call only appends the value to
data->change.cookielist, hence duplicating this handle would not have the
cookie engine switched on.
We also concluded that the slist functionality would be suitable for being
put in its own module rather than simply hanging out in lib/sendf.c, so I
created lib/slist.[ch] for it.
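A minimal sketch of the duphandle scenario described above; the handle
names are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *src = curl_easy_init();
    CURL *dup = NULL;
    if(src) {
      /* enable the cookie engine without reading any actual file */
      curl_easy_setopt(src, CURLOPT_COOKIEFILE, "");

      /* previously the duplicate could end up without the cookie engine
         enabled; with the fix it is carried over */
      dup = curl_easy_duphandle(src);

      if(dup)
        curl_easy_cleanup(dup);
      curl_easy_cleanup(src);
    }
    return 0;
  }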
|
|
scripts to make it detect a bad checkout earlier. People with older
checkouts who don't do cvs update with the -d option won't get the new
directories and will then get funny output that can be a bit hard to
understand and fix.
|
|
allocation of the memory BIO was not being properly checked.
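For illustration only, a hedged sketch of checking a memory BIO allocation
with OpenSSL; this is not the actual libcurl code:

  #include <openssl/bio.h>
  #include <stdio.h>

  int main(void)
  {
    /* BIO_new() can fail and return NULL, so check it before use */
    BIO *mem = BIO_new(BIO_s_mem());
    if(!mem) {
      fprintf(stderr, "BIO_new() failed\n");
      return 1;
    }
    BIO_puts(mem, "hello");
    BIO_free(mem);
    return 0;
  }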
|
|
in the GnuTLS code where we were checking for negative values to detect
errors, while the man pages state that GNUTLS_E_SUCCESS is returned on
success and other values indicate error conditions.
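A minimal sketch of the return-code convention in question; the error
handling shown is illustrative and not the actual libcurl code:

  #include <gnutls/gnutls.h>
  #include <stdio.h>

  int main(void)
  {
    gnutls_session_t session;
    int rc;

    gnutls_global_init();

    /* compare against GNUTLS_E_SUCCESS rather than just checking for a
       negative value */
    rc = gnutls_init(&session, GNUTLS_CLIENT);
    if(rc != GNUTLS_E_SUCCESS) {
      fprintf(stderr, "gnutls_init: %s\n", gnutls_strerror(rc));
      gnutls_global_cleanup();
      return 1;
    }

    gnutls_deinit(session);
    gnutls_global_cleanup();
    return 0;
  }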
|
|
curl didn't use sprintf() in a way that is documented to work in POSIX, but
since we use our own printf() code (from libcurl) that shouldn't be a
problem. Nonetheless I modified the code to not rely on such particular
features and to not raise further eyebrows for no good reason.
|
|
more issues for authors to consider when writing robust libcurl-using
applications.
|
|
|
|
by Daniel Johnson.
|
|
whenever you attempt to open a new connection.
|
|
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow redirects to when CURLOPT_FOLLOWLOCATION is enabled. By default, this
option excludes the FILE and SCP protocols, so you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL is passed to libcurl and yet not allow
all protocols libcurl may have been built to support.
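A minimal sketch of the two new options in use; the URL and the allowed
protocol masks are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

      /* only allow HTTP and HTTPS for the primary URL ... */
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);

      /* ... and only allow redirects to HTTP and HTTPS, so a Location:
         header can never send us to FILE or SCP */
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }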
|
|
CURLOPT_LOCALPORT were used together (the local port bind failed), and
Markus Koetter provided the fix!
|
|
curl_global_init() function to properly keep the transfer-performing
functions thread-safe. We had previously (28 April 2007) moved the init to a
later time just to avoid failing very early when libgcrypt dislikes the
situation, but that move was bad and the fix should rather be made in
libgcrypt or elsewhere.
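A minimal sketch of the pattern this is about: do the global init once, up
front, before any threads start transfers. The thread function and URL are
illustrative:

  #include <curl/curl.h>
  #include <pthread.h>

  static void *worker(void *arg)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, (char *)arg);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return NULL;
  }

  int main(void)
  {
    pthread_t tid;

    /* not thread-safe itself, so call it before spawning any threads */
    curl_global_init(CURL_GLOBAL_ALL);

    pthread_create(&tid, NULL, worker, "http://example.com/");
    pthread_join(tid, NULL);

    curl_global_cleanup();
    return 0;
  }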
|
|
It happened because the code used the struct for server-based auth all the
time for both proxy and server auth which of course was wrong.
|
|
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to tell the difference between actually zero and unknown.
|
|
to build a Mac OS X fat ppc/i386 or ppc64/x86_64 libcurl.framework
|
|
to the proper 'libcurl' as clearly this caused confusion.
|
|
|
|
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
properly re-used.
This change is also an attempt to clean up the code somewhat in this area,
as the FTP code now attempts to keep better track of the pending responses
that need to be read in ftp_done().
|
|
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP, and then
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (for up to 1000ms) before doing
a libssh2 read. But in this case libssh2 had already read and buffered the
data, so we ended up always just waiting 1000ms before getting to work on
the data!
|
|
CURLE_REMOTE_FILE_NOT_FOUND instead of CURLE_FTP_COULDNT_RETR_FILE.
|