In order to get back on track, I've removed all the plans for
stuff I had in the queue. I will instead focus on fixing bugs and
rely on the fact that people who truly want things added will
come back to the mailing list, nag and provide patches.
7.20.1 should be possible to release in April 2010.
c-ares is now hosted entirely separately from the curl project.
See http://c-ares.haxx.se/ for all details concerning c-ares,
its source repository and more.
Kenny To filed bug report #2963679 with a patch to fix a
problem he experienced when doing a multi interface HTTP POST over
a proxy using PROXYTUNNEL. He found a case where it would connect
fine but bits.tcpconnect was not set correctly, so libcurl didn't
work properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
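As an illustration, a minimal sketch of the failing scenario (the
proxy address and URL are placeholders, and a real application
would wait with curl_multi_fdset()/select() rather than spin):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy;
      CURLM *multi;
      int running;

      curl_global_init(CURL_GLOBAL_ALL);
      easy = curl_easy_init();
      multi = curl_multi_init();

      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(easy, CURLOPT_PROXY, "proxy.example.com:8080");
      curl_easy_setopt(easy, CURLOPT_HTTPPROXYTUNNEL, 1L); /* CONNECT tunnel */
      curl_easy_setopt(easy, CURLOPT_POSTFIELDS, "name=value");

      curl_multi_add_handle(multi, easy);
      do {
        /* before the fix, bits.tcpconnect could remain unset here even
           after the connect had completed, so the POST never proceeded */
        curl_multi_perform(multi, &running);
      } while(running);

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }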
I have now run it successfully, and it helped pinpoint a libssh2
memory leak!
Akos Pasztory filed debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that is returned
chunked-encoded _and_ with a Content-Length header, where libcurl
failed to properly ignore the latter.
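For illustration only, a response of the problematic shape carries
both headers at once (hypothetical values):

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Content-Length: 100

RFC 2616 section 4.4 states that when both are present, the
Content-Length header must be ignored, which is what libcurl does
after this fix.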
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface, did
one FTP transfer first and, after its completion, used a second
easy handle to do another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive, it must keep
an easy handle around internally, purely so that it has an easy
handle available when it later disconnects the connection. The
code assumed that once the easy handle was removed and an internal
reference made, that version could be killed later on when a new
easy handle came along using the same connection. This was wrong,
as Hauke's example showed: the removed handle wasn't killed for
real until later. This caused a double close attempt => segfault.
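A minimal sketch of the triggering sequence (the URLs are
placeholders, and real code would wait on the multi handle's file
descriptors rather than spin):

    #include <curl/curl.h>

    static void transfer(CURLM *multi, const char *url)
    {
      CURL *easy = curl_easy_init();
      int running;

      curl_easy_setopt(easy, CURLOPT_URL, url);
      curl_multi_add_handle(multi, easy);
      do
        curl_multi_perform(multi, &running);
      while(running);
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy); /* the FTP control connection stays cached */
    }

    int main(void)
    {
      CURLM *multi;

      curl_global_init(CURL_GLOBAL_ALL);
      multi = curl_multi_init();
      transfer(multi, "ftp://ftp.example.com/file1");
      /* the second transfer reuses the cached connection; the stale
         internal easy-handle reference is what led to the double close */
      transfer(multi, "ftp://ftp.example.com/file2");
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }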
Looking at the code of Curl_resolv_timeout() in hostip.c, I think
that in case of a timeout, the signal handler for SIGALRM never
gets removed. I think that in my case it gets executed at some
point later on, when execution has long left Curl_resolv_timeout()
or even the cURL library.
The code that is jumped to with siglongjmp() simply sets the
error message to "name lookup timed out" and then returns with
CURLRESOLV_ERROR. I guess that instead of simply returning
without cleaning up, the code should have a goto that jumps to
the spot right after the call to Curl_resolv().
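The general shape of the problem and the suggested fix, as a
standalone sketch (the names here are illustrative, not curl's
actual code):

    #include <setjmp.h>
    #include <signal.h>
    #include <unistd.h>

    static sigjmp_buf timeout_jump;

    static void alarm_handler(int sig)
    {
      (void)sig;
      siglongjmp(timeout_jump, 1);
    }

    int resolve_with_timeout(unsigned int seconds)
    {
      struct sigaction sa, old;
      int rc;

      sa.sa_handler = alarm_handler;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = 0;
      sigaction(SIGALRM, &sa, &old);
      alarm(seconds);

      if(sigsetjmp(timeout_jump, 1)) {
        rc = -1; /* "name lookup timed out" */
        goto cleanup; /* the point: don't return before cleaning up */
      }

      rc = 0; /* the actual name lookup would run here */

    cleanup:
      alarm(0);                       /* cancel any pending alarm */
      sigaction(SIGALRM, &old, NULL); /* restore the previous handler */
      return rc;
    }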
which could have caused a double free when reusing a curl handle.
Error codes were not properly returned to the main curl code (and
on to apps using libcurl).
tftp was also crapping out when tsize == 0 on upload, but I see no
reason to fail to upload just because the remote file is
zero-length. Ignore the tsize option on upload.
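As a sketch of the idea (illustrative names, not the actual curl
source), the RFC 2349 tsize option simply stops being validated
when we are the sender:

    /* tsize == 0 only aborts downloads; uploading an empty file is legal */
    if(!strcmp(option, "tsize") && !uploading && tsize == 0)
      return fail("invalid tsize");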
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially
fixed: an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the
easy handle), and the previous patch only cleaned up the first
one. The new fix removes the easy handle from all connections,
not just the first one.
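The shape of the new fix, as a sketch over a hypothetical
connection cache (not curl's actual types):

    struct connection {
      struct connection *next;
      void *owner; /* easy handle that last used this connection */
    };

    /* detach the departing easy handle from every cached connection,
       instead of stopping at the first match as the old patch did */
    void detach_easy(struct connection *head, void *easy)
    {
      struct connection *conn;
      for(conn = head; conn; conn = conn->next)
        if(conn->owner == easy)
          conn->owner = NULL;
    }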
the easy interface was used.
expressions
longer using 'in6_addr' but only our 'ares_in6_addr' struct
with threaded resolver enabled (Windows default configuration) that would
get triggered when a curl handle is closed while doing DNS resolution.