|
an unlock in between) for a certain case, and that in fact works when using
regular Windows mutexes but not with pthreads'! Locks should of course not
get locked again, so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
|
|
the HTTP method to GET (or HEAD) when given a value of 0.
|
|
interface, and the proxy would send Connection: close during the
authentication phase. http://curl.haxx.se/bug/view.cgi?id=2069047
|
|
which caused an error when the second header was dumped due to stdout
being closed. Added test case 1066 to verify. Also fixed a potential
problem where a closed file descriptor might be used for an upload
when more than one URL is given.
|
|
memory leak because it never called the OpenSSL function
CRYPTO_cleanup_all_ex_data() as it was supposed to. This was because of a
missing define in config-win32.h!
|
|
belong among the release numbers anyway
|
|
_directory_ if that happened to appear in the path!
|
|
constants CURL_OFF_T_C and CURL_OFF_TU_C. The clever double helper macro
used internally to provide their functionality is thanks to Lars Nilsson.
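As an illustration of the double-macro technique (the helper and suffix names
below are made up for this sketch, not necessarily the ones used in curl's
headers): an inner macro does the actual token pasting, while an outer macro
forces the suffix macro to be expanded to its value first.

  /* example suffixes matching a 64-bit curl_off_t; illustrative values */
  #define EXAMPLE_SUFFIX_OFF_T  LL
  #define EXAMPLE_SUFFIX_OFF_TU ULL

  /* inner helper: pastes the literal and the suffix together */
  #define EXAMPLE_OFF_T_C_HELPER2(Val,Suffix) Val ## Suffix
  /* outer helper: makes sure Suffix is macro-expanded before pasting */
  #define EXAMPLE_OFF_T_C_HELPER1(Val,Suffix) EXAMPLE_OFF_T_C_HELPER2(Val,Suffix)

  #define CURL_OFF_T_C(Val)  EXAMPLE_OFF_T_C_HELPER1(Val,EXAMPLE_SUFFIX_OFF_T)
  #define CURL_OFF_TU_C(Val) EXAMPLE_OFF_T_C_HELPER1(Val,EXAMPLE_SUFFIX_OFF_TU)

  /* CURL_OFF_T_C(9223372036854775807) now expands to 9223372036854775807LL */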
|
|
(http://curl.haxx.se/bug/view.cgi?id=2042430) with a patch. "NTLM Windows
SSPI code is not thread safe". This was due to libcurl using static
variables to tell whether to load the necessary SSPI DLL, but now the loading
has been moved to the more suitable curl_global_init() call.
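A minimal sketch of the pattern this fix assumes on the application side:
curl_global_init() is called exactly once, from a single thread, before any
other thread touches libcurl, so the one-time SSPI DLL loading happens in a
thread safe spot.

  #include <curl/curl.h>

  int main(void)
  {
    /* not thread safe by design: call once before starting any threads
       that use libcurl; the SSPI DLL loading now happens in here */
    curl_global_init(CURL_GLOBAL_ALL);

    /* ... start worker threads, each using its own easy handle ... */

    /* matching one-time cleanup after all threads have finished */
    curl_global_cleanup();
    return 0;
  }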
|
|
(http://curl.haxx.se/bug/view.cgi?id=2042440) with a patch. He identified a
problem when using NTLM over a proxy while the end-point uses Basic, and
libcurl would then misbehave when the host sent "Connection: close" as the
proxy's NTLM state was erroneously cleared.
|
|
to have a curl_off_t data type no longer gated to off_t.
|
|
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that a
failed SSL connection attempt properly closes the connection.
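A rough sketch, assuming a multi handle named 'multi', of where an application
sees this error with the multi interface; after the fix the connection behind
such a transfer is closed rather than kept around for reuse.

  CURLMsg *msg;
  int msgs_left;

  while((msg = curl_multi_info_read(multi, &msgs_left))) {
    if(msg->msg == CURLMSG_DONE &&
       msg->data.result == CURLE_PEER_FAILED_VERIFICATION) {
      /* certificate verification failed for this transfer; the
         underlying connection is closed and will not be reused */
    }
  }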
|
|
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it needed to be in order to get
sent (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
|
|
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads, since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
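Roughly what a resumed upload looks like through the API (the read callback,
its user data and the sizes are placeholders for this sketch): the application
hands libcurl both the resume offset and the size of the entire upload, and
libcurl can then build the Content-Range: header from those two values.

  curl_off_t total_size = 1000000;   /* size of the whole local file */
  curl_off_t resume_at  = 400000;    /* offset to resume from */

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);   /* app-provided */
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_state); /* app-provided */

  /* the size of the ENTIRE upload, not just the remaining part */
  curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, total_size);
  /* resume from this offset */
  curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, resume_at);

  curl_easy_perform(curl);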
|
|
when we have been doing this since revision 1.47 of configure.ac 4 years and
5 months ago when cross-compiling a Windows target. We actually don't use any
function from the Windows GDI (Graphics Device Interface) related to drawing
or other graphics operations.
|
|
support this so it goes untested.
|
|
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
|
|
being mangled when passed to proxies when CURLOPT_PORT is also set
(reported by Pramod Sharma).
|
|
parser to allow numerical IPv6 addresses to be specified with the scope
given, as per RFC 4007 - with a percent sign that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
|
|
true bug in libcurl built with OpenSSL. It made curl_easy_getinfo() more or
less always return 0 for CURLINFO_SSL_VERIFYRESULT because the function that
would set it to something non-zero would return before the assignment in
almost all error cases. The internal variable is now set to non-zero from the
start of the function and only gets cleared later on if things work out fine.
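A sketch of how an application reads the value after a transfer with an
OpenSSL-built libcurl; zero means the peer certificate verified fine,
anything else is the verification error code.

  long verify_result = 0;

  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_SSL_VERIFYRESULT, &verify_result);
  /* verify_result == 0: the peer certificate verified OK;
     anything else: the (OpenSSL) verification error code */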
|
|
and OS/2.
|
|
overrun" (http://curl.haxx.se/bug/view.cgi?id=2026240) identifying two
problems, and providing the fix for them:
- CURL_READFUNC_PAUSE did in fact not pause the _sending_ of data, which is
what it is designed for, but instead paused the _receiving_ of data! (A
usage sketch follows below.)
- libcurl didn't internally set the read counter to zero when this return
code was detected, which would potentially lead to junk getting sent to
the server.
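A minimal sketch of the intended use, assuming a hypothetical application
state struct: the read callback returns CURL_READFUNC_PAUSE when it has
nothing to send, pausing the sending side of the transfer.

  #include <string.h>
  #include <curl/curl.h>

  struct upload_state {     /* hypothetical application state */
    const char *data;
    size_t len;             /* bytes currently available to send */
  };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    struct upload_state *st = userdata;
    size_t room = size * nitems;

    if(!st->len)
      return CURL_READFUNC_PAUSE;  /* nothing to send right now: pause */

    if(room > st->len)
      room = st->len;
    memcpy(buf, st->data, room);
    st->data += room;
    st->len -= room;
    return room;                   /* number of bytes handed to libcurl */
  }

When more data becomes available, the application calls
curl_easy_pause(handle, CURLPAUSE_CONT) to resume the sending.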
|
|
doing "proxy-tunnels" with non-HTTP prototols and that was simply because
the code assumed the user-agent was only needed for HTTP.
|
|
is set in fdset.events" (http://curl.haxx.se/bug/view.cgi?id=2015126) which
exactly pinpointed the problem (only triggered on Windows Vista), provided
references to the docs and also a fix. There is much work behind Peter
Lamberg's excellent bug report. Thank you!
|
|
edited it slightly. Now you should be able to use IPv6 addresses fine even
with libcurl built to use c-ares.
|
|
fix for it. It occurred when you did an FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD, due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
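The handle-reuse pattern that triggered it, roughly (URLs are placeholders):

  /* first transfer: one single CWD into the directory */
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/a/b/file1");
  curl_easy_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long)CURLFTPMETHOD_SINGLECWD);
  curl_easy_perform(curl);

  /* second transfer on the same handle: no CWD at all; the stale
     "dir depth" left over from the first transfer caused the bug */
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/a/b/file2");
  curl_easy_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long)CURLFTPMETHOD_NOCWD);
  curl_easy_perform(curl);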
|
|
response codes. Previously libcurl would hang on such occurrences. I added
test case 1033 to verify.
|
|
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
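A sketch of reading it from a program; the value is a double holding the
number of seconds from the start of the transfer until the SSL/SSH handshake
completed.

  double appconnect = 0.0;

  curl_easy_perform(curl);
  if(curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect)
     == CURLE_OK) {
    /* appconnect holds the application layer connect time in seconds */
  }

On the command line the same value is printed with -w "%{time_appconnect}".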
|
|
operating system.
|
|
which output the range using a signed variable where it should rather use
an unsigned one.
|
|
some systems" (http://curl.haxx.se/bug/view.cgi?id=1999181). The problem was
that the configure script did not use the _POSIX_MONOTONIC_CLOCK feature test
macro when checking monotonic clock availability. This is now fixed and the
monotonic clock will not be used unless the feature test macro is defined
with a value greater than zero, indicating that it is always supported.
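The guard the check boils down to looks roughly like this (a sketch with an
illustrative function name, not the actual configure test or curl source):

  #include <unistd.h>
  #include <sys/time.h>
  #include <time.h>

  static void monotonic_now(struct timeval *tv)
  {
  #if defined(_POSIX_MONOTONIC_CLOCK) && (_POSIX_MONOTONIC_CLOCK > 0)
    /* the feature test macro guarantees the monotonic clock is there */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    tv->tv_sec = ts.tv_sec;
    tv->tv_usec = ts.tv_nsec / 1000;
  #else
    /* no such guarantee: fall back to the wall clock */
    gettimeofday(tv, NULL);
  #endif
  }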
|
|
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range: when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
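The kind of request the fix concerns, sketched with the library options; a
header-only range request must carry a Range: header, not Content-Range:.

  /* ask only for the response headers of the first 100 bytes */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
  curl_easy_setopt(curl, CURLOPT_RANGE, "0-99");
  curl_easy_perform(curl);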
|
|
handshake with an SSLv2 server, and it turned out to be because it didn't
recognize the cipher named "rc4-md5". In our list that cipher was named
plainly "rc4". I've now added rc4-md5 to work as an alias as Phil reported
that it made things work for him again.
|
|
crashed libcurl. This is now addressed by making sure we use "plain send"
internally when doing the SOCKS handshake instead of the Curl_write()
function, which is designed to use the "target" protocol (SCP or SFTP in
this case). I also took the opportunity to clean up some ssh-related
#ifdefs in the code for readability.
|
|
libcurl to not tell the app properly when a socket was closed (once the name
resolution done by c-ares has completed) and then immediately re-created and
put to use again (for the actual connection). Since the closure will make the
"watch status" get lost in several event-based systems, libcurl needs to
tell the app about this close/re-create case.
|
|
multi interface with pipelining enabled as it would wrongly check for,
detect and close "dead connections" even though that connection was already
in use!
|
|
always fire up a new connection rather than using the existing one when the
multi interface is used. Original bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=450140
|
|
All boolean options (such as -O, -I, -v etc.), both short and long versions,
now always switch on/enable the option named. Using the same option multiple
times thus makes no difference. To switch off one of those options, you need
to use the long version of the option and type --no-OPTION. For example, to
disable verbose mode you use --no-verbose.
- Added --remote-name-all to curl, which if used changes the default for all
given URLs so that they are dealt with as if -O had been given. So if you
want to disable that for a specific URL after --remote-name-all has been
used, you must use -o - or --no-remote-name.
|
|
OpenSSL, NSS and GnuTLS-built libcurls.
|