gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 response if SIZE is used as part of an upload
process (such as when resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases to match this modified behavior.
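
For the application, the visible effect is simply that the transfer now
fails; a minimal sketch, assuming a plain FTP download of a file missing on
the server (host and file name are made up for illustration):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      /* hypothetical host and file name */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/missing.txt");
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        /* a 550 on SIZE for a download now surfaces as a transfer error */
        fprintf(stderr, "FTP download failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }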
libcurl to somewhat reduce the size of the binary. Run configure
--disable-proxy.
Curl_resolv_timeout function to reduce coupling.
downloads!
CURLINFO_REDIRECT_URL in multi mode" also contained a patch that fixed the
problem.
with realm="". http://curl.haxx.se/bug/view.cgi?id=2126435
switching from one protocol to another in a single request (e.g.
redirecting from HTTP to FTP as in test 1055) by resetting
state.expect100header before every request.
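
The fix itself is internal state handling, but the related client-side knob
may be worth a sketch: an application that wants to avoid the Expect:
100-continue handshake entirely can send an empty Expect: header. This is a
general technique, not part of the fix (URL and form data are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* an empty Expect: header keeps libcurl from adding
         "Expect: 100-continue" to larger POST/PUT requests */
      struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "field=value");
      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }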
date parser function. This makes our function less dependent on system-
provided functions; instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
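
The public entry point to this parser is curl_getdate(); a short usage
sketch (the date string is arbitrary):

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* returns seconds since the epoch, or -1 if the string cannot be
       parsed; the second argument is unused and can be NULL */
    time_t t = curl_getdate("Fri, 12 Sep 2008 15:30:00 GMT", NULL);
    if(t != (time_t)-1)
      printf("parsed: %ld\n", (long)t);
    return 0;
  }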
implementation".
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than EAGAIN now cause a proper CURLE_RECV_ERROR to be
returned. This made test case 160 fail, so I've now disabled it until we can
figure out another way to exercise that logic.
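
From an application's point of view the change only shows up in the return
code; a sketch of checking for it (the URL is a placeholder):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      res = curl_easy_perform(curl);
      if(res == CURLE_RECV_ERROR)
        /* a failed recv() other than EAGAIN now ends up here */
        fprintf(stderr, "receive failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }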
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
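
A compact sketch of the setup the report describes: one easy handle doing an
HTTPS request through an HTTP proxy (which forces a CONNECT tunnel), driven
by the multi interface. Host names are placeholders, and curl_multi_wait()
is used only for brevity; it comes from later libcurl releases than the one
this entry refers to:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 0;

    /* an https:// URL fetched through an HTTP proxy makes libcurl CONNECT */
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(easy, CURLOPT_PROXY, "http://proxy.example.com:3128");
    curl_multi_add_handle(multi, easy);

    do {
      int numfds;
      curl_multi_perform(multi, &running);
      if(running)
        curl_multi_wait(multi, NULL, 0, 1000, &numfds);
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }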
that memdebug.h is included in the test programs.
Also, leave the existing SIGALRM handler alone if the timeout is too small
to handle.
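
Related, from the application side: a program that prefers to keep libcurl
away from signals and SIGALRM altogether (for instance in threaded code) can
set CURLOPT_NOSIGNAL. This is a general option rather than part of the fix
above, and the URL is a placeholder:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* no signal handlers are installed; note that without SIGALRM the
         synchronous resolver cannot time out name lookups */
      curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
      curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }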
error message.
code when fdopen() is not available, to avoid a compiler error.
FreeBSD ports system.
examples that I found in the FreeBSD ports system.
page.
sites in cases where the cookie clearly has a very old expiry date. The
condition was simply that libcurl's date parser would fail to convert the
date, and it would then count as a (time-based) match. Starting now, a date
that cannot be parsed due to an unsupported date format or date range causes
the cookie to not match.
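
A sketch of what this means for an application that feeds cookies in by
hand (domain, name and date are made up): a cookie whose expires attribute
parses to a time in the past, or that cannot be parsed at all, is no longer
sent.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");  /* enable cookies */
      /* CURLOPT_COOKIELIST also accepts a Set-Cookie: style line */
      curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                       "Set-Cookie: old=1; domain=example.com; path=/; "
                       "expires=Wed, 01 Jan 1975 00:00:00 GMT");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);  /* the expired cookie is not sent */
      curl_easy_cleanup(curl);
    }
    return 0;
  }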
request.
Detect cases where an upload must be sent chunked while the server supports
only HTTP 1.0, and return CURLE_UPLOAD_FAILED.
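
For context, this is the kind of request that only works against HTTP 1.1:
an upload of unknown size sent with chunked transfer-encoding through a read
callback. A sketch with placeholder URL and data:

  #include <string.h>
  #include <curl/curl.h>

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    /* hand libcurl one short chunk of made-up data, then signal EOF */
    const char *data = "hello";
    int *done = (int *)userp;
    size_t len = strlen(data);
    if(*done || len > size * nitems)
      return 0;
    memcpy(buf, data, len);
    *done = 1;
    return len;
  }

  int main(void)
  {
    int done = 0;
    CURL *curl = curl_easy_init();
    if(curl) {
      /* no CURLOPT_INFILESIZE is set, so the request goes out chunked;
         against a server that only does HTTP 1.0 this upload now fails
         with CURLE_UPLOAD_FAILED */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &done);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }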
CURLOPT_POST301 (but adds a define for backwards compatibility for those who
don't define CURL_NO_OLDIES). This option now also lets you change libcurl's
behavior for an HTTP 302 response after a POST, so that it does not switch
to GET in the subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I
edited the patch somewhat before commit. The curl tool got a matching
--post302 option. Test case 1076 was added to verify this.
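
The truncated first lines do not show the new option's name; presumably this
is where CURLOPT_POSTREDIR came in, which is what the sketch below assumes
(URL and form data are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "field=value");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* keep the method as POST when following 301 and 302 responses */
      curl_easy_setopt(curl, CURLOPT_POSTREDIR,
                       CURL_REDIR_POST_301 | CURL_REDIR_POST_302);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

The --post302 option mentioned above is the curl tool's way of asking for
the 302 half of this behavior.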
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info, which a
client can then extract after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
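
A sketch of extracting the gathered data after the transfer, following the
shape of the CURLINFO_CERTINFO documentation (the struct-based access shown
here is how later libcurl versions expose it; the URL is a placeholder):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);  /* collect cert info */
      res = curl_easy_perform(curl);
      if(res == CURLE_OK) {
        struct curl_certinfo *ci = NULL;
        if(!curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) && ci) {
          int i;
          printf("%d certificates\n", ci->num_of_certs);
          for(i = 0; i < ci->num_of_certs; i++) {
            struct curl_slist *s;
            for(s = ci->certinfo[i]; s; s = s->next)
              printf("  %s\n", s->data);  /* "name:value" strings */
          }
        }
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }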
during certain conditions. I also changed this code to use realloc() based
on Daniel Fandrich's suggestion.
706 and 707.
file for libcurl, and while doing that fix he unified how the supported
protocols and features are extracted and used with curl-config.in, so both
tools should now always stay in sync.
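
The same protocol and feature information is also reachable from C through
curl_version_info(), which can be a handy cross-check on what a particular
build supports; a small sketch:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    const char *const *p;

    printf("libcurl %s\nprotocols:", info->version);
    for(p = info->protocols; *p; p++)
      printf(" %s", *p);
    printf("\n");
    if(info->features & CURL_VERSION_SSL)
      printf("feature: SSL\n");
    if(info->features & CURL_VERSION_LIBZ)
      printf("feature: libz\n");
    return 0;
  }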
to HTTP 1.0 upon receiving a response from the HTTP server. Tests 1072 and
1073 are similar to test 1069 in that they involve the impossible scenario
of sending chunked data to an HTTP 1.0 server. All of these currently fail
and have been added to DISABLED.
Added test 1075 to test --anyauth with Basic authentication.
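
The libcurl counterpart of the curl tool's --anyauth is CURLAUTH_ANY, which
lets libcurl pick the strongest method the server offers (possibly plain
Basic); a sketch with placeholder URL and credentials:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      /* negotiate: libcurl selects from what the server offers */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANY);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }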
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
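
On the "how to stop on error" question: one documented way for an
application to stop sending from its own side is to return
CURL_READFUNC_ABORT from the read callback, which ends the transfer with
CURLE_ABORTED_BY_CALLBACK. A hedged sketch; the abort condition here is just
a flag the application would set itself, and the URL is a placeholder:

  #include <curl/curl.h>

  struct upload_state {
    int abort_now;  /* set by the application when it wants to stop */
  };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    struct upload_state *st = (struct upload_state *)userp;
    if(st->abort_now)
      return CURL_READFUNC_ABORT;  /* stop sending immediately */
    /* otherwise fill buf with up to size * nitems bytes and return the
       number of bytes copied; returning 0 means end of data */
    (void)buf; (void)size; (void)nitems;
    return 0;
  }

  int main(void)
  {
    struct upload_state st = { 0 };
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &st);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }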
files being mirrored) and thus I've changed the URL in the cookiejar header
to no longer use curlm.haxx.se but instead use the main site curl.haxx.se.
an unlock in between) for a certain case, and that in fact works when using
regular Windows mutexes but not with pthreads! A lock should of course not
be taken a second time while already held, so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
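
For background on why the platforms differ: a Windows CRITICAL_SECTION may
be re-entered by the thread that already holds it, while relocking a default
pthread mutex deadlocks or is undefined. The actual fix is simply to not
take the lock twice; the sketch below only illustrates that a pthread mutex
has to be made recursive explicitly to tolerate that pattern (build with
-pthread):

  #include <pthread.h>

  int main(void)
  {
    pthread_mutex_t lock;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* PTHREAD_MUTEX_RECURSIVE lets the owning thread lock again;
       the default mutex kind does not */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);
    pthread_mutex_lock(&lock);   /* ok only because the mutex is recursive */
    pthread_mutex_unlock(&lock);
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
  }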
the HTTP method to GET (or HEAD) when given a value of 0.
libcurl - Win32 DLL Debug
libcurl - Win32 DLL Release
libcurl - Win32 LIB Debug
libcurl - Win32 LIB Release
1021 and 1067.
supporting configure's --disable-largefile option for WIN32 targets also.
Non-configure systems which do not use the config-win32.h configuration
file, and want to use the WIN32 file API, must define USE_WIN32_LARGE_FILES
or USE_WIN32_SMALL_FILES as appropriate in their own configuration files.
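
For a non-configure Win32 build that keeps its own configuration header,
that would look roughly like this (the header name is just an example):

  /* my_config_win32.h - hand-maintained Win32 build settings (example) */

  /* pick exactly one of these to select the WIN32 file API flavor */
  #define USE_WIN32_LARGE_FILES 1
  /* #define USE_WIN32_SMALL_FILES 1 */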
firefox-db2pem.sh conversion script that converts a local Firefox db of CA
certs into PEM format, suitable for use with an OpenSSL- or GnuTLS-built
libcurl.
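
Once the script has produced a PEM bundle, pointing libcurl at it is a
one-liner; the output file name below is just an example:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* use the CA bundle extracted from the Firefox certificate db */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "firefox-ca-bundle.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }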
interface, and the proxy would send Connection: close during the
authentication phase. http://curl.haxx.se/bug/view.cgi?id=2069047
which caused an error when the second header was dumped due to stdout
being closed. Added test case 1066 to verify. Also fixed a potential
problem where a closed file descriptor might be used for an upload
when more than one URL is given.
support is provided using WIN32 functions directly.