Age | Commit message | Author |
|
if add2list() returns an error, bail out!
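A minimal sketch of the idea; the add2list() shape shown here is an
assumption modelled on curl_slist_append(), not the tool's actual code:

  #include <curl/curl.h>

  /* hypothetical helper: append to a curl_slist, reporting failure */
  static CURLcode add2list(struct curl_slist **list, const char *data)
  {
    struct curl_slist *newlist = curl_slist_append(*list, data);
    if(!newlist)
      return CURLE_OUT_OF_MEMORY;
    *list = newlist;
    return CURLE_OK;
  }

  static CURLcode build_headers(struct curl_slist **headers)
  {
    CURLcode err = add2list(headers, "X-Example: yes");
    if(err)
      return err; /* bail out instead of continuing with a broken list */
    return add2list(headers, "Accept: */*");
  }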
|
|
Instead of reopening the downloaded file, fsetxattr uses the (already
open) file descriptor to attach extended attributes. This makes the
procedure more robust against errors caused by moved or deleted files.
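A sketch of the difference, assuming a Linux-style <sys/xattr.h> and an
illustrative attribute name:

  #include <string.h>
  #include <sys/xattr.h>

  /* attach the source URL to the still-open download; using the file
     descriptor (fsetxattr) instead of the path (setxattr) means a later
     rename or unlink of the file cannot make the call miss the file or
     hit the wrong one */
  static int write_origin_url(int fd, const char *url)
  {
    return fsetxattr(fd, "user.example.origin.url", url, strlen(url), 0);
  }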
|
|
CURLOPT_RESOLVE is a new option that sends along a curl_slist with
name:port:address sets that will populate the DNS cache with entries so
that requests can be "fooled" into using another host than the one that
would otherwise have been used. Previously we've encouraged the use of Host: for
that when dealing with HTTP, but this new feature has the added bonus
that it allows the name from the URL to be used for TLS SNI and server
certificate name checks as well.
This is a first change. Surely more will follow to make it decent.
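A sketch of the option in use from libcurl; the host name and address
are examples only:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* "fool" requests for example.com:443 into connecting to 127.0.0.1
         while TLS SNI and certificate checks still use the URL's name */
      struct curl_slist *resolve =
        curl_slist_append(NULL, "example.com:443:127.0.0.1");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_slist_free_all(resolve);
      curl_easy_cleanup(curl);
    }
    return 0;
  }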
|
|
Removed the code that was needed for libcurl before 7.19.0, which is now
more than two years old.
Simplified the top comment and corrected the URL.
|
|
setxattr is a glibc call to set extended attributes, so configure now
checks for it and the code is adapted to only build when the
functionality is present.
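A sketch of the conditional build; the HAVE_SETXATTR macro name and the
attribute name are assumptions, not taken from the commit:

  #include <string.h>

  #ifdef HAVE_SETXATTR          /* set by configure when the call exists */
  #include <sys/xattr.h>
  static int note_mimetype(int fd, const char *ctype)
  {
    return fsetxattr(fd, "user.mime_type", ctype, strlen(ctype), 0);
  }
  #else
  static int note_mimetype(int fd, const char *ctype)
  {
    (void)fd;
    (void)ctype;
    return 0;                   /* quietly a no-op when unsupported */
  }
  #endif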
|
|
It is often convenient to trace back the source of a once-downloaded
file; this patch makes curl store the source URL and other metadata
alongside the retrieved file by using extended attributes (if
supported by the file system and enabled by --xattr).
|
|
Some options, such as automatic decompression and some SSL-related
ones, will now bail out if the underlying libcurl doesn't have support
for the particular feature needed.
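A sketch of the kind of check involved, using curl_version_info()
feature bits; the wording and the option it guards are illustrative:

  #include <stdio.h>
  #include <curl/curl.h>

  /* bail out early if this libcurl was built without zlib instead of
     quietly ignoring a request for automatic decompression */
  static int require_libz(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    if(!(info->features & CURL_VERSION_LIBZ)) {
      fprintf(stderr, "automatic decompression requires a libcurl built "
              "with zlib\n");
      return 1;
    }
    return 0;
  }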
|
|
If the filename contains a backslash, only use the filename portion. The
idea is that even systems that don't handle backslashes as path
separators probably want that path part removed for convenience.
This flaw is considered a security problem, see the curl security
vulnerability http://curl.haxx.se/docs/adv_20101013.html
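A sketch of the trimming, not the exact patch:

  #include <string.h>

  /* keep only what follows the last backslash so a crafted name such as
     "..\..\evil.exe" cannot point outside the current directory */
  static const char *strip_backslash_path(const char *filename)
  {
    const char *last = strrchr(filename, '\\');
    return last ? last + 1 : filename;
  }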
|
|
if ( => if(
while ( => while(
and some other changes in a similar spirit, trying to make the
whole file use the same style.
|
|
Otherwise, there could be problems running in certain locales.
|
|
It was introduced in commit eeb2cb05 along with the -F type=
change. Also fixed a typo in the name of the magic filename=
parameter. Tweaked tests 39 and 173 to better test this path.
|
|
The -F option allows some custom parameters within the given string, and
those parameters are separated with semicolons. You can for example specify
"name=daniel;type=text/plain" to set the content-type for the
field. However, that use of semicolons made it not work properly if you
specified one within the content-type, as in:
"name=daniel;type=text/plain;charset=UTF-8"
... because the second one would be seen as a separator, and "charset" is
not a parameter curl knows anything about, so it was just silently discarded.
The new logic checks whether the semicolon and the following keyword look
like a parameter it knows about; if not, the semicolon is assumed to be
part of the content-type string itself.
I modified test case 186 to verify that this works as intended.
Reported by: Larry Stone
Bug: http://curl.haxx.se/bug/view.cgi?id=3048988
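A sketch of the heuristic; the keyword list here is illustrative, not
the full set curl recognises:

  #include <string.h>

  /* return non-zero if the text after a ';' starts with a form parameter
     keyword curl knows; otherwise the ';' belongs to the content-type,
     as in "type=text/plain;charset=UTF-8" */
  static int is_known_param(const char *p)
  {
    static const char *const known[] = { "type=", "filename=" };
    size_t i;
    for(i = 0; i < sizeof(known)/sizeof(known[0]); i++)
      if(!strncmp(p, known[i], strlen(known[i])))
        return 1;
    return 0;
  }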
|
|
original bug report at https://bugzilla.redhat.com/622520
|
|
The --retry logic does retry HTTP requests when some specific response
codes are returned, but because the -f option sets CURLOPT_FAILONERROR in
libcurl, the return codes are different in such situations and the curl
tool then failed to consider them for retrying.
Reported by: Mike Power
Bug: http://curl.haxx.se/bug/view.cgi?id=3037362
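A sketch of the decision the tool now has to make; the set of retried
response codes is illustrative:

  #include <curl/curl.h>

  static int should_retry(CURLcode result, long response)
  {
    /* with -f, libcurl reports CURLE_HTTP_RETURNED_ERROR instead of
       returning CURLE_OK along with the response code */
    if(result == CURLE_HTTP_RETURNED_ERROR)
      return 1;
    if(result == CURLE_OK &&
       (response == 500 || response == 502 ||
        response == 503 || response == 504))
      return 1;
    return 0;
  }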
|
|
The --remote-header-name option for the command-line tool assumes that
everything beyond the filename= field is part of the filename, but that
might not always be the case, for example:
Content-Disposition: attachment; filename=file.txt; modification-date=...
This fix chops the filename off at the next semicolon, if there is one.
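A sketch of the chop:

  #include <string.h>

  /* terminate the name at the first ';' so trailing parameters such as
     "; modification-date=..." never become part of the file name */
  static void chop_filename(char *filename)
  {
    char *semi = strchr(filename, ';');
    if(semi)
      *semi = '\0';
  }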
|
|
When getting multiple URLs, curl didn't properly reset the byte counter
after a successful transfer, so if a subsequent transfer failed it
would wrongly use the previous byte counter and behave badly (segfault)
because of that. The code assumes that the byte counter and the 'stream'
pointer are kept in sync.
Reported by: Jon Sargeant
Bug: http://curl.haxx.se/bug/view.cgi?id=3028241
|
|
The test would loop forever if authtype bit 0 wasn't set.
|
|
Since uploading from stdin is very likely not to work with anyauth and
its multi-phase probing for which authentication method to actually use, alert
the user about it. Multi-phase negotiate almost certainly will involve
sending data and thus libcurl will need to rewind the stream to send
again, and it cannot do that with stdin.
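A sketch of the alert; the exact condition and wording used by the tool
will differ:

  #include <stdio.h>
  #include <string.h>
  #include <curl/curl.h>

  /* warn when the upload source is stdin and anyauth was requested: a
     second authentication round trip needs the request body rewound and
     resent, which is impossible with a pipe */
  static void warn_stdin_anyauth(const char *uploadfile, unsigned long auth)
  {
    if(uploadfile && !strcmp(uploadfile, "-") && (auth == CURLAUTH_ANY))
      fprintf(stderr, "warning: uploading from stdin with --anyauth may "
              "fail if the server requires a rewind\n");
  }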
|
|
I think the [REMARK] and commented function calls cluttered the code a
bit too much and made the generated code ugly to read. Now we instead
track the remarks separately and just list them at the end of the
generated code as additional information.
|
|
It makes the --libcurl output easier to follow.
|
|
And additionally, don't show function or object pointers' actual values,
since they make no sense to anyone. Show 'functionpointer' and
'objectpointer' instead.
|
|
In the generated code --libcurl makes, all calls to curl_easy_setopt()
that use *_LARGE options now have the value typecast to curl_off_t, so
that it works correctly on 32-bit systems with a 64-bit curl_off_t type.
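A sketch of what a generated line now looks like; the option and value
are examples:

  #include <curl/curl.h>

  static void set_options(CURL *hnd)
  {
    /* the cast makes the 64-bit value travel correctly through the
       varargs call even where 'long' is only 32 bits */
    curl_easy_setopt(hnd, CURLOPT_INFILESIZE_LARGE, (curl_off_t)6291456);
  }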
|
|
... and CURL_LLONG_MAX -> CURL_OFF_T_MAX
|
|
The main change is to allow input from a user-specified method when
one is set with CURLOPT_READFUNCTION.
All calls to fflush(stdout) in telnet.c were removed, which makes
using 'curl telnet://foo.com' painful since prompts and other data
are not always returned to the user promptly. Use
'curl --no-buffer telnet://foo.com' instead. In general,
the user should have their CURLOPT_WRITEFUNCTION do a fflush
for interactive use.
Also fix assumption that reading from stdin never returns < 0.
Old code could crash in that case.
Call progress functions in telnet main loop.
Signed-off-by: Ben Greear <greearb@candelatech.com>
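A sketch of driving a telnet transfer with the custom callbacks and the
suggested flushing; the host name is the example from the text above:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    FILE *in = (FILE *)userp;
    return fread(buf, 1, size * nitems, in);  /* 0 ends the input */
  }

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t n = fwrite(ptr, 1, size * nmemb, stdout);
    fflush(stdout);               /* keep prompts visible interactively */
    (void)userp;
    return n;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "telnet://foo.com");
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, stdin);
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }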
|
|
--proto tells curl to use the listed protocols for its initial
retrieval
--proto-redir tells curl to use the listed protocols after a
redirect
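For reference, a sketch of the equivalent restriction via the libcurl
options the tool uses underneath (that mapping is my reading, not stated
in the commit):

  #include <curl/curl.h>

  static void restrict_protocols(CURL *curl)
  {
    /* initial request: HTTP or HTTPS only; after a redirect: HTTPS only */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     CURLPROTO_HTTP | CURLPROTO_HTTPS);
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTPS);
  }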
|
|
The -O option caused curl to crash on Windows and DOS due to the
tool writing outside the bounds of an allocated buffer.
|
|
The feature that uses the file name given in a
Content-disposition: header didn't properly skip trailing
carriage returns and linefeed characters from the end of the file
name when it was given without quotes.
|
|
Make the function call with (void) as we don't care about the
return code.
|
|
Using 'int' is not fine on 64-bit systems.
|
|
It was reported (http://curl.haxx.se/bug/view.cgi?id=2958074) that curl on
Windows with option --trace-time did not use local time when timestamping
trace lines. This could also happen on other systems, depending on the
time source.
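A sketch of building the timestamp from local time; the real code also
adds sub-second precision:

  #include <stdio.h>
  #include <time.h>

  static void format_trace_time(char *buf, size_t len)
  {
    time_t now = time(NULL);
    struct tm *lt = localtime(&now);   /* local time, not UTC */
    strftime(buf, len, "%H:%M:%S", lt);
  }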
|
|
|
'int' which fits better with existing CURL_PROGRESS_* definitions.
|
|
(1) conversion from 'const int ' to 'unsigned char ', possible loss of data
(2) conditional expression is constant
|
|
versions --ftp-ssl and --ftp-ssl-reqd, as these options are now used to
control SSL/TLS for IMAP, POP3 and SMTP in addition to FTP. The old
option names still work, but the new ones are the preferred ones
(listed and documented).
|