- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
SANITIZE_ALLOW_COLONS: Allow colons
SANITIZE_ALLOW_PATH: Allow path separators and colons
SANITIZE_ALLOW_RESERVED: Allow reserved device names
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit, sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH, which allows path separators and colons to pass
through.
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
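A rough sketch of how a caller is meant to use these flags when the name
comes from a user-specified --output; the exact sanitize_file_name()
signature and error handling below are assumptions for illustration, not
lifted from the tree:

  /* Hypothetical usage: with SANITIZE_ALLOW_PATH the drive colon and the
     path separators in "c:\foo\sub\file" survive, while other banned
     characters would still be replaced. */
  char *clean = NULL;
  if(!sanitize_file_name(&clean, "c:\\foo\\sub\\file", SANITIZE_ALLOW_PATH)) {
    /* use the sanitized name ... */
    free(clean);
  }

For a server-derived remote-name the flag is left out, so colons and
separators keep getting replaced as before.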
|
|
Free an existing domain before replacing it.
Bug: https://github.com/curl/curl/issues/635
Reported-by: silveja1@users.noreply.github.com
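A minimal sketch of the pattern the fix enforces; the function and field
names are illustrative, not the actual cookie code:

  #include <stdlib.h>
  #include <string.h>

  /* Illustrative only: replace a previously stored, heap-allocated domain
     without leaking the old allocation. */
  static int set_domain(char **stored, const char *domain)
  {
    char *copy = strdup(domain);
    if(!copy)
      return 1;                /* out of memory */
    free(*stored);             /* free the existing domain before replacing */
    *stored = copy;
    return 0;
  }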
|
|
Closes #632
|
|
I removed the scheme prefix from the URLs referencing this host name, as
we don't own/run that host anymore, but the name is kept for historic
reasons.
|
|
It isn't used by the code under current conditions, but for safety it
seems sensible to at least not crash on such input.
Extended unit test 1395 to verify this, as well as a plain "/" input.
|
|
Closes #621
|
|
Due to path separators being incorrectly sanitized in --output
pathnames, e.g. -o c:\foo => c__foo
This is a partial revert of 3017d8a until I write a proper fix. The
remote-name will continue to be sanitized, but if the user specified an
--output with string replacement (#1, #2, etc.), that data is left
unsanitized until I finish a fix.
Bug: https://github.com/bagder/curl/issues/624
Reported-by: Octavio Schroeder
|
|
.. also warn about letting the server pick the filename.
|
|
Closes #617
|
|
Closes https://github.com/bagder/curl/pull/618
|
|
tool_doswin.c:185:14: warning: 'msdosify' defined but not used
[-Wunused-function]
Closes https://github.com/bagder/curl/pull/616
|
|
Reported-by: Bernard Spil
|
|
Proxy NTLM authentication should compare credentials when re-using a
connection, just like host authentication does, since NTLM
authenticates the connection.
Example:
curl -v -x http://proxy:port http://host/ -U good_user:good_pwd
  --proxy-ntlm --next -x http://proxy:port http://host/
  [-U fake_user:fake_pwd --proxy-ntlm]
CVE-2016-0755
Bug: http://curl.haxx.se/docs/adv_20160127A.html
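A hedged sketch of the idea behind the fix; the helper below is
illustrative and not the actual connection-cache code:

  #include <string.h>
  #include <stdbool.h>

  /* Illustrative only: a cached connection that was authenticated with
     proxy NTLM may only be re-used when both the proxy user name and the
     password match the ones of the new request. */
  static bool proxy_creds_match(const char *cached_user, const char *cached_pwd,
                                const char *new_user, const char *new_pwd)
  {
    return cached_user && cached_pwd && new_user && new_pwd &&
           !strcmp(cached_user, new_user) &&
           !strcmp(cached_pwd, new_pwd);
  }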
|
|
curl does not sanitize colons in a remote file name that is used as the
local file name. This may lead to a vulnerability on systems where the
colon is a special path character. Currently Windows/DOS is the only OS
where this vulnerability applies.
CVE-2016-0754
Bug: http://curl.haxx.se/docs/adv_20160127B.html
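For illustration only (not the actual curl code), the core of such a
sanitization is simply to replace the colon before the name is used
locally:

  /* Illustrative only: on Windows/DOS a colon can select a drive letter
     or an NTFS alternate data stream, so a remote-supplied file name must
     not keep it. */
  static void replace_colons(char *filename)
  {
    for(; *filename; filename++) {
      if(*filename == ':')
        *filename = '_';
    }
  }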
|
|
Current FAQ didn't make it clear where the main repo is.
Closes #612
|
|
Bug: http://curl.haxx.se/mail/lib-2016-01/0123.html
|
|
- Switch from verifying the pinned public key in a callback during the
certificate verification to verifying it inline after the certificate
verification.
The callback method had three problems:
1. If a pinned public key didn't match, CURLE_SSL_PINNEDPUBKEYNOTMATCH
was not returned.
2. If peer certificate verification was disabled, the pinned key
verification did not take place as it should have.
3. (related to #2) If there was no certificate of depth 0, the callback
would not have checked the pinned public key.
Though all those problems could have been fixed, it would have made the
code more complex. Instead we now verify inline after the certificate
verification in mbedtls_connect_step2.
Ref: http://curl.haxx.se/mail/lib-2016-01/0047.html
Ref: https://github.com/bagder/curl/pull/601
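Roughly what the inline check amounts to, as a hedged sketch: ssl stands
for the mbedTLS session context and check_pinned_pubkey() is a
hypothetical helper (returning non-zero on mismatch); only
mbedtls_ssl_get_peer_cert() is the real mbedTLS call:

  /* Illustrative only: after the handshake in mbedtls_connect_step2,
     fetch the peer certificate and compare its public key against the
     pinned key, regardless of whether peer verification was enabled. */
  if(pinnedpubkey) {
    const mbedtls_x509_crt *peercert = mbedtls_ssl_get_peer_cert(&ssl);
    if(!peercert || check_pinned_pubkey(pinnedpubkey, peercert))
      return CURLE_SSL_PINNEDPUBKEYNOTMATCH;
  }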
|
|
Because disabling the peer verification (--insecure) must not disable
the public key pinning check (--pinnedpubkey).
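In libcurl option terms, this is the combination that still has to fail
with CURLE_SSL_PINNEDPUBKEYNOTMATCH on a key mismatch; the URL and the
key file path are placeholders:

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);  /* --insecure */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBKEY,
                     "/path/to/pinned_pubkey.pem");      /* --pinnedpubkey */
    /* a pinned-key mismatch must still abort the transfer */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }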
|
|
The CURLOPT_SSH_PUBLIC_KEYFILE option has been documented to handle
empty strings specially since curl-7_25_0-31-g05a443a but the behavior
was unintentionally removed in curl-7_38_0-47-gfa7d04f.
This commit restores the original behavior and clarifies in the
documentation that NULL and "" both have the same meaning when passed
to CURLOPT_SSH_PUBLIC_KEYFILE.
Bug: http://curl.haxx.se/mail/lib-2016-01/0072.html
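With the restored behavior these two calls are equivalent again; curl
here is an already created easy handle and the key path is a placeholder:

  /* Both forms tell libcurl not to use a public key file, letting libssh2
     derive the public key from the private key instead. */
  curl_easy_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, NULL);
  curl_easy_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, "");
  curl_easy_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE, "/home/user/.ssh/id_rsa");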
|
|
... by extracting the LIB + REASON from the OpenSSL error code. OpenSSL
1.1.0+ returns a new function number for another certificate failure, so
this required a fix, and this is the better way to catch this error
anyway.
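A hedged sketch of that kind of check; the ERR_* and SSL_R_* names are
standard OpenSSL, while the surrounding handling is illustrative:

  #include <openssl/err.h>
  #include <openssl/ssl.h>

  /* Illustrative only: classify a failed handshake by library and reason
     rather than by function number, which OpenSSL 1.1.0 changed. */
  unsigned long errcode = ERR_peek_last_error();
  if(ERR_GET_LIB(errcode) == ERR_LIB_SSL &&
     ERR_GET_REASON(errcode) == SSL_R_CERTIFICATE_VERIFY_FAILED) {
    /* report this as a certificate verification failure */
  }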
|
|
The configure test uses AC_TRY_RUN to figure out if an IPv6 socket
works, and testing like that doesn't work for cross-compiles. These days
IPv6 support is widespread, so a blind guess is probably more likely to
be 'yes' than 'no' now.
Further: anyone who cross-compiles can use configure's --disable-ipv6 to
explicitly disable IPv6, and that also works for cross-compiles.
Made happen after discussions in issue #594
|
|
Closes #514
|
|
When an HTTP/2 upgrade request fails (no protocol switch), it would
previously detect that as still possible to pipeline on (which is
correct) and do that when PIPEWAIT was enabled, even if pipelining was
not explicitly enabled.
It should only pipeline if explicitly asked to.
Closes #584
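For reference, pipelining is opted into on the multi handle, while
CURLOPT_PIPEWAIT on an easy handle only asks that transfer to wait for a
connection it could share; a minimal illustration:

  #include <curl/curl.h>

  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();

  /* pipelining has to be requested explicitly ... */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_HTTP1);

  /* ... PIPEWAIT alone only makes the transfer wait for a reusable
     connection, it does not enable pipelining by itself */
  curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_multi_add_handle(multi, easy);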
|
|
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes #597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
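A small example of the behavior this patch fixes, with example.com
standing in for the scheme-less host name:

  #include <curl/curl.h>
  #include <stdio.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    char *eff = NULL;
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &eff) == CURLE_OK)
      printf("%s\n", eff);  /* now https://example.com/path, not HTTPS://... */
    curl_easy_cleanup(curl);
  }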
|
|
The script should use the just-built curl, not the system one. This fixes
zsh completion generation when no system curl is installed.
|
|
Instead of generating a broken completion file.
|
|
Closes #596
|