Age | Commit message | Author |
|
|
|
Tests 1409 and 1410 verify the stricter numeric option parser
introduced the other day in commit f2b6ebed7b.
|
|
I made "connmon" not get initialized properly before use, and I use the
big hammer and make sure we always clear the entire struct to avoid any
problem like this in the future.
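A minimal sketch of the "clear the whole struct" approach; the struct and field
names here are illustrative stand-ins, not curl's actual definitions:

  #include <string.h>

  /* hypothetical per-request state carrying the connection-monitor flag */
  struct reqstate {
    int connmon;      /* non-zero if connection monitoring was requested */
    long contentlen;  /* other fields must also start out well defined */
  };

  static void init_reqstate(struct reqstate *r)
  {
    /* zero the entire struct so no field is ever read uninitialized,
       even if new members get added later */
    memset(r, 0, sizeof(*r));
  }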
|
|
Two commits ago we fixed a bug where the connection would be closed
prematurely after a HEAD request. Now I added connection-monitor to test 48,
added a second HEAD request and made sure that both are sent over the same
connection.
This triggered a failure before the bug fix and now works. It will help us
avoid a future regression of this kind.
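As a rough illustration of what the test exercises, a libcurl client doing two
HEAD requests on one easy handle should see both go over the same reused
connection; this is only a sketch and the URL is a placeholder:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* send HEAD */

      curl_easy_perform(curl); /* first HEAD */

      /* second HEAD on the same handle; with the fix, libcurl keeps the
         connection open after the first response and reuses it here */
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }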
|
|
This makes verifying easier and gives us better assurance that curl closes the
connection only at the correct point in time. Adjusted tests 206 and 1008
accordingly and updated the docs for it.
|
|
A HEAD response has no body, even though it gets the same headers the
corresponding GET would, so the connection should not get closed after the
response based on the same body-length rules. This mistake caused connections
that did HEAD requests to get closed too often without a valid reason.
Bug: http://curl.haxx.se/bug/view.cgi?id=3542731
Reported by: Eelco Dolstra
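The gist of the fix, sketched with illustrative names rather than curl's actual
internals, is to exempt HEAD requests from the "unknown body length, so close
when done" rule:

  /* decide whether the connection must be closed once the response is
     complete (illustrative sketch, not curl's real code) */
  static int must_close_after_response(int request_was_head,
                                       int body_length_known)
  {
    if(request_was_head)
      return 0; /* HEAD carries no body, so an unknown length means nothing */

    /* for other requests an unknown body length forces a close, since the
       end of the body can only be signalled by shutting the connection down */
    return !body_length_known;
  }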
|
|
|
|
|
|
|
|
Updated .gitignore for NetWare-created files.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 - str2offset() no longer accepts negative numbers, since offsets are by
nature positive.
2 - introduced str2unum() for command-line options holding numeric values
that are not supposed to be negative, so that it properly complains about
apparent bad uses and mistakes.
Bug: http://curl.haxx.se/mail/archive-2012-07/0013.html
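A minimal sketch of what such a non-negative parser can look like; this is
illustrative only and not the actual str2unum() implementation:

  #include <errno.h>
  #include <stdlib.h>

  /* parse a decimal string that must be a non-negative number;
     returns 0 on success, non-zero on any bad input */
  static int parse_unsigned_num(const char *str, long *out)
  {
    char *end;
    long val;

    errno = 0;
    val = strtol(str, &end, 10);
    if(errno || end == str || *end != '\0')
      return 1; /* not a clean decimal number */
    if(val < 0)
      return 2; /* negative values are rejected outright */

    *out = val;
    return 0;
  }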
|
|
|
|
Modification based on voting result:
http://curl.haxx.se/mail/lib-2012-07/0104.html
|
|
Since the cookies are sorted by the length of their paths, giving them all the
same path length makes the test depend on the order in which the qsort()
implementation happens to put them, as qsort() guarantees nothing about the
order of equal elements. As seen in the windows/msys output posted by Guenter
in this posting:
http://curl.haxx.se/mail/lib-2012-07/0105.html
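To illustrate the point with a hypothetical comparator (not curl's cookie
code): a comparator keyed only on path length gives qsort() no way to order
equal-length entries consistently across implementations:

  #include <stdlib.h>
  #include <string.h>

  struct cookie {
    const char *name;
    const char *path;
  };

  /* sort longest path first; entries with equal path lengths compare as
     equal, so their relative order is left to the qsort() implementation */
  static int by_path_len(const void *a, const void *b)
  {
    size_t la = strlen(((const struct cookie *)a)->path);
    size_t lb = strlen(((const struct cookie *)b)->path);
    return (la < lb) - (la > lb);
  }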
|
|
|
|
The function https_getsock was only implemented properly when USE_SSLEAY or
USE_GNUTLS is defined, but it is also necessary for USE_SCHANNEL.
The problem occurs when Curl_read_plain or Curl_write_plain returns
CURLE_AGAIN. In that case CURLE_OK is returned to the multi interface and
the used socket is set to state CURL_POLL_REMOVE while the easy state is
set to CURLM_STATE_PROTOCONNECT. This is fine, because later the socket
should be set to CURL_POLL_IN or CURL_POLL_OUT via multi_getsock. That's
where https_getsock is called but doesn't return any sockets.
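For context, the getsock callback the schannel path needs is one that hands the
in-progress TLS socket back to the multi interface with the right read/write
interest. A simplified sketch, assuming curl's internal urldata.h definitions
and not claiming to be the actual patch:

  /* report the socket the TLS handshake is currently blocked on so the
     multi interface polls it again (simplified illustration) */
  static int https_getsock(struct connectdata *conn,
                           curl_socket_t *socks, int numsocks)
  {
    if(!numsocks)
      return GETSOCK_BLANK;

    socks[0] = conn->sock[FIRSTSOCKET];

    /* waiting to send handshake data means poll for writability,
       otherwise poll for readability */
    if(conn->ssl[FIRSTSOCKET].connecting_state == ssl_connect_2_writing)
      return GETSOCK_WRITESOCK(0);

    return GETSOCK_READSOCK(0);
  }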
|
|
|
|
|
|
|
|
Rewrote Curl_darwinssl_random() to not use arc4random_buf() because that
function is not available prior to iOS 4.3 and OS X 10.7.
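One way to fill a buffer without arc4random_buf() is to loop over arc4random(),
which is available on much older Apple releases; a sketch, not necessarily the
code that went in:

  #include <stdint.h>
  #include <stdlib.h>  /* arc4random() */
  #include <string.h>

  /* fill 'buf' with 'len' random bytes using plain arc4random() */
  static void random_bytes(unsigned char *buf, size_t len)
  {
    while(len) {
      uint32_t r = arc4random();
      size_t chunk = len < sizeof(r) ? len : sizeof(r);
      memcpy(buf, &r, chunk);
      buf += chunk;
      len -= chunk;
    }
  }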
|
|
|
|
|
|
|
|
Since WinSSL cannot be built without SSPI being enabled,
USE_WINSSL now defaults to the value of USE_SSPI.
The makefile now raises an error if WinSSL is enabled
while SSPI is disabled.
|
|
Renamed the external parameter USE_SSPI = yes/no to ENABLE_SSPI = yes/no.
Backwards-compatible change: USE_SSPI can still be passed as an external
parameter with a yes/no value as long as ENABLE_SSPI is not given.
USE_x defines are passed around with true/false values internally;
USE_SSPI is now aligned to this approach, but still accepts external
yes/no values being passed, just like the other defines.
|
|
- Changed space usage to line up with the rest of the file
- Renamed SSPI/IPV6_CFLAGS to CFLAGS_SSPI/IPV6 to be
consistent with the other CFLAGS_x variables
- Made use of the existing CFLAGS_IPV6 (previously IPV6_CFLAGS)
instead of appending directly to CFLAGS
|
|
The code was printing a warning when SNI was set up successfully. Oops.
Printing the cipher number in verbose mode was something only TLS/SSL
programmers might understand, so I had it print the name of the cipher,
just like in the OpenSSL code. That'll be at least a little bit easier
to understand. The SecureTransport API doesn't have a method of getting
a string from a cipher like OpenSSL does, so I had to generate the
strings manually.
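Generating the strings manually essentially means a lookup table; a tiny
excerpt along these lines, with only made-up coverage and not the actual list:

  #include <Security/CipherSuite.h>  /* SSLCipherSuite constants */

  /* map a SecureTransport cipher suite value to a printable name;
     only a couple of entries shown, the real table is far longer */
  static const char *ciphersuite_name(SSLCipherSuite c)
  {
    switch(c) {
    case TLS_RSA_WITH_AES_128_CBC_SHA:
      return "TLS_RSA_WITH_AES_128_CBC_SHA";
    case TLS_RSA_WITH_AES_256_CBC_SHA:
      return "TLS_RSA_WITH_AES_256_CBC_SHA";
    default:
      return "(unknown cipher)";
    }
  }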
|
|
|
|
Bug #75 updated with additional info; the issue still remains for builds with
other backends.
|
|
|
|
|
|
Reduce the number of #ifdef UNICODE directives used in source files.
|
|
Tests 1008 and 206 don't show the disconnect since it happens while SWS
awaits a new request, but test 503 does, so its verify section needs that
string added.
|
|
When doing CONNECT requests, libcurl must make sure the connection is kept
alive as much as possible. NTLM requires it and it is generally good for
other cases as well.
NTLM over CONNECT requests has been broken since the regression I
introduced in my CONNECT cleanup commits that started with 41b02378342,
included since 7.25.0.
Bug: http://curl.haxx.se/bug/view.cgi?id=3538625
Reported by: Marcel Raad
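For reference, NTLM proxy authentication over CONNECT is the kind of setup
this affects; a minimal client looks roughly like this, with the proxy address
and credentials as placeholders and error handling omitted:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");

      /* the NTLM handshake spans several requests on the CONNECT tunnel,
         so the proxy connection must stay alive throughout */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }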
|
|
I moved the servercmd parsing out into its own function called
parse_servercmd() and made sure it also gets used when the test number
is extracted from CONNECT requests. It turned out sws didn't do that
previously!
|
|
|
|
|
|
Using this, the server will log in the protocol output when the
connection gets disconnected, and thus we can verify correctly in the
test cases that the connection doesn't get closed prematurely. This is
important for things like NTLM to work.
Documentation added to FILEFORMAT, test 503 updated to use this.
|
|
|
|
|
|
|
|
Mention the CURL_SOCKET_TIMEOUT argument in step 6 of the typical
application.
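The argument in question is the sentinel passed to curl_multi_socket_action()
when the call is driven by a timeout rather than by socket activity; roughly,
with error handling left out:

  #include <curl/curl.h>

  /* called when the timer set via CURLMOPT_TIMERFUNCTION expires:
     tell libcurl a timeout happened rather than socket activity */
  static void on_timeout(CURLM *multi)
  {
    int still_running = 0;
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);
  }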
|
|
|