Commit messages:
- CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC allow libcurl to
  do GSS-style authentication with SOCKS5 proxies. The curl tool got the
  corresponding options --socks5-gssapi-service and --socks5-gssapi-nec to
  enable these.
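
A minimal sketch of how an application might set these options; the proxy host,
port and service name below are hypothetical placeholders:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* route the transfer through a SOCKS5 proxy (hypothetical host) */
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);
    /* GSS-API service name to authenticate against */
    curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
    /* set to 1L only for NEC SOCKS5 servers with the protection-mode quirk */
    curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```
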
- to set the desired block size to use for TFTP transfers instead of the
  default 512 bytes.
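
The option being referred to is not named in the captured text; assuming it is
CURLOPT_TFTP_BLKSIZE, a minimal sketch could look like this:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/firmware.bin");
    /* request 1400-byte blocks instead of the TFTP default of 512 bytes;
       the server may negotiate the value back down */
    curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1400L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```
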
- disable "rfc4507bis session ticket support". rfc4507bis was later turned into
  the proper RFC 5077, it seems: http://tools.ietf.org/html/rfc5077
  The enabled extension concerns session management. I wonder how often libcurl
  stops a connection and then resumes a TLS session; also, sending the session
  data is some overhead. I suggest that you just use your proposed patch (which
  explicitly disables TICKET).
  If someone writes an application with libcurl and OpenSSL and wants to enable
  the feature, they can do so in the SSL callback.
  Sharad Gupta brought this to my attention. Peter Sylvester helped me decide
  on the proper action.
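
A hedged sketch of what such an SSL callback could look like when libcurl is
built against OpenSSL; SSL_CTX_clear_options() requires OpenSSL 0.9.8m or
later, and the helper names are hypothetical:

```c
#include <curl/curl.h>
#include <openssl/ssl.h>

/* CURLOPT_SSL_CTX_FUNCTION callback: an application that wants session
   tickets back on can clear the no-ticket flag on the OpenSSL context */
static CURLcode sslctx_enable_tickets(CURL *curl, void *sslctx, void *userptr)
{
  SSL_CTX *ctx = (SSL_CTX *)sslctx;
  (void)curl;
  (void)userptr;
  SSL_CTX_clear_options(ctx, SSL_OP_NO_TICKET); /* re-enable RFC 5077 tickets */
  return CURLE_OK;
}

static void enable_session_tickets(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_enable_tickets);
}
```
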
- (http://curl.haxx.se/bug/view.cgi?id=2535504) pointing out that realms with
  quoted quotation marks in HTTP Digest headers didn't work. I've now added
  test case 1095 that verifies my fix.
- They basically offer the same thing that previously only the NO_PROXY
  environment variable offered: a list of host names that shall not use the
  proxy even if one is specified.
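
The captured text does not name the new options; assuming they are the
CURLOPT_NOPROXY option and the corresponding --noproxy command line flag, a
minimal sketch:

```c
#include <curl/curl.h>

/* sketch: use a proxy for everything except a few hosts
   (proxy and host names are hypothetical) */
static void set_proxy_exceptions(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
  /* comma-separated host names that bypass the proxy */
  curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,intranet.example.com");
}
```
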
- clarity. This does fix one problem that causes ;type=i FTP URLs to fail in
  the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is used (test case 561).
  Added tests 561 and 1092 through 1094 to test various combinations of ;type=
  and ;mode= URLs that could potentially fail in the Turkish locale.
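
For context, a sketch of how an application would combine
CURLOPT_PROXY_TRANSFER_MODE with a binary FTP transfer through an HTTP proxy
(the hosts are hypothetical):

```c
#include <curl/curl.h>

/* sketch: fetch an FTP URL via an HTTP proxy and have libcurl append the
   transfer mode (";type=i") to the URL it passes to the proxy */
static void ftp_binary_via_proxy(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.bin");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, 0L);        /* binary mode */
  curl_easy_setopt(curl, CURLOPT_PROXY_TRANSFER_MODE, 1L); /* add ;type=i */
}
```
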
- lib/Makefile.vc6 file (and thus from the vc8 and vc9 ones too).
- curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS") on a CURL handle with no
  cookie data.
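
A small sketch of the call in context; enabling the cookie engine first is just
for illustration:

```c
#include <curl/curl.h>

/* sketch: turn on the cookie engine, then erase all session cookies;
   issuing "SESS" on a handle holding no cookie data is the case that
   used to misbehave */
static void clear_session_cookies(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");     /* enable cookie engine */
  curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS"); /* drop session cookies */
}
```
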
- When using the multi interface over HTTP and the server returns a Location
  header, the running easy handle will get stuck in the CURLM_STATE_PERFORM
  state, leaving the external event loop stuck waiting for data from the
  incoming socket (when using the curl_multi_socket_action functionality).
  While this bug was pretty hard to find, it seems to require only a one-line
  fix: the break statement on line 1374 in multi.c caused the function to skip
  the call to multistate().
  How to reproduce this bug? Well, that's another question. evhiperfifo.c in
  the examples directory chokes on this bug only _sometimes_, probably
  depending on how fast the URLs are added. One way to trigger the bug is to
  write to hiper.fifo from more than one source at the same time.
- curl_easy_reset() by creating Curl_init_userdefined(). This had the side
  effect of fixing curl_easy_reset() so it now also resets
  CURLOPT_FTP_FILEMETHOD and CURLOPT_SSL_SESSIONID_CACHE.
- I have to jump through a few hoops now with the NSS library initialization
  since another part of an application may have already initialized NSS by the
  time Curl gets invoked. This patch is more careful to only shut down the NSS
  library if Curl did the initialization.
  It also adds a bit of code to set the default ciphers if the app that calls
  NSS_Init* did not call NSS_SetDomesticPolicy() or set specific ciphers. One
  might argue that this lets other application developers get lazy and/or use
  the NSS API incorrectly, and you'd be right. But still, this will avoid
  terribly difficult-to-trace crashes and is generally helpful.
- with other Makefile.netware.
- just found that ares already uses this define.
- be IPv6-aware.
- (http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
  would cause libcurl to mark a DNS cache entry "in use" eternally if the
  subsequent TCP connect failed. It would thus never get pruned and refreshed
  as it should've been.
- --disable-verbose".
- on file indexes beyond 2 or 4 GB.
- corrected spellings and more.
- pipelining, as libcurl could then easily get confused and A) work on the
  handle that was not "first in queue" on a pipeline, or even B) tell the app
  to REMOVE a socket while it was in use by a second handle in a pipeline.
  Both errors caused hanging or stalling applications.
- in combination with infof() calls.
- was actually ready to get done, as the internal time resolution is higher
  than the returned millisecond timer. It could therefore cause applications
  running on fast processors to do short bursts of busy-loops.
  curl_multi_timeout() will now only return 0 if the timeout has actually
  already been triggered.
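
A hedged sketch of the intended usage pattern: the application derives its
select() timeout from curl_multi_timeout() and only acts immediately when 0 is
returned, so it no longer busy-loops on fast machines:

```c
#include <curl/curl.h>
#include <sys/select.h>

/* sketch: wait for socket activity or until libcurl's timeout expires */
static void wait_for_activity(CURLM *multi)
{
  long timeout_ms = -1;
  struct timeval tv;
  fd_set rd, wr, ex;
  int maxfd = -1;

  curl_multi_timeout(multi, &timeout_ms);
  if(timeout_ms < 0)
    timeout_ms = 1000;   /* no timer pending: fall back to a default wait */

  tv.tv_sec = timeout_ms / 1000;
  tv.tv_usec = (timeout_ms % 1000) * 1000;

  FD_ZERO(&rd);
  FD_ZERO(&wr);
  FD_ZERO(&ex);
  curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
  if(maxfd >= 0)
    select(maxfd + 1, &rd, &wr, &ex, &tv);
  /* the caller then invokes curl_multi_perform() to let libcurl act */
}
```
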
- now has an improved ability to do the right thing when the multi interface
  (both "regular" and multi_socket) is used for SCP and SFTP transfers. This
  should result in (much) fewer busy-loop situations and thus less CPU usage
  with no speed loss.
- operation didn't complete properly if the EAGAIN equivalent was returned;
  libcurl would simply continue with the close operation only half completed.
  This ruined persistent connection re-use and caused some SSH protocol errors
  in general. The correction unfortunately adds a blocking function; doing it
  entirely non-blocking should be considered for a better fix.
- If USE_WATT32=1, one needs to use stack-based calls (-3s). So to keep the
  makefile nice and clean, specify -3s for the Winsock target too (there's
  hardly any speed gain in using -3r).
- made libcurl sometimes not properly abort problematic SFTP transfers.
- removing easy handles from multi handles when the easy handle is/was within
  an HTTP pipeline. His bug report #2351653
  (http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
  eventually fixed by a patch by Igor himself.
- specified data pointer was head.
- duphandle+curl_mutli" (http://curl.haxx.se/bug/view.cgi?id=2416182) showed
  that curl_easy_duphandle() also copied the pointer to the connection cache,
  which was plain wrong and caused a segfault if the handle was used in a
  different multi handle than the one it was duplicated from.
- in the parse_remote_port() function, as the scope id has already been
  stripped from the string.
- addresses if they were very long (more than 39 characters) due to an overly
  strict address validity parser. It now accepts addresses up to 45 bytes long.
- _ Adjust OS400 make script for non-CVS distributions.
  _ Upgrade ILE/RPG binding.
  _ Define CURL_HIDDEN_SYMBOLS on OS400, since only CURL_EXTERN-marked symbols
    are exported.
- there are servers "out there" that rely on the client doing this broken
  Digest authentication. Apache even comes with an option to work with such
  broken clients.
  The difference only matters for URLs that contain a query part (a '?'
  character and the text to the right of it).
  libcurl now supports this quirk, and you enable it by setting the
  CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
  CURLOPT_PROXYAUTH options. They are thus controlled individually for server
  and proxy.
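
A minimal sketch of enabling the quirk (URL and credentials are hypothetical):

```c
#include <curl/curl.h>

/* sketch: authenticate against a server that expects the IE-flavoured,
   broken Digest behaviour for URLs with a query part */
static void use_ie_digest(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected?user=me");
  curl_easy_setopt(curl, CURLOPT_USERPWD, "me:secret");
  /* the CURLAUTH_DIGEST_IE bit can be OR'ed with other CURLAUTH_* bits */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST_IE);
}
```
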
- particular state for the control connection like it did before for implicit
  FTPS (libcurl assumed such control connections to be encrypted while some
  FTPS servers such as FileZilla assume such connections to be in clear
  mode). Use the CURLOPT_USE_SSL option to set your desired level.
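
A sketch of setting the level explicitly for an FTP transfer (the host is
hypothetical):

```c
#include <curl/curl.h>

/* sketch: require TLS for both the control and the data connection;
   CURLUSESSL_NONE, CURLUSESSL_TRY and CURLUSESSL_CONTROL are the other
   available levels */
static void ftp_require_ssl(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
}
```
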
- researching it, it turned out he got a 550 response back from a SIZE command
  and then I stumbled over the text in RFC 3659 that says:
   The presence of the 550 error response to a SIZE command MUST NOT be taken
   by the client as an indication that the file cannot be transferred in the
   current MODE and TYPE.
  In other words: the change I did on September 30th 2008, which has been
  included in the last two releases, was a regression and a bad idea. We MUST
  NOT take a 550 response from SIZE as a hint that the file doesn't exist.