- command is a special "hack" used by the drftpd server, but even though it is
  a custom extension I've deemed it fine to add to libcurl since this server
  seems to survive and people keep using it and want libcurl to support it.
  The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also usable
  from the curl tool with --ftp-pret. Using this option on a server that
  doesn't support this command will make libcurl fail.
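A minimal sketch of how the option could be used (the URL is a placeholder and error handling is trimmed):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    /* placeholder URL for a drftpd-style server */
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
    /* ask libcurl to send PRET before requesting the passive data
       connection; the transfer fails on servers that don't understand it */
    curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}
```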
- They introduce known_host support for SSH keys to libcurl. See docs for
  details.
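The option names are not in the surviving text; presumably this refers to CURLOPT_SSH_KNOWNHOSTS. A hedged sketch with placeholder paths and URL:

```c
#include <curl/curl.h>

/* Sketch: point libcurl at an OpenSSH-style known_hosts file so the
   server's SSH host key is checked for SCP/SFTP transfers. */
static void setup_known_hosts(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/path/file.txt");
  curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                   "/home/user/.ssh/known_hosts");
}
```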
- errno is not reset on success.
- range if given colon-separated after the host name/address part. Like
  "192.168.0.1:2000-10000".
- With the curl memory tracking feature decoupled from the debug build
  feature, the CURLDEBUG and DEBUGBUILD preprocessor symbols are used as
  follows:
  o CURLDEBUG is used for curl debug memory tracking specific code
    (--enable-curldebug)
  o DEBUGBUILD is used for debug-enabled specific code (--enable-debug)
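An illustrative sketch (not taken from the source tree) of how the two symbols are intended to guard code:

```c
#ifdef DEBUGBUILD
/* code that should only exist in --enable-debug builds,
   e.g. extra consistency checks or verbose internal logging */
#endif

#ifdef CURLDEBUG
/* code specific to the memory tracking feature enabled by
   --enable-curldebug, e.g. memory allocation tracking wrappers */
#endif
```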
- the function we should use, while both curl_multi_socket() and
  curl_multi_socket_all() should be killed!
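The function name is cut off above; presumably this is curl_multi_socket_action(), which superseded the other two calls. A hedged sketch of driving it from an event loop:

```c
#include <curl/curl.h>

/* Sketch: tell libcurl about activity. On a timeout, pass
   CURL_SOCKET_TIMEOUT; on socket readiness, pass the socket and an event
   bitmask. still_running receives the number of active transfers. */
static void on_timeout(CURLM *multi)
{
  int still_running = 0;
  curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);
}

static void on_socket_readable(CURLM *multi, curl_socket_t fd)
{
  int still_running = 0;
  curl_multi_socket_action(multi, fd, CURL_CSELECT_IN, &still_running);
}
```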
- reported in the Debian package.
- Chen pointed out how curl couldn't upload with resume when reading from a
  pipe. This ended up with the introduction of a new return code for the
  CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
  that libcurl may try to resolve the situation anyway. In our case this means
  libcurl will attempt to read that much data from the stream instead of
  seeking, and that way curl can now upload with resume when data is read
  from a stream!
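The new return code is presumably CURL_SEEKFUNC_CANTSEEK; a sketch of a seek callback for a non-seekable stream, under that assumption:

```c
#include <curl/curl.h>

/* Sketch: a CURLOPT_SEEKFUNCTION callback for data read from a pipe.
   Returning CURL_SEEKFUNC_CANTSEEK tells libcurl the seek cannot be done,
   but that it may try to work around it, e.g. by reading and discarding
   data up to the resume point. */
static int seek_cb(void *userp, curl_off_t offset, int origin)
{
  (void)userp;
  (void)offset;
  (void)origin;
  return CURL_SEEKFUNC_CANTSEEK;  /* can't seek a pipe */
}

/* usage: curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb); */
```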
|  | "pointer to a char pointer". | 
- and 1 on fatal errors. Previously it only mentioned non-zero on fatal
  errors. This is a slight change in meaning, but it follows what we've done
  elsewhere before and it opens up for LOTS of more useful return codes
  whenever we can think of them...
- functions if the easy handles are used in multiple threads
- more issues for authors to consider when writing robust libcurl-using
  applications.
- (http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
  which previous libcurl versions (by design) can be tricked to access an
  arbitrary local/different file instead of a remote one when
  CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
  together with the addition of two new setopt options for controlling this
  new behavior:
  o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
    follow when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
    excludes the FILE and SCP protocols and thus you need to explicitly allow
    them in your app if you really want that behavior.
  o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
    using the primary URL option. This is useful if you want to allow a user
    or other outsiders to control what URL to pass to libcurl and yet not
    allow all protocols libcurl may have been built to support.
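A sketch of an application locking both options down to HTTP(S) only (the function and parameter names are illustrative, not from the source):

```c
#include <curl/curl.h>

/* Sketch: restrict both the primary URL and any Location: redirects to
   HTTP and HTTPS, so a redirect can no longer point libcurl at e.g.
   file:// or scp:// targets. */
static void restrict_protocols(CURL *curl, const char *user_url)
{
  curl_easy_setopt(curl, CURLOPT_URL, user_url);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                   (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                   (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
}
```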
- CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
  -1 if the sizes aren't known. Previously these returned 0, making it
  impossible to detect the difference between actually zero and unknown.
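A sketch of checking for the new "unknown" value after a transfer:

```c
#include <stdio.h>
#include <curl/curl.h>

/* Sketch: after curl_easy_perform(), ask for the download size; a value
   of -1 now means the size was not known. */
static void print_download_size(CURL *curl)
{
  double size = 0.0;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                       &size) == CURLE_OK) {
    if(size < 0.0)
      printf("download size unknown\n");
    else
      printf("download size: %.0f bytes\n", size);
  }
}
```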
- plain FTP connections, and it will then allow MKD to fail once and retry
  the CWD afterwards. This is especially useful if you're doing many
  simultaneous connections against the same server and they all have this
  option enabled, as then CWD may first fail but then another connection does
  MKD before this connection and thus MKD fails but trying CWD works! The
  numbers can (should?) now be set with the convenience enums called
  CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
  Tests have proven that if you're making an application that uploads a set
  of files to an FTP server, you will get a noticeable gain in speed if
  you're using multiple connections, and this option will then be very useful.
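The option itself is cut off above; the enums belong to CURLOPT_FTP_CREATE_MISSING_DIRS, so a sketch under that assumption (the URL is a placeholder):

```c
#include <curl/curl.h>

/* Sketch: upload to a directory that may not exist yet; with
   CURLFTP_CREATE_DIR_RETRY libcurl tolerates MKD failing once (another
   connection may have created the directory first) and retries the CWD. */
static void enable_dir_retry(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL,
                   "ftp://example.com/new/dir/file.txt");  /* placeholder */
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);
}
```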
- the condition in the previous request was unmet. This is typically a time
  condition set with CURLOPT_TIMECONDITION and was previously not possible to
  reliably figure out. From bug report #2565128
  (http://curl.haxx.se/bug/view.cgi?id=2565128)
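The info name is cut off above; presumably it is CURLINFO_CONDITION_UNMET. A sketch under that assumption:

```c
#include <stdio.h>
#include <time.h>
#include <curl/curl.h>

/* Sketch: request the document only if newer than a given time, then check
   afterwards whether the time condition went unmet (i.e. the server said
   the document was not modified and nothing was fetched). */
static void check_unmet(CURL *curl, time_t since)
{
  long unmet = 0;

  curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                   (long)CURL_TIMECOND_IFMODSINCE);
  curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)since);

  if(curl_easy_perform(curl) == CURLE_OK &&
     curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet) == CURLE_OK &&
     unmet)
    printf("condition was not met, document not fetched\n");
}
```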