| Age | Commit message (Collapse) | Author | 
|---|---|---|
|  | really didn't belong there and had no real point. | 
|  | struct, and instead use the already stored string in the handler struct. | 
|  | available. | 
|  | was a bit too quick and broke test case 1101 with that change. The order of some of the setups is sensitive. I now changed it slightly again. |
|  | detects and uses proxies based on the environment variables. If the proxy was given as an explicit option it worked, but due to the setup order mistake proxies would not be used correctly for a few protocols when picked up from '[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added test case 1106 that verifies this functionality. (http://curl.haxx.se/bug/view.cgi?id=2913886) |
|  | protocol-specific header files | 
|  | See http://curl.haxx.se/mail/lib-2009-12/0107.html | 
|  | accessing already freed memory and thus crash when using HTTPS (with OpenSSL), the multi interface, CURLOPT_DEBUGFUNCTION and a certain order of cleaning things up. I fixed it. (http://curl.haxx.se/bug/view.cgi?id=2891591) |
|  | with unknown size. Previously it was only used for posts with a known size larger than 1024 bytes. |
|  | curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set data->state.expect100header accordingly - the current code (in 7.19.7 at least) doesn't handle this properly. Martin Storsjo provided the fix! |
|  | rework patch that now integrates TFTP properly into libcurl so that it can be used non-blocking with the multi interface and more. BLKSIZE also works. The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from the command line. |
|  | meter/callback during FTP command/response sequences. It turned out it was called far too rarely before; now the progress meter SHOULD get called at least once per second. |
|  | closed by libcurl before the SSL lib was shut down, and it may write to its socket. Detected to at least happen with OpenSSL builds. |
|  | CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the same proxy with the tunnel option disabled would still wrongly re-use that previous connection and the outcome would only be badness. |
|  | calloc() and realloc() function calls. | 
|  | end up with entries that wouldn't time-out: 1) set up a first web server that redirects (307) to a http://server:port that's down; 2) have curl connect to the first web server using curl multi. After the curl_easy_cleanup call, there will be curl dns entries hanging around with in_use != 0. (http://curl.haxx.se/bug/view.cgi?id=2891591) |
|  | options. The library is always built as thread safe as possible on every system. | 
|  | the client certificate. It also disables the key name test, as some engines can select a private key/cert automatically (when there is only one key and/or certificate on the hardware device used by the engine). |
|  | won't be reused unless protection level for peer and host verification match. | 
|  | No need for a separate variable ndns. The memory leak detection will detect code that fails to release a dns reference. The DEBUGASSERT will detect code that releases too many references. |
|  | a broken TLS server. However it does not happen if the SSL version is selected manually. The approach was originally taken from PSM. Kaspar Brand helped me to complete the patch. Original bug reports: https://bugzilla.redhat.com/525496 https://bugzilla.redhat.com/527771 |
|  | closed NSPR descriptor. The issue was hard to find, reported several times before and always closed unresolved. More info at the RH bug: https://bugzilla.redhat.com/534176 |
|  | and GNU GSS installed due to a missing mutual exclusion of header files in the Kerberos 5 code path. He also verified that my patch worked for him. |
|  | (http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry in the DNS cache would linger too long if the request that added it was in use that long. He also provided the patch that now makes libcurl capable of still doing a request while the DNS hash entry may get timed out. |
|  | used during the FTP connection phase (after the actual TCP connect), while it of course should be. I also made the speed check get called correctly so that really slow servers will trigger that properly too. |
|  | in non-blocking mode. | 
|  | curl.h, adjusting the auto-makefiles' include path, to enhance portability to OSes without an orthogonal directory tree structure such as OS/400. |