These are problems known to exist at the time of this release. Feel free to
join in and help us correct one or more of these! Also be sure to check the
changelog of the current development status, as one or more of these problems
may have been fixed since this was written!

76. The SOCKET type in Win64 is 64 bits large (and thus so is curl_socket_t on
  that platform), while long is only 32 bits. This makes it impossible for
  curl_easy_getinfo() to return the socket properly with the
  CURLINFO_LASTSOCKET option, the way it does on all other operating systems.
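
  A minimal sketch of the affected call, assuming an already connected easy
  handle; the 'long' that CURLINFO_LASTSOCKET writes into simply cannot hold
  a full 64-bit SOCKET on Win64:

    #include <curl/curl.h>

    /* sketch only: 'curl' is assumed to be a connected easy handle */
    static long last_socket(CURL *curl)
    {
      long sockfd = -1;
      /* CURLINFO_LASTSOCKET can only deliver a 'long'; on Win64 the real
         curl_socket_t (SOCKET) is 64 bits wide, so the value may not fit */
      curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      return sockfd;
    }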

75. NTLM authentication involving a Unicode user name or password.
  http://curl.haxx.se/mail/lib-2009-10/0024.html
  http://curl.haxx.se/bug/view.cgi?id=2944325

74. The HTTP spec allows headers to be merged and become comma-separated
  instead of being repeated several times. This also includes the
  WWW-Authenticate: and Proxy-Authenticate: headers and, while this hardly
  ever happens in real life, it will confuse libcurl, which does not properly
  support it for all headers - like those authentication headers.
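
  A hypothetical illustration of the merging the spec allows; the single
  merged form is what libcurl does not handle well for these authentication
  headers:

    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM

  may legally be sent by a server as the single header

    WWW-Authenticate: Negotiate, NTLM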

73. If a connection is made to an FTP server but the server then just never
  sends the 220 response or otherwise is dead slow, libcurl will not
  acknowledge the connection timeout during that phase but only the "real"
  timeout - which may surprise users, as most people probably consider that
  wait to be part of the connect phase. Brought up (and is being
  misunderstood) in:
  http://curl.haxx.se/bug/view.cgi?id=2844077
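
  A possible application-side mitigation, sketched here with made-up values:
  since CURLOPT_CONNECTTIMEOUT is not honoured while waiting for the 220
  greeting, cap the whole operation with CURLOPT_TIMEOUT as well.

    #include <curl/curl.h>

    /* sketch: 'curl' is an easy handle about to do an FTP transfer */
    static void set_timeouts(CURL *curl)
    {
      /* only covers the TCP connect, not the wait for the 220 greeting */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);
      /* covers the entire operation, so a dead-slow greeting still ends */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);
    }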

72. "Pausing pipeline problems."
  http://curl.haxx.se/mail/lib-2009-07/0214.html

70. Problems re-using an easy handle after a call to curl_multi_remove_handle
  http://curl.haxx.se/mail/lib-2009-07/0249.html

68. "More questions about ares behavior".
  http://curl.haxx.se/mail/lib-2009-08/0012.html

67. When creating multipart formposts, the file name part can be encoded with
  something beyond ASCII, but currently libcurl will only pass on the verbatim
  string the app provides. There are several browsers that already do this
  encoding. The key seems to be the updated draft to RFC2231:
  http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
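
  A sketch of the call in question, with made-up names; the string given with
  CURLFORM_FILENAME is sent verbatim, without any RFC2231-style encoding:

    #include <curl/curl.h>

    static struct curl_httppost *build_form(void)
    {
      struct curl_httppost *post = NULL, *last = NULL;

      /* the file name below is passed on verbatim, even though it holds
         non-ASCII (UTF-8) bytes */
      curl_formadd(&post, &last,
                   CURLFORM_COPYNAME, "file",
                   CURLFORM_FILE, "localfile.bin",
                   CURLFORM_FILENAME, "r\xc3\xa9sum\xc3\xa9.bin", /* UTF-8 */
                   CURLFORM_END);
      return post;
    }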

66. When using telnet, the time limitation options don't work.
  http://curl.haxx.se/bug/view.cgi?id=2818950

65. When doing FTP over a SOCKS proxy or CONNECT through an HTTP proxy and the
  multi interface is used, libcurl will fail if the (passive) TCP connection
  for the data transfer isn't more or less instant, as the code does not
  properly wait for the connect to be confirmed. See test case 564 for a first
  shot at a test case.

63. When CURLOPT_CONNECT_ONLY is used, the handle cannot reliably be re-used
  for any further requests or transfers. The work-around is then to close that
  handle with curl_easy_cleanup() and create a new one. Some more details:
  http://curl.haxx.se/mail/lib-2009-04/0300.html
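
  A sketch of that work-around; the old handle is simply thrown away and a
  fresh one created in its place:

    #include <curl/curl.h>

    /* sketch: 'curl' did its CURLOPT_CONNECT_ONLY "transfer" already */
    static CURL *recreate_handle(CURL *curl)
    {
      curl_easy_cleanup(curl);    /* cannot reliably be re-used */
      return curl_easy_init();    /* start over with a new handle */
    }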

62. CURLOPT_TIMEOUT does not work properly with the regular multi and
  multi_socket interfaces. The work-around for apps is to simply remove the
  easy handle once the time is up. See also:
  http://curl.haxx.se/bug/view.cgi?id=2501457
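
  A rough sketch of that work-around with an application-side deadline (the
  surrounding multi loop is left out):

    #include <time.h>
    #include <curl/curl.h>

    /* sketch: call this regularly from the application's own multi loop */
    static void enforce_deadline(CURLM *multi, CURL *easy, time_t started,
                                 long max_seconds)
    {
      if(time(NULL) - started >= max_seconds) {
        /* give up on the transfer ourselves, since CURLOPT_TIMEOUT cannot
           be trusted with the multi interfaces */
        curl_multi_remove_handle(multi, easy);
        curl_easy_cleanup(easy);
      }
    }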

61. If an upload using Expect: 100-continue receives an HTTP 417 response,
  it ought to be automatically resent without the Expect:.  A workaround is
  for the client application to redo the transfer after disabling Expect:.
  http://curl.haxx.se/mail/archive-2008-02/0043.html
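
  A sketch of the client-side work-around, disabling the Expect: header
  before redoing the transfer:

    #include <curl/curl.h>

    /* sketch: stops libcurl from sending "Expect: 100-continue" */
    static struct curl_slist *disable_expect(CURL *curl)
    {
      struct curl_slist *headers = curl_slist_append(NULL, "Expect:");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      return headers; /* free with curl_slist_free_all() after the transfer */
    }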

60. libcurl closes the connection if an HTTP 401 reply is received while it
  is waiting for the 100-continue response.
  http://curl.haxx.se/mail/lib-2008-08/0462.html

58. It seems sensible to be able to use CURLOPT_NOBODY and
  CURLOPT_FAILONERROR with FTP to detect if a file exists or not, but it is
  not working: http://curl.haxx.se/mail/lib-2008-07/0295.html
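
  What the intended (currently non-working) check would look like, with a
  placeholder URL:

    #include <curl/curl.h>

    /* sketch of the intended file-exists check - known not to work */
    static int ftp_file_exists(CURL *curl, const char *url)
    {
      curl_easy_setopt(curl, CURLOPT_URL, url);        /* e.g. ftp://... */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);      /* no download */
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); /* fail if missing */
      return curl_easy_perform(curl) == CURLE_OK;
    }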

57. On VMS-Alpha: when using an HTTP file upload, the file is not sent to the
  server with the correct content-length. For a file of 511 bytes or less, a
  content-length of 512 is used. For a file of 513 - 1023 bytes, a
  content-length of 1024 is used. Files with a length that is a multiple of
  512 bytes get the correct content-length, and only such files work for
  upload.
  http://curl.haxx.se/bug/view.cgi?id=2057858

56. When libcurl sends CURLOPT_POSTQUOTE commands to an SFTP server using the
  multi interface, the commands are not being sent correctly and instead the
  connection is "cancelled" (the operation is considered done) prematurely.
  There is a half-baked (busy-looping) patch provided in the bug report but it
  cannot be accepted as-is. See
  http://curl.haxx.se/bug/view.cgi?id=2006544
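
  The kind of setup that triggers it, sketched with a made-up command; the
  handle is then driven by the multi interface:

    #include <curl/curl.h>

    /* sketch: command(s) to run on the SFTP server after the transfer */
    static struct curl_slist *add_postquote(CURL *curl)
    {
      struct curl_slist *cmds = curl_slist_append(NULL, "rm /tmp/upload.tmp");
      curl_easy_setopt(curl, CURLOPT_POSTQUOTE, cmds);
      return cmds; /* free with curl_slist_free_all() when done */
    }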

55. libcurl fails to build with MIT Kerberos for Windows (KfW) due to KfW's
  library header files exporting symbols/macros that should be kept private
  to the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/

52. Gautam Kachroo's issue that identifies a problem with the multi interface
  where a connection can be re-used without actually being properly
  SSL-negotiated:
  http://curl.haxx.se/mail/lib-2008-01/0277.html

49. If using --retry and the transfer times out (possibly due to using -m or
  -y/-Y), the next attempt doesn't resume the transfer properly from what was
  downloaded in the previous attempt but will truncate and restart at the
  original position where it was before the previous failed attempt. See
  http://curl.haxx.se/mail/lib-2008-01/0080.html and Mandriva bug report
  https://qa.mandriva.com/show_bug.cgi?id=22565

48. If the CONNECT response headers are larger than BUFSIZE (16KB) when the
  connection is meant to be kept alive (like for NTLM proxy auth), the
  function will return prematurely and will confuse the rest of the HTTP
  protocol code. This should be very rare.

43. There seems to be a problem when connecting to the Microsoft telnet server.
  http://curl.haxx.se/bug/view.cgi?id=1720605

41. When doing an operation over FTP that requires the ACCT command (but not
  when logging in), the operation will fail since libcurl doesn't detect this
  and thus fails to issue the correct command:
  http://curl.haxx.se/bug/view.cgi?id=1693337

39. Steffen Rumler's Race Condition in Curl_proxyCONNECT:
  http://curl.haxx.se/mail/lib-2007-01/0045.html

38. Kumar Swamy Bhatt's problem in ftp/ssl "LIST" operation:
  http://curl.haxx.se/mail/lib-2007-01/0103.html

37. When there is more than one connection to the same host while doing NTLM
  authentication (which performs multiple "passes" and authenticates a
  connection rather than an HTTP request), and particularly when using the
  multi interface, there's a risk that libcurl will re-use the wrong
  connection when doing the different passes of the NTLM negotiation and thus
  fail to negotiate (in seemingly mysterious ways).

35. Both SOCKS5 and SOCKS4 proxy connections are done blocking, which is very
  bad when used with the multi interface.

34. The SOCKS4 connection codes don't properly acknowledge (connect) timeouts.
  Also see #12. According to bug #1556528, even the SOCKS5 connect code does
  not do it right: http://curl.haxx.se/bug/view.cgi?id=1556528

31. "curl-config --libs" will include details set in LDFLAGS when configure is
  run that might be needed only for building libcurl. Further, curl-config
  --cflags suffers from the same effects with CFLAGS/CPPFLAGS.

30. You need to use -g with the command line tool in order to use RFC2732-style
  IPv6 numerical addresses in URLs.
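
  For example, using a documentation-only IPv6 address:

    curl -g "http://[2001:db8::1]/"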

29. IPv6 URLs with a zone ID are not nicely supported.
  http://www.ietf.org/internet-drafts/draft-fenner-literal-zone-02.txt (expired)
  specifies the use of a plus sign instead of a percent sign when specifying
  zone IDs in URLs, to get around the problem of percent signs being
  special. According to the reporter, Firefox deals with the URL _with_ a
  percent sign (which seems like a blatant URL spec violation).
  libcurl supports zone IDs where the percent sign is URL-escaped (i.e. %25).

   See http://curl.haxx.se/bug/view.cgi?id=1371118
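
  An example of the form libcurl accepts, with a made-up link-local address
  and interface name:

    http://[fe80::1%25eth0]/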

26. NTLM authentication using SSPI (on Windows) when (lib)curl is running in
  "system context" will make it use the wrong(?) user name - at least when
  compared to what winhttp does. See http://curl.haxx.se/bug/view.cgi?id=1281867

23. SOCKS-related problems:
  A) libcurl doesn't support SOCKS for IPv6.
  B) libcurl doesn't support FTPS over a SOCKS proxy.
  E) libcurl doesn't support active FTP over a SOCKS proxy

  We probably have even more bugs and lack of features when a SOCKS proxy is
  used.

22. Sending files to an FTP server using curl on VMS might lead to curl
  complaining about an "unaligned file size" on completion. The problem is
  related to VMS file structures and the perceived file sizes stat() returns.
  A possible fix would involve sending a "STRU VMS" command.
  http://curl.haxx.se/bug/view.cgi?id=1156287

21. FTP ASCII transfers do not follow RFC959. They don't convert the data
   accordingly (neither for sending nor for receiving). RFC 959 section 3.1.1.1
   clearly describes how this should be done:

     The sender converts the data from an internal character representation to
     the standard 8-bit NVT-ASCII representation (see the Telnet
     specification).  The receiver will convert the data from the standard
     form to his own internal form.

   Since 7.15.4 at least line endings are converted.

16. FTP URLs passed to curl may contain NUL (0x00) in the RFC 1738 <user>,
  <password>, and <fpath> components, encoded as "%00".  The problem is that
  curl_unescape does not detect this, but instead returns a shortened C
  string.  From a strict FTP protocol standpoint, NUL is a valid character
  within RFC 959 <string>, so the way to handle this correctly in curl would
  be to use a data structure other than a plain C string, one that can handle
  embedded NUL characters.  From a practical standpoint, most FTP servers
  would not meaningfully support NUL characters within RFC 959 <string>,
  anyway (e.g., UNIX pathnames may not contain NUL).
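
  A small sketch of the truncation; curl_easy_unescape() with its output
  length argument at least tells the caller how long the decoded data really
  is:

    #include <curl/curl.h>

    static void nul_demo(CURL *curl)
    {
      int outlen = 0;
      /* "%00AB" decodes to the three bytes 0x00, 'A', 'B' ... */
      char *out = curl_easy_unescape(curl, "%00AB", 5, &outlen);
      /* ... but as a C string it looks empty: strlen(out) is 0 while
         outlen is 3 */
      curl_free(out);
    }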

14. Test case 165 might fail on a system which has libidn present, but with an
  old iconv version (2.1.3 is a known bad version), since it doesn't recognize
  the charset when named ISO8859-1. Changing the name to ISO-8859-1 makes the
  test pass, but instead makes it fail on Solaris hosts that use their native
  iconv.

13. curl version 7.12.2 fails on AIX if compiled with --enable-ares.
  The workaround is to combine --enable-ares with --disable-shared.

12. When connecting to a SOCKS proxy, the (connect) timeout is not properly
  acknowledged after the actual TCP connect (during the SOCKS "negotiate"
  phase).

10. To get HTTP Negotiate authentication to work fine, you need to provide a
  (fake) user name (this concerns both curl and the lib) because the code
  wrongly only considers authentication if there's a user name provided.
  http://curl.haxx.se/bug/view.cgi?id=1004841. How?
  http://curl.haxx.se/mail/lib-2004-08/0182.html
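
  The work-around in libcurl terms, with a blank (fake) user name; the
  command line equivalent is to pass "-u :" along with --negotiate:

    #include <curl/curl.h>

    /* sketch: Negotiate is only attempted when some user name is set */
    static void use_negotiate(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_GSSNEGOTIATE);
      curl_easy_setopt(curl, CURLOPT_USERPWD, ":"); /* fake/empty credentials */
    }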

8. Doing a resumed upload over HTTP does not work with '-C -', because curl
  doesn't do a HEAD request first to get the initial size. For HTTP PUT
  resume to work, this has to be done manually and the resulting offset then
  passed with '-C [index]'.
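
  A sketch of the manual procedure, with placeholder names and a made-up
  offset:

    # find out how much of the file the server already has
    curl -I http://example.com/bigfile

    # then resume the PUT from that offset
    curl -C 1048576 -T bigfile http://example.com/bigfile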

6. libcurl ignores empty path parts in FTP URLs, whereas RFC1738 states that
  such parts should be sent to the server as 'CWD ' (without an argument).
  The only exception to this rule is that we knowingly break this if the
  empty part is first in the path, as then we use the double slashes to
  indicate that the user wants to reach the root dir (this exception SHALL
  remain even when this bug is fixed).

5. libcurl doesn't treat the content-length of compressed data properly, as
  it seems HTTP servers send the *uncompressed* length in that header and
  libcurl thinks of it as the *compressed* length. Some explanations are here:
  http://curl.haxx.se/mail/lib-2003-06/0146.html

2. If an HTTP server responds to a HEAD request and includes a body (thus
  violating RFC2616), curl won't wait to read the response but just stops
  reading and returns. If a second request (let's assume a GET) is then
  immediately made to the same server again, the connection will of course be
  re-used, and the second request will be sent off, but when the response is
  to be read, curl will read the previous response body instead and havoc is
  what happens.
  More details on this are found in this libcurl mailing list thread:
  http://curl.haxx.se/mail/lib-2002-08/0000.html