author     Daniel Stenberg <daniel@haxx.se>  2007-12-08 23:00:00 +0000
committer  Daniel Stenberg <daniel@haxx.se>  2007-12-08 23:00:00 +0000
commit     41d8186c7e1e1cc84fc7199a999a2d7d22cd3ae1 (patch)
tree       cb8142d97c7a2ed473434c7eb1a2808fcfe741b9 /docs/TODO
parent     6e9276229f8f204ec934872fbb4bade2fb44d1dd (diff)

reformat to FAQ/CONTRIBUTE style, for nicer web-look when I apply the magic
script(s) on it online

Diffstat (limited to 'docs/TODO')
 -rw-r--r--  docs/TODO  706
1 file changed, 450 insertions, 256 deletions
diff --git a/docs/TODO b/docs/TODO
index 11f6f8942..aae0f847d 100644
--- a/docs/TODO
+++ b/docs/TODO
@@ -4,346 +4,540 @@
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
-TODO
+ Things that could be nice to do in the future
Things to do in project cURL. Please tell us what you think, contribute and
- send us patches that improve things! Also check the http://curl.haxx.se/dev
- web section for various technical development notes.
+ send us patches that improve things!
All bugs documented in the KNOWN_BUGS document are subject for fixing!
- LIBCURL
-
- * Introduce another callback interface for upload/download that makes one
- less copy of data and thus a faster operation.
- [http://curl.haxx.se/dev/no_copy_callbacks.txt]
-
- * More data sharing. curl_share_* functions already exist and work, and they
- can be extended to share more. For example, enable sharing of the ares
- channel and the connection cache.
-
- * Introduce a new error code indicating authentication problems (for proxy
- CONNECT error 407 for example). This cannot be an error code, we must not
- return informational stuff as errors, consider a new info returned by
- curl_easy_getinfo() http://curl.haxx.se/bug/view.cgi?id=845941
-
- * Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
- SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
- To support ipv6 interface addresses properly.
-
- * Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
- GET_FTP_DATA_IP. Return a string with the used IP. Suggested by Alan.
-
- * Add option that changes the interval in which the progress callback is
- called at most.
-
- * Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
- present when we first added c-ares support but they have been added since!
- When this is done and works, we can actually start considering making c-ares
- powered libcurl the default build (which of course would require that we'd
- bundle the c-ares source code in the libcurl source code releases).
-
- * Make the curl/*.h headers include the proper system includes based on what
- was present at the time when configure was run. Currently, the sys/select.h
- header is for example included by curl/multi.h only on specific platforms
- we know MUST have it. This is error-prone. We therefore want the header
- files to adapt to configure results. Those results must be stored in a new
- header and they must use a curl name space, i.e not be HAVE_* prefix (as
- that would risk collide with other apps that use libcurl and that runs
- configure).
+ 1. libcurl
+ 1.1 Zero-copy interface
+ 1.2 More data sharing
+ 1.3 struct lifreq
+ 1.4 Get IP address
+ 1.5 c-ares ipv6
+ 1.6 configure-based info in public headers
+
+ 2. libcurl - multi interface
+ 2.1 More non-blocking
+ 2.2 Pause transfers
+ 2.3 Remove easy interface internally
+ 2.4 Avoid having to remove/readd handles
+
+ 3. Documentation
+ 3.1 More and better
+
+ 4. FTP
+ 4.1 PRET
+ 4.2 Alter passive/active on failure and retry
+ 4.3 Earlier bad letter detection
+ 4.4 REST for large files
+ 4.5 FTP proxy support
+ 4.6 PORT port range
+ 4.7 ASCII support
+
+ 5. HTTP
+ 5.1 Other HTTP versions with CONNECT
+ 5.2 Better persistency for HTTP 1.0
+
+ 6. TELNET
+ 6.1 ditch stdin
+ 6.2 ditch telnet-specific select
+
+ 7. SSL
+ 7.1 Disable specific versions
+ 7.2 Provide mutex locking API
+ 7.3 dumpcert
+ 7.4 Evaluate SSL patches
+ 7.5 Cache OpenSSL contexts
+ 7.6 Export session ids
+ 7.7 Provide callback for cert verification
+ 7.8 Support other SSL libraries
+ 7.9 Support SRP on the TLS layer
+ 7.10 improve configure --with-ssl
+
+ 8. GnuTLS
+ 8.1 Make NTLM work without OpenSSL functions
+ 8.2 SSL engine stuff
+ 8.3 SRP
+ 8.4 non-blocking
+ 8.5 check connection
+
+ 9. LDAP
+ 9.1 ditch ldap-specific select
+
+ 10. New protocols
+ 10.1 RTSP
+ 10.2 RSYNC
+
+ 11. Client
+ 11.1 Content-Disposition
+ 11.2 sync
+ 11.3 glob posts
+ 11.4 prevent file overwriting
+ 11.5 ftp wildcard download
+ 11.6 simultaneous parallel transfers
+ 11.7 provide formpost headers
+ 11.8 url-specific options
+
+ 12. Build
+ 12.1 roffit
+
+ 13. Test suite
+ 13.1 SSL tunnel
+ 13.2 nicer lacking perl message
+ 13.3 more protocols supported
+ 13.4 more platforms supported
+
+ 14. Next SONAME bump
+ 14.1 http-style HEAD output for ftp
+ 14.2 combine error codes
+
+ 15. Next major release
+ 15.1 cleanup return codes
+ 15.2 remove obsolete defines
+ 15.3 size_t
+ 15.4 remove several functions
+ 15.5 remove CURLOPT_FAILONERROR
- Work on this has been started but hasn't been finished, and the initial
- patch and some details are found here:
- http://curl.haxx.se/mail/lib-2006-12/0084.html
+==============================================================================
- LIBCURL - multi interface
+1. libcurl
- * Make sure we don't ever loop because of non-blocking sockets return
- EWOULDBLOCK or similar. The GnuTLS connection etc.
+1.1 Zero-copy interface
- * Make transfers treated more carefully. We need a way to tell libcurl we
- have data to write, as the current system expects us to upload data each
- time the socket is writable and there is no way to say that we want to
- upload data soon just not right now, without that aborting the upload. The
- opposite situation should be possible as well, that we tell libcurl we're
- ready to accept read data. Today libcurl feeds the data as soon as it is
- available for reading, no matter what.
+ Introduce another callback interface for upload/download that makes one less
+ copy of data and thus a faster operation.
+ [http://curl.haxx.se/dev/no_copy_callbacks.txt]
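 A zero-copy write callback could look roughly like the sketch below. All of
 the zc_* names here are invented for illustration only; they are not part of
 any libcurl API. The point is that the library hands out a pointer into its
 own receive buffer instead of copying bytes into a caller-owned one:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical zero-copy download callback: the library passes a pointer
 * into its internal receive buffer and the application consumes the data
 * in place, saving one memcpy per chunk. Names are invented. */
typedef size_t (*zc_write_cb)(const char *buf, size_t len, void *userdata);

/* Toy "library side": delivers received bytes straight to the callback,
 * with no intermediate copy into an application buffer. */
static size_t zc_deliver(const char *internal_buf, size_t len,
                         zc_write_cb cb, void *userdata)
{
    return cb(internal_buf, len, userdata);
}

/* Application side: counts bytes without copying them anywhere. */
static size_t count_bytes(const char *buf, size_t len, void *userdata)
{
    (void)buf;
    *(size_t *)userdata += len;
    return len;
}
```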
- * Make curl_easy_perform() a wrapper-function that simply creates a multi
- handle, adds the easy handle to it, runs curl_multi_perform() until the
- transfer is done, then detach the easy handle, destroy the multi handle and
- return the easy handle's return code. This will thus make everything
- internally use and assume the multi interface. The select()-loop should use
- curl_multi_socket().
-
- * curl_multi_handle_control() - this can control the easy handle (while)
- added to a multi handle in various ways:
- o RESTART, unconditionally restart this easy handle's transfer from the
- start, re-init the state
- o RESTART_COMPLETED, restart this easy handle's transfer but only if the
- existing transfer has already completed and it is in a "finished state".
- o STOP, just stop this transfer and consider it completed
- o PAUSE?
- o RESUME?
+1.2 More data sharing
- DOCUMENTATION
-
- * More and better
+ curl_share_* functions already exist and work, and they can be extended to
+ share more. For example, enable sharing of the ares channel and the
+ connection cache.
- FTP
-
- * PRET is a command that primarily "drftpd" supports, which could be useful
- when using libcurl against such a server. It is a non-standard and a rather
- oddly designed command, but...
- http://curl.haxx.se/bug/feature.cgi?id=1729967
+1.3 struct lifreq
- * When trying to connect passively to a server which only supports active
- connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
- connection. There could be a way to fallback to an active connection (and
- vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
+ Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
+ SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
+ This is needed to properly support IPv6 addresses for network interfaces.
- * Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in
- the process to avoid doing a resolve and connect in vain.
+1.4 Get IP address
- * REST fix for servers not behaving well on >2GB requests. This should fail
- if the server doesn't set the pointer to the requested index. The tricky
- (impossible?) part is to figure out if the server did the right thing or
- not.
+ Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
+ GET_FTP_DATA_IP. Return a string with the used IP.
- * Support the most common FTP proxies, Philip Newton provided a list
- allegedly from ncftp:
- http://curl.haxx.se/mail/archive-2003-04/0126.html
+1.5 c-ares ipv6
- * Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
- like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".
- http://curl.haxx.se/bug/feature.cgi?id=1505166
+ Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
+ present when we first added c-ares support but they have been added since!
+ When this is done and works, we can actually start considering making c-ares
+ powered libcurl the default build (which of course would require that we'd
+ bundle the c-ares source code in the libcurl source code releases).
- * FTP ASCII transfers do not follow RFC959. They don't convert the data
- accordingly.
+1.6 configure-based info in public headers
- * Since USERPWD always override the user and password specified in URLs, we
- might need another way to specify user+password for anonymous ftp logins.
+ Make the curl/*.h headers include the proper system includes based on what
+ was present at the time when configure was run. Currently, the sys/select.h
+ header is for example included by curl/multi.h only on specific platforms we
+ know MUST have it. This is error-prone. We therefore want the header files to
+ adapt to configure results. Those results must be stored in a new header and
+ they must use a curl name space, i.e. not a HAVE_* prefix (as that would risk
+ colliding with other apps that use libcurl and that run configure).
- * The FTP code should get a way of returning errors that is known to still
- have the control connection alive and sound. Currently, a returned error
- from within ftp-functions does not tell if the control connection is still
- OK to use or not. This causes libcurl to fail to re-use connections
- slightly too often.
+ Work on this has been started but hasn't been finished, and the initial patch
+ and some details are found here:
+ http://curl.haxx.se/mail/lib-2006-12/0084.html
- HTTP
+ The remaining problems to solve involve the platforms that can't run
+ configure.
- * When doing CONNECT to a HTTP proxy, libcurl always uses HTTP/1.0. This has
- never been reported as causing trouble to anyone, but should be considered
- to use the HTTP version the user has chosen.
+2. libcurl - multi interface
- * "Better" support for persistent connections over HTTP 1.0
- http://curl.haxx.se/bug/feature.cgi?id=1089001
+2.1 More non-blocking
- TELNET
+ Make sure we don't ever loop because of non-blocking sockets return
+ EWOULDBLOCK or similar. The GnuTLS connection etc.
- * Reading input (to send to the remote server) on stdin is a crappy solution
- for library purposes. We need to invent a good way for the application to
- be able to provide the data to send.
+2.2 Pause transfers
- * Move the telnet support's network select() loop go away and merge the code
- into the main transfer loop. Until this is done, the multi interface won't
- work for telnet.
+ Make transfers treated more carefully. We need a way to tell libcurl we have
+ data to write, as the current system expects us to upload data each time the
+ socket is writable and there is no way to say that we want to upload data
+ soon just not right now, without that aborting the upload. The opposite
+ situation should be possible as well, that we tell libcurl we're ready to
+ accept read data. Today libcurl feeds the data as soon as it is available for
+ reading, no matter what.
- SSL
+2.3 Remove easy interface internally
- * Provide an option that allows for disabling specific SSL versions, such as
- SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276
+ Make curl_easy_perform() a wrapper-function that simply creates a multi
+ handle, adds the easy handle to it, runs curl_multi_perform() until the
+ transfer is done, then detach the easy handle, destroy the multi handle and
+ return the easy handle's return code. This will thus make everything
+ internally use and assume the multi interface. The select()-loop should use
+ curl_multi_socket().
- * Provide a libcurl API for setting mutex callbacks in the underlying SSL
- library, so that the same application code can use mutex-locking
- independently of OpenSSL or GnutTLS being used.
+2.4 Avoid having to remove/readd handles
- * Anton Fedorov's "dumpcert" patch:
- http://curl.haxx.se/mail/lib-2004-03/0088.html
+ curl_multi_handle_control() - this can control the easy handle (while) added
+ to a multi handle in various ways:
- * Evaluate/apply Gertjan van Wingerde's SSL patches:
- http://curl.haxx.se/mail/lib-2004-03/0087.html
+ o RESTART, unconditionally restart this easy handle's transfer from the
+ start, re-init the state
- * "Look at SSL cafile - quick traces look to me like these are done on every
- request as well, when they should only be necessary once per ssl context
- (or once per handle)". The major improvement we can rather easily do is to
- make sure we don't create and kill a new SSL "context" for every request,
- but instead make one for every connection and re-use that SSL context in
- the same style connections are re-used. It will make us use slightly more
- memory but it will libcurl do less creations and deletions of SSL contexts.
+ o RESTART_COMPLETED, restart this easy handle's transfer but only if the
+ existing transfer has already completed and it is in a "finished state".
- * Add an interface to libcurl that enables "session IDs" to get
- exported/imported. Cris Bailiff said: "OpenSSL has functions which can
- serialise the current SSL state to a buffer of your choice, and
- recover/reset the state from such a buffer at a later date - this is used
- by mod_ssl for apache to implement and SSL session ID cache".
+ o STOP, just stop this transfer and consider it completed
- * OpenSSL supports a callback for customised verification of the peer
- certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
- it be? There's so much that could be done if it were! (brought by Chris
- Clark)
+ o PAUSE?
- * Make curl's SSL layer capable of using other free SSL libraries. Such as
- MatrixSSL (http://www.matrixssl.org/).
+ o RESUME?
- * Peter Sylvester's patch for SRP on the TLS layer.
- Awaits OpenSSL support for this, no need to support this in libcurl before
- there's an OpenSSL release that does it.
+3. Documentation
- * make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
- then NSS...
+3.1 More and better
- GnuTLS
+ Exactly
- * Get NTLM working using the functions provided by libgcrypt, since GnuTLS
- already depends on that to function. Not strictly SSL/TLS related, but
- hey... Another option is to get available DES and MD4 source code from the
- cryptopp library. They are fine license-wise, but are C++.
+4. FTP
- * SSL engine stuff?
+4.1 PRET
- * Work out a common method with Peter Sylvester's OpenSSL-patch for SRP
- on the TLS to provide name and password
+ PRET is a command that primarily "drftpd" supports, which could be useful
+ when using libcurl against such a server. It is a non-standard and a rather
+ oddly designed command, but...
+ http://curl.haxx.se/bug/feature.cgi?id=1729967
- * Fix the connection phase to be non-blocking when multi interface is used
+4.2 Alter passive/active on failure and retry
- * Add a way to check if the connection seems to be alive, to correspond to
- the SSL_peak() way we use with OpenSSL.
+ When trying to connect passively to a server which only supports active
+ connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
+ connection. There could be a way to fallback to an active connection (and
+ vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
- LDAP
+4.3 Earlier bad letter detection
+
+ Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in the
+ process to avoid doing a resolve and connect in vain.
+
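 The check itself is cheap enough to run before any resolve or connect. A
 minimal sketch of the early scan (function name invented for illustration):

```c
#include <assert.h>
#include <ctype.h>

/* Sketch: scan an FTP URL part for the escaped CR/LF codes %0d and %0a
 * before any name resolve or connect is attempted. Case-insensitive on
 * the hex digit, since URL escapes may be upper or lower case. */
static int has_crlf_escape(const char *s)
{
    for (; s[0] && s[1] && s[2]; s++) {
        if (s[0] == '%' && s[1] == '0' &&
            (tolower((unsigned char)s[2]) == 'd' ||
             tolower((unsigned char)s[2]) == 'a'))
            return 1;
    }
    return 0;
}
```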
+4.4 REST for large files
+
+ REST fix for servers not behaving well on >2GB requests. This should fail if
+ the server doesn't set the pointer to the requested index. The tricky
+ (impossible?) part is to figure out if the server did the right thing or not.
+
+4.5 FTP proxy support
+
+ Support the most common FTP proxies, Philip Newton provided a list allegedly
+ from ncftp. This is not a subject without debate, and is probably not really
+ suitable for libcurl. http://curl.haxx.se/mail/archive-2003-04/0126.html
+
+4.6 PORT port range
+
+ Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
+ like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".
+ http://curl.haxx.se/bug/feature.cgi?id=1505166
+
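 A sketch of how such an argument could be parsed. The helper name and exact
 syntax handling are assumptions; this is not an implemented option:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of parsing the proposed CURLOPT_FTPPORT syntax "addr:[port]" or
 * "addr:[portfirst]-[portsecond]". Copies the address part into 'addr'
 * and sets the range bounds (equal bounds for a single port).
 * Returns 0 on success, -1 on parse error. */
static int parse_ftpport(const char *arg, char *addr, size_t addrlen,
                         int *port_lo, int *port_hi)
{
    const char *colon = strrchr(arg, ':');
    size_t n;

    if (!colon || (size_t)(colon - arg) >= addrlen)
        return -1;
    n = (size_t)(colon - arg);
    memcpy(addr, arg, n);
    addr[n] = '\0';

    if (sscanf(colon + 1, "%d-%d", port_lo, port_hi) == 2)
        ;                              /* explicit range given */
    else if (sscanf(colon + 1, "%d", port_lo) == 1)
        *port_hi = *port_lo;           /* single port: degenerate range */
    else
        return -1;
    return *port_lo > 0 && *port_hi >= *port_lo ? 0 : -1;
}
```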
+4.7 ASCII support
+
+ FTP ASCII transfers do not follow RFC959. They don't convert the data
+ accordingly.
+
+5. HTTP
+
+5.1 Other HTTP versions with CONNECT
+
+ When doing CONNECT to a HTTP proxy, libcurl always uses HTTP/1.0. This has
+ never been reported as causing trouble to anyone, but should be considered to
+ use the HTTP version the user has chosen.
+
+5.2 Better persistency for HTTP 1.0
+
+ "Better" support for persistent connections over HTTP 1.0
+ http://curl.haxx.se/bug/feature.cgi?id=1089001
+
+6. TELNET
+
+6.1 ditch stdin
+
+ Reading input (to send to the remote server) on stdin is a crappy solution
+ for library purposes. We need to invent a good way for the application to be
+ able to provide the data to send.
+
+6.2 ditch telnet-specific select
+
+ Move the telnet support's network select() loop go away and merge the code
+ into the main transfer loop. Until this is done, the multi interface won't
+ work for telnet.
+
+7. SSL
+
+7.1 Disable specific versions
+
+ Provide an option that allows for disabling specific SSL versions, such as
+ SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276
+
+7.2 Provide mutex locking API
+
+ Provide a libcurl API for setting mutex callbacks in the underlying SSL
+ library, so that the same application code can use mutex-locking
+ independently of OpenSSL or GnuTLS being used.
+
+7.3 dumpcert
+
+ Anton Fedorov's "dumpcert" patch:
+ http://curl.haxx.se/mail/lib-2004-03/0088.html
+
+7.4 Evaluate SSL patches
+
+ Evaluate/apply Gertjan van Wingerde's SSL patches:
+ http://curl.haxx.se/mail/lib-2004-03/0087.html
+
+7.5 Cache OpenSSL contexts
+
+ "Look at SSL cafile - quick traces look to me like these are done on every
+ request as well, when they should only be necessary once per ssl context (or
+ once per handle)". The major improvement we can rather easily do is to make
+ sure we don't create and kill a new SSL "context" for every request, but
+ instead make one for every connection and re-use that SSL context in the same
+ style connections are re-used. It will make us use slightly more memory but
+ it will let libcurl do fewer creations and deletions of SSL contexts.
+
+7.6 Export session ids
+
+ Add an interface to libcurl that enables "session IDs" to get
+ exported/imported. Cris Bailiff said: "OpenSSL has functions which can
+ serialise the current SSL state to a buffer of your choice, and recover/reset
+ the state from such a buffer at a later date - this is used by mod_ssl for
+ apache to implement an SSL session ID cache".
+
+7.7 Provide callback for cert verification
+
+ OpenSSL supports a callback for customised verification of the peer
+ certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
+ it be? There's so much that could be done if it were!
+
+7.8 Support other SSL libraries
+
+ Make curl's SSL layer capable of using other free SSL libraries. Such as
+ MatrixSSL (http://www.matrixssl.org/).
+
+7.9 Support SRP on the TLS layer
+
+ Peter Sylvester's patch for SRP on the TLS layer. Awaits OpenSSL support for
+ this, no need to support this in libcurl before there's an OpenSSL release
+ that does it.
+
+7.10 improve configure --with-ssl
+
+ make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
+ then NSS...
+
+8. GnuTLS
+
+8.1 Make NTLM work without OpenSSL functions
+
+ Get NTLM working using the functions provided by libgcrypt, since GnuTLS
+ already depends on that to function. Not strictly SSL/TLS related, but
+ hey... Another option is to get available DES and MD4 source code from the
+ cryptopp library. They are fine license-wise, but are C++.
+
+8.2 SSL engine stuff
+
+ Is this even possible?
+
+8.3 SRP
+
+ Work out a common method with Peter Sylvester's OpenSSL-patch for SRP on the
+ TLS to provide name and password. GnuTLS already supports it...
+
+8.4 non-blocking
+
+ Fix the connection phase to be non-blocking when multi interface is used
+
+8.5 check connection
+
+ Add a way to check if the connection seems to be alive, to correspond to the
+ SSL_peek() way we use with OpenSSL.
+
+9. LDAP
+
+9.1 ditch ldap-specific select
* Look over the implementation. The looping will have to "go away" from the
lib/ldap.c source file and get moved to the main network code so that the
multi interface and friends will work for LDAP as well.
- NEW PROTOCOLS
+10. New protocols
+
+10.1 RTSP
+
+ RFC2326 (protocol - very HTTP-like, also contains URL description)
+
+10.2 RSYNC
+
+ (no RFCs for protocol nor URI/URL format). An implementation should most
+ probably use an existing rsync library, such as librsync.
+
+11. Client
+
+11.1 Content-Disposition
+
+ Add option that is similar to -O but that takes the output file name from the
+ Content-Disposition: header, and/or uses the local file name used in
+ redirections for the cases the server bounces the request further to a
+ different file (name): http://curl.haxx.se/bug/feature.cgi?id=1364676
+
+11.2 sync
+
+ "curl --sync http://example.com/feed[1-100].rss" or
+ "curl --sync http://example.net/{index,calendar,history}.html"
+
+ Downloads a range or set of URLs using the remote name, but only if the
+ remote file is newer than the local file. A Last-Modified HTTP date header
+ should also be used to set the mod date on the downloaded file.
+
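 The freshness test --sync would need can be sketched with POSIX strptime()
 and timegm(). The function name is invented, and --sync itself is only a
 proposal at this point:

```c
#define _GNU_SOURCE      /* strptime(), timegm() on glibc */
#include <assert.h>
#include <string.h>
#include <time.h>

/* Sketch of the date check --sync would need: parse an HTTP
 * Last-Modified header value (RFC 1123 format) and report whether the
 * remote copy is newer than a local mtime.
 * Returns -1 on parse error, 1 if remote is newer, 0 if not. */
static int remote_is_newer(const char *last_modified, time_t local_mtime)
{
    struct tm tm;
    time_t remote;

    memset(&tm, 0, sizeof(tm));
    if (!strptime(last_modified, "%a, %d %b %Y %H:%M:%S GMT", &tm))
        return -1;
    remote = timegm(&tm);    /* HTTP header dates are always GMT */
    return remote > local_mtime ? 1 : 0;
}
```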
+11.3 glob posts
+
+ Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
+ This is easily scripted though.
+
+11.4 prevent file overwriting
+
+ Add an option that prevents cURL from overwriting existing local files. When
+ used, and there already is an existing file with the target file name
+ (either -O or -o), a number should be appended (and increased if already
+ existing). So that index.html becomes first index.html.1 and then
+ index.html.2 etc.
+
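 The numbering scheme described above can be sketched like this. Helper names
 are invented, and the existence check is injected as a callback so the logic
 is testable; the real client would pass something based on access() or stat():

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: given a target name, produce the first of "name", "name.1",
 * "name.2", ... that does not already exist according to the supplied
 * predicate. */
static void first_free_name(const char *name, char *out, size_t outlen,
                            int (*exists)(const char *))
{
    int n;

    snprintf(out, outlen, "%s", name);
    for (n = 1; exists(out); n++)
        snprintf(out, outlen, "%s.%d", name, n);
}

/* Stand-in existence check for demonstration: pretend exactly these
 * two files already exist on disk. */
static int fake_exists(const char *n)
{
    return strcmp(n, "index.html") == 0 || strcmp(n, "index.html.1") == 0;
}
```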
+11.5 ftp wildcard download
+
+ "curl ftp://site.com/*.txt"
+
+11.6 simultaneous parallel transfers
+
+ The client could be told to use maximum N simultaneous parallel transfers and
+ then just make sure that happens. It should of course not make more than one
+ connection to the same remote host. This would require the client to use the
+ multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595
+
+11.7 provide formpost headers
+
+ Extending the capabilities of the multipart formposting. How about leaving
+ the ';type=foo' syntax as it is and adding an extra tag (headers) which
+ works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
+ fil1.hdr contains extra headers like
+
+ Content-Type: text/plain; charset=KOI8-R
+ Content-Transfer-Encoding: base64
+ X-User-Comment: Please don't use browser specific HTML code
+
+ which should override the program's reasonable defaults (text/plain,
+ 8bit...)
+
+11.8 url-specific options
+
+ Provide a way to make options bound to a specific URL among several on the
+ command line. Possibly by letting ':' separate options between URLs,
+ similar to this:
- * RTSP - RFC2326 (protocol - very HTTP-like, also contains URL description)
+ curl --data foo --url url.com : \
+ --url url2.com : \
+ --url url3.com --data foo3
- * RSYNC (no RFCs for protocol nor URI/URL format). An implementation should
- most probably use an existing rsync library, such as librsync.
+ (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)
- CLIENT
+ The example would do a POST-GET-POST combination on a single command line.
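 The ':' separation could be implemented by first splitting argv into per-URL
 groups, roughly as below. The helper name is invented and this is a sketch of
 the grouping step only, not of option handling:

```c
#include <assert.h>
#include <string.h>

/* Sketch: split a command line into per-URL option groups at bare ":"
 * arguments, as in
 *   curl --data foo --url url.com : --url url2.com : --url url3.com
 * Writes the argv index where each group starts into 'starts' and
 * returns the number of groups found (at most 'max'). */
static int split_groups(int argc, char **argv, int *starts, int max)
{
    int i, n = 0;

    if (argc > 0 && n < max)
        starts[n++] = 0;               /* first group starts at argv[0] */
    for (i = 0; i < argc; i++)
        if (strcmp(argv[i], ":") == 0 && i + 1 < argc && n < max)
            starts[n++] = i + 1;       /* next group starts after ':' */
    return n;
}
```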
- * Add option that is similar to -O but that takes the output file name from
- the Content-Disposition: header, and/or uses the local file name used in
- redirections for the cases the server bounces the request further to a
- different file (name): http://curl.haxx.se/bug/feature.cgi?id=1364676
+12. Build
- * "curl --sync http://example.com/feed[1-100].rss" or
- "curl --sync http://example.net/{index,calendar,history}.html"
+12.1 roffit
- Downloads a range or set of URLs using the remote name, but only if the
- remote file is newer than the local file. A Last-Modified HTTP date header
- should also be used to set the mod date on the downloaded file.
- (idea from "Brianiac")
+ Consider extending 'roffit' to produce decent ASCII output, and use that
+ instead of (g)nroff when building src/hugehelp.c
- * Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
- Requested by Dane Jensen and others. This is easily scripted though.
+13. Test suite
- * Add an option that prevents cURL from overwriting existing local files. When
- used, and there already is an existing file with the target file name
- (either -O or -o), a number should be appended (and increased if already
- existing). So that index.html becomes first index.html.1 and then
- index.html.2 etc. Jeff Pohlmeyer suggested.
+13.1 SSL tunnel
- * "curl ftp://site.com/*.txt"
+ Make our own version of stunnel for simple port forwarding to enable HTTPS
+ and FTP-SSL tests without the stunnel dependency, and it could allow us to
+ provide test tools built with either OpenSSL or GnuTLS
- * The client could be told to use maximum N simultaneous parallel transfers
- and then just make sure that happens. It should of course not make more
- than one connection to the same remote host. This would require the client
- to use the multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595
+13.2 nicer lacking perl message
- * Extending the capabilities of the multipart formposting. How about leaving
- the ';type=foo' syntax as it is and adding an extra tag (headers) which
- works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
- fil1.hdr contains extra headers like
+ If perl wasn't found by the configure script, don't attempt to run the tests
+ but print a nice message explaining why they can't run.
- Content-Type: text/plain; charset=KOI8-R"
- Content-Transfer-Encoding: base64
- X-User-Comment: Please don't use browser specific HTML code
+13.3 more protocols supported
- which should overwrite the program reasonable defaults (plain/text,
- 8bit...) (Idea brough to us by kromJx)
+ Extend the test suite to include more protocols. The telnet tests could just
+ do ftp or http operations (for which we have test servers).
- * ability to specify the classic computing suffixes on the range
- specifications. For example, to download the first 500 Kilobytes of a file,
- be able to specify the following for the -r option: "-r 0-500K" or for the
- first 2 Megabytes of a file: "-r 0-2M". (Mark Smith suggested)
+13.4 more platforms supported
- * --data-encode that URL encodes the data before posting
- http://curl.haxx.se/mail/archive-2003-11/0091.html (Kevin Roth suggested)
+ Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
+ fork()s and it should become even more portable.
- * Provide a way to make options bound to a specific URL among several on the
- command line. Possibly by letting ':' separate options between URLs,
- similar to this:
+14. Next SONAME bump
- curl --data foo --url url.com : \
- --url url2.com : \
- --url url3.com --data foo3
+14.1 http-style HEAD output for ftp
- (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)
+ #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
+ from being output in NOBODY requests over ftp
- The example would do a POST-GET-POST combination on a single command line.
+14.2 combine error codes
- BUILD
+ Combine some of the error codes to remove duplicates. The original
+ numbering should not be changed, and the old identifiers would be
+ macroed to the new ones in a CURL_NO_OLDIES section to help with
+ backward compatibility.
- * Consider extending 'roffit' to produce decent ASCII output, and use that
- instead of (g)nroff when building src/hugehelp.c
+ Candidates for removal and their replacements:
- TEST SUITE
+ CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
+ CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
+ CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
+ CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
+ CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
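 The macroing could look like the sketch below. The DEMO_ names and values
 here are placeholders for illustration, not the real CURLE_ codes:

```c
#include <assert.h>

/* Sketch of the proposed aliasing: duplicate error codes collapse into
 * one, and the old identifiers survive as macros unless the application
 * opts out by defining CURL_NO_OLDIES before including the header. */
enum demo_code {
    DEMO_REMOTE_FILE_NOT_FOUND = 78,   /* illustrative value */
    DEMO_RANGE_ERROR           = 33    /* illustrative value */
};

#ifndef CURL_NO_OLDIES
/* old names kept only for backward compatibility */
#define DEMO_FTP_COULDNT_RETR_FILE  DEMO_REMOTE_FILE_NOT_FOUND
#define DEMO_FTP_COULDNT_USE_REST   DEMO_RANGE_ERROR
#endif
```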
- * Make our own version of stunnel for simple port forwarding to enable HTTPS
- and FTP-SSL tests without the stunnel dependency, and it could allow us to
- provide test tools built with either OpenSSL or GnuTLS
+15. Next major release
- * If perl wasn't found by the configure script, don't attempt to run the
- tests but explain something nice why it doesn't.
+15.1 cleanup return codes
- * Extend the test suite to include more protocols. The telnet could just do
- ftp or http operations (for which we have test servers).
+ curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
+ CURLMcode. These should be changed to be the same.
- * Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
- fork()s and it should become even more portable.
+15.2 remove obsolete defines
- NEXT soname bump
+ remove obsolete defines from curl/curl.h
- * #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
- from being output in NOBODY requests over ftp
+15.3 size_t
- * Combine some of the error codes to remove duplicates. The original
- numbering should not be changed, and the old identifiers would be
- macroed to the new ones in an CURL_NO_OLDIES section to help with
- backward compatibility.
+ make several functions use size_t instead of int in their APIs
- Candidates for removal and their replacements:
+15.4 remove several functions
- CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
- CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
- CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
- CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
- CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
- CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
+ remove the following functions from the public API:
- NEXT MAJOR RELEASE
+ curl_getenv
- * curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
- CURLMcode. These should be changed to be the same.
+ curl_mprintf (and variations)
- * remove obsolete defines from curl/curl.h
+ curl_strequal
- * make several functions use size_t instead of int in their APIs
+ curl_strnequal
- * remove the following functions from the public API:
- curl_getenv
- curl_mprintf (and variations)
- curl_strequal
- curl_strnequal
+ They will instead become curlx_ alternatives. That makes the curl app
+ still capable of building with them from source.
- They will instead become curlx_ - alternatives. That makes the curl app
- still capable of building with them from source.
+15.5 remove CURLOPT_FAILONERROR
- * Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
- internally. Let the app judge success or not for itself.
+ Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
+ internally. Let the app judge success or not for itself.