                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

TODO

 Things to do in project cURL. Please tell us what you think, contribute and
 send us patches that improve things! Also check the http://curl.haxx.se/dev
 web section for various technical development notes.

LIBCURL

 * Introduce an interface to libcurl that allows applications to more easily
   find out what cookies were received. A pushing interface that calls a
   callback on each received cookie? A querying interface that asks about
   existing cookies? We probably need both. Enable applications to modify
   existing cookies as well.

 * Make content encoding/decoding internally use a filter system.

 * Introduce another callback interface for upload/download that makes one
   less copy of the data, and thus faster operation.
   [http://curl.haxx.se/dev/no_copy_callbacks.txt]

 * Add asynchronous name resolving (http://libdenise.sf.net/). This should
   be made to work on most of the supported platforms, or otherwise it isn't
   really interesting.

 * Data sharing. Specify which easy handles within a multi handle should
   share cookies, connection cache, DNS cache and SSL session cache. Full
   suggestion found here: http://curl.haxx.se/dev/sharing.txt

 * Mutexes. By adding mutex callback support, the 'data sharing' mentioned
   above can be done between several easy handles running in different
   threads too. The actual mutex implementations will be left for the
   application to provide; libcurl will merely call 'getmutex' and
   'leavemutex' callbacks. Part of the sharing suggestion at:
   http://curl.haxx.se/dev/sharing.txt (a sketch of what such callbacks
   might look like follows at the end of this section).

 * Set the SO_KEEPALIVE socket option to make libcurl notice and disconnect
   connections that have been idle for a very long time (see the sketch
   below).

 * Go through the code and verify that libcurl deals with big files (>2GB
   and >4GB) all over. Bug reports (and source reviews) indicate that it
   doesn't currently work properly.

 * CURLOPT_MAXFILESIZE. Prevent downloads that are larger than the
   specified size: the download should not even begin, but be aborted
   immediately and CURLE_FILESIZE_EXCEEDED returned. Requested by Gautam
   Mani. (A usage sketch follows below.)

 * Allow the http_proxy (and other) environment variables to contain user
   and password as well, in the style:

        http://proxyuser:proxypasswd@proxy:port

   Suggested by Berend Reitsma.
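 For the mutex item above, a minimal sketch of what such callbacks might
 look like. Everything named here (the callback types and the two setopt
 calls) is hypothetical and does not exist in libcurl today; the
 application provides the actual locking, in this example with POSIX
 threads:

    #include <pthread.h>

    /* hypothetical callback types -- not part of libcurl (yet) */
    typedef void (*curl_getmutex_callback)(void *clientp);
    typedef void (*curl_leavemutex_callback)(void *clientp);

    static pthread_mutex_t share_lock = PTHREAD_MUTEX_INITIALIZER;

    /* application-provided locking, called by libcurl around accesses
       to the shared caches */
    static void my_getmutex(void *clientp)
    {
      (void)clientp;
      pthread_mutex_lock(&share_lock);
    }

    static void my_leavemutex(void *clientp)
    {
      (void)clientp;
      pthread_mutex_unlock(&share_lock);
    }

    /* hypothetical setup, done on each easy handle that takes part:
       curl_easy_setopt(handle, CURLOPT_GETMUTEXFUNCTION, my_getmutex);
       curl_easy_setopt(handle, CURLOPT_LEAVEMUTEXFUNCTION, my_leavemutex);
    */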
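 The SO_KEEPALIVE item is plain socket code. A sketch, assuming sockfd is
 an already connected socket inside libcurl:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* switch on keepalive probes so that a connection that has been idle
       for a very long time is eventually detected as dead and can be
       disconnected */
    static int enable_keepalive(int sockfd)
    {
      int on = 1;
      return setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE,
                        (void *)&on, sizeof(on));
    }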
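 And how the proposed CURLOPT_MAXFILESIZE could look from an application's
 point of view. Note that both the option and the CURLE_FILESIZE_EXCEEDED
 return code are only proposals at this point:

    #include <curl/curl.h>

    static CURLcode fetch_if_small(const char *url)
    {
      CURLcode res = CURLE_FAILED_INIT;
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url);
        /* proposed: refuse to even start a transfer of anything larger
           than one megabyte */
        curl_easy_setopt(curl, CURLOPT_MAXFILESIZE, 1024*1024L);
        res = curl_easy_perform(curl);
        /* proposed: res == CURLE_FILESIZE_EXCEEDED when the limit hit */
        curl_easy_cleanup(curl);
      }
      return res;
    }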
LIBCURL - multi interface

 * Make sure we never loop because non-blocking sockets return EWOULDBLOCK
   or similar. This concerns the FTP command sending, the SSL connection
   setup and more.

 * Treat transfers more carefully. We need a way to tell libcurl we have
   data to write, as the current system expects us to upload data every
   time the socket is writable and there is no way to say that we want to
   upload data soon, just not right now, without aborting the upload. The
   opposite situation should be possible as well: telling libcurl when we
   are ready to accept read data. Today libcurl feeds us the data as soon
   as it is available for reading, no matter what.

DOCUMENTATION

 * More and better documentation.

FTP

 * FTP ASCII upload does not follow RFC959 section 3.1.1.1: "The sender
   converts the data from an internal character representation to the
   standard 8-bit NVT-ASCII representation (see the Telnet specification).
   The receiver will convert the data from the standard form to his own
   internal form."

 * Since USERPWD always overrides the user and password specified in URLs,
   we might need another way to specify user+password for anonymous ftp
   logins.

 * An option to only download remote FTP files if they're newer than the
   local one is a good idea, and it would fit right into the same syntax as
   the already working HTTP version. It of course requires that 'MDTM'
   works, and it isn't a standard FTP command.

 * Add FTPS support with SSL for the data connection too. This should be
   done according to the specs written in draft-murray-auth-ftp-ssl-08.txt,
   "Securing FTP with TLS".

HTTP

 * If the "body" of a POST is smaller than MSS, it really ought to be sent
   along with the headers. More generally, if the last chunk of the POST
   body is smaller than MSS, it should be sent with the previous chunk
   (which may be the POST headers). As long as any single send is larger
   than MSS (or there is only one send, when smaller than MSS :), the Nagle
   algorithm will not be a problem on any stack where Nagle is implemented
   correctly. (Pointed out by Rick Jones.) A sketch of one way to do this
   follows after this section.

 * Authentication: NTLM. Support for that MS crap called NTLM
   authentication. MS proxies and servers sometimes require it. Since the
   protocol is proprietary, supporting it involves reverse engineering and
   network sniffing. This should however be library-based functionality.
   There are a few different efforts "out there" to make open source HTTP
   clients support this, and it should be possible to take advantage of
   other people's hard work. http://modntlm.sourceforge.net/ is one.
   There's a web page at http://www.innovation.ch/java/ntlm.html that
   contains detailed reverse-engineered info.

 * RFC2617 compliance, "Digest Access Authentication". A valid test page
   seems to exist at:

        http://hopf.math.nwu.edu/testpage/digest/

   And some friendly person's server source code is available at:

        http://hopf.math.nwu.edu/digestauth/index.html

   Then there's the Apache mod_digest source code too, of course. It seems
   as if Netscape doesn't support this, and not many servers do, even
   though it is a much better authentication method than the more common
   "Basic": Basic sends the password in cleartext over the network, while
   "Digest" uses a challenge-response protocol which increases security
   quite a lot. (The core computation is sketched after this section.)

 * Pipelining. Sending multiple requests before the previous one(s) are
   done. This could possibly be implemented using the multi interface to
   queue requests and the response data.
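 For the Nagle/MSS item above, one way to make sure the headers and a
 small POST body leave in a single send is to gather them with writev().
 A sketch, under the assumption that both buffers are already built:

    #include <sys/types.h>
    #include <sys/uio.h>

    /* hand the request headers and a small POST body to the kernel in
       one call, so they can go out together and Nagle's algorithm never
       gets a chance to delay the body */
    static ssize_t send_request(int sockfd,
                                const char *headers, size_t headerlen,
                                const char *body, size_t bodylen)
    {
      struct iovec iov[2];
      iov[0].iov_base = (void *)headers;
      iov[0].iov_len  = headerlen;
      iov[1].iov_base = (void *)body;
      iov[1].iov_len  = bodylen;
      return writev(sockfd, iov, 2);
    }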
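 The core of RFC2617 Digest is a small MD5 exercise. A sketch of the
 response computation using OpenSSL's MD5() (the optional qop and
 nonce-count parts are left out for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>

    /* hex-encode a 16 byte MD5 digest into out (at least 33 bytes) */
    static void md5hex(const unsigned char *md, char *out)
    {
      int i;
      for(i = 0; i < 16; i++)
        sprintf(&out[i*2], "%02x", md[i]);
    }

    /* response = MD5(MD5(user:realm:pass) ":" nonce ":" MD5(method:uri)) */
    static void digest_response(const char *user, const char *realm,
                                const char *pass, const char *nonce,
                                const char *method, const char *uri,
                                char *response) /* at least 33 bytes */
    {
      unsigned char md[16];
      char ha1[33], ha2[33], buf[1024];

      snprintf(buf, sizeof(buf), "%s:%s:%s", user, realm, pass);
      MD5((unsigned char *)buf, strlen(buf), md);
      md5hex(md, ha1);

      snprintf(buf, sizeof(buf), "%s:%s", method, uri);
      MD5((unsigned char *)buf, strlen(buf), md);
      md5hex(md, ha2);

      snprintf(buf, sizeof(buf), "%s:%s:%s", ha1, nonce, ha2);
      MD5((unsigned char *)buf, strlen(buf), md);
      md5hex(md, response);
    }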
TELNET

 * Make TELNET work on Windows 98!

 * Reading input (to send to the remote server) on stdin is a crappy
   solution for library purposes. We need to invent a good way for the
   application to be able to provide the data to send.

 * Make the telnet support's network select() loop go away and merge the
   code into the main transfer loop. Until this is done, the multi
   interface won't work for telnet.

SSL

 * If you really want to improve the SSL situation, you should probably
   have a look at SSL cafile loading as well - quick traces look to me like
   these are done on every request as well, when they should only be
   necessary once per SSL context (or once per handle). Even better would
   be to support the SSL CAdir option - instead of loading all of the root
   CA certs for every request, this option allows you to only read the CA
   chain that is actually required (into the cache)...

 * Add an interface to libcurl that enables "session IDs" to get
   exported/imported. Cris Bailiff said: "OpenSSL has functions which can
   serialise the current SSL state to a buffer of your choice, and
   recover/reset the state from such a buffer at a later date - this is
   used by mod_ssl for apache to implement an SSL session ID cache". This
   whole idea might become moot if we enable the 'data sharing' mentioned
   in the LIBCURL section above. (The OpenSSL calls involved are sketched
   at the end of this file.)

 * OpenSSL supports a callback for customised verification of the peer
   certificate, but this doesn't seem to be exposed in the libcurl APIs.
   Could it be? There's so much that could be done if it were! (Brought up
   by Chris Clark; also sketched at the end of this file.)

 * Make curl's SSL layer capable of using other free SSL libraries, such
   as the Mozilla Security Services
   (http://www.mozilla.org/projects/security/pki/nss/) and GNUTLS
   (http://gnutls.hellug.gr/).

LDAP

 * Look over the implementation. The looping will have to "go away" from
   the lib/ldap.c source file and get moved to the main network code so
   that the multi interface and friends will work for LDAP as well.

CLIENT

 * Add an option that prevents cURL from overwriting existing local files.
   When used, and there already is an existing file with the target file
   name (either -O or -o), a number should be appended (and increased if
   already existing), so that index.html becomes first index.html.1 and
   then index.html.2, etc. Suggested by Jeff Pohlmeyer.

 * "curl ftp://site.com/*.txt"

 * Several URLs can be specified to get downloaded. We should be able to
   use the same syntax to specify several files to get uploaded (using the
   same persistent connection), using -T.

 * When the multi interface has been implemented and proven to work, the
   client could be told to use a maximum of N simultaneous transfers and
   then just make sure that happens. It should of course not make more
   than one connection to the same remote host.

 * Extend the capabilities of the multipart formposting. How about leaving
   the ';type=foo' syntax as it is and adding an extra tag (headers) which
   works like this:

        curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr"

   where fil1.hdr contains extra headers like:

        Content-Type: text/plain; charset=KOI8-R
        Content-Transfer-Encoding: base64
        X-User-Comment: Please don't use browser specific HTML code

   which would override the program's reasonable defaults (text/plain,
   8bit...). (Idea brought to us by kromJx.)

TEST SUITE

 * If perl wasn't found by the configure script, don't attempt to run the
   tests but print a nice explanation of why they are skipped.

 * Extend the test suite to include more protocols. The telnet tests could
   just do ftp or http operations (for which we have test servers).

 * Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
   the fork()s and it should become even more portable.

 * Introduce a test suite that tests libcurl better and more explicitly.

NEXT MAJOR RELEASE

 * curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
   CURLMcode. These should be changed to be the same.

 * curl_formparse() should be removed.
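 For the peer verification callback item in the SSL section above, this is
 roughly what OpenSSL already offers and what libcurl could expose. A
 sketch, assuming sslctx is the SSL_CTX libcurl uses internally:

    #include <openssl/ssl.h>

    /* custom peer certificate verification; returning 1 accepts the
       certificate, 0 rejects it and aborts the handshake */
    static int my_verify(int preverify_ok, X509_STORE_CTX *ctx)
    {
      /* an application could inspect the chain in ctx here; this sketch
         just keeps OpenSSL's own verdict */
      (void)ctx;
      return preverify_ok;
    }

    /* installed on the context like this:
       SSL_CTX_set_verify(sslctx, SSL_VERIFY_PEER, my_verify); */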
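 And for the session ID export/import item, the OpenSSL functions Cris
 Bailiff refers to are i2d_SSL_SESSION() and d2i_SSL_SESSION(). A sketch
 of the export half; the caller frees *buf:

    #include <stdlib.h>
    #include <openssl/ssl.h>

    /* serialise the current SSL session into a malloc()ed buffer */
    static int export_session(SSL *ssl, unsigned char **buf, int *len)
    {
      SSL_SESSION *sess = SSL_get1_session(ssl);
      unsigned char *p;
      if(!sess)
        return -1;
      *len = i2d_SSL_SESSION(sess, NULL); /* ask for the needed size */
      if(*len <= 0) {
        SSL_SESSION_free(sess);
        return -1;
      }
      *buf = p = malloc(*len);
      if(!p) {
        SSL_SESSION_free(sess);
        return -1;
      }
      i2d_SSL_SESSION(sess, &p); /* write the DER encoded session state */
      SSL_SESSION_free(sess);
      return 0;
    }

    /* importing later: d2i_SSL_SESSION() to rebuild the SSL_SESSION,
       then SSL_set_session() before the next connect */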