                                  _   _ ____  _     
                              ___| | | |  _ \| |    
                             / __| | | | |_) | |    
                            | (__| |_| |  _ <| |___ 
                             \___|\___/|_| \_\_____|

TODO

For the future

 Ok, this is what I wanna do with Curl. Please tell me what you think, and
 please don't hesitate to contribute and send me patches that improve this
 product! (Yes, you may add things not mentioned here, these are just a
 few teasers...)

 * Make SSL session IDs get reused when multiple HTTPS documents are
   requested from the same host; a sketch of the idea follows below.
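
   Something like this, assuming OpenSSL is the SSL layer; the remember/reuse
   helpers are made-up names for illustration:

     #include <openssl/ssl.h>

     static SSL_SESSION *cached_session = NULL; /* one slot per host later */

     void remember_session(SSL *ssl)
     {
       if(cached_session)
         SSL_SESSION_free(cached_session);
       cached_session = SSL_get1_session(ssl); /* takes a reference */
     }

     void reuse_session(SSL *ssl)
     {
       if(cached_session)
         SSL_set_session(ssl, cached_session); /* before SSL_connect() */
     }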

 * Improve the command line option parser to accept '-m300' as well as the
   '-m 300' convention. It should work whether the value is attached to the
   option letter or given as the next, space-separated argument (see the
   sketch below).
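
   For instance, assuming argv[*i] is already known to start with "-m" (the
   function name is hypothetical):

     #include <stdlib.h>

     long parse_maxtime(char **argv, int *i)
     {
       const char *arg = argv[*i] + 2;  /* whatever follows "-m" */
       if(*arg == '\0') {               /* "-m 300" style */
         if(!argv[*i + 1])
           return -1;                   /* missing value */
         arg = argv[++(*i)];
       }
       return strtol(arg, NULL, 10);
     }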

 * Make the curl tool support URLs that start with @, meaning that what
   follows is a plain list of URLs to download. Thus @filename.txt reads a
   list of URLs from a local file (see the sketch below). A fancy option
   would then be to support @http://whatever.com, which would first load a
   list and then get the URLs mentioned in it. I figure -O or something would
   have to be implied by such an action.
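
   Something like this for the @filename.txt case; handle_url() is a
   hypothetical stand-in for whatever performs a single transfer:

     #include <stdio.h>
     #include <string.h>

     int urls_from_file(const char *spec, int (*handle_url)(const char *))
     {
       char line[1024];
       FILE *f;
       if(*spec != '@')
         return handle_url(spec);    /* not a list, just a single URL */
       f = fopen(spec + 1, "r");
       if(!f)
         return -1;
       while(fgets(line, sizeof(line), f)) {
         line[strcspn(line, "\r\n")] = '\0'; /* strip the newline */
         if(line[0])
           handle_url(line);
       }
       fclose(f);
       return 0;
     }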

 * Make curl work with multiple URLs, even outside of {}-letters. I could
   also imagine an optional fork()ed system that downloads each URL in its
   own process. It should of course have a maximum number of simultaneous
   fork()s; a sketch follows below.
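
   For instance, with a cap on simultaneous children; download_one() is a
   hypothetical transfer routine:

     #include <sys/wait.h>
     #include <unistd.h>

     void download_one(const char *url); /* hypothetical */

     void download_all(char **urls, int count, int maxforks)
     {
       int running = 0;
       int i;
       pid_t pid;
       for(i = 0; i < count; i++) {
         if(running >= maxforks) {
           wait(NULL);              /* block until one child finishes */
           running--;
         }
         pid = fork();
         if(pid == 0) {
           download_one(urls[i]);   /* the child does one transfer */
           _exit(0);
         }
         if(pid > 0)
           running++;
       }
       while(running-- > 0)
         wait(NULL);                /* reap the remaining children */
     }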

 * Improve the regular progress meter when --continue is used. It should be
   noticeable when a resume is going on.

 * Add a command line option that allows the output file to get the same time
   stamp as the remote file. This requires some fiddling on FTP but comes
   almost free for HTTP.
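
   For HTTP the server already sends the date in a "Last-Modified:" header,
   so a minimal sketch could look like this (assuming the RFC 1123 date
   format and the non-standard but widespread timegm()):

     #define _GNU_SOURCE /* for strptime() and timegm() on glibc */
     #include <time.h>
     #include <utime.h>
     #include <string.h>

     int copy_remote_time(const char *lastmodified, const char *filename)
     {
       struct tm tm;
       struct utimbuf times;
       memset(&tm, 0, sizeof(tm));
       if(!strptime(lastmodified, "%a, %d %b %Y %H:%M:%S GMT", &tm))
         return -1;
       times.actime = times.modtime = timegm(&tm);
       return utime(filename, &times);
     }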

 * Make the SSL layer option capable of using the Mozilla Security Services as
   an alternative to OpenSSL:
   http://www.mozilla.org/projects/security/pki/nss/

 * Make sure the low-level interface works. highlevel.c should basically be
   possible to write using that interface. Document the low-level interface.

 * Make the easy-interface support multiple file transfers. If they're done
   to the same host, they should use persistent connections or similar; see
   the sketch below.
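
   Something like this from the application side: perform two transfers on
   the same easy handle and let the library keep the connection open in
   between (that reuse being the point of this item, not a given):

     #include <curl/curl.h>

     int fetch_two(void)
     {
       CURL *curl = curl_easy_init();
       if(!curl)
         return -1;
       curl_easy_setopt(curl, CURLOPT_URL, "http://whatever.com/first.html");
       curl_easy_perform(curl);
       curl_easy_setopt(curl, CURLOPT_URL, "http://whatever.com/second.html");
       curl_easy_perform(curl);   /* ideally reuses the same connection */
       curl_easy_cleanup(curl);
       return 0;
     }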

 * Add asynchronous name resolving, as this enables full timeout support for
   fork() systems.

 * Move non-URL related functions that are used by both the lib and the curl
   application to a separate "portability lib".

 * Add support for other languages than C.  C++ (rumours have been heard
   about something being worked on in this area) and perl (we have seen the
   first versions of this!) come to mind. Python anyone?

 * "Content-Encoding: compress/gzip/zlib"

   HTTP 1.1 clearly defines how to get and decode compressed documents. There
   is zlib, which is pretty good at decompressing this kind of data. This
   work was started in October 1999 but halted again since it proved to be
   more work than we thought. It is still a good idea to implement though;
   see the sketch below.
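
   For instance, assuming a zlib recent enough to auto-detect gzip/zlib
   headers when inflateInit2() is given windowBits 15 + 32:

     #include <zlib.h>
     #include <string.h>

     int decode_body(const unsigned char *in, size_t inlen,
                     unsigned char *out, size_t *outlen)
     {
       z_stream z;
       int rc;
       memset(&z, 0, sizeof(z));
       if(inflateInit2(&z, 15 + 32) != Z_OK)
         return -1;
       z.next_in = (unsigned char *)in;
       z.avail_in = (uInt)inlen;
       z.next_out = out;
       z.avail_out = (uInt)*outlen;
       rc = inflate(&z, Z_FINISH);
       *outlen = z.total_out;
       inflateEnd(&z);
       return (rc == Z_STREAM_END) ? 0 : -1;
     }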

 * Authentication: NTLM. It would be cool to support that MS crap called NTLM
   authentication. MS proxies and servers sometimes require it. Since that
   protocol is a proprietary one, it involves reverse engineering and network
   sniffing. This should however be a library-based functionality. There are a
   few different efforts "out there" to make open source HTTP clients support
   this and it should be possible to take advantage of other people's hard
   work. http://modntlm.sourceforge.net/ is one.

 * RFC2617 compliance, "Digest Access Authentication"
   A valid test page seems to exist at:
    http://hopf.math.nwu.edu/testpage/digest/
   And some friendly person's server source code is available at
    http://hopf.math.nwu.edu/digestauth/index.html

   Then there's the Apache mod_digest source code too of course.  It seems as
   if Netscape doesn't support this, and not many servers do, although this
   is a much better authentication method than the more common "Basic". Basic
   sends the password in cleartext over the network, while this "Digest"
   method uses a challenge-response protocol which increases security quite a
   lot; the response calculation is sketched below.
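
   For the simple case without qop it boils down to this; md5_hex() is a
   hypothetical helper that MD5-hashes its input string and writes 32 hex
   digits plus a terminating zero byte:

     #include <stdio.h>

     void md5_hex(const char *input, char *hex33); /* hypothetical */

     void digest_response(const char *user, const char *realm,
                          const char *passwd, const char *method,
                          const char *uri, const char *nonce,
                          char *response33)
     {
       char ha1[33], ha2[33], buf[512];

       snprintf(buf, sizeof(buf), "%s:%s:%s", user, realm, passwd);
       md5_hex(buf, ha1);                     /* HA1 */
       snprintf(buf, sizeof(buf), "%s:%s", method, uri);
       md5_hex(buf, ha2);                     /* HA2 */
       snprintf(buf, sizeof(buf), "%s:%s:%s", ha1, nonce, ha2);
       md5_hex(buf, response33);              /* the response digest */
     }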

 * Multiple Proxies?
   Is there anyone that actually uses serial-proxies? I mean, send CONNECT to
   the first proxy to connect to the second proxy to which you send CONNECT to
   connect to the remote host (or even more iterations). Is there anyone
   wanting curl to support it? (Not that it would be hard, just confusing...)
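
   For the record, the chain would look something like this on the wire,
   with made-up host names: ask proxy1 to connect to proxy2, then send the
   second CONNECT through that tunnel to reach the remote host:

     (to proxy1)       CONNECT proxy2.whatever.com:8080 HTTP/1.0
     (through proxy1)  CONNECT www.whatever.com:443 HTTP/1.0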

 * Other proxies
   FTP-style proxies, SOCKS5, whatever kinds of proxies are there?

 * IPv6 Awareness and support
   Wherever it would fit.  Having configure search for v6-versions of a few
   functions and then use them instead is of course the first thing to do
   (see the sketch below)... RFC 2428 "FTP Extensions for IPv6 and NATs" will
   be interesting. PORT should be replaced with EPRT for IPv6, and EPSV used
   instead of PASV.
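
   For instance, a protocol-independent connect using getaddrinfo() instead
   of gethostbyname():

     #include <sys/types.h>
     #include <sys/socket.h>
     #include <netdb.h>
     #include <string.h>
     #include <unistd.h>

     int connect_any(const char *host, const char *port)
     {
       struct addrinfo hints, *res, *ai;
       int sock = -1;
       memset(&hints, 0, sizeof(hints));
       hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6, whichever works */
       hints.ai_socktype = SOCK_STREAM;
       if(getaddrinfo(host, port, &hints, &res))
         return -1;
       for(ai = res; ai; ai = ai->ai_next) {
         sock = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
         if(sock < 0)
           continue;
         if(connect(sock, ai->ai_addr, ai->ai_addrlen) == 0)
           break;                        /* connected */
         close(sock);
         sock = -1;
       }
       freeaddrinfo(res);
       return sock;
     }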

 * SSL for more protocols, like SSL-FTP...
   (http://search.ietf.org/internet-drafts/draft-murray-auth-ftp-ssl-05.txt)

 * HTTP POST resume using Range: