                                  _   _ ____  _     
                              ___| | | |  _ \| |    
                             / __| | | | |_) | |    
                            | (__| |_| |  _ <| |___ 
                             \___|\___/|_| \_\_____|

TODO

For version 7. Stuff I planned to have included in curl for version
seven. Let's make a serious attempt to include most of this.

  Document the easy-interface completely
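
  As a placeholder until that documentation exists, a minimal use of the easy
  interface would look roughly like this sketch (error handling trimmed, the
  URL is just an example):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *handle = curl_easy_init();
      CURLcode result;

      if(!handle)
        return 1;

      /* tell the handle which URL to fetch */
      curl_easy_setopt(handle, CURLOPT_URL, "http://curl.haxx.se/");

      /* perform the transfer; received data goes to stdout by default */
      result = curl_easy_perform(handle);

      curl_easy_cleanup(handle);
      return (int)result;
    }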

  Make sure the low-level interface works. highlevel.c should basically be
  possible to write using that interface.

  Document the low-level interface

  Add asynchronous name resolving, as this enables full timeout support for
  fork() systems.

  Make the resolving threadsafe(er).
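
  One way to get threadsafe(er) resolving is to prefer a reentrant resolver
  call where the platform has one. The sketch below assumes the glibc-style
  gethostbyname_r() signature; other systems declare it differently:

    #include <stddef.h>
    #include <netdb.h>

    /* Sketch: resolve without touching the static buffer that plain
       gethostbyname() hands back. The signature used here is the glibc
       variant; Solaris and others differ. */
    static struct hostent *resolve(const char *name, struct hostent *store,
                                   char *buf, size_t buflen)
    {
      struct hostent *result = NULL;
      int h_err = 0;

      if(gethostbyname_r(name, store, buf, buflen, &result, &h_err))
        return NULL; /* lookup failed */
      return result; /* points into the caller's buffers, not static data */
    }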

  Make sure you can set the progress callback
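
  A sketch of how setting it could look, assuming the CURLOPT_PROGRESSFUNCTION
  and CURLOPT_PROGRESSDATA option names and a callback that returns non-zero
  to abort the transfer:

    #include <stdio.h>
    #include <curl/curl.h>

    /* callback: return non-zero to make libcurl abort the transfer */
    static int my_progress(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
    {
      (void)clientp; (void)ultotal; (void)ulnow;
      fprintf(stderr, "got %.0f of %.0f bytes\r", dlnow, dltotal);
      return 0;
    }

    static void setup_progress(CURL *handle)
    {
      curl_easy_setopt(handle, CURLOPT_NOPROGRESS, 0L);
      curl_easy_setopt(handle, CURLOPT_PROGRESSFUNCTION, my_progress);
      curl_easy_setopt(handle, CURLOPT_PROGRESSDATA, NULL);
    }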

  Add libtool stuff

  Move non-URL related functions that are used by both the lib and the curl
  application to a separate "portability lib".

  Correct the lib's getenv() call as it is not threadsafe under win32.
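
  A sketch of a thread-safer replacement: on win32, read into a private buffer
  with GetEnvironmentVariable() and return a malloc()ed copy rather than a
  pointer into shared data (the portable_getenv name is just for illustration):

    #include <stdlib.h>
    #include <string.h>
    #ifdef WIN32
    #include <windows.h>
    #endif

    /* returns a malloc()ed copy that the caller frees, never a pointer
       into a buffer shared between threads */
    char *portable_getenv(const char *name)
    {
    #ifdef WIN32
      char buf[1024];
      DWORD len = GetEnvironmentVariableA(name, buf, sizeof(buf));
      if(len == 0 || len >= sizeof(buf))
        return NULL; /* not set, or too long for this sketch */
      return strdup(buf);
    #else
      char *val = getenv(name);
      return val ? strdup(val) : NULL;
    #endif
    }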

  Add support for languages other than C (not important)


For the future


 Ok, this is what I wanna do with Curl. Please tell me what you think, and
 please don't hesitate to contribute and send me patches that improve this
 product! (Yes, you may add things not mentioned here, these are just a
 few teasers...)

 * rtsp:// support -- "Real Time Streaming Protocol"

   RFC 2326

 * "Content-Encoding: compress/gzip/zlib"

   HTTP 1.1 clearly defines how to get and decode compressed documents. There
   is zlib, which is pretty good at decompressing stuff. This work was started
   in October 1999 but halted again since it proved to be more work than we
   thought. It is still a good idea to implement, though.
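
   For reference, the zlib side of the decoding is fairly small. A minimal
   sketch, assuming a zlib recent enough to auto-detect gzip/zlib headers via
   the 15 + 32 windowBits trick:

     #include <string.h>
     #include <zlib.h>

     /* Sketch: inflate one complete compressed buffer into 'out'.
        Returns the number of output bytes, or -1 on error. */
     static int decode_body(const unsigned char *in, size_t inlen,
                            unsigned char *out, size_t outlen)
     {
       z_stream z;
       int total;

       memset(&z, 0, sizeof(z));
       if(inflateInit2(&z, 15 + 32) != Z_OK)  /* auto-detect gzip/zlib */
         return -1;

       z.next_in = (Bytef *)in;
       z.avail_in = (uInt)inlen;
       z.next_out = out;
       z.avail_out = (uInt)outlen;

       if(inflate(&z, Z_FINISH) != Z_STREAM_END) {
         inflateEnd(&z);
         return -1;
       }
       total = (int)z.total_out;
       inflateEnd(&z);
       return total;
     }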

 * HTTP Pipelining/persistent connections

 - We should introduce HTTP "pipelining". Curl could be able to request
   several HTTP documents over one connection. It would be the beginning of
   support for more advanced functions in the future, like web site
   mirroring. This will require that the urlget() function supports several
   documents from a single HTTP server, which it doesn't today. (A wire-level
   sketch follows after these notes.)

 - When curl supports fetching several documents from the same server using
   pipelining, I'd like to offer that function to the command line. Does
   anyone have a good idea how? The current way of specifying one URL with
   the output sent
   to the stdout or a file gets in the way. Imagine a syntax that supports
   "additional documents from the same server" in a way similar to:

     curl <main URL> --more-doc <path> --more-doc <path>

   where --more-doc specifies another document on the same server. Where are
   the output files gonna be put and how should they be named? Should each
   "--more-doc" parameter require a local file name to store the result in?
   Like "--more-file" as in:

     curl <URL> --more-doc <path> --more-file <file>
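
   On the wire, pipelining boils down to writing the next request before the
   previous response has arrived, all over the same connection. A rough
   sketch, assuming an already-connected socket and two made-up paths:

     #include <stdio.h>
     #include <string.h>
     #include <unistd.h>

     /* Sketch: issue two GET requests back-to-back on one connection,
        then read the two responses in the same order. */
     static int send_pipelined(int sockfd, const char *host)
     {
       const char *paths[] = { "/index.html", "/next.html" }; /* made up */
       char req[512];
       int i;

       for(i = 0; i < 2; i++) {
         int len = snprintf(req, sizeof(req),
                            "GET %s HTTP/1.1\r\n"
                            "Host: %s\r\n"
                            "\r\n", paths[i], host);
         if(write(sockfd, req, len) != len)
           return -1;
       }
       /* ...now read two responses back-to-back from sockfd... */
       return 0;
     }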

 * RFC2617 compliance, "Digest Access Authentication"
   A valid test page seems to exist at:
    http://hopf.math.nwu.edu/testpage/digest/
   And some friendly person's server source code is available at
    http://hopf.math.nwu.edu/digestauth/index.html

   Then there's the Apache mod_digest source code too, of course. It seems as
   if Netscape doesn't support this, and not many servers do, even though it
   is a much better authentication method than the more common "Basic". Basic
   sends the password in cleartext over the network, while this "Digest"
   method uses a challenge-response protocol which increases security quite a
   lot.
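
   The arithmetic in RFC 2617 is small: in the simplest (no qop) case the
   client's response value is three MD5 operations. A sketch, where md5_hex()
   is a made-up helper that writes the 32-character hex digest of its input
   string:

     #include <stdio.h>

     /* hypothetical helper: hex-encoded MD5 of a zero-terminated string */
     extern void md5_hex(const char *input, char *hex33);

     /* RFC 2617 without qop: response = MD5(HA1 ":" nonce ":" HA2) */
     static void digest_response(const char *user, const char *realm,
                                 const char *passwd, const char *method,
                                 const char *uri, const char *nonce,
                                 char *response33)
     {
       char a1[256], a2[256], final[512];
       char ha1[33], ha2[33];

       snprintf(a1, sizeof(a1), "%s:%s:%s", user, realm, passwd);
       md5_hex(a1, ha1);                      /* HA1 */

       snprintf(a2, sizeof(a2), "%s:%s", method, uri);
       md5_hex(a2, ha2);                      /* HA2 */

       snprintf(final, sizeof(final), "%s:%s:%s", ha1, nonce, ha2);
       md5_hex(final, response33);            /* request-digest */
     }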

 * Different FTP Upload Through Web Proxy
   I don't know of any web proxies that allow CONNECT through on port 21, but
   that would be the best way to do ftp upload. All we would need to do would
   be to 'CONNECT <host>:<port> HTTP/1.0\r\n\r\n' and then do business as
   usual. At least I think so. It would be fun if someone tried this...
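
   A sketch of that handshake, once connected to the proxy itself; a '200'
   status line means the proxy now tunnels us straight to the ftp server:

     #include <stdio.h>
     #include <string.h>
     #include <unistd.h>

     /* proxyfd is a socket already connected to the web proxy */
     static int proxy_tunnel(int proxyfd, const char *host, int port)
     {
       char buf[512];
       int len, got;

       len = snprintf(buf, sizeof(buf),
                      "CONNECT %s:%d HTTP/1.0\r\n\r\n", host, port);
       if(write(proxyfd, buf, len) != len)
         return -1;

       got = read(proxyfd, buf, sizeof(buf) - 1);
       if(got <= 0)
         return -1;
       buf[got] = '\0';

       /* "HTTP/1.0 200 Connection established" means the tunnel is up */
       return strstr(buf, " 200 ") ? 0 : -1;
     }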

 * Multiple Proxies?
   Does anyone actually use serial proxies? I mean, send CONNECT to the first
   proxy to connect to the second proxy, to which you send CONNECT to connect
   to the remote host (or even more iterations). Does anyone want curl to
   support it? (Not that it would be hard, just confusing...)

 * Other proxies
   FTP-style proxies, SOCKS5, whatever other kinds of proxies are out there?

 * IPv6 Awareness
   Wherever it would fit. I am not into v6 enough yet to fully grasp what we
   would need to do, but letting autoconf search for v6-versions of a few
   functions and then use them instead is of course the first thing to do...
   RFC 2428 "FTP Extensions for IPv6 and NATs" will be interesting. PORT
   should be replaced with EPRT for IPv6, and EPSV instead of PASV.
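
   For the resolver part, the v6-capable replacement is mostly getaddrinfo(),
   which returns a list of both v4 and v6 addresses to try in turn. A sketch,
   assuming an RFC 2553-style getaddrinfo():

     #include <sys/types.h>
     #include <sys/socket.h>
     #include <netdb.h>
     #include <string.h>
     #include <unistd.h>

     /* Sketch: resolve and connect without caring about address family */
     static int connect_to(const char *host, const char *port)
     {
       struct addrinfo hints, *res, *ai;
       int fd = -1;

       memset(&hints, 0, sizeof(hints));
       hints.ai_family = AF_UNSPEC;      /* v4 or v6, whichever works */
       hints.ai_socktype = SOCK_STREAM;

       if(getaddrinfo(host, port, &hints, &res))
         return -1;

       for(ai = res; ai; ai = ai->ai_next) {
         fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
         if(fd < 0)
           continue;
         if(connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
           break;                        /* connected */
         close(fd);
         fd = -1;
       }
       freeaddrinfo(res);
       return fd;
     }

   On the ftp side, EPRT carries the address family in the command itself
   (RFC 2428's example is 'EPRT |2|1080::8:800:200C:417A|5282|') and the EPSV
   reply only carries a port number, so the same code path can serve both
   address families.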

 * An automatic RPM package maker
   Please, write me a script that makes it. It'd make my day.

 * SSL for more protocols, like SSL-FTP...
   (http://search.ietf.org/internet-drafts/draft-murray-auth-ftp-ssl-05.txt)

 * HTTP POST resume using Range:

 * Make curl capable of verifying the server's certificate when connecting
   with HTTPS://.
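
   With OpenSSL that boils down to loading a CA bundle, asking for peer
   verification, and checking the verdict after the handshake. A sketch (the
   bundle path is just an example, and checking that the certificate's name
   matches the host we connect to is a separate step on top of this):

     #include <openssl/ssl.h>
     #include <openssl/x509.h>

     /* make the context verify server certificates against a CA bundle */
     static int setup_verify(SSL_CTX *ctx)
     {
       if(!SSL_CTX_load_verify_locations(ctx, "/etc/ssl/ca-bundle.crt", NULL))
         return -1;
       SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
       return 0;
     }

     /* call after SSL_connect(); X509_V_OK means the chain checked out */
     static int cert_ok(SSL *ssl)
     {
       return (SSL_get_verify_result(ssl) == X509_V_OK) ? 0 : -1;
     }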