> It's the number of files limit that is the problem and when your average
> file is 10MB, there would only be about 10000 files for which we really
> only need 6.2MB of RVM space.

Ah! That alleviates the problem somewhat, but not quite.

> You can effectively run 2 or more coda-servers on the same hardware.
> Each server exports part of the filespace so that each server requires
> less RVM. The only problem is that clients want to connect to a fixed
> port on a server-ip address. The machine has to have several ip-aliases
> configured for the same interface, and the server has to be told to bind
> to a specific ip-address.

Just one question here: how does this decrease the *total* amount of
virtual memory needed? That is, if I split up the volumes so that half of
them are served by one server process and half by the other, wouldn't that
just create the need for two RVMs, each half the size of the original? Or
is it possible (I remember reading something along those lines) to address
more than 4 gig of memory in the kernel, but just not in user space (due
to 32-bit pointers)?

> There are 2 other approaches that haven't been implemented, but are
> being considered. Per volume RVM segments that can be mapped and
> unmapped independently. Basically the Coda server would start to do some
> page-in/page-out style management with RVM segments similar to how the
> kernel handles multiple processes.
>
> The other approach would 'demote' RVM to be an intermediate cache only
> and the actual metadata would reside on disk along with the container
> files.

Either of them would be great from a user perspective, IMO. Many
high-volume storage situations don't involve actively accessing large
parts of the data.
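The point that RVM usage is driven by the *number* of files rather than the amount of data can be illustrated numerically. This is only a back-of-envelope sketch: the per-file RVM overhead is derived from the quoted figures (6.2 MB of RVM for roughly 10000 files), not taken from Coda documentation, and the file sizes are made-up examples.

```python
# RVM usage scales with the number of files, not the amount of data.
# Per-file overhead derived from the quoted "10000 files ... 6.2MB of
# RVM space" -- an assumption, not a figure from Coda docs.
PER_FILE_RVM = 6.2 * 2**20 / 10_000      # ~650 bytes of RVM per file

def rvm_bytes(total_bytes, avg_file_bytes):
    """Estimated RVM needed for a given data volume and average file size."""
    return (total_bytes // avg_file_bytes) * PER_FILE_RVM

GB, MB, KB = 2**30, 2**20, 2**10
big   = rvm_bytes(100 * GB, 10 * MB)   # 100 GB stored as 10 MB files
small = rvm_bytes(100 * GB, 10 * KB)   # same 100 GB as 10 KB files
print(round(big / MB, 1), "MB vs", round(small / GB, 1), "GB")
```

The same 100 GB of data needs about a thousand times more RVM when stored as 10 KB files than as 10 MB files, which is why the file-count limit, not the data-size limit, is what bites here.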
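The multi-server / ip-alias arrangement described in the quote might look roughly like this on a 2001-era Linux box. A sketch only: the alias commands are standard ifconfig usage, but the addresses are hypothetical, and the exact mechanism for telling each Coda server its bind address (config file or command-line option) is not specified in the quoted message.

```shell
# Give the single physical interface a second address using a
# Linux interface alias. Addresses are hypothetical examples.
ifconfig eth0   192.168.1.10 netmask 255.255.255.0 up
ifconfig eth0:1 192.168.1.11 netmask 255.255.255.0 up

# Then run one codasrv instance per address, each bound to its own
# IP, serving its own subset of the volumes, with its own (smaller)
# RVM log and data segments. How the bind address is passed to the
# server is assumed here, not taken from the quoted message.
```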
For example, in a multi-user environment some files are bound to be
accessed more or less constantly, while other files are lying around
because they're needed only sometimes (things like unpacked source
packages, or the Java API docs (a million .html files :), would consume
lots of RVM space but on average would be rarely accessed). That's pretty
much my situation.

> For write access it is a bit more difficult, as in reality you want the
> maildelivery/webserver/etc. to run as a specific authenticated process.
> However, the fork/exec and setsid/setgid/setuid tricks that are used to
> drop root privileges confuse Coda's process identity tracking. The
> solution that was introduced by AFS, the process authentication group,
> wasn't accepted by Linus to become part of the kernel.
>
> The token expiration part is trivially solved by creating a cron job
> that runs "echo password | clog -pipe codausername".

Ah! I didn't realize that authentication in one process affects all
processes owned by the same user. This is great. The only problem with
the above, though, is that it's a security risk: the password shows up in
the list of processes (as do environment variables). One solution would
be for clog to be able to read a password from a file.

> There are several PAM modules. But since Coda currently still uses
> weak-encoding instead of true encryption it isn't recommended to use the
> Coda password for anything but Coda authentication.

Ok.

> It is possible to authenticate using kerberos, and then use the
> authenticator to obtain a Coda token. I'm not sure how to completely
> automate that.

Ah! I did spot a -kerberos4 and -kerberos5 switch to one of the tools
(just can't remember which off hand :)). I'll look into it.
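The process-listing concern can be worked around even without changes to clog, by keeping the password in a file readable only by the job's owner and redirecting it into clog's stdin. A sketch under stated assumptions: `clog -pipe` comes from the quoted message, but the file path, username, and cron schedule below are made-up examples.

```shell
# One-time setup: store the Coda password in a file readable only by
# the job's owner. Path and username are hypothetical.
#   printf '%s\n' 'secret' > /etc/coda/coda.pass
#   chmod 600 /etc/coda/coda.pass

# Crontab entry: refresh the token every 12 hours. The password is
# fed to clog's stdin via redirection, so it never appears on a
# command line in `ps` output or in the environment.
0 */12 * * *  clog -pipe codausername < /etc/coda/coda.pass
```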
Thanks a lot for all the pointers,

--
/ Peter Schuller, InfiDyne Technologies HB

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller_at_infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey_at_scode.org
E-Mail: [email protected] Web: http://www.scode.org

Received on 2001-06-05 05:52:28