> The security requirements for a revision control system are
> difficult to state in a general way. Different deployments
> are likely to have different user identity and authentication
> realms and mechanisms; different deployments are likely to
> impose different requirements about the structure and function
> of administrative controls over access requirements.
>
> One choice is to add it above the protocol translator:
>
> If we begin with a system *without* "readdir", but with transactional
> "mkdir", "rename", and "putfile" we can implement a robust "readdir"
> which reports only files and directories created by authorized clients
> and which provides validating clients with some measure of protection
> against files and directories maliciously removed. (This is
> more or less a matter of "transactionalizing" the management of
> .listing files).
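To make the quoted scheme concrete, here's a minimal sketch of that
"robust readdir", assuming only putfile/getfile/atomic rename from the
untrusted store. The Backend protocol, the shared-secret HMAC, and the
JSON .listing format are all stand-ins for illustration (real
validation would presumably use GPG signatures, not a shared secret):

```python
import hmac, hashlib, json
from typing import Protocol

class Backend(Protocol):
    """The only operations we assume the untrusted store provides."""
    def putfile(self, path: str, data: bytes) -> None: ...
    def getfile(self, path: str) -> bytes: ...
    def rename(self, src: str, dst: str) -> None: ...  # assumed atomic

SECRET = b"shared-client-secret"  # stand-in; real clients would use GPG

def _sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha1).hexdigest()

def write_listing(be: Backend, dirname: str, entries: list[str]) -> None:
    """Publish a directory listing transactionally: putfile to a temp
    name, then rename over ".listing" so readers never observe a
    half-written or missing listing."""
    payload = json.dumps(sorted(entries)).encode()
    body = json.dumps({"entries": sorted(entries), "mac": _sign(payload)})
    tmp = dirname + "/.listing,new"
    be.putfile(tmp, body.encode())
    be.rename(tmp, dirname + "/.listing")

def readdir(be: Backend, dirname: str) -> list[str]:
    """A "robust readdir": trust the signed .listing rather than the
    underlying filesystem, so entries silently dropped by a hostile
    store are detected instead of quietly going missing."""
    doc = json.loads(be.getfile(dirname + "/.listing"))
    payload = json.dumps(doc["entries"]).encode()
    if not hmac.compare_digest(doc["mac"], _sign(payload)):
        raise RuntimeError(dirname + "/.listing failed validation")
    return doc["entries"]
```

That catches forged or truncated listings, but a hostile store could
still replay an older signed .listing; closing that hole is why signing
the filesystem as a whole (below) matters.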
I'm skipping ahead a couple steps in your logic, but it sounds to me
like you're talking about one of two things:

Explicitly, it sounds like you're saying "let's move the signing to
another layer, where things are abstracted better." I strongly advise
against this, as that's going to incur a lot of signatures. Bazaar
already requires multiple signatures (5 signatures between "I like
that branch" and "I just committed my first local change"). That's
pure hell for people like me, who don't have the trust, time,
inclination or experience to properly evaluate a key escrow agent. In
clearer words: signed archives, no key agent, you're going to
experience a new way of hating life.

But I don't think that's where you're going (you hint at this, because
the framework you've set up is "the underlying filesystem can't be
trusted -- not even readdir").

Implicitly, I get the feeling that you're headed toward the idea of
"let's make a smart server and keep everything in a virtual
filesystem", though conventional wisdom says to avoid this (deleted
and modified files pockmark the filesystem with gaps that later get
filled with not-quite-right smaller files that "almost" fit).

But it couldn't be just any virtual filesystem -- it would have to be
a filesystem in which not only the individual "files" are signed, but
the filesystem as a whole is (think: bad guy wants to remove a
security fix).

So, for just a moment, let's take a step back and consider our
strengths as they pertain to the software we care most about (free
software, of course!) and the "tools" we have that other revision
control systems don't (such as mirroring)...

Imagine the following. As the god of arch statistics (such as there
is), I'll take a hip shot and say that for each archive in the wild
there are, on average, 1.3 copies (most public archives are a mirror
of a sekret one, and it's not rare that the supermirror is a mirror of
a mirror).

That looks to me an awful lot like RAID 1. That looks an awful lot
like RAID 1 to me. (I said that twice on purpose, so that it would be
read.) Unfortunately, in the current paradigm, there's no automated
way to checksum one copy against another.

Now is where I start going off the deep end. :) If we had smart
servers, then we could have the following:

* Dumb(er) clients that could ask archd to do heavy lifting on their
  behalf -- nifty things like arbitrary deltas.

* archd servers could propagate changes to one another *very*
  efficiently -- we could offer users propagation choices from "1:1"
  up to "send a pack of missing revisions to the slave archds every 10
  minutes".

(Here is where things get fun)

* A filesystem that, other than for raw storage, is independent of the
  underlying filesystem. Heck, you could even put the archivefs on a
  cfs mount for which you provided the password at boot (locking out
  non-archd processes).

* archds could keep an eye out for low system load. If they detect an
  idle system, they could build arbitrary deltas, ask other archds for
  the same arbitrary delta, and if there's no match, you've caught the
  bad guy being bad before he could propagate his screw-up. The more
  archds you have propagated your archive to, the less likely they can
  get away with something nasty. (See the sketch after this list.)

* Rather than gpg-signing everything up the wazoo, you gpg-sign a
  token that the archd passes you, which verifies your identity.

* archd can be reasonably transparent.
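To make that cross-checking concrete, here's a minimal sketch,
assuming a hypothetical archd that can hash its whole store and ask
peers for the same digest. The names (archive_digest, cross_check,
fetch_remote_digest) and the manifest-of-hashes layout are inventions
for illustration, not anything arch or an archd actually provides:

```python
import hashlib
from pathlib import Path

def archive_digest(root: Path) -> str:
    """Hash every file, then hash the sorted manifest of (path, hash)
    pairs. This one digest covers the filesystem as a whole: removing
    a single file (say, a security fix) changes the digest even though
    every remaining file still validates individually."""
    manifest = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            file_hash = hashlib.sha1(path.read_bytes()).hexdigest()
            manifest.append(f"{path.relative_to(root)} {file_hash}")
    return hashlib.sha1("\n".join(manifest).encode()).hexdigest()

def fetch_remote_digest(peer: str) -> str:
    """Placeholder for the archd-to-archd call; the wire protocol is
    left undefined in this sketch."""
    raise NotImplementedError(peer)

def cross_check(root: Path, peers: list[str]) -> list[str]:
    """Run during idle time: ask every peer archd for its digest of
    the same archive and report the ones that disagree. With ~1.3
    copies of each archive in the wild, a single honest mirror is
    enough to catch the tampering."""
    mine = archive_digest(root)
    return [peer for peer in peers if fetch_remote_digest(peer) != mine]
```

And gpg-signing just that one top-level digest would give you the
"filesystem as a whole is signed" property without signing everything
up the wazoo.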
With a dedicated archd server that is simple to build, configure and
install, we avoid the hell that Subversion users go through trying to
get subversion, WebDAV *and* authentication going all at the same
time.

> I realize that that's a very long essay but the point is: the idea of
> adding more hash functions is based on an architectural misconception.
> Instead of adding SHA1 in addition to MD5, it would be far better
> to make signing not depend on the checksum data.

I suppose a fair analogy would be getting robbed by somebody with a
set of lockpicks. Sure, you can add more locks, but that doesn't stop
them; it just slows them down.

The trusted-client thing that you mention -- going down that road
worries me a bit. I did a bit of thinking on this one when I was
running Freeciv (yeah, just a game, but client trust is an *enormous*
issue in games), and I've seen two ways to do this: either you dumb
down the client so much that it's little more than a user liaison for
the server backend, or you start talking about explicitly blessed
binaries (which destroys the concept of free software)...

In other words, sounds to me like you're talking "smart server".
That's how you support a unique trusted filesystem. That's how you
control access by the client to the data store. That's how you do a
lot of stuff.

-- 
James Blackwell      | Life is made of the stuff that hasn't killed
Tell someone a joke! | you yet.   - yours truly
----------------------------------------------------------------------
GnuPG (ID 06357400) AAE4 8C76 58DA 5902 761D 247A 8A55 DA73 0635 7400