Steve, are you sure that magic fs mkmount syntax works? I'm setting up my
first AFS cell, using 1.4.12, since I wanted to work with pre-compiled
binaries to start and keep things simple, and fs doesn't like that syntax:

    [r...@fhcore ~]# fs mkmount /afs/d.fh.nyc.us.boot.efs:root.afs/.d.fh.nyc.us.boot.efs root.cell -rw
    fs: mount points must be created within the AFS file system

(I forgot to create the rw mount point, so I tried the syntax which, if it
works, would make that step unnecessary :-P)

I can't find any documentation on this yet -- is this perhaps a feature of 1.5?
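
Unless I'm missing something, I'll fall back to the conventional Quick Start
sequence for now (same cell name as above; this assumes the client is not
running with -dynroot, so /afs really is root.afs):

    fs mkmount /afs/d.fh.nyc.us.boot.efs  root.cell
    fs mkmount /afs/.d.fh.nyc.us.boot.efs root.cell -rw
    fs setacl  /afs/d.fh.nyc.us.boot.efs  system:anyuser rl

If the magic syntax you meant is the dynroot-only /afs/.:mount/<cell>:<volume>
path I've seen mentioned, then presumably the rw mount could be created
without touching root.afs directly, but I haven't verified whether 1.4.12
supports that either.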

On Sat, Sep 25, 2010 at 12:11 AM, Steven Jenkins
<[email protected]> wrote:

> On Fri, Sep 24, 2010 at 1:28 PM, Phillip Moore
> <[email protected]> wrote:
> > I spent the morning working on this brain dump:
> >
> > http://docs.openefs.org/efs-core-docs/DevGuide/Future/OpenAFS.html
> >
> > Right now, there's a race on between the two sites that are trying to
> > be the first to bring up an EFS 3 domain, outside of my team (actually
> > -- the first other than me, personally, I think). One of those sites
> > wants to use EFS for OpenAFS environments, and there is nothing I
> > personally want more than to get to work with OpenAFS again, so I'm
> > rooting for them :-P
> >
> > Anyway, check out the doc if you have a few minutes. It's FAR from
> > complete, and really just a brain dump, but it touches on the bulk of
> > the issues we're going to have to figure out in order to make this
> > work. And make it work is precisely what I intend to do....
>
> I've put together some thoughts. I'd be happy to provide a git diff of
> your doc if you prefer. Otherwise, read on...
>
> Disclaimer: this was a brain dump. There is clearly more work/thinking
> that needs to occur.
>
> The three inter-related issues of EFS domains, Kerberos realms, and
> uid/gid mappings across domains and realms really come down to stitching
> those together. There are several projects in OpenAFS (existing or
> underway) that will probably help address this:
>
> - Existing cross-(Kerberos-)realm work in OpenAFS. Currently, there is
> some limited cross-realm support (documentation is in the krb.conf and
> krb.excl man pages for OpenAFS). By using that support, names from
> foreign realms can be treated as local. Assuming all AFS ids are in sync
> across each cell, one can then configure each cell to trust the other
> cells (assuming that a cell maps to a Kerberos realm).
>
> That's a pretty unrealistic scenario. It might be useful in some cases,
> but it won't be useful in the assumed more general case, where the cells
> have not been centrally managed and thus AFS ids are not in sync across
> cells.
>
> - PTS extensions (cf.
> https://datatracker.ietf.org/doc/draft-brashear-afs3-pts-extended-names/),
> which provide a mechanism to map among AFS cells and Kerberos realms, as
> well as help with inconsistent uids in those cells and realms.
>
> Based on that, we should be able to view N realms as a logical whole
> (e.g., by first defining a 'canonical' mapping of uids, then building a
> mapping database; note that with the krb.excl file we can exclude ids in
> certain realms, so a migration could conceivably happen in a controlled
> fashion).
>
> At a rough glance, these will get us very, very close. Someone should
> touch base with Derrick and do some proof-of-concept mappings to verify
> this. I don't know the status of that work -- Derrick mentioned today
> that he has some code, but it's not ready for anyone else to start
> playing with.
>
> - Various people have played with using AD (or LDAP) as the backing
> store for PTS. There are also other ways to solve this problem that have
> been discussed but aren't necessarily in the 'here's some code' stage.
>
> These projects may well play a part in a solution set to the
> cross-realm, cross-cell, inconsistent-uid-namespace problem.
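
Just to make the "assuming all AFS ids are in sync" case from the first
item above concrete: I read that as creating every user with the same pts
id in every cell, along the lines of (the names and ids here are made up)

    pts createuser -name wpmoore -id 20001 -cell cell-one.example.org
    pts createuser -name wpmoore -id 20001 -cell cell-two.example.org

which is exactly the kind of central management most existing cells won't
have had, so I agree it's unrealistic as a general solution.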

> Misc notes:
>
> 1- We shouldn't require uid/gid consolidation to occur as a prerequisite
> to adopting OpenEFS -- the organizational maturity bar for that is
> simply too high.
>
> 2- It's worth writing up a sample document describing how we want
> migration from your current multi-cell, multi-realm environment to EFS
> to occur.
>
> Creating mount points: more recent versions of OpenAFS (i.e., virtually
> anything that will be found in production) support dynamic mounting of
> volumes via a magic syntax (the exact syntax escapes me at the moment --
> I've not used it, only seen it mentioned a few times, and was unable to
> locate the actual syntax). For example,
>
>     /afs/example.org:user.wpmoore
>
> would be a path that would automagically mount the volume user.wpmoore,
> so the necessary fs mkmount could be done as follows:
>
>     fs mkmount /afs/example.org:user.wpmoore/some/path some.volume
>
> without requiring any special pre- and post-mount hacking.
>
> rsync: my understanding is that incremental vos dump/restore is quite a
> bit better now (if I recall correctly, at the 2008 OpenAFS workshop Ali
> Ferguson from Morgan said that Morgan was using it and that he was
> confident the bugs had been shaken out of it, but that was after
> numerous failed attempts over the years).
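
My reading of the vos man pages is that the incremental dump/restore cycle
looks roughly like this (the server, partition, and file names are
placeholders; -time is a "dump changes since" timestamp, with 0 meaning a
full dump):

    vos dump -id user.wpmoore -time 0 -file /tmp/user.wpmoore.full
    vos dump -id user.wpmoore -time "09/20/2010 00:00" -file /tmp/user.wpmoore.incr

    vos restore -server afs2 -partition /vicepa -name user.wpmoore \
        -file /tmp/user.wpmoore.full -overwrite full
    vos restore -server afs2 -partition /vicepa -name user.wpmoore \
        -file /tmp/user.wpmoore.incr -overwrite incremental

I haven't put that through its paces myself, though, so treat it as a
sketch rather than a recipe.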

> It would be useful to track the pros and cons of volumes being per
> domain or per cell. I don't think most people can make an argument
> either way (i.e., the number of people who can seriously discuss the
> tradeoffs is, well, tiny). I think we need to translate this into more
> non-insider language so that the various users can weigh in on how they
> would need this to work. Off the cuff, I don't see why anyone would
> really want per-domain over per-cell, but there may be some failure
> scenarios where it would be useful.
>
> Steven
>
> _______________________________________________
> EFS-dev mailing list
> [email protected]
> http://mailman.openefs.org/mailman/listinfo/efs-dev

_______________________________________________
EFS-dev mailing list
[email protected]
http://mailman.openefs.org/mailman/listinfo/efs-dev
