On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling wrote:
> On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
> > I disagree vehemently. automount is a disaster because you need to
> > synchronize changes with all those clients. That's not realistic.
> Really? I did it with NIS automount maps and 600+ clients back in 1991.
> Other than the obvious problems with open files, has it gotten worse since then?
Nothing's changed. Automounter + data migration -> rebooting clients
(or close enough to rebooting). I.e., outage.
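For context, the client-side dependency looks like this. A classic indirect automount map (hostnames and paths here are hypothetical) bakes the server name into a map that every client consumes:

```
# /etc/auto.master fragment (present on every client)
/home   auto.home

# auto.home map, distributed via NIS/LDAP
# "fileserver1" is a placeholder server name
*   -rw,hard,intr   fileserver1:/export/home/&
```

Migrating the data to another server means editing the map and getting every client to pick up the change; clients that already have the old path mounted (or files open on it) typically need an unmount or a reboot first, which is exactly the outage described above.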
> Storage migration is much more difficult with NFSv2, NFSv3, NetWare, etc.
But not with AFS. And spec-wise not with NFSv4 (though I don't know
if/when all NFSv4 clients will properly support migration, just that
the protocol and some servers do).
> With server-side, referral-based namespace construction that problem
> goes away, and the whole thing can be transparent w.r.t. migrations.
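As a concrete sketch of the server-side approach (hostnames hypothetical): Linux nfs-utils lets a server emit NFSv4 referrals via the `refer=` export option, so the namespace is stitched together on the server and clients follow it transparently:

```
# /etc/exports on the namespace (root) server
/export         *(ro,fsid=0)

# Clients walking into /export/data are referred to "newserver".
# Migrating the data means updating this one line, not every client.
/export/data    *(ro,refer=/export/data@newserver)
```

This is the NFSv4 fs_locations mechanism; the same attribute underlies migration, where the server points clients at the filesystem's new home.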
> Agree, but we didn't have NFSv4 back in 1991 :-) Of course, this is
> how one would design a new DFS today.
Indeed, that's why I built an automounter solution in 1996 (which is
still in use, I'm told). Although, to be fair, AFS already existed
then, was mature, and had a global namespace and data migration.
It's taken NFS that long to catch up...
> Almost any of the popular NoSQL databases offers this and more.
> The movement away from POSIX-ish DFS and storing data in
> traditional "files" is inevitable. Even ZFS is an object store at its core.
I agree, except that there are applications where large octet streams
are needed; HPC and media come to mind.
zfs-discuss mailing list