This is a small nit, but I think the partition proposal works a bit more like a mount point than your proposal does. When you mount a file system, the mount isn't transparent: two mounted file systems can have files with the same inode number, for example, and some operations, such as a rename across file system boundaries, aren't allowed.
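To make the inode point concrete, here's a minimal Java sketch, assuming a Unix-like system with a second file system mounted at /mnt/usb (a hypothetical path). Inode numbers are only unique within a single file system, so the two printed values can legitimately collide:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class InodeCollision {
        public static void main(String[] args) throws Exception {
            // Two paths assumed to live on different mounted file systems;
            // /mnt/usb is a hypothetical mount point.
            Path rootFs = Paths.get("/");
            Path otherFs = Paths.get("/mnt/usb");

            // "unix:ino" reads the inode number on POSIX systems. These
            // numbers are only unique per file system, so they may be equal.
            System.out.println(Files.getAttribute(rootFs, "unix:ino"));
            System.out.println(Files.getAttribute(otherFs, "unix:ino"));
        }
    }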
In your proposal, what happens if a client creates an ephemeral node on the remote ZK cluster? Who does the failure detection and cleanup? It also wasn't clear what happens when a client does a read on the remote ZK cluster. Does the read always get forwarded to the remote cluster? And what happens if the request to the remote cluster hangs? (See the sketch after the quoted proposal for a concrete version of these questions.)

thanks,
ben

On Thu, Jun 9, 2011 at 11:41 AM, Alexander Shraer <[email protected]> wrote:
> Hi,
>
> We're considering working on a new feature that will allow "mounting" part of
> the namespace of one ZK cluster into another ZK cluster. The goal is
> essentially to be able to partition a ZK namespace while preserving current
> ZK semantics as much as possible.
> More details are here:
> http://wiki.apache.org/hadoop/ZooKeeper/MountRemoteZookeeper
>
> It would be great to get your feedback and especially please let us know if
> you think your application can benefit from this feature.
>
> Thanks,
> Alex Shraer and Eddie Bortnikov
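Here is a minimal sketch of the ephemeral-node and read-forwarding questions, using the standard ZooKeeper Java client and assuming a hypothetical mount point /mnt/remote that maps into a remote cluster (the connect string and session timeout are placeholders, not part of the proposal):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class MountEphemeralSketch {
        public static void main(String[] args) throws Exception {
            // The session lives with whichever cluster the client
            // connected to (here, the local one).
            ZooKeeper zk = new ZooKeeper("local-zk:2181", 30000, event -> { });

            // /mnt/remote is a hypothetical mount point into a remote
            // cluster. The ephemeral node should die with the session --
            // but which cluster detects the session failure and deletes it?
            zk.create("/mnt/remote/lock", new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

            // A read under the mount point: is it forwarded to the remote
            // cluster on every call, and what happens if that request hangs?
            byte[] data = zk.getData("/mnt/remote/lock", false, null);

            zk.close();
        }
    }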
