> The auth plugin works at the znode level. The server-side authentication
> I was talking about is just to verify the authentication of a ZooKeeper
> client when creating/reading/changing znodes in ZooKeeper.
Ok, understood. Thanks for these details.
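For reference, here's a minimal sketch of what the stock "digest" scheme does with a client's credentials: the ACL id stored on a znode is derived as user:base64(sha1("user:password")), and the client supplies the raw pair via addAuthInfo. The username/password below are made up for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DigestId {
    // Derive the znode ACL id for the "digest" scheme:
    // user:base64(sha1("user:password")).
    static String generateDigest(String idPassword) {
        try {
            String user = idPassword.split(":")[0];
            byte[] sha = MessageDigest.getInstance("SHA-1")
                    .digest(idPassword.getBytes(StandardCharsets.UTF_8));
            return user + ":" + Base64.getEncoder().encodeToString(sha);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The client side would do:
        //   zk.addAuthInfo("digest", "alice:secret".getBytes());
        // and the server compares against an id like this one.
        System.out.println(generateDigest("alice:secret"));
    }
}
```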
> For cluster wide security, I think it is also important to use networking
> hardware security. In EC2, this corresponds to the security groups. For
> Linux itself, you do this using iptables.
That's the impression I had as well. Do you think it'd be too tricky
to implement an equivalent pluggable authentication scheme which would
operate at the server level? E.g. something that would allow using a
shared secret safely, or certificates.
I'm pondering the possibility of offering ZooKeeper embedded in
another system, so it'd be best if the system's security weren't
dependent on the network setup, which is left to the user who deploys
the packaged system.
> The basic idea is that you can lock down the network access to the cluster
> so that to access your ZK cluster, you actually have to be running on a
> correct machine.
> This doesn't satisfy the original need, but is an important defense-in-depth
> adjunct to it.
Makes perfect sense.
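For the archives, a sketch of what that lockdown could look like with iptables. The addresses are placeholders for the cluster peers; the ports are ZooKeeper's defaults (2181 client, 2888 quorum, 3888 leader election):

```shell
# Sketch only: allow just the cluster peers (example range
# 10.0.0.0/29) to reach the ZooKeeper ports, drop everyone else.
iptables -A INPUT -p tcp -s 10.0.0.0/29 -m multiport \
         --dports 2181,2888,3888 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 2181,2888,3888 -j DROP
```

In EC2 the equivalent would be a security group that opens those ports only to the group itself.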
> Another way to get connection level security on ZK access would be to use
> something like ssh or stunnel to allow access to the cluster which is
> otherwise completely locked down except for the ZK nodes talking to each
> other. This approach does meet the original requirements (I think).
I think so as well. For the same reasons outlined above, it'd be
fantastic to have the authentication system be independent of the
specific deployment environment, but this is definitely a viable
alternative otherwise. It also brings encryption as a plus.
Thanks for these ideas,