[
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113058#comment-17113058
]
Francis Christopher Liu commented on HBASE-11288:
-------------------------------------------------
{quote}
For example, I'm not impressed by HBASE-18095 while lots of other committers
really love it, because in our deployments we can also control how we deploy
zookeeper, not only HBase. Zookeeper supports a feature called observer node,
which can be thought of as a cache of the zk cluster.
{quote}
I see. I agree with your sentiment, and we do that too where appropriate. In
fact we were about to go down a similar route with zookeeper as you described,
except for limitations in early zookeeper versions prior to 3.5(?) that did not
have the "read only"(?) mode. The problem for us was that without that mode,
connections to the observer still required quorum agreement even though the
clients were only doing reads. That was unacceptable for us, since that was one
of the things we observed hammering the ensemble, so we had to go with a
different approach.
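For reference, the client side of that read-only mode looks roughly like the
sketch below. This is a minimal sketch, assuming the servers are started with
-Dreadonlymode.enabled=true; the host and the znode path (HBase's default meta
location) are illustrative.
{code:java}
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ReadOnlyZkClient {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // The last constructor argument (canBeReadOnly = true) opts this client
    // into read-only sessions; without it, connecting still needs quorum.
    ZooKeeper zk = new ZooKeeper("observer-host:2181", 30000,
        event -> {
          Watcher.Event.KeeperState s = event.getState();
          if (s == Watcher.Event.KeeperState.SyncConnected
              || s == Watcher.Event.KeeperState.ConnectedReadOnly) {
            connected.countDown();
          }
        },
        true);
    connected.await();
    // Reads succeed even on a read-only session; a write would fail with
    // KeeperException.NotReadOnlyException.
    byte[] data = zk.getData("/hbase/meta-region-server", false, null);
    System.out.println("read " + data.length + " bytes");
    zk.close();
  }
}
{code}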
More recently we have also implemented a REST Registry, because of security
requirements along the lines of what [~apurtell] described. Hence I too find
that it makes sense to abstract zookeeper away from the user so we have better
control over the attack surface. I suspect this requirement is going to become
more common.
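To make the idea concrete, here is a hypothetical sketch of the kind of
REST-backed lookup such a registry could delegate to instead of talking to
zookeeper directly. The endpoint path and payload format are invented for
illustration; this is not the actual implementation we run.
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestMetaLocator {
  private final HttpClient http = HttpClient.newHttpClient();
  private final URI endpoint;

  public RestMetaLocator(String baseUrl) {
    // e.g. "https://registry.example.com" fronted by a load balancer, so
    // the attack surface is a single audited HTTPS service.
    this.endpoint = URI.create(baseUrl + "/registry/meta-locations");
  }

  /** Returns the raw response body, e.g. JSON describing meta locations. */
  public String fetchMetaLocations() throws Exception {
    HttpRequest req = HttpRequest.newBuilder(endpoint).GET().build();
    HttpResponse<String> resp =
        http.send(req, HttpResponse.BodyHandlers.ofString());
    if (resp.statusCode() != 200) {
      throw new IllegalStateException("registry returned " + resp.statusCode());
    }
    return resp.body();
  }
}
{code}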
{quote}
Then for me, it is also very easy to just implement a simple cache server, to
pull all the content in root from the active master and cache it in memory,
then to serve clients as the 'master' for locating meta. And I could also make
use of lvs, to spread load across multiple cache servers. And for users who
cannot control things other than HBase, we could implement something like
HBASE-18095, to let backup masters serve the locating meta requests.
{quote}
Yeah, I think we would need an implemented solution for users as well; it would
seem a bit lopsided to expect users who worry about root traffic to implement
their own caching service.
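As a strawman of what that implemented solution could look like, below is a
sketch of the cache-server idea from your comment: refresh locations from the
active master in the background and serve the cached copy over HTTP. How the
refresh talks to the master is left abstract here, and stateless instances of
this could sit behind lvs.
{code:java}
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class RootLocationCacheServer {
  private final AtomicReference<String> cached = new AtomicReference<>("{}");

  public void start(int port, Supplier<String> fetchFromActiveMaster)
      throws Exception {
    // Refresh the in-memory copy in the background; clients never hit the
    // active master (or zookeeper) directly.
    ScheduledExecutorService refresher =
        Executors.newSingleThreadScheduledExecutor();
    refresher.scheduleAtFixedRate(
        () -> cached.set(fetchFromActiveMaster.get()), 0, 10, TimeUnit.SECONDS);

    HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
    server.createContext("/root-locations", exchange -> {
      // Serve whatever was last fetched; no per-request master traffic.
      byte[] body = cached.get().getBytes(StandardCharsets.UTF_8);
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
  }
}
{code}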
{quote}
But if you use an hbase:root table, then you will expose the whole region API
to users. The API is powerful but hard to simulate, so it will be really hard
to spread the load across multiple servers.
{quote}
If that is something you need, we could still have a root table but not expose
the region API to users (possibly like we already do for mutations), exposing
just the API you described? Also, I'm wondering: if we don't expose the scan
APIs at all, do we still plan to have hbck for root wherever it is located
(e.g. in the master)?
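To illustrate what I mean by not exposing the region API, a hypothetical
lookup-only surface might be as narrow as this (all names here are invented
for illustration, not an actual HBase interface):
{code:java}
// Clients can resolve meta locations without any scan or mutate access to
// the underlying root table.
public interface RootLocator {
  /** Locate the meta region covering the given row key. */
  MetaLocation locateMeta(byte[] row);

  /** Immutable location result: server plus region name. */
  final class MetaLocation {
    public final String serverName;
    public final byte[] regionName;

    public MetaLocation(String serverName, byte[] regionName) {
      this.serverName = serverName;
      this.regionName = regionName;
    }
  }
}
{code}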
{quote}
Enable read replicas on the root table? Seems a bit overkill, and read replicas
themselves are still not stable enough, I suppose...
{quote}
It would seem overkill if we only did it for root, but we do it for meta today?
Or are people no longer using it for meta? Is there no developer support for
getting it stable?
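For reference, the client-side mechanics that read replicas rely on today look
like the minimal sketch below; the table name and row are placeholders, and the
table would need region replication enabled server-side.
{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineReadExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("some_table"))) {
      Get get = new Get(Bytes.toBytes("some_row"));
      // TIMELINE lets any replica answer, trading freshness for
      // availability; the default STRONG only asks the primary.
      get.setConsistency(Consistency.TIMELINE);
      Result result = table.get(get);
      // isStale() tells the caller whether a secondary replica answered.
      System.out.println("stale=" + result.isStale());
    }
  }
}
{code}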
Please let me know what you think.
> Splittable Meta
> ---------------
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
> Issue Type: Umbrella
> Components: meta
> Reporter: Francis Christopher Liu
> Assignee: Francis Christopher Liu
> Priority: Major
>