[ https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112539#comment-17112539 ]

Andrew Kyle Purtell edited comment on HBASE-11288 at 5/20/20, 7:03 PM:
-----------------------------------------------------------------------

bq. But anyway, the ConnectionRegistry is pluggable, so for users who cannot 
control things other than HBase, they could use the new registry implementation 
in HBASE-18095 to reduce the load on zookeeper.

[~zhangduo] If we are going to discuss this at this level of detail, I feel I 
have to enumerate why we built it, which doesn't have anything to do with load 
per se, just to clarify:
- For configuring fail-fast behavior, having to reason about zk connection 
configuration particulars in addition to HBase RPC configuration is doable, but 
clumsy, limiting, and not always done correctly (a rough sketch of the two 
layers of knobs follows this list). It matters when you operate at scale and 
have a number of different internal customers with different expectations 
about retry or fail-fast behavior. A monolithic deploy / service organization 
may not have this problem, which is fine; the new registry is optional. 
- Exposing zk to clients is a security problem, because ZK's security model is 
problematic. (3.5 and up can do better with TLS, requiring successful client 
and server cert-based auth before accepting any requests, but we don't actually 
support the ZK TLS transport out of the box.) Operators with this concern can 
now isolate the ZK service from end users with network or host ACLs, and HBase 
can still serve those clients. 
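
To make the first point concrete, here is a rough Java sketch of the two 
layers of client-side knobs involved. The RPC and zookeeper property names are 
the usual ones; the registry property and MasterRegistry class in the 
commented-out lines are my assumptions about the HBASE-18095 work on the 2.3+ 
line, so treat them as illustrative rather than authoritative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FailFastClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // HBase RPC layer: the knobs most people remember to tune for fail-fast.
    conf.setInt("hbase.client.retries.number", 2);
    conf.setInt("hbase.rpc.timeout", 5000);               // ms per RPC
    conf.setInt("hbase.client.operation.timeout", 10000); // ms per operation

    // ZooKeeper layer: with the default ZK-based registry these have to be
    // tuned as well, or the client can sit in ZK retries long after the RPC
    // budget above has been spent.
    conf.setInt("zookeeper.session.timeout", 5000);
    conf.setInt("zookeeper.recovery.retry", 1);

    // With the HBASE-18095 registry the ZK knobs drop out of the client's
    // picture entirely. Property name and class are assumptions here; check
    // the version you actually run.
    // conf.set("hbase.client.registry.impl",
    //     "org.apache.hadoop.hbase.client.MasterRegistry");
    // conf.set("hbase.masters", "master1:16000,master2:16000");

    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // ... client work ...
    }
  }
}
{code}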



> Splittable Meta
> ---------------
>
>                 Key: HBASE-11288
>                 URL: https://issues.apache.org/jira/browse/HBASE-11288
>             Project: HBase
>          Issue Type: Umbrella
>          Components: meta
>            Reporter: Francis Christopher Liu
>            Assignee: Francis Christopher Liu
>            Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)
