[ https://issues.apache.org/jira/browse/HBASE-29083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17923639#comment-17923639 ]

Wellington Chevreuil commented on HBASE-29083:
----------------------------------------------

{quote}Not necessarily have to restart it, I'm not sure. I think it's 
acceptable to require restart, but if we can do it without it, would be great.
{quote}
Got it. Since we are defining the behaviour via configuration, we could avoid the 
restart requirement by making the flag reloadable through dynamic configuration.
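For instance, the master and regionservers could pick up the flag through the 
existing online configuration change mechanism (a ConfigurationObserver refreshed 
by the update_config shell command). A minimal sketch follows; only the config key 
comes from this ticket, the tracker class and its wiring are illustrative 
assumptions:
{code:java}
package org.apache.hadoop.hbase.example;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.conf.ConfigurationObserver;

/**
 * Hypothetical tracker for the proposed hbase.global.readonly.enabled flag.
 * Registered with the server's ConfigurationManager so that 'update_config'
 * can flip the mode without a restart.
 */
public class GlobalReadOnlyTracker implements ConfigurationObserver {

  // Config key proposed in HBASE-29083.
  public static final String READONLY_KEY = "hbase.global.readonly.enabled";
  public static final boolean READONLY_DEFAULT = false;

  private volatile boolean readOnly;

  public GlobalReadOnlyTracker(Configuration conf) {
    this.readOnly = conf.getBoolean(READONLY_KEY, READONLY_DEFAULT);
  }

  /** Invoked when the configuration is reloaded online. */
  @Override
  public void onConfigurationChange(Configuration newConf) {
    this.readOnly = newConf.getBoolean(READONLY_KEY, READONLY_DEFAULT);
  }

  /** Checked by write paths before accepting a mutation. */
  public boolean isReadOnly() {
    return readOnly;
  }
}
{code}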
{quote}Do clusters participating on a "read replica" group know about each 
other? 
Yet I don't see a reason for why they should, so hopefully we're fine without 
such synchronization.

How can we validate that no more than one participating cluster is in 
non-readonly mode?
One idea: active cluster "registers" itself by adding a file somewhere to the 
common storage tree which can be detected by others and prevent starting in 
read-write mode.
{quote}
The file-based register sounds like a good idea. Maybe client writes could rely on 
this file too, to figure out the current active cluster. Have you guys already 
planned how the failover would be implemented? Maybe some builtin command that 
flips the config value in each cluster and then also updates this file with the 
new active cluster?
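
To make the registration idea concrete, here is a rough sketch assuming a marker 
file under the shared storage root, written via the Hadoop FileSystem API. The 
class, the file name, and the failover flow are illustrative assumptions, not the 
agreed design:
{code:java}
package org.apache.hadoop.hbase.example;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical file-based registry of the single active (read-write) cluster. */
public class ActiveClusterRegistry {

  private final FileSystem fs;
  private final Path activeMarker; // e.g. <shared-root>/ACTIVE_CLUSTER

  public ActiveClusterRegistry(Configuration conf, Path sharedRoot) throws IOException {
    this.fs = sharedRoot.getFileSystem(conf);
    this.activeMarker = new Path(sharedRoot, "ACTIVE_CLUSTER");
  }

  /**
   * Called by a cluster starting in read-write mode. Fails if another cluster
   * has already registered, preventing two simultaneously active clusters.
   */
  public void registerActive(String clusterId) throws IOException {
    // create(path, overwrite=false) fails if the marker already exists.
    try (FSDataOutputStream out = fs.create(activeMarker, false)) {
      out.write(clusterId.getBytes(StandardCharsets.UTF_8));
    }
  }

  /** Used by read-only clusters (or clients) to discover the current active cluster. */
  public String currentActive() throws IOException {
    if (!fs.exists(activeMarker)) {
      return null;
    }
    try (FSDataInputStream in = fs.open(activeMarker)) {
      return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
  }

  /**
   * Rough failover flow: the old active cluster flips to read-only and releases
   * the marker, then the new active registers itself before going read-write.
   */
  public void failoverTo(String newClusterId) throws IOException {
    fs.delete(activeMarker, false);
    registerActive(newClusterId);
  }
}
{code}
Note that create-with-no-overwrite is atomic on HDFS, but on object stores a 
different primitive (or an external lock) may be needed to make the registration 
race-free.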

> Add global read-only mode to HBase
> ----------------------------------
>
>                 Key: HBASE-29083
>                 URL: https://issues.apache.org/jira/browse/HBASE-29083
>             Project: HBase
>          Issue Type: Sub-task
>          Components: master
>            Reporter: Andor Molnar
>            Assignee: Anuj Sharma
>            Priority: Major
>
> Implement *read-only* mode for HBase. It can be set at cluster start or via a 
> rolling restart.
> New config setting:
> |*Config*|*Default*|*Explanation*|
> |hbase.global.readonly.enabled|false|Puts the entire cluster into read-only mode.|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
