chrajeshbabu commented on PR #5167:
URL: https://github.com/apache/hbase/pull/5167#issuecomment-1507462001

   > Thank you @chrajeshbabu. It seems the changes make sense from the upgrade viewpoint. On the other hand, if this rare scenario were to happen on a healthy 2.x cluster, does this change make any difference?
   >
   > If this happens on a 2.x cluster, what would now be different? Only that meta would be assigned on any random server, correct?
   
   I tried two scenarios on a healthy 2.x cluster with this change:
   1) Removed the zookeeper data alone and restarted the cluster; everything came up properly.
   2) Removed both the zookeeper data and the master data, so that we hit the same scenario of the init meta procedure. The meta region was assigned to a random server, the regions of one server came up properly, and the remaining servers became unknown (this is not related to this issue; I will check why it happens and work on it in another JIRA). I recovered those with the hbck2 scheduleRecoveries option and the cluster is normal.
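   For reference, a sketch of the hbck2 recovery step mentioned above. The jar path and the server name below are placeholders, not values from this cluster; substitute the actual HBCK2 jar and the ServerName reported as unknown in the Master UI/logs:

   ```shell
   # Schedule ServerCrashProcedures (SCPs) for servers stuck in UNKNOWN state,
   # so their regions get reassigned. Requires a running cluster.
   # <hbase-hbck2.jar> and the server name are placeholders for your environment.
   hbase hbck -j <hbase-hbck2.jar> scheduleRecoveries \
     regionserver-1.example.com,16020,1681200000000
   ```

   The command should print the pids of the scheduled procedures; once those procedures complete, the regions from the unknown servers come back online.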
   
   Even if we throw an exception instead, we still need to delete the master data, because the failed init meta procedure prevents the master from coming up, and then follow one of these steps:
   1) Create the meta server znode, which can also lead to the unknown-servers issue (we should use hbck2 to recover).
   2) Delete the meta table data in HDFS and rebuild it completely, which is error prone; the exact state of meta may not be rebuilt in some cases because table states are missing in zookeeper, etc.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.