[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-08-01 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11594:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.98+

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: hbase-11594-0.98.patch, hbase-11594.patch


 This issue happens when a RS doesn't have any WALs to replay: the master 
 immediately finishes recovery for the RS while a region in recovery is still 
 opening, so the recovering-regions znode is removed before the region server 
 gets to update it. 
 Below is the exception from the region server log:
 {noformat}
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for 
 /hbase/recovering-regions/20fcfad9746b3d83fff84fb773af6c80/h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
 at 
 org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:407)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:878)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:928)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:922)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.updateRecoveringRegionLastFlushedSequenceId(HRegionServer.java:4560)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1780)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:325)
 {noformat} 
 The following is the related master log:
 {noformat}
 2014-03-18 11:27:14,192 DEBUG 
 [MASTER_SERVER_OPERATIONS-h2-suse-uns-1395117052-hbase-4:6-3] 
 master.DeadServer: Finished processing 
 h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633
 2014-03-18 11:27:14,199 INFO  
 [M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
 master.MasterFileSystem: Log dir for server 
 h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633 does not 
 exist
 2014-03-18 11:27:14,203 INFO  
 [M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
 master.SplitLogManager: dead splitlog workers 
 [h2-suse-uns-1395117052-hbase-7.cs1cloud.internal,60020,1395141895633]
 2014-03-18 11:27:14,204 DEBUG 
 [M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
 master.SplitLogManager: Scheduling batch of logs to split
 2014-03-18 11:27:14,206 INFO  
 [M_LOG_REPLAY_OPS-h2-suse-uns-1395117052-hbase-4:6-0] 
 master.SplitLogManager: started splitting 0 logs in []
 {noformat}
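 For context, a minimal, hypothetical sketch (not the committed patch; the class 
 and helper names below are assumptions) of how the RS-side update could treat 
 the already-removed recovering-regions znode as benign instead of letting the 
 NoNodeException escape from PostOpenDeployTasksThread:
 {noformat}
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.ZooKeeper;

 // Hypothetical sketch only: the real code path is
 // HRegionServer.updateRecoveringRegionLastFlushedSequenceId -> ZKUtil.setData.
 public class RecoveringRegionUpdateSketch {

   static void updateLastFlushedSequenceId(ZooKeeper zk, String znodePath,
       byte[] seqIdBytes) throws KeeperException, InterruptedException {
     try {
       // Version -1 matches any znode version; setData throws
       // KeeperException.NoNodeException if the znode no longer exists.
       zk.setData(znodePath, seqIdBytes, -1);
     } catch (KeeperException.NoNodeException e) {
       // The master already finished recovery for this server (it had no WALs
       // to replay) and removed the znode, so there is nothing left to update;
       // swallow the exception rather than failing the region open.
     }
   }
 }
 {noformat}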





[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-31 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11594:
---

Fix Version/s: (was: 0.98.4)
   2.0.0
   0.98.5

Fix versions were wrong; fixing.

+1. Will commit later today unless there are objections. Ping [~enis] for branch-1.

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: hbase-11594.patch







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-31 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11594:
---

Attachment: hbase-11594-0.98.patch

Patch for 0.98; a simple fixup.

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.5, 2.0.0

 Attachments: hbase-11594-0.98.patch, hbase-11594.patch







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Fix Version/s: 0.98.4
   0.99.0

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Attachment: hbase-11594.patch

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4

 Attachments: hbase-11594.patch







[jira] [Updated] (HBASE-11594) Unhandled NoNodeException in distributed log replay mode

2014-07-25 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-11594:
--

Status: Patch Available  (was: Open)

 Unhandled NoNodeException in distributed log replay mode 
 -

 Key: HBASE-11594
 URL: https://issues.apache.org/jira/browse/HBASE-11594
 Project: HBase
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.99.0, 0.98.4

 Attachments: hbase-11594.patch





--
This message was sent by Atlassian JIRA
(v6.2#6252)