[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-11-08 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18628:
---
Fix Version/s: (was: 1.5.0)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.4.0, 2.0.0-alpha-3
>
> Attachments: HBASE-18628.branch-1.v5.patch, HBASE-18628.patch, 
> HBASE-18628.v2.patch, HBASE-18628.v3.patch, HBASE-18628.v4.patch, 
> HBASE-18628.v5.patch, HBASE-18628.v5.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
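> // The CAS only succeeds while 'nodes' is still null; nothing inside this loop
> // clears it, so if the consuming thread never resets the reference back to null,
> // the EventThread spins on the 20ms sleep forever.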
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs; it just keeps looping and looping. 
> The last relevant line from the watcher in the logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from the permission watcher's 
> {{nodeChildrenChanged}} handler.
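> As a standalone illustration of that failure mode (a minimal sketch in plain 
> Java, not HBase code; the class and method names are made up), the pattern is a 
> single-slot mailbox: the producer can only park a new list once the consumer has 
> emptied the slot, so if the consumer never drains it, the producer spins forever:
> {code}
> import java.util.Arrays;
> import java.util.List;
> import java.util.concurrent.atomic.AtomicReference;
>
> public class SingleSlotHandoffDemo {
>   // One-element "mailbox": null means empty, non-null means a list is waiting.
>   private static final AtomicReference<List<String>> nodes = new AtomicReference<>();
>
>   // Try to park nodeList, retrying at most maxAttempts times. The real loop has
>   // no attempt limit, which is why the EventThread never gets unstuck.
>   static boolean offer(List<String> nodeList, int maxAttempts) throws InterruptedException {
>     for (int i = 0; i < maxAttempts; i++) {
>       if (nodes.compareAndSet(null, nodeList)) {
>         return true;    // slot was empty, hand-off succeeded
>       }
>       Thread.sleep(20); // same 20ms backoff as the snippet above
>     }
>     return false;       // slot was never drained
>   }
>
>   public static void main(String[] args) throws InterruptedException {
>     // The first hand-off succeeds because the slot starts out empty.
>     System.out.println("first offer:  " + offer(Arrays.asList("acl-node-1"), 10));
>     // Nothing consumed the first list, so the second hand-off spins and gives up
>     // after 10 tries; on the EventThread it would spin forever.
>     System.out.println("second offer: " + offer(Arrays.asList("acl-node-2"), 10));
>   }
> }
> {code}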
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to 
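> As a rough sketch of that executor direction (assumed shape and illustrative 
> names, not the actual HBASE-18628 patch): the ZK callback only enqueues the 
> refresh on a dedicated single-threaded executor and returns immediately, so the 
> EventThread is never blocked waiting on another thread.
> {code}
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> public class AsyncAclRefresher {
>   // Single worker so refreshes stay ordered; daemon so it never blocks shutdown.
>   private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
>     Thread t = new Thread(r, "zk-permission-watcher");
>     t.setDaemon(true);
>     return t;
>   });
>
>   // Called from the ZK EventThread; only queues the work and returns right away.
>   public void nodeChildrenChanged(List<String> nodeList) {
>     executor.execute(() -> refreshNodes(nodeList));
>   }
>
>   private void refreshNodes(List<String> nodeList) {
>     // In the real watcher this would re-read each child znode under /hbase/acl
>     // and update the local permissions cache.
>     for (String node : nodeList) {
>       System.out.println("refreshing ACL node " + node);
>     }
>   }
> }
> {code}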

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18628:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.5.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Change looks ok, and the ZK, procedure, and security unit tests (the ones most 
likely to show timing or functional issues) pass. Pushed to branch-1.4 and branch-1.

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-22 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Fix Version/s: 1.4.0

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18628:
--
Attachment: HBASE-18628.v5.patch

Retry to see if the failures are related.

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.branch-1.v5.patch

Attaching branch-1 patch.

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Patch Available  (was: Open)

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.v5.patch

v5: Retry v4. Also, actually fix the whitespace this time. :\

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Open  (was: Patch Available)

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Fix Version/s: 2.0.0-alpha-3
   3.0.0

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Patch Available  (was: Open)

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.v4.patch

v4:
Actually apply the correct patch, since I put up the wrong file for v3

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Open  (was: Patch Available)

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.v3.patch

v3:
* fix empty if block
* verify whitespace changes

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, HBASE-18628.v2.patch, 
> HBASE-18628.v3.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Patch Available  (was: Open)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, HBASE-18628.v2.patch, 
> HBASE-18628.v3.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Open  (was: Patch Available)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, HBASE-18628.v2.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe 
> 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Patch Available  (was: Open)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, HBASE-18628.v2.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe 
> 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.v2.patch

v2:
- Make sure we clear the interrupted flag, since I'm not sure whether the 
executor reuses threads or not.
- Check the future for null, since this could be the first event, or the 
executor could already be shut down.
- Updated the log message to be more general and more actionable.
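
For illustration, here is a minimal sketch of what those two guards might look like in the submit path, assuming a single-threaded executor and a field that tracks the in-flight refresh. The names {{RefreshSubmitSketch}}, {{submitRefresh}}, {{refreshFuture}}, and {{refreshNodes}} are hypothetical, not taken from the attached patch.

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the v2 guards; names and structure are assumptions,
// not the actual HBASE-18628 patch.
public class RefreshSubmitSketch {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private Future<?> refreshFuture; // null before the first event

  public synchronized void submitRefresh(final List<String> nodeList) {
    // Null check: this could be the first event, or the executor could
    // already be shut down with no refresh outstanding.
    if (refreshFuture != null) {
      refreshFuture.cancel(true); // preempt the previous, possibly running refresh
    }
    refreshFuture = executor.submit(new Runnable() {
      @Override
      public void run() {
        try {
          refreshNodes(nodeList);
        } finally {
          // Clear the interrupted flag so a reused worker thread starts the
          // next refresh with a clean interrupt status.
          Thread.interrupted();
        }
      }
    });
  }

  private void refreshNodes(List<String> nodeList) {
    for (String node : nodeList) {
      if (Thread.currentThread().isInterrupted()) {
        return; // a newer event preempted this refresh
      }
      // ... re-read the ACL data for 'node' here ...
    }
  }
}
{code}

The cancel-then-submit ordering means the newest ACL change always wins, without the hand-rolled sleep loop shown in the description below.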

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, HBASE-18628.v2.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Open  (was: Patch Available)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe 
> we'd want to pull the 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Status: Patch Available  (was: Open)

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe 
> we'd want to pull the 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: HBASE-18628.patch

Modifying code and redeploying to this particular cluster is kind of a 
troublesome process, so I'm not able to easily add additional debug statements.

I had also reached the same conclusion that something is stuck in the CAS 
there, so why not avoid it entirely? Here's a strawman patch (no new tests) 
that does pre-emption using Java features instead of rolling our own method.
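
To make the "pre-emption using Java features" idea concrete, here is a small, self-contained demo (not the attached patch) of the underlying primitive: cancelling the previous {{Future}} with {{cancel(true)}} interrupts an in-flight task, so only the most recent event's work runs to completion. The class and method names are illustrative only.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Toy demo of Future-based preemption: each new "event" cancels the previous
// refresh and submits a fresh one, so only the latest event's work completes.
public class PreemptionDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> previous = null;

    for (int event = 1; event <= 3; event++) {
      if (previous != null) {
        previous.cancel(true); // interrupt the in-flight refresh, if any
      }
      final int id = event;
      previous = executor.submit(new Runnable() {
        @Override
        public void run() {
          try {
            System.out.println("refresh " + id + " started");
            Thread.sleep(500); // stand-in for a slow ACL re-read
            System.out.println("refresh " + id + " finished");
          } catch (InterruptedException e) {
            System.out.println("refresh " + id + " preempted");
          }
        }
      });
      Thread.sleep(100); // the next event arrives before the refresh completes
    }

    executor.shutdown();
    executor.awaitTermination(5, TimeUnit.SECONDS);
  }
}
{code}

Only the last refresh prints "finished"; the earlier ones are either interrupted mid-sleep or never started. That is the behavior the ZK event handler wants: a newer ACL change invalidates whatever refresh was still in flight.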

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Priority: Critical
> Attachments: HBASE-18628.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and 

[jira] [Updated] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

2017-08-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18628:
--
Attachment: jstack

Attaching jstack from the RS.

> ZKPermissionWatcher blocks all ZK notifications
> ---
>
> Key: HBASE-18628
> URL: https://issues.apache.org/jira/browse/HBASE-18628
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Mike Drob
>Priority: Critical
> Attachments: jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a 
> branch-2-based build, but I think the problem will affect branch-1 as well. I'm not 
> able to easily reproduce the issue, but it will usually come up within an 
> hour on a given cluster that I have, at which point the problem persists 
> until an RS restart. I've been seeing the problem and paying attention for 
> maybe two months, but I suspect it's been happening much longer than that.
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck 
> on a permissions update and not be able to process new notifications. This 
> happens to also block flush and snapshot, which is how we found it.
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x7f0b92644000 
> nid=0x6e69 waiting on condition [0x7f0b6730f000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop - but 
> it never gets past the condition.
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
> Thread.sleep(20);
>   } catch (InterruptedException e) {
> LOG.warn("Interrupted while setting node list", e);
> Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs, it just keeps looping and looping. 
> The last relevant line from the watcher in logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: 
> regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, 
> baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
> state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from permission watcher's 
> {{nodeChildrenChanged}} handler.
> The separate thread introduced in HBASE-14370 is present, but not doing 
> anything. And this event hasn't gotten to the part where it splits off into a 
> thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 
> tid=0x01750800 nid=0x6fd9 waiting on condition [0x7f0b5dce5000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007436ecea0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all 
> complementary. It might be useful to file subtasks or new issues for some of 
> the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go 
> away, but it's only relevant to branch-2 and master, and doesn't fix anything 
> on branch-1. Also, Permissions updates would still get stuck, preventing 
> future permissions updates. I think this is important long term for the 
> robustness of the system, but not a viable short term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe 
> we'd want to pull the Executor out of ZKPW, but that's not