[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737230#comment-13737230 ]

Pablo Medina commented on HBASE-9087:
-------------------------------------

When are you planning to release hbase 0.94.11?

Handlers being blocked during reads
-----------------------------------
                 Key: HBASE-9087
                 URL: https://issues.apache.org/jira/browse/HBASE-9087
             Project: HBase
          Issue Type: Bug
          Components: Performance
    Affects Versions: 0.94.7, 0.95.1
            Reporter: Pablo Medina
            Assignee: Elliott Clark
             Fix For: 0.98.0, 0.95.2, 0.94.11
         Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch

I'm having a lot of handlers (90-300 approx.) being blocked when reading rows. They are blocked during changedReaderObserver registration. Lars Hofhansl suggests changing the implementation of changedReaderObserver from a CopyOnWriteList to a ConcurrentHashMap. Here is a stack trace:

"IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000 nid=0x2244 waiting on condition [0x7ff51fefd000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for 0xc5c13ae8 (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
        at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
        at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
        at java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
        at java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
        at org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:138)
        at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:3755)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
        at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
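The patch itself isn't quoted in this thread, but the suggested change can be sketched in isolation: replace the `CopyOnWriteArraySet` that `Store.addChangedReaderObserver` adds to with a set backed by `ConcurrentHashMap`. The field and interface names below are illustrative stand-ins, not the actual HBase code.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class ObserverSetSketch {
    // Marker interface standing in for HBase's ChangedReadersObserver.
    interface ChangedReadersObserver {}

    // Before: CopyOnWriteArraySet.add() copies the whole backing array
    // under one ReentrantLock, so concurrent scanner opens queue up on it
    // (this is the lock visible in the stack trace above).
    static final Set<ChangedReadersObserver> before =
        new CopyOnWriteArraySet<ChangedReadersObserver>();

    // After: a ConcurrentHashMap-backed set; add() only locks a single
    // hash segment, so concurrent registrations rarely contend.
    static final Set<ChangedReadersObserver> after =
        Collections.newSetFromMap(new ConcurrentHashMap<ChangedReadersObserver, Boolean>());

    public static void main(String[] args) {
        ChangedReadersObserver o = new ChangedReadersObserver() {};
        // Both variants keep Set semantics: duplicate adds are rejected.
        System.out.println(before.add(o) && after.add(o));   // true
        System.out.println(!before.add(o) && !after.add(o)); // true
    }
}
```

Both variants preserve the add/remove/iterate contract the observer registry needs; only the locking granularity changes.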
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737344#comment-13737344 ]

Lars Hofhansl commented on HBASE-9087:
--------------------------------------

This week. 0.94.10RC0 was 7/19 and I am shooting for a monthly release.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13732288#comment-13732288 ]

Pablo Medina commented on HBASE-9087:
-------------------------------------

I tested the patch with my workload and it improved my response times by 10%. I'm not seeing the same rate of blocked handlers during my test as in 0.94.7. I'm wondering why you guys didn't see any improvement in your test cases. My test case consists of reading 1.5 million keys per minute over 3 tables (1 cf per table). So I think this scenario puts a lot of pressure on the Stores opening scanners at the server side, with the consequence of generating contention on that CopyOnWriteSet in 0.94.7. What do you guys think about it?
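Pablo's contention hypothesis is easy to reproduce in isolation: many handler threads calling add()/remove() on a CopyOnWriteArraySet serialize on its single lock, while a ConcurrentHashMap-backed set lets them proceed mostly in parallel. A minimal stress sketch (thread and operation counts are arbitrary, chosen only to make the contention visible):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RegistrationStress {

    /** Runs `threads` workers that each register and deregister `ops` observers; returns elapsed nanos. */
    static long time(final Set<Integer> set, int threads, final int ops) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            final int base = t * ops;
            pool.execute(new Runnable() {
                public void run() {
                    // Mimic a StoreScanner open/close pair: add an observer, then remove it.
                    for (int i = 0; i < ops; i++) {
                        set.add(base + i);
                        set.remove(base + i);
                    }
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Set<Integer> cow = new CopyOnWriteArraySet<Integer>();
        Set<Integer> chm = Collections.newSetFromMap(new ConcurrentHashMap<Integer, Boolean>());
        // Both runs end with an empty set; the copy-on-write run funnels every
        // add/remove through one ReentrantLock, the CHM run mostly does not.
        System.out.printf("COW: %d ms%n", time(cow, 16, 2000) / 1000000L);
        System.out.printf("CHM: %d ms%n", time(chm, 16, 2000) / 1000000L);
    }
}
```

Absolute numbers depend on the machine, so the sketch prints timings rather than asserting a ratio; the point is the lock that all sixteen "handlers" share in the COW case.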
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13727681#comment-13727681 ]

Pablo Medina commented on HBASE-9087:
-------------------------------------

I'm not opening scanners at the client side. My use case involves a multiGet with approx. 500 keys. I noticed that each of those keys is handled as a get, with a subsequent Scanner opened on the server side. Could I be overloading the server with too many scanners by using that multiGet case?
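The fan-out Pablo describes matches the stack trace: `HRegionServer.multi` dispatches each key to `HRegion.get`, which opens a one-row region scanner, and each underlying `StoreScanner` registers a changed-reader observer. A toy model of that multiplication (the `Get` stand-in and counting helper are hypothetical, not the HBase client API):

```java
import java.util.ArrayList;
import java.util.List;

public class MultiGetFanout {
    // Stand-in for an HBase Get; the real class is org.apache.hadoop.hbase.client.Get.
    static class Get {
        final byte[] row;
        Get(byte[] row) { this.row = row; }
    }

    // Per the stack trace, each Get in a multi-get opens one StoreScanner per
    // store (column family) it touches, and each StoreScanner registers an
    // observer -- so a 500-key batch means roughly 500 registrations per cf.
    static int scannersOpened(List<Get> batch, int storesPerGet) {
        return batch.size() * storesPerGet;
    }

    public static void main(String[] args) {
        List<Get> batch = new ArrayList<Get>();
        for (int i = 0; i < 500; i++) {
            batch.add(new Get(String.valueOf(i).getBytes()));
        }
        System.out.println(scannersOpened(batch, 1)); // 500
    }
}
```

So the batch size isn't wrong per se; it just concentrates hundreds of observer registrations on the same Store locks in a short window, which is exactly the contention the patch targets.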
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13727896#comment-13727896 ]

Lars Hofhansl commented on HBASE-9087:
--------------------------------------

Hmm... I tried that; interesting. That server should definitely be able to handle this. Might be easiest if you tried with the patch, Pablo. You can build HBase like this:
# svn checkout http://svn.apache.org/repos/asf/hbase/branches/0.94 hbase-0.94
# cd hbase-0.94
# download the patch here, then: patch -p0 < HBASE-9087-1.patch
# mvn clean install -DskipTests
# once this is done you'll find the tarball in the target directory.
I'm happy to put up a tarball on my private apache area.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728119#comment-13728119 ]

Elliott Clark commented on HBASE-9087:
--------------------------------------

I'll check this into trunk/95 then so that we can run integration tests for a while on it. Backporting should be pretty easy if needed [~lhofhansl]
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728146#comment-13728146 ]

stack commented on HBASE-9087:
------------------------------

+1 on commit to trunk and 0.95. Chatting w/ Elliott we could not see how CHM would give a different view than COW.
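The "same view" question comes down to the two collections' iterator contracts, which can be checked directly: a CopyOnWriteArraySet iterator is a snapshot that is guaranteed not to see later additions, while a ConcurrentHashMap-backed set's iterator is weakly consistent (never throws ConcurrentModificationException, may or may not see a concurrent add). For an observer registry, neither iterator ever fails mid-notification, which is the property that matters here. A small demonstration:

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class IterationView {
    public static void main(String[] args) {
        Set<String> cow = new CopyOnWriteArraySet<String>();
        Set<String> chm = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
        cow.add("a");
        chm.add("a");

        // COW iterators are snapshots: an element added after the iterator
        // is created is guaranteed not to appear in that iteration.
        Iterator<String> it = cow.iterator();
        cow.add("b");
        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        System.out.println(seen); // 1 -- the snapshot predates the add of "b"

        // CHM-backed iterators are weakly consistent: they traverse safely
        // during concurrent modification, but whether they reflect "b" is
        // unspecified -- so no assertion either way.
        Iterator<String> it2 = chm.iterator();
        chm.add("b");
        while (it2.hasNext()) { it2.next(); } // never throws
    }
}
```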
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728291#comment-13728291 ]

Lars Hofhansl commented on HBASE-9087:
--------------------------------------

Meh... I'm just gonna commit this to 0.94 as well.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13728324#comment-13728324 ]

Lars Hofhansl commented on HBASE-9087:
--------------------------------------

Please don't mark an issue fixed if it has not been committed to all branches. We can either leave it open or remove (in this case) the 0.94.11 tag.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728331#comment-13728331 ] Lars Hofhansl commented on HBASE-9087: -- Committed to 0.94 as well.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728340#comment-13728340 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in hbase-0.95-on-hadoop2 #215 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/215/]) HBASE-9087 Handlers being blocked during reads (eclark: rev 1509887) * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728359#comment-13728359 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #650 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/650/]) HBASE-9087 Handlers being blocked during reads (eclark: rev 1509886) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728392#comment-13728392 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in HBase-0.94-security #243 (See [https://builds.apache.org/job/HBase-0.94-security/243/]) HBASE-9087 Handlers being blocked during reads (Elliott) (larsh: rev 1509922) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728401#comment-13728401 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in HBase-0.94 #1092 (See [https://builds.apache.org/job/HBase-0.94/1092/]) HBASE-9087 Handlers being blocked during reads (Elliott) (larsh: rev 1509922) * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728406#comment-13728406 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in hbase-0.95 #398 (See [https://builds.apache.org/job/hbase-0.95/398/]) HBASE-9087 Handlers being blocked during reads (eclark: rev 1509887) * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728409#comment-13728409 ] Hudson commented on HBASE-9087: --- SUCCESS: Integrated in HBase-TRUNK #4336 (See [https://builds.apache.org/job/HBase-TRUNK/4336/]) HBASE-9087 Handlers being blocked during reads (eclark: rev 1509886) * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13726416#comment-13726416 ] Pablo Medina commented on HBASE-9087: - I ran into this issue when concurrently requesting the same keys. Looking at the stack trace, it turns out that the bottleneck is at the Store level. So I guess you should run some kind of test that retrieves, under concurrency, a set of keys belonging to the same Store. Does that make sense?
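The contention Pablo describes can be reproduced outside HBase with a small sketch (illustrative code, not from HBase; class and field names are hypothetical). Every thread that registers an observer in a CopyOnWriteArraySet acquires the set's single internal ReentrantLock and copies the backing array, which is exactly the lock the blocked IPC handlers above are parked on:

```java
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ObserverContentionSketch {
    // Stand-in for the changedReaderObservers set in Store; every concurrent
    // read of the same Store registers (and later deregisters) here.
    static final CopyOnWriteArraySet<Object> observers = new CopyOnWriteArraySet<>();

    // Simulates `handlers` IPC handler threads each opening a scanner:
    // add() and remove() both serialize on the set's internal ReentrantLock
    // and copy the backing array, so under heavy concurrency the threads
    // queue up as in the reported thread dump. Returns the number of
    // observers left behind (0 once every thread has cleaned up after itself).
    static int registerConcurrently(int handlers) {
        ExecutorService pool = Executors.newFixedThreadPool(handlers);
        CountDownLatch done = new CountDownLatch(handlers);
        for (int i = 0; i < handlers; i++) {
            pool.execute(() -> {
                Object scanner = new Object();
                observers.add(scanner);    // contends on the single lock
                observers.remove(scanner); // contends again
                done.countDown();
            });
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        pool.shutdown();
        return observers.size();
    }

    public static void main(String[] args) {
        // Roughly the scale of the 90-300 blocked handlers reported above.
        System.out.println("remaining observers: " + registerConcurrently(100));
    }
}
```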
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1372#comment-1372 ] Lars Hofhansl commented on HBASE-9087: -- Yeah it's per store, so you'd see this contention if you read a lot of KVs from the same Region and ColumnFamily. Now thinking about how this is used a bit more... We're using this to notify the scanners that they have to reset their KVHeap stack. In that we absolutely have to make sure that all currently open scanners do this. ConcurrentHashMap does not actually guarantee this upon interating, but CopyOnWriteArraySet does. So maybe we're opening ourselves up to concurrency issues. An alternative would be a to use a HashSet and synchronize on it. Handlers being blocked during reads --- Key: HBASE-9087 URL: https://issues.apache.org/jira/browse/HBASE-9087 Project: HBase Issue Type: Bug Components: Performance Affects Versions: 0.94.7, 0.95.1 Reporter: Pablo Medina Assignee: Elliott Clark Priority: Critical Fix For: 0.98.0, 0.95.2, 0.94.11 Attachments: HBASE-9087-0.patch, HBASE-9087-1.patch I'm having a lot of handlers (90 - 300 aprox) being blocked when reading rows. They are blocked during changedReaderObserver registration. Lars Hofhansl suggests to change the implementation of changedReaderObserver from CopyOnWriteList to ConcurrentHashMap. 
Here is a stack trace:

IPC Server handler 99 on 60020 daemon prio=10 tid=0x41c84000 nid=0x2244 waiting on condition [0x7ff51fefd000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for 0xc5c13ae8 (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
at java.util.concurrent.CopyOnWriteArrayList.addIfAbsent(CopyOnWriteArrayList.java:553)
at java.util.concurrent.CopyOnWriteArraySet.add(CopyOnWriteArraySet.java:221)
at org.apache.hadoop.hbase.regionserver.Store.addChangedReaderObserver(Store.java:1085)
at org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:138)
at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2077)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3755)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1804)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1796)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1771)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4776)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
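The stack trace shows the handlers parked on the ReentrantLock inside CopyOnWriteArrayList.addIfAbsent: every scanner registration takes one global lock and copies the backing array, so concurrent scanner opens on the same Store serialize. The sketch below (illustrative only, not the HBase code) contrasts that with a ConcurrentHashMap-backed set, whose adds do not block each other; both alternatives discussed here end with the same contents.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.concurrent.CountDownLatch;

// Illustrative only: CopyOnWriteArraySet.add serializes writers behind one
// lock plus an array copy, while a ConcurrentHashMap-backed set allows
// lock-striped concurrent adds. Both are safe; only the contention differs.
public class ObserverSetDemo {
    static int concurrentAdd(final Set<Integer> set, int threads, final int perThread)
            throws InterruptedException {
        final CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    try { start.await(); } catch (InterruptedException e) { return; }
                    for (int i = 0; i < perThread; i++) {
                        set.add(base + i);  // the contended call in the stack trace
                    }
                }
            });
            workers[t].start();
        }
        start.countDown();               // release all writers at once
        for (Thread w : workers) w.join();
        return set.size();
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Integer> cow = new CopyOnWriteArraySet<Integer>();
        Set<Integer> chm = Collections.newSetFromMap(new ConcurrentHashMap<Integer, Boolean>());
        System.out.println(concurrentAdd(cow, 8, 100));  // 800
        System.out.println(concurrentAdd(chm, 8, 100));  // 800
    }
}
```

Collections.newSetFromMap over a ConcurrentHashMap is the Java 6-era way to get a concurrent Set, which matters for the 0.94 branch discussed here.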
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13727123#comment-13727123 ] Lars Hofhansl commented on HBASE-9087: -- Thinking more. :) What exactly do we have to guarantee about this? When we call notifyChangedReadersObservers(), all we have to ensure is that we see all observers that were added prior to this. So the guarantees provided by ConcurrentHashMap should be good enough after all.
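The guarantee Lars is relying on: ConcurrentHashMap's iterators are weakly consistent, meaning they reflect every element added before iteration began, which is exactly what notification needs. A minimal sketch of that pattern follows; ChangedReadersObserver and the method names mirror the Store API only by name, the rest is assumed for illustration.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the observer registry under discussion. A ConcurrentHashMap-backed
// set lets handlers register without blocking each other, and its weakly
// consistent iterator still sees every observer added before the notify loop
// started -- the only guarantee the notification path actually requires.
public class NotifyDemo {
    interface ChangedReadersObserver { void updateReaders(); }

    private final Set<ChangedReadersObserver> observers =
        Collections.newSetFromMap(new ConcurrentHashMap<ChangedReadersObserver, Boolean>());

    void addChangedReaderObserver(ChangedReadersObserver o) { observers.add(o); }
    void deleteChangedReaderObserver(ChangedReadersObserver o) { observers.remove(o); }

    int notifyChangedReadersObservers() {
        int notified = 0;
        for (ChangedReadersObserver o : observers) {
            o.updateReaders();  // each open scanner resets its KeyValueHeap
            notified++;
        }
        return notified;
    }

    public static void main(String[] args) {
        NotifyDemo store = new NotifyDemo();
        final AtomicInteger count = new AtomicInteger();
        for (int i = 0; i < 3; i++) {
            store.addChangedReaderObserver(new ChangedReadersObserver() {
                public void updateReaders() { count.incrementAndGet(); }
            });
        }
        store.notifyChangedReadersObservers();
        System.out.println(count.get());  // 3
    }
}
```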
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13727189#comment-13727189 ] Lars Hofhansl commented on HBASE-9087: -- I also tried a bunch of scenarios and could not find one where this improves performance. [~pablomedina85], any chance to run your scenario with this patch applied?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13727218#comment-13727218 ] Pablo Medina commented on HBASE-9087: - I'll try to run it tomorrow. Where can I find the version with the patch applied?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13727291#comment-13727291 ] Lars Hofhansl commented on HBASE-9087: -- You'll have to build it yourself. If that is an issue, I can build one for you.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13727298#comment-13727298 ] Lars Hofhansl commented on HBASE-9087: -- I know Elliott asked this on the mailing list, but just to be sure: are you closing your scanners on the client?
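The question matters because an unclosed client scanner leaves its server-side StoreScanner registered as a changed-reader observer until the scanner lease expires, inflating the observer set. Below is a self-contained stand-in for the close-in-finally pattern being asked about; FakeScanner is a hypothetical placeholder for the real HBase ResultScanner, and the loop body is only a comment.

```java
import java.io.Closeable;
import java.io.IOException;

// Not the HBase client API itself: FakeScanner stands in for ResultScanner
// so the pattern is runnable here. The point is the try/finally shape, the
// Java 6 idiom appropriate for 0.94-era clients (no try-with-resources).
public class ScannerCloseDemo {
    static class FakeScanner implements Closeable {
        boolean closed = false;
        public void close() { closed = true; }
    }

    static boolean scanAndClose() throws IOException {
        FakeScanner scanner = new FakeScanner();  // real code: table.getScanner(scan)
        try {
            // iterate results here, e.g. for (Result r : scanner) { ... }
        } finally {
            scanner.close();  // always release the server-side scanner
        }
        return scanner.closed;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(scanAndClose());  // true
    }
}
```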
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725346#comment-13725346 ] Elliott Clark commented on HBASE-9087: -- Running a YCSB workload E benchmark on this right now.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725406#comment-13725406 ] Pablo Medina commented on HBASE-9087: - Elliott, did you run that benchmark? Did it improve the performance under concurrency?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725414#comment-13725414 ] Lars Hofhansl commented on HBASE-9087: -- I'm very curious as well :)
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725422#comment-13725422 ] Pablo Medina commented on HBASE-9087: - Btw, in the meantime... do you guys know what a 'proper' number of handlers is? I know that 'proper' means different things in different use cases, but have you seen any region server serving requests using 1k handlers or more? Is that a common scenario?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725428#comment-13725428 ] Elliott Clark commented on HBASE-9087: -- Running the benchmarks, but they are tied in with integration tests so they will take 5 hours or so. I hope to have results by the end of the day.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725436#comment-13725436 ] Lars Hofhansl commented on HBASE-9087:
--
You want to be able to keep both CPUs and disks busy, so one should have at least as many handlers as CPU threads and disk spindles; beyond that it is trial and error. We have a 12-core CPU (24 HW threads) and 6 disk drives, and have set the handler count to 50.
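Lars's sizing rule above can be written down as a trivial heuristic. This is only an illustrative sketch of the rule of thumb from the comment (the class and method names are hypothetical, not HBase code), and the result is a starting point to tune from, not a final answer:

```java
// Sketch of the sizing rule above: at least one handler per hardware
// thread plus one per disk spindle, then tune by trial and error.
public class HandlerSizing {
    /** Hypothetical helper: lower bound for hbase.regionserver.handler.count. */
    static int minHandlers(int hwThreads, int spindles) {
        return hwThreads + spindles;
    }

    public static void main(String[] args) {
        // The box described above: 24 HW threads and 6 spindles -> at least 30;
        // the final value of 50 came from trial and error on top of that.
        System.out.println(minHandlers(24, 6)); // prints 30
    }
}
```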
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725455#comment-13725455 ] Elliott Clark commented on HBASE-9087:
--
Requests are queued as they are decoded off the wire, so you can have lots of requests coming in; however, only 50 will be actively worked on at a time.
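The queue-then-handle model described in that comment can be mimicked with a plain fixed-size executor. This is a generic sketch of the concept only, not HBase's actual RPC server (`HBaseServer`); the class and method names are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Many "requests" may be queued, but only handlerCount run concurrently,
// mirroring the handler-pool behavior described above.
public class HandlerQueueDemo {
    /** Submits `requests` tasks and returns the peak observed concurrency. */
    static int peakConcurrency(int handlerCount, int requests) throws InterruptedException {
        ExecutorService handlers = Executors.newFixedThreadPool(handlerCount);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            handlers.submit(() -> {
                int now = active.incrementAndGet();      // task started
                peak.accumulateAndGet(now, Math::max);   // track high-water mark
                try { Thread.sleep(5); } catch (InterruptedException ignored) { }
                active.decrementAndGet();                // task finished
            });
        }
        handlers.shutdown();
        handlers.awaitTermination(30, TimeUnit.SECONDS);
        return peak.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Never exceeds the pool size, no matter how many requests arrive.
        System.out.println("peak concurrent = " + peakConcurrency(2, 20));
    }
}
```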
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725449#comment-13725449 ] Pablo Medina commented on HBASE-9087:
--
So you can't handle more than 50 requests at a time?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725461#comment-13725461 ] Pablo Medina commented on HBASE-9087:
--
Right. But if you have free CPU cycles and, say, a high block cache hit ratio (almost all the data is in memory), you should consider increasing the handlers so you can use those free cycles to increase your performance, right? I guess that 50 handlers in Lars's scenario consumes almost all of its CPU / disk bandwidth.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725531#comment-13725531 ] Lars Hofhansl commented on HBASE-9087:
--
That's why you make sure to have at least as many handlers as CPU threads, and a few more to handle I/O waits. Having way more threads than CPU threads and spindles is counterproductive, and you're better off queuing requests.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725934#comment-13725934 ] Elliott Clark commented on HBASE-9087:
--
So YCSB didn't show any change in time.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13726036#comment-13726036 ] Lars Hofhansl commented on HBASE-9087:
--
That's expected, no? YCSB is not running many long-running scans, and I presume you didn't jack up the handler count. +1 from me.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13726057#comment-13726057 ] Elliott Clark commented on HBASE-9087:
--
I ran workload E, which has some scanners, but you're probably correct about the length.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723902#comment-13723902 ] demian berjman commented on HBASE-9087:
---
From the CopyOnWriteArraySet javadoc: "Mutative operations (add, set, remove, etc.) are expensive since they usually entail copying the entire underlying array."
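That javadoc note is the heart of this issue: every `CopyOnWriteArraySet.add()` takes a single `ReentrantLock` and copies the whole backing array, so concurrent scanner registration serializes on that lock (exactly the frames in the stack trace above). A `ConcurrentHashMap`-backed set offers the same set semantics without a global lock or array copy. A minimal sketch only, with hypothetical names; the actual fix lives in `Store.addChangedReaderObserver`:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Two thread-safe sets with identical semantics but very different
// concurrent-add cost profiles.
public class ObserverSets {
    // add() locks one ReentrantLock and copies the entire array: O(n), serialized.
    static final Set<String> cowObservers = new CopyOnWriteArraySet<>();

    // add() is a lock-striped hash insert: roughly O(1), no array copy.
    static final Set<String> chmObservers =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    /** Hypothetical registration path: both sets deduplicate repeat adds. */
    static void register(String observer) {
        cowObservers.add(observer);
        chmObservers.add(observer);
    }
}
```

With many handlers registering and deregistering scanners concurrently, the copy-on-write variant is the one that parks threads, which matches the thread dump in this issue.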
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724256#comment-13724256 ]

Lars Hofhansl commented on HBASE-9087:

Why ConcurrentSkipListMap vs ConcurrentHashMap?
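For reference, either of the maps Lars mentions can back a concurrent Set via Collections.newSetFromMap. This is only a sketch of the trade-off under discussion, not the attached patch:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Two lock-free-on-read alternatives to CopyOnWriteArraySet for an
// observer set. Neither copies the collection on add/remove.
public class ObserverSets {
    // ConcurrentHashMap-backed set: O(1) expected add/remove,
    // no defined iteration order.
    static <T> Set<T> hashBacked() {
        return Collections.newSetFromMap(new ConcurrentHashMap<T, Boolean>());
    }

    // ConcurrentSkipListMap-backed set: O(log n) add/remove, but keeps
    // elements sorted -- only worth paying for if iteration order matters.
    // (java.util.concurrent.ConcurrentSkipListSet is the ready-made form.)
    static <T extends Comparable<T>> Set<T> skipListBacked() {
        return Collections.newSetFromMap(new ConcurrentSkipListMap<T, Boolean>());
    }
}
```

For a plain observer registry with no ordering requirement, the hash-backed set is the cheaper choice, which is presumably the point of the question.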
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724300#comment-13724300 ]

Hadoop QA commented on HBASE-9087:

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12595006/HBASE-9087-0.patch
against trunk revision.

+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
+1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 lineLengths. The patch does not introduce lines longer than 100.
+1 site. The mvn site goal succeeds with this patch.
-1 core tests. The patch failed these unit tests:
    org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
    org.apache.hadoop.hbase.regionserver.TestBlocksScanned
    org.apache.hadoop.hbase.regionserver.TestResettingCounters
    org.apache.hadoop.hbase.regionserver.TestScanWithBloomError
    org.apache.hadoop.hbase.regionserver.TestColumnSeeking
    org.apache.hadoop.hbase.regionserver.TestSplitTransaction
    org.apache.hadoop.hbase.filter.TestColumnPrefixFilter
    org.apache.hadoop.hbase.client.TestIntraRowPagination
    org.apache.hadoop.hbase.filter.TestDependentColumnFilter
    org.apache.hadoop.hbase.filter.TestMultipleColumnPrefixFilter
    org.apache.hadoop.hbase.regionserver.TestKeepDeletes
    org.apache.hadoop.hbase.regionserver.TestMinVersions
    org.apache.hadoop.hbase.filter.TestFilter
    org.apache.hadoop.hbase.regionserver.TestScanner
    org.apache.hadoop.hbase.regionserver.TestWideScanner
    org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook
    org.apache.hadoop.hbase.regionserver.TestRegionMergeTransaction
    org.apache.hadoop.hbase.coprocessor.TestCoprocessorInterface

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6523//console

This message is automatically generated.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724552#comment-13724552 ]

Hadoop QA commented on HBASE-9087:

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12595049/HBASE-9087-1.patch
against trunk revision.

+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
+1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.
+1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 lineLengths. The patch does not introduce lines longer than 100.
+1 site. The mvn site goal succeeds with this patch.
+1 core tests. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6527//console

This message is automatically generated.
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724564#comment-13724564 ]

stack commented on HBASE-9087:

I wonder if this fixes the performance probs the lads saw?
[jira] [Commented] (HBASE-9087) Handlers being blocked during reads
[ https://issues.apache.org/jira/browse/HBASE-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724574#comment-13724574 ]

Elliott Clark commented on HBASE-9087:

I would imagine so. CopyOnWrite has an open Java bug stating that it scales non-linearly, because the copy uses addIfAbsent (or something similar). But if the integration cluster has some free time I can try a YCSB run.
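A rough way to see the non-linear scaling Elliott describes: n adds into a copy-on-write set each copy the current backing array, so the total work grows as O(n^2), versus O(n) expected for a hash-backed set. A back-of-envelope sketch, not a proper benchmark (single-threaded, no JIT warmup, so the printed timings are only indicative):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Illustrative timing of n distinct adds into two concurrent sets.
public class AddCost {
    // Returns elapsed nanoseconds for n adds of distinct elements.
    static long timeAdds(Set<Integer> set, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            set.add(i); // COW: O(current size) scan + copy; CHM-backed: O(1) expected
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 20_000;
        long cow = timeAdds(new CopyOnWriteArraySet<Integer>(), n);
        long chm = timeAdds(
                Collections.newSetFromMap(new ConcurrentHashMap<Integer, Boolean>()), n);
        System.out.println("copy-on-write: " + cow + " ns, hash-backed: " + chm + " ns");
    }
}
```

Even without contention from other threads, doubling n roughly quadruples the copy-on-write total, which is the non-linearity in question; adding lock contention from hundreds of handlers makes it worse still.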