[ https://issues.apache.org/jira/browse/PHOENIX-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055748#comment-15055748 ]

Gokhan Cagrici commented on PHOENIX-2508:
-----------------------------------------

We had this issue again. Here are the logs from one of our Region Servers:

2015-12-14 03:03:04,125 INFO  [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~1.49 MB/1566784, currentsize=0 B/0 for region ALARM_FACT_IX1,\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1448586787274.93d8fe09ca8a2a5c7149f3d9effe2d64. in 78ms, sequenceid=141648, compaction requested=true
2015-12-14 03:03:04,969 INFO  [hdfs-hbase-s2.insight-test-1,16020,1449743274566_ChoreService_1] regionserver.HRegionServer: hdfs-hbase-s2.insight-test-1,16020,1449743274566-MemstoreFlusherChore requesting flush for region ALARM_FACT_IX1,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1448586787274.7ae2d56b3955c0a62faaea2ed4ae4139. after a delay of 10559
2015-12-14 03:03:05,214 INFO  [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for ALARM_FACT_IX1,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1448586787274.7ae2d56b3955c0a62faaea2ed4ae4139., current region memstore size 1.79 MB, and 1/1 column families' memstores are being flushed.
2015-12-14 03:03:05,224 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new compressor [.gz]
2015-12-14 03:03:05,225 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new compressor [.gz]
2015-12-14 03:03:05,262 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=140724, memsize=1.8 M, hasBloomFilter=true, into tmp file hdfs://hdfs-hbase-m1.insight-test-1:54310/hbase/data/default/ALARM_FACT_IX1/7ae2d56b3955c0a62faaea2ed4ae4139/.tmp/e1fccc561510461e88d2d565fb49b2f4
2015-12-14 03:03:05,274 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,274 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,274 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,274 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,291 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,291 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,291 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,291 INFO  [MemStoreFlusher.0] compress.CodecPool: Got brand-new decompressor [.gz]
2015-12-14 03:03:05,291 INFO  [MemStoreFlusher.0] regionserver.HStore: Added hdfs://hdfs-hbase-m1.insight-test-1:54310/hbase/data/default/ALARM_FACT_IX1/7ae2d56b3955c0a62faaea2ed4ae4139/0/e1fccc561510461e88d2d565fb49b2f4, entries=8046, sequenceid=140724, filesize=31.1 K
2015-12-14 03:03:05,294 INFO  [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~1.79 MB/1874368, currentsize=0 B/0 for region ALARM_FACT_IX1,\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1448586787274.7ae2d56b3955c0a62faaea2ed4ae4139. in 79ms, sequenceid=140724, compaction requested=true
2015-12-14 03:03:10,568 ERROR [B.defaultRpcServer.handler=15,queue=0,port=16020] coprocessor.MetaDataEndpointImpl: getTable failed
java.io.IOException: Timed out waiting for lock for row: \x00\x00EVENT_FACT
        at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
        at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5013)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2397)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2365)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:440)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11609)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
2015-12-14 03:03:50,668 ERROR [B.defaultRpcServer.handler=25,queue=1,port=16020] coprocessor.MetaDataEndpointImpl: getTable failed
java.io.IOException: Timed out waiting for lock for row: \x00\x00EVENT_FACT
        at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
        at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5013)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2397)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2365)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:440)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11609)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
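
As far as I can tell, the row in the traces above (\x00\x00EVENT_FACT) is the EVENT_FACT metadata row in SYSTEM.CATALOG, and the "Timed out waiting for lock for row" IOException is thrown by HRegion.getRowLockInternal() once the row-lock wait expires, which HBase takes from hbase.rowlock.wait.duration (30000 ms by default). A minimal sketch for checking the effective value from the client classpath (the class name RowLockTimeoutCheck is only for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RowLockTimeoutCheck {
        public static void main(String[] args) {
            // Picks up hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            // hbase.rowlock.wait.duration backs the "Timed out waiting for lock for row"
            // error in HRegion; 30000 ms is the shipped default.
            int waitMs = conf.getInt("hbase.rowlock.wait.duration", 30000);
            System.out.println("hbase.rowlock.wait.duration = " + waitMs + " ms");
        }
    }

Raising that value would only stretch the wait rather than release whatever is holding the SYSTEM.CATALOG row lock, so it is shown here just to make the timeout explicit.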


> Phoenix Connections Stopped Working
> -----------------------------------
>
>                 Key: PHOENIX-2508
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2508
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.6.0
>         Environment: 1 HBase Master, 2 RS
>            Reporter: Gokhan Cagrici
>            Priority: Blocker
>
> Connections stopped working and no new connections could be established. 
> HBASE SHELL:
> hbase(main):004:0> status
> 2 servers, 0 dead, 282.0000 average load
> RS1 LOG:
> 2015-12-10 13:55:35,063 ERROR [B.defaultRpcServer.handler=21,queue=0,port=16020] coprocessor.MetaDataEndpointImpl: createTable failed
> java.io.IOException: Timed out waiting for lock for row: \x00SYSTEM\x00CATALOG
>       at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
>       at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5013)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1283)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1171)
>       at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
>       at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>       at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>       at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>       at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>       at java.lang.Thread.run(Thread.java:745)
> 2015-12-10 13:56:25,544 ERROR [B.defaultRpcServer.handler=2,queue=2,port=16020] coprocessor.MetaDataEndpointImpl: createTable failed
> java.io.IOException: Timed out waiting for lock for row: \x00SYSTEM\x00CATALOG
>       at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
>       at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5013)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1283)
>       at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1171)
>       at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
>       at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>       at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>       at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>       at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>       at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>       at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>       at java.lang.Thread.run(Thread.java:745)
> RS2 LOG:
> 2015-12-10 13:58:10,668 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,670 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,672 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,674 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,676 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,678 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,680 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,682 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,684 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,686 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,688 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,690 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,692 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,694 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,695 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,697 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,699 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,701 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,703 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,704 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,706 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,708 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]
> 2015-12-10 13:58:10,710 INFO  [regionserver/hdfs-hbase-s2.insight-test-2/192.168.24.208:16020-longCompactions-1449367229498] compress.CodecPool: Got brand-new decompressor [.gz]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
