[jira] [Commented] (HBASE-23169) Random region server aborts while clearing Old Wals

2019-10-20 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955751#comment-16955751
 ] 

Karthick commented on HBASE-23169:
--

[~wchevreuil] We have 1.4.10 deployed on our production clusters. We checked 
for conflicts between 1.4.10 and the patch in 
[HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784], and since there 
were no conflicts we applied the patch. Please note that the region server 
aborts happen randomly. At the moment we have restart mechanisms in place, but 
because of this issue we are not able to apply the patch on all our clusters.

> Random region server aborts while clearing Old Wals
> ---
>
> Key: HBASE-23169
> URL: https://issues.apache.org/jira/browse/HBASE-23169
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication, wal
>Affects Versions: 1.4.10, 1.4.11
>Reporter: Karthick
>Assignee: Wellington Chevreuil
>Priority: Blocker
>  Labels: patch
>
> After applying the patch given in 
> [HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784], random region 
> server aborts were noticed. They happen in the ReplicationSourceShipperThread 
> while writing the replication WAL position.
> {code:java}
> 2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
> 2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>     at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
>     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824)
>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874)
>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868)
>     at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)
> 2019-10-05 08:17:28,133 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
> {code}





[jira] [Commented] (HBASE-23169) Random region server aborts while clearing Old Wals

2019-10-15 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952514#comment-16952514
 ] 

Karthick commented on HBASE-23169:
--

[~wchevreuil] I've included the TRACE logs from when the RegionServer failed. 
{code:java}
2019-10-15 20:32:01,902 INFO  [main-SendThread(172.17.17.17:2191)] 
zookeeper.ClientCnxn: Socket connection established to 
172.17.17.17/172.17.17.17:2191, initiating session
2019-10-15 20:32:01,904 INFO  [main-SendThread(172.17.17.17:2191)] 
zookeeper.ClientCnxn: Session establishment complete on server 
172.17.17.17/172.17.17.17:2191, sessionid = 0x166d3a44fd582762, negotiated 
timeout = 4
2019-10-15 20:32:01,906 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:01,930 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:01,934 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:01,966 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:01,970 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:01,989 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:01,993 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:02,033 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:02,036 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:02,055 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:02,058 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:02,063 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:02,068 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:02,095 DEBUG 
[regionserver//172.72.72.72:16020.replicationSource.replicationWALReaderThread.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.WALEntryStream: Reached the end of WAL file 
'hdfs://OtherGridMetaCluster/hbasedata/WALs/172.72.72.72,16020,1570500915276/172.72.72.72%2C16020%2C1570500915276.1571196349507'.
 It was not closed cleanly, so we did not parse 8 bytes of data. This is 
normally ok.
2019-10-15 20:32:02,095 TRACE 
[regionserver//172.72.72.72:16020.replicationSource.replicationWALReaderThread.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.WALEntryStream: Reached the end of log 
hdfs://OtherGridMetaCluster/hbasedata/WALs/172.72.72.72,16020,1570500915276/172.72.72.72%2C16020%2C1570500915276.1571196349507,
 and the length of the file is 127513914
2019-10-15 20:32:02,095 DEBUG 
[regionserver//172.72.72.72:16020.replicationSource.replicationWALReaderThread.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.WALEntryStream: Reached the end of log 
hdfs://OtherGridMetaCluster/hbasedata/WALs/172.72.72.72,16020,1570500915276/172.72.72.72%2C16020%2C1570500915276.1571196349507
2019-10-15 20:32:02,101 DEBUG 
[regionserver//172.72.72.72:16020.replicationSource.replicationWALReaderThread.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.ReplicationSourceManager: Removing 1 logs in the list: 
[172.72.72.72%2C16020%2C1570500915276.1571196349507]
2019-10-15 20:32:02,108 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Started replicating mutations.
2019-10-15 20:32:02,110 DEBUG 
[RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16020] 
regionserver.ReplicationSink: Finished replicating mutations.
2019-10-15 20:32:02,130 TRACE 
[regionserver//172.72.72.72:16020.replicationSource.replicationWALReaderThread.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.ReplicationSourceWALReaderThread: Read 1 WAL entries eligible for 
replication
2019-10-15 20:32:02,130 TRACE 
[regionserver//172.72.72.72:16020.replicationSource.172.72.72.72%2C16020%2C1570500915276,2]
 regionserver.HBaseInterClusterReplicationEndpoint: Submitting 1 entries of 
total size 376
2019-10-15 20:32:02,131 TRACE [pool-29-thread-198] 

[jira] [Commented] (HBASE-22784) OldWALs not cleared in a replication slave cluster (cyclic replication bw 2 clusters)

2019-10-13 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950677#comment-16950677
 ] 

Karthick commented on HBASE-22784:
--

[~wchevreuil] I've opened the new Jira 
[here|https://jira.apache.org/jira/browse/HBASE-23169]

> OldWALs not cleared in a replication slave cluster (cyclic replication bw 2 
> clusters)
> -
>
> Key: HBASE-22784
> URL: https://issues.apache.org/jira/browse/HBASE-22784
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.4.9, 1.4.10
>Reporter: Solvannan R M
>Assignee: Wellington Chevreuil
>Priority: Blocker
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22784.branch-1.001.patch, 
> HBASE-22784.branch-1.002.patch, HBASE-22784.branch-1.003.patch, 
> HBASE-22784.branch-1.004.patch
>
>
> When a cluster is passive (receiving edits only via replication) in a cyclic 
> replication setup of 2 clusters, the OldWALs size keeps growing. On analysing, 
> we observed the following behaviour.
>  # A new entry is added to the WAL (an edit replicated from the other cluster).
>  # ReplicationSourceWALReaderThread (RSWALRT) reads it and applies the 
> configured filters (due to the cyclic replication setup, 
> ClusterMarkingEntryFilter discards the new entry from the other cluster).
>  # The entry is null, so RSWALRT neither updates the batch stats 
> (WALEntryBatch.lastWalPosition) nor puts it in the entryBatchQueue.
>  # The ReplicationSource thread is blocked in entryBatchQueue.take().
>  # So ReplicationSource#updateLogPosition is never invoked and the WAL file is 
> never cleared from the ReplicationQueue.
>  # Hence the LogCleaner on the master doesn't delete the oldWAL files from 
> hadoop.
> NOTE: When a new edit is added via hbase-client, the ReplicationSource thread 
> processes and clears the oldWAL files from the replication queues and hence 
> the master cleans up the WALs.
> Please provide us a solution
>  
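A simplified, self-contained simulation of the sequence described in the quoted 
report above may make it easier to follow. All names here (WalEntry, the 
origin-cluster marker check, etc.) are hypothetical stand-ins for the real 
reader and shipper classes, not HBase source: when every entry is discarded by 
the marker filter, the batch stays empty, nothing is queued, and the position 
that would release the WAL is never advanced.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class FilteredWalBatchSimulation {
  // Hypothetical stand-in for a WAL entry carrying its origin-cluster marker.
  record WalEntry(String originCluster, long positionInWal) {}

  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<List<WalEntry>> entryBatchQueue = new ArrayBlockingQueue<>(1);
    long lastWalPosition = 0L;

    // In a passive cluster every WAL edit arrived via replication, so all
    // entries carry the remote cluster's marker and the filter drops them.
    List<WalEntry> wal = List.of(new WalEntry("remote", 100L), new WalEntry("remote", 200L));

    List<WalEntry> batch = new ArrayList<>();
    for (WalEntry entry : wal) {
      WalEntry filtered = "local".equals(entry.originCluster()) ? entry : null;
      if (filtered != null) {
        batch.add(filtered);
        lastWalPosition = filtered.positionInWal(); // never reached for filtered entries
      }
    }
    if (!batch.isEmpty()) {
      entryBatchQueue.put(batch); // never reached either: the batch stayed empty
    }

    // Shipper side: nothing ever arrives, so the log position is never written
    // to ZooKeeper and the WAL is never released for the OldWALs cleaner.
    List<WalEntry> shipped = entryBatchQueue.poll(2, TimeUnit.SECONDS);
    System.out.println("shipped=" + shipped + ", lastWalPosition=" + lastWalPosition);
  }
}
{code}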





[jira] [Created] (HBASE-23169) Random region server aborts while clearing Old Wals

2019-10-13 Thread Karthick (Jira)
Karthick created HBASE-23169:


 Summary: Random region server aborts while clearing Old Wals
 Key: HBASE-23169
 URL: https://issues.apache.org/jira/browse/HBASE-23169
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Replication, wal
Affects Versions: 1.4.10, 1.4.11
Reporter: Karthick


After applying the patch given in 
[HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784], random region 
server aborts were noticed. They happen in the ReplicationSourceShipperThread 
while writing the replication WAL position.
{code:java}
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868)
    at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)
2019-10-05 08:17:28,133 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
{code}
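For reference, a minimal standalone sketch (not HBase code; the quorum address 
below is a placeholder, and the znode path simply mirrors the layout seen in 
the log above) of how ZooKeeper#setData fails with NoNodeException when the 
replication-queue znode for a WAL no longer exists, which is the failure the 
FATAL abort reports:
{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class SetLogPositionSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder quorum; session timeout 30s; no-op watcher.
    ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { });
    String walZnode = "/hbase/replication/rs/172.20.20.20,16020,1570193969775/2/"
        + "172.20.20.20%2C16020%2C1570193969775.1570288637045";
    byte[] position = Long.toString(127494739L).getBytes("UTF-8");
    try {
      // version -1 means "any version"; throws NoNodeException if the znode is gone.
      zk.setData(walZnode, position, -1);
    } catch (KeeperException.NoNodeException e) {
      // The shipper thread treats this as fatal and aborts the region server.
      System.err.println("znode missing, cannot record WAL position: " + e.getPath());
    } finally {
      zk.close();
    }
  }
}
{code}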





[jira] [Comment Edited] (HBASE-22784) OldWALs not cleared in a replication slave cluster (cyclic replication bw 2 clusters)

2019-10-10 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949119#comment-16949119
 ] 

Karthick edited comment on HBASE-22784 at 10/11/19 4:05 AM:


[~wchevreuil] we applied the patch on hbase-1.4.10 and noticed random region 
server aborts caused by ReplicationQueuesZKImpl#setLogPosition() in the 
ReplicationSourceShipperThread.

 
{code:java}
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868)
    at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)
2019-10-05 08:17:28,133 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
{code}
Please provide us a solution and let us know if you need more logs regarding 
this.


was (Author: karthickram):
[~wchevreuil]  we applied the patch in hbase-1.4.10 and we noticed random 
region server aborts because of ReplicationQueuesZKImpl#setLogPosition() in 
ReplicationSourceShipperThread. 

 
{quote}2019-10-05 08:17:28,132 FATAL 
[regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
 regionserver.HRegionServer: ABORTING region server 
172.20.20.20,16020,1570193969775: Failed to write replication wal position 
(filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, 
position=127494739)2019-10-05 08:17:28,132 FATAL 
[regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
 regionserver.HRegionServer: ABORTING region server 
172.20.20.20,16020,1570193969775: Failed to write replication wal position 
(filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, 
position=127494739)org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for 
/hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at 
org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327) at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824) at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874) at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868) at 
org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
 at 

[jira] [Commented] (HBASE-22784) OldWALs not cleared in a replication slave cluster (cyclic replication bw 2 clusters)

2019-10-10 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949119#comment-16949119
 ] 

Karthick commented on HBASE-22784:
--

[~wchevreuil] we applied the patch on hbase-1.4.10 and noticed random region 
server aborts caused by ReplicationQueuesZKImpl#setLogPosition() in the 
ReplicationSourceShipperThread.

 
{quote}
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
2019-10-05 08:17:28,132 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: ABORTING region server 172.20.20.20,16020,1570193969775: Failed to write replication wal position (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, position=127494739)
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868)
    at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)
2019-10-05 08:17:28,133 FATAL [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
{quote}
Please provide us a solution for this and let us know if you need more logs 
regarding this.

> OldWALs not cleared in a replication slave cluster (cyclic replication bw 2 
> clusters)
> -
>
> Key: HBASE-22784
> URL: https://issues.apache.org/jira/browse/HBASE-22784
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.4.9, 1.4.10
>Reporter: Solvannan R M
>Assignee: Wellington Chevreuil
>Priority: Blocker
> Fix For: 1.5.0, 1.4.11
>
> Attachments: HBASE-22784.branch-1.001.patch, 
> HBASE-22784.branch-1.002.patch, HBASE-22784.branch-1.003.patch, 
> HBASE-22784.branch-1.004.patch
>
>
> When a cluster is passive (receiving edits only via replication) in a cyclic 
> replication setup of 2 clusters, the OldWALs size keeps growing. On analysing, 
> we observed the following behaviour.
>  # A new entry is added to the WAL (an edit replicated from the other cluster).
>  # ReplicationSourceWALReaderThread (RSWALRT) reads it and applies the 
> configured filters (due to the cyclic replication setup, 
> ClusterMarkingEntryFilter discards the new entry from the other cluster).
>  # The entry is null, so RSWALRT neither updates the batch stats 
> (WALEntryBatch.lastWalPosition) nor puts it in the entryBatchQueue.
>  # The ReplicationSource thread is blocked in entryBatchQueue.take().
>  # So ReplicationSource#updateLogPosition is never invoked and the WAL file is 
> never cleared from the ReplicationQueue.
>  # Hence the LogCleaner on the master doesn't delete the oldWAL files from 
> hadoop.
> NOTE: When a new edit is added via hbase-client, the ReplicationSource thread 
> processes and clears the oldWAL files from the replication queues and hence 
> the master cleans up the WALs.
> Please provide us a solution
>  





[jira] [Commented] (HBASE-22448) Scan is slow for Multiple Column prefixes

2019-05-28 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849409#comment-16849409
 ] 

Karthick commented on HBASE-22448:
--

[~openinx] thanks for the suggestion. It works fine with 
MultipleColumnPrefixFilter.
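
For anyone landing here later, a rough sketch of the two ways to express the 
scan: a FilterList of ColumnPrefixFilters combined with OR (the slow form 
discussed in this issue) versus a single MultipleColumnPrefixFilter. The table 
name, row key and prefix values are only illustrative; the filter classes are 
the standard HBase client ones.
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.MultipleColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanSketch {
  public static void main(String[] args) throws Exception {
    byte[][] prefixes = { Bytes.toBytes("1544770422942010001_"), Bytes.toBytes("1544769883529010001_") };

    // Slow form discussed in this issue: one ColumnPrefixFilter per prefix, OR'ed together.
    FilterList orList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    for (byte[] prefix : prefixes) {
      orList.addFilter(new ColumnPrefixFilter(prefix));
    }

    // Suggested form: a single MultipleColumnPrefixFilter over all prefixes.
    MultipleColumnPrefixFilter multiPrefix = new MultipleColumnPrefixFilter(prefixes);

    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("namespace:tablename"))) {
      Scan scan = new Scan(Bytes.toBytes("test"), Bytes.toBytes("test"));
      scan.setFilter(multiPrefix); // swap in orList to compare timings
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(result);
        }
      }
    }
  }
}
{code}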

> Scan is slow for Multiple Column prefixes
> -
>
> Key: HBASE-22448
> URL: https://issues.apache.org/jira/browse/HBASE-22448
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.4.8, 1.4.9
>Reporter: Karthick
>Assignee: Zheng Hu
>Priority: Critical
>  Labels: prefix, scan, scanner
> Fix For: 1.5.0, 1.4.10
>
> Attachments: 0001-benchmark-UT.patch, HBaseFileImport.java, 
> filter-list-with-or-internal-2.png, 
> org.apache.hadoop.hbase.filter.TestSlowColumnPrefix-output.zip, 
> qualifiers.txt, scanquery.txt
>
>
> While scanning a row (around 10 lakh, i.e. roughly 1 million, columns) with 
> 100 column prefixes, the query takes around 4 seconds in hbase-1.2.5, but the 
> same query takes around 50 seconds in hbase-1.4.9.
> Is there any way to optimise this?
>  
> *P.S:*
> We have applied the patches provided in 
> [-HBASE-21620-|https://jira.apache.org/jira/browse/HBASE-21620] and 
> [-HBASE-21734-|https://jira.apache.org/jira/browse/HBASE-21734]. Attached is 
> the *qualifiers.txt* file, which contains the column keys. Use the 
> *HBaseFileImport.java* file provided to populate your table and use 
> *scanquery.txt* to query.





[jira] [Comment Edited] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-05-21 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844712#comment-16844712
 ] 

Karthick edited comment on HBASE-21620 at 5/21/19 10:34 AM:


[~openinx] I've raised a separate issue here 
[https://issues.apache.org/jira/browse/HBASE-22448].

 


was (Author: karthickram):
[~openinx]  As per your request, I've raised a separate issue here 
[https://issues.apache.org/jira/browse/HBASE-22448].

 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> qualifiers.txt, scanquery.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-05-21 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844712#comment-16844712
 ] 

Karthick commented on HBASE-21620:
--

[~openinx]  As per your request, I've raised a separate issue here 
[https://issues.apache.org/jira/browse/HBASE-22448].

 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> qualifiers.txt, scanquery.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Updated] (HBASE-22448) Scan is slow for Multiple Column prefixes

2019-05-21 Thread Karthick (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-22448:
-
Description: 
While scanning a row (around 10 lakh, i.e. roughly 1 million, columns) with 
100 column prefixes, the query takes around 4 seconds in hbase-1.2.5, but the 
same query takes around 50 seconds in hbase-1.4.9.

Is there any way to optimise this?

 

*P.S:*

We have applied the patches provided in 
[-HBASE-21620-|https://jira.apache.org/jira/browse/HBASE-21620] and 
[-HBASE-21734-|https://jira.apache.org/jira/browse/HBASE-21734]. Attached is 
the *qualifiers.txt* file, which contains the column keys. Use the 
*HBaseFileImport.java* file provided to populate your table and use 
*scanquery.txt* to query.

  was:
While scanning a row (around 10 lakhs columns) with  100 column prefixes, it 
takes around 4 seconds in hbase-1.2.5 and when the same query is executed in 
hbase-1.4.9 it takes around 50 seconds.

Is there any way to optimise this?

 

*P.S:*

We have applied the patch provided in 
[-HBASE-21620-|https://jira.apache.org/jira/browse/HBASE-21620] and  
[-HBASE-21734-|https://jira.apache.org/jira/browse/HBASE-21734] . Attached 
*qualifiers*.*txt* file which contains the column keys. Use the 
*HBaseFileImport.java* file provided to populate in your table and use 
*scanquery.txt* to query.


> Scan is slow for Multiple Column prefixes
> -
>
> Key: HBASE-22448
> URL: https://issues.apache.org/jira/browse/HBASE-22448
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.4.8, 1.4.9
>Reporter: Karthick
>Assignee: Zheng Hu
>Priority: Blocker
>  Labels: prefix, scan, scanner
> Attachments: HBaseFileImport.java, qualifiers.txt, scanquery.txt
>
>
> While scanning a row (around 10 lakh, i.e. roughly 1 million, columns) with 
> 100 column prefixes, the query takes around 4 seconds in hbase-1.2.5, but the 
> same query takes around 50 seconds in hbase-1.4.9.
> Is there any way to optimise this?
>  
> *P.S:*
> We have applied the patches provided in 
> [-HBASE-21620-|https://jira.apache.org/jira/browse/HBASE-21620] and 
> [-HBASE-21734-|https://jira.apache.org/jira/browse/HBASE-21734]. Attached is 
> the *qualifiers.txt* file, which contains the column keys. Use the 
> *HBaseFileImport.java* file provided to populate your table and use 
> *scanquery.txt* to query.





[jira] [Created] (HBASE-22448) Scan is slow for Multiple Column prefixes

2019-05-21 Thread Karthick (JIRA)
Karthick created HBASE-22448:


 Summary: Scan is slow for Multiple Column prefixes
 Key: HBASE-22448
 URL: https://issues.apache.org/jira/browse/HBASE-22448
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 1.4.9, 1.4.8
Reporter: Karthick
 Attachments: HBaseFileImport.java, qualifiers.txt, scanquery.txt

While scanning a row (around 10 lakh, i.e. roughly 1 million, columns) with 
100 column prefixes, the query takes around 4 seconds in hbase-1.2.5, but the 
same query takes around 50 seconds in hbase-1.4.9.

Is there any way to optimise this?

 

*P.S:*

We have applied the patches provided in 
[-HBASE-21620-|https://jira.apache.org/jira/browse/HBASE-21620] and 
[-HBASE-21734-|https://jira.apache.org/jira/browse/HBASE-21734]. Attached is 
the *qualifiers.txt* file, which contains the column keys. Use the 
*HBaseFileImport.java* file provided to populate your table and use 
*scanquery.txt* to query.
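
Since *scanquery.txt* and *HBaseFileImport.java* are attachments not reproduced 
here, the following rough sketch (table name, row key and prefix values are 
invented) shows one way to time a scan over a wide row using 100 OR'ed 
ColumnPrefixFilters, which is the shape of query this issue is about:
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowPrefixScanTiming {
  public static void main(String[] args) throws Exception {
    // 100 invented prefixes; the real ones come from the attached qualifiers.txt.
    FilterList orList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    for (int i = 0; i < 100; i++) {
      orList.addFilter(new ColumnPrefixFilter(Bytes.toBytes("prefix" + i + "_")));
    }

    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("namespace:tablename"))) {
      Scan scan = new Scan(Bytes.toBytes("widerow"), Bytes.toBytes("widerow"));
      scan.setFilter(orList);

      long start = System.nanoTime();
      int cells = 0;
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          cells += result.rawCells().length;
        }
      }
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      System.out.println("matched " + cells + " cells in " + elapsedMs + " ms");
    }
  }
}
{code}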





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-05-21 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844665#comment-16844665
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] We are facing this issue in production. Any update on this?

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> qualifiers.txt, scanquery.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-05-09 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836290#comment-16836290
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] please use the *HBaseFileImport.java* file and change the filename 
from _"columnkey.txt"_ to _"qualifiers.txt"_ to import the columns into your 
table. You also have to replace the ZK quorum, topnode and namespace:tablename. 
After importing the columns you can use the scan query given in *scanquery.txt* 
in the hbase shell.

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> qualifiers.txt, scanquery.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Comment Edited] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-05-09 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836290#comment-16836290
 ] 

Karthick edited comment on HBASE-21620 at 5/9/19 11:29 AM:
---

[~openinx]  please use the attached *HBaseFileImport.java* file and change the 
filename from _"columnkey.txt"_ to _"qualifiers.txt"_ to import the columns in 
your table. You also have to replace the ZK quorum, topnode and 
namespace:tablename. After importing the columns you can use scan query given 
in *scanquery.txt* in hbase shell.
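
To make the "replace the ZK quorum, topnode and namespace:tablename" step 
concrete, here is a small hedged sketch of the client configuration involved. 
The host names, parent znode and table name below are placeholders; the 
property keys are the standard HBase client ones.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class ClientConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The "ZK quorum" to replace: comma-separated ZooKeeper hosts.
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    // The "topnode" to replace: the parent znode the cluster registers under.
    conf.set("zookeeper.znode.parent", "/hbase");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         // The "namespace:tablename" to replace.
         Table table = conn.getTable(TableName.valueOf("namespace:tablename"))) {
      System.out.println("connected, table = " + table.getName());
    }
  }
}
{code}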


was (Author: karthickram):
[~openinx]  please use the *HBaseFileImport.java* file and change the filename 
from _"columnkey.txt"_ to _"qualifiers.txt"_ to import the columns in your 
table. You also have replace the ZK quorum, topnode and namespace:tablename. 
After importing the columns you can use scan query given in *scanquery.txt* in 
hbase shell.

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> qualifiers.txt, scanquery.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21734) Some optimization in FilterListWithOR

2019-01-17 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745841#comment-16745841
 ] 

Karthick commented on HBASE-21734:
--

[~openinx] Now that the Hadoop QA tests have passed, can you please provide a 
patch for HBase-1.4.8?

> Some optimization in FilterListWithOR
> -
>
> Key: HBASE-21734
> URL: https://issues.apache.org/jira/browse/HBASE-21734
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21734.v1.patch, columnkey.txt, perf-ut.patch
>
>
> In HBASE-21620, [~KarthickRam] and [~mohamed.meeran] complained that the 
> performance of their filter list degraded after the patch in [1]. I wrote a 
> UT for this and tested it on my host; it's true. I guessed there may be two 
> reasons: 
> 1. the comparator.compare(nextKV, cell) > 0 check in StoreScanner; 
> 2. a filter list concatenated by OR will choose the minimal forward step among 
> all sub-filters. In this patch, we have stricter restrictions on all 
> sub-filters, including those sub-filters that have a non-null RC returned by 
> calculateReturnCodeByPrevCellAndRC (previously, we would skip merging this 
> sub-filter's RC, but that is wrong in some cases), and merging all of the 
> sub-filters' RCs also has some time cost.
> The former does not seem to be the main problem, because the UT still costs 
> ~3s even if I comment out the compare. The second one has some impact indeed: 
> after I skip merging the sub-filter's RC when 
> calculateReturnCodeByPrevCellAndRC returns a non-null RC, the UT costs ~1s; 
> it's an improvement but the logic is not wrong.
> 1. 
> https://issues.apache.org/jira/browse/HBASE-21620?focusedCommentId=16737100=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16737100





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-17 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744997#comment-16744997
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] please use this (HBaseFileImport.java) file to import the columns in 
your cluster. Please replace the ZK Quorum, topnode and namespace:tablename. 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseFileImport.java, HBaseImportData.java, columnkey.txt, file.txt, 
> test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-17 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744992#comment-16744992
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] use this Util to import the columns in your cluster. Please replace 
the ZK Quorum, topnode and namespace:tablename. 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Issue Comment Deleted] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-17 Thread Karthick (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-21620:
-
Comment: was deleted

(was: [~openinx] use this Util to import the columns in your cluster. Please 
replace the ZK Quorum, topnode and namespace:tablename. )

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Comment Edited] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-16 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744767#comment-16744767
 ] 

Karthick edited comment on HBASE-21620 at 1/17/19 7:54 AM:
---

We tried the same scan using HBase-2.1.2. The scan is even slower than in 1.4.8 
(it takes around 3-4 seconds now). Because of this issue we are unable to 
upgrade our production clusters from 1.2.5 to a higher version. [~openinx] Can 
you please provide a fix for this?

 


was (Author: karthickram):
We tried the same scan using HBase-2.1.2. The scan is even more slower than 
1.4.8 (It takes around 3-4 seconds now). Because of this issue we are not able 
to update our production clusters from 1.2.5 to a higher version. [~openinx] 
Can you please provide a fix for this?

 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-16 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744767#comment-16744767
 ] 

Karthick commented on HBASE-21620:
--

We tried the same scan using HBase-2.1.2. The scan is even slower than in 1.4.8 
(it takes around 3-4 seconds now). Because of this issue we are not able to 
upgrade our production clusters from 1.2.5 to a higher version. [~openinx] Can 
you please provide a fix for this?

 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-14 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16741969#comment-16741969
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] There was no improvement even after removing the compare condition. 
Is there any other way to optimize the scan?

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, unable to get the scan results when using more than one column 
> prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: A full table scan and a scan with a single column prefix filter work 
> fine in this case.
> When we run the same query on hbase-1.2.5 it works fine.
> Can you please help me on this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2019-01-10 Thread Karthick (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740116#comment-16740116
 ] 

Karthick commented on HBASE-21620:
--

[~openinx] We removed the *(comparator.compare(nextKV, cell) > 0)* condition 
and tested, but the scan is still slow.

 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 1.4.8, 2.1.2, 2.0.4
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21620.branch-1.patch, HBASE-21620.v1.patch, 
> HBASE-21620.v2.patch, HBASE-21620.v3.patch, HBASE-21620.v3.patch, 
> HBaseImportData.java, columnkey.txt, file.txt, test.patch
>
>
> In some cases, we are unable to get the scan results when using more than one 
> column prefix filter.
> Attached are a Java file to import the data we used and a text file 
> containing the values.
> While executing the following query (from the hbase shell as well as a Java 
> program) it waits indefinitely, and after the RPC timeout we got the error 
> below. We also noticed high CPU, high load average and very frequent young GC 
> in the region server containing this row.
> scan 'namespace:tablename', {STARTROW => 'test', ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: A full table scan and a scan with a single column prefix filter work 
> fine in this case.
> When we run the same query on hbase-1.2.5 it works fine.
> Can you please help me on this?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19766) UninitializedMessageException : Message missing required fields

2018-01-16 Thread Karthick (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-19766:
-
Description: 
"UninitializedMessageException : Message missing required fields : region, 
get", is thrown while performing Get. Due to this all the Get requests to the 
same Region Server are getting stalled.

com.google.protobuf.UninitializedMessageException: Message missing required 
fields : region, get
 at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
 at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6377)
 at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6309)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processRequest(RpcServer.java:1840)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1775)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1623)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1603)
 at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:861)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:643)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:619)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
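For context, a minimal sketch of the kind of client-side Get that produces the GetRequest being parsed in the stack trace above; the table and row key here are hypothetical, and this only illustrates the call, it does not reproduce the failure.

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SimpleGet {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("some_table"))) { // hypothetical table
      Get get = new Get(Bytes.toBytes("some_row"));                      // hypothetical row key
      // The client normally fills in the 'region' and 'get' fields of the GetRequest
      // protobuf; the exception above shows the region server rebuilding a request in
      // which those required fields were missing.
      Result result = table.get(get);
      System.out.println(result.isEmpty() ? "no cells" : result.toString());
    }
  }
}
{code}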

  was:
"UninitializedMessageException : Message missing required fields : region, 
get", is thrown while performing Get. Due to this all the Get requests to the 
same Region Server are getting stalled. 

com.google.protobuf.UninitializedMessageException: Message missing required 
fields : region, get
at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6377)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6309)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processRequest(RpcServer.java:1840)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1775)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1623)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1603)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:861)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:643)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:619)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


> UninitializedMessageException : Message missing required fields 
> 
>
> Key: HBASE-19766
> URL: https://issues.apache.org/jira/browse/HBASE-19766
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Protobufs, regionserver, rpc
>Affects Versions: 1.2.5
> Environment: Linux Ubuntu, CentOS, JDK 1.8.0 
>Reporter: Karthick
>Priority: Critical
>
> "UninitializedMessageException : Message missing required fields : region, 
> get", is thrown while performing Get. Due to this all the Get requests to the 
> same Region Server are getting stalled.
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields : region, get
>  at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6377)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6309)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processRequest(RpcServer.java:1840)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1775)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1623)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1603)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:861)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:643)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:619)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  

[jira] [Created] (HBASE-19766) UninitializedMessageException : Message missing required fields

2018-01-11 Thread Karthick (JIRA)
Karthick created HBASE-19766:


 Summary: UninitializedMessageException : Message missing required 
fields 
 Key: HBASE-19766
 URL: https://issues.apache.org/jira/browse/HBASE-19766
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs, regionserver, rpc
Affects Versions: 1.2.5
 Environment: Linux Ubuntu, CentOS, JDK 1.8.0 
Reporter: Karthick
Priority: Critical


"UninitializedMessageException : Message missing required fields : region, 
get", is thrown while performing Get. Due to this all the Get requests to the 
same Region Server are getting stalled. 

com.google.protobuf.UninitializedMessageException: Message missing required 
fields : region, get
at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6377)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6309)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processRequest(RpcServer.java:1840)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1775)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1623)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1603)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:861)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:643)
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:619)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-08-03 Thread Karthick (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112525#comment-16112525
 ] 

Karthick commented on HBASE-18142:
--

https://issues.apache.org/jira/browse/HBASE-18211 [~chia7712] [~awked06] Can 
you please look into this issue?


> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 3.0.0
>Reporter: Karthick
>Assignee: ChunHao
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-18142.master.v0.patch
>
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.
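For clarity, here is a minimal sketch contrasting the two Delete variants discussed above: per the client javadoc, addColumn removes only the exact version at the given timestamp, while addColumns removes all versions with a timestamp less than or equal to it. The table, family, qualifier, row key and timestamp below are hypothetical.

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteVersionsSketch {
  public static void main(String[] args) throws Exception {
    byte[] family = Bytes.toBytes("cf");   // hypothetical column family
    byte[] qualifier = Bytes.toBytes("q"); // hypothetical qualifier
    long ts = 1496206800000L;              // hypothetical timestamp

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("some_table"))) { // hypothetical table
      // Deletes only the cell version whose timestamp is exactly ts.
      Delete oneVersion = new Delete(Bytes.toBytes("row1")).addColumn(family, qualifier, ts);

      // Deletes every version of the column with a timestamp <= ts,
      // so the older versions disappear as well.
      Delete allVersions = new Delete(Bytes.toBytes("row1")).addColumns(family, qualifier, ts);

      table.delete(oneVersion);
      table.delete(allVersions);
    }
  }
}
{code}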



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18211) Encryption of existing data in Stripe Compaction

2017-06-13 Thread Karthick (JIRA)
Karthick created HBASE-18211:


 Summary: Encryption of existing data in Stripe Compaction
 Key: HBASE-18211
 URL: https://issues.apache.org/jira/browse/HBASE-18211
 Project: HBase
  Issue Type: Bug
  Components: Compaction, encryption
Reporter: Karthick
Priority: Critical


We have a table which holds time series data with Stripe Compaction enabled. 
After encryption was enabled for this table, the newer entries are encrypted 
as they are inserted. However, to encrypt the existing data in the table, a 
major compaction has to run. Since stripe compaction doesn't allow a major 
compaction to run, we are unable to encrypt the previous data.

See 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java
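As an illustration, a hedged sketch of how encryption is typically enabled on a column family and how a major compaction would normally be requested to rewrite the existing HFiles; the table and family names are hypothetical, and the final step is the one the report says stripe compaction prevents from running.

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class EnableEncryptionSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("some_table"); // hypothetical table
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Enable AES encryption on an existing column family (hypothetical name "cf");
      // assumes the cluster's crypto key provider is already configured.
      HTableDescriptor desc = admin.getTableDescriptor(table);
      HColumnDescriptor cf = desc.getFamily(Bytes.toBytes("cf"));
      cf.setEncryptionType("AES");
      admin.modifyColumn(table, cf);

      // New flushes now write encrypted HFiles. Existing HFiles are only rewritten by
      // compaction; a major compaction would normally take care of them, but per this
      // report it does not run for tables using stripe compaction.
      admin.majorCompact(table);
    }
  }
}
{code}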



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-05-31 Thread Karthick (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-18142:
-
Comment: was deleted

(was: 
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java

see this file to fix the issue. This method (public Delete addColumns(final 
byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
the current version of the cell. The previous versions are not deleted.)

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Reporter: Karthick
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-05-31 Thread Karthick (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-18142:
-
Component/s: API

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java

See this file to fix the issue. This method (public Delete addColumns(final 
byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
the current version of the cell. The previous versions are not deleted.

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Reporter: Karthick
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> retrieve the previous values.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-05-31 Thread Karthick (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthick updated HBASE-18142:
-
Description: 
When I tried to delete a cell using its timestamp in the HBase shell, the 
previous versions of the same cell also got deleted. But when I tried the same 
using the Java API, the previous versions were not deleted and I could 
retrieve the previous values.

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java

See this file to fix the issue. This method (public Delete addColumns(final 
byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
the current version of the cell. The previous versions are not deleted.

  was:When I tried to delete a cell using its timestamp in the HBase shell, 
the previous versions of the same cell also got deleted. But when I tried the 
same using the Java API, the previous versions were not deleted and I could 
retrieve the previous values.


> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Reporter: Karthick
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-05-31 Thread Karthick (JIRA)
Karthick created HBASE-18142:


 Summary: Deletion of a cell deletes the previous versions too
 Key: HBASE-18142
 URL: https://issues.apache.org/jira/browse/HBASE-18142
 Project: HBase
  Issue Type: Bug
Reporter: Karthick


When I tried to delete a cell using its timestamp in the HBase shell, the 
previous versions of the same cell also got deleted. But when I tried the same 
using the Java API, the previous versions were not deleted and I could 
retrieve the previous values.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)