[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782330#comment-15782330
 ] 

Edward Bortnikov commented on HBASE-17339:
--

[~davelatham], [~yangzhe1991] - thanks for pointing out the historical context. 
Indeed, the idea will not work in peer clusters with concurrent updates. 
However, there seem to be enough interesting use cases to make it worth 
pursuing. 

This optimization is complementary to in-memory flush & compaction (see 
HBASE-14918). The latter brings its own value, but in conjunction the two 
produce a very impressive reduction in read latency. [~eshcar], maybe you could 
attach some perf results? Thanks.  

> Scan-Memory-First Optimization for Get Operation
> ------------------------------------------------
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest applying an optimization that speculatively scans memory-only 
> components first, and only if the result is incomplete scans both memory and 
> disk.
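The speculative flow described above can be sketched as follows. This is an illustrative sketch under simplified assumptions, not the actual HBase implementation: the component layout, the scan() helper, and the notion of an "incomplete" result (here, simply "key not found in memory") are all invented for illustration.

```java
// Illustrative sketch of the proposed memory-first get. The store layout,
// scan() helper, and completeness check are simplified assumptions, not
// actual HBase APIs: here a result is "complete" when the key was found
// in the memory components.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MemoryFirstGet {

    // Memory components (memstore segments) hold the most recent data;
    // disk components (hfiles) hold older, flushed data.
    static List<String> memstore = new ArrayList<>(Arrays.asList("k1=v1"));
    static List<String> hfiles = new ArrayList<>(Arrays.asList("k2=v2"));

    static String get(String key) {
        // Speculative fast path: scan memory-only components first.
        String fromMemory = scan(memstore, key);
        if (fromMemory != null) {
            return fromMemory;
        }
        // Result incomplete: repeat the scan over memory and disk together.
        List<String> all = new ArrayList<>(memstore);
        all.addAll(hfiles);
        return scan(all, key);
    }

    static String scan(List<String> component, String key) {
        for (String kv : component) {
            if (kv.startsWith(key + "=")) {
                return kv.substring(key.length() + 1);
            }
        }
        return null;
    }
}
```

The fast path pays off when the working set is memory-resident; the fallback costs one extra memory scan when it is not.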



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17238) Wrong in-memory hbase:meta location causing SSH failure

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-17238:
---
Attachment: HBASE-17238.v1-branch-1.patch

> Wrong in-memory hbase:meta location causing SSH failure
> ---
>
> Key: HBASE-17238
> URL: https://issues.apache.org/jira/browse/HBASE-17238
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.1.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>Priority: Critical
> Attachments: HBASE-17238.v1-branch-1.1.patch, 
> HBASE-17238.v1-branch-1.patch, HBASE-17238.v2-branch-1.1.patch
>
>
> In HBase 1.x, if HMaster#assignMeta() assigns a non-DEFAULT_REPLICA_ID 
> hbase:meta region, it wrongly updates the DEFAULT_REPLICA_ID hbase:meta 
> region in memory.  As a result, the in-memory region state holds the wrong RS 
> information for the default replica of hbase:meta.  One problem we saw is 
> that the wrong type of SSH could be chosen, causing failures.
> {code}
> void assignMeta(MonitoredTask status, Set 
> previouslyFailedMetaRSs, int replicaId)
>   throws InterruptedException, IOException, KeeperException {
> // Work on meta region
> ...
> if (replicaId == HRegionInfo.DEFAULT_REPLICA_ID) {
>   status.setStatus("Assigning hbase:meta region");
> } else {
>   status.setStatus("Assigning hbase:meta region, replicaId " + replicaId);
> }
> // Get current meta state from zk.
> RegionStates regionStates = assignmentManager.getRegionStates();
> RegionState metaState = 
> MetaTableLocator.getMetaRegionState(getZooKeeper(), replicaId);
> HRegionInfo hri = 
> RegionReplicaUtil.getRegionInfoForReplica(HRegionInfo.FIRST_META_REGIONINFO,
> replicaId);
> ServerName currentMetaServer = metaState.getServerName();
> ...
> boolean rit = this.assignmentManager.
>   processRegionInTransitionAndBlockUntilAssigned(hri);
> boolean metaRegionLocation = metaTableLocator.verifyMetaRegionLocation(
>   this.getConnection(), this.getZooKeeper(), timeout, replicaId);
> ...
> } else {
>   // Region already assigned. We didn't assign it. Add to in-memory state.
>   regionStates.updateRegionState(
> HRegionInfo.FIRST_META_REGIONINFO, State.OPEN, currentMetaServer); 
> <<--- Wrong region to update -->>
>   this.assignmentManager.regionOnline(
> HRegionInfo.FIRST_META_REGIONINFO, currentMetaServer); <<--- Wrong 
> region to update -->>
> }
> ...
> {code}
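The annotated lines can be illustrated with a toy model of the in-memory state. This is a hypothetical sketch, not the real AssignmentManager/RegionStates API: it keys the meta location by replica id and contrasts the buggy update (which always writes the default replica) with the fixed one (which writes the replica actually assigned).

```java
// Hypothetical sketch of the bug, not the real HBase API: the in-memory
// meta location is modeled as a map from replicaId to server name.
import java.util.HashMap;
import java.util.Map;

public class ReplicaStateSketch {
    static final int DEFAULT_REPLICA_ID = 0;

    // replicaId -> server currently recorded as hosting that replica.
    static Map<Integer, String> metaLocations = new HashMap<>();

    // Buggy variant: always records the default replica, so assigning
    // replica 1 to rs2 clobbers the default replica's location (rs1).
    static void updateBuggy(int replicaId, String server) {
        metaLocations.put(DEFAULT_REPLICA_ID, server);
    }

    // Fixed variant: records the state of the replica actually assigned.
    static void updateFixed(int replicaId, String server) {
        metaLocations.put(replicaId, server);
    }
}
```

With the buggy variant, assigning replica 1 to rs2 overwrites the default replica's location, which is exactly the scenario in the steps below.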
> Here is the problem scenario:
> Step 1: the master fails over (a fresh start could hit the same issue) and 
> finds the default replica of hbase:meta on rs1.
> {noformat}
> 2016-11-26 00:06:36,590 INFO org.apache.hadoop.hbase.master.ServerManager: 
> AssignmentManager hasn't finished failover cleanup; waiting
> 2016-11-26 00:06:36,591 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 0 assigned=0, rit=false, 
> location=rs1,60200,1480103147220
> {noformat}
> Step 2: the master finds that replica 1 of hbase:meta is unassigned, so 
> HMaster#assignMeta() is called and assigns the replica 1 region to rs2; at 
> the end, however, it wrongly updates the in-memory state of the default 
> replica to rs2:
> {noformat}
> 2016-11-26 00:08:21,741 DEBUG org.apache.hadoop.hbase.master.RegionStates: 
> Onlined 1588230740 on rs2,60200,1480102993815 {ENCODED => 1588230740, NAME => 
> 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.RegionStates: 
> Offlined 1588230740 from rs1,60200,1480103147220
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 1 assigned=0, rit=false, 
> location=rs2,60200,1480102993815
> {noformat}
> Step 3: now rs1 is down and the master needs to choose which SSH to call 
> (MetaServerShutdownHandler or the normal ServerShutdownHandler).  In this 
> case MetaServerShutdownHandler should be chosen; however, due to the wrong 
> in-memory location, the normal ServerShutdownHandler was chosen:
> {noformat}
> 2016-11-26 00:08:33,995 INFO 
> org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral 
> node deleted, processing expiration [rs1,60200,1480103147220]
> 2016-11-26 00:08:33,998 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current 
> region=hbase:meta,,1.1588230740 is on server=rs2,60200,1480102993815 server 
> being checked: rs1,60200,1480103147220
> 2016-11-26 00:08:34,001 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> Added=rs1,60200,1480103147220 to dead servers, submitted shutdown handler to 
> be executed meta=false
> {noformat}
> Step 4: the wrong SSH was chosen. Because it could not access hbase:meta, 
> the SSH failed after retries.  Now the dead server 

[jira] [Updated] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong updated HBASE-17374:

Attachment: 0001-fix-for-HBASE-17374-20161228.patch

Avoid RejectedExecutionException after ZKPermissionWatcher is closed.

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374-20161228.patch, 
> 0001-fix-for-HBASE-17374.patch
>
>
> It has happened many times that I granted some permissions but they did not 
> take effect on a few regionservers, which then had to be restarted. When I 
> looked at the logs, I found the following:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
> 2016-12-08 15:00:26,328 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 

[jira] [Commented] (HBASE-17238) Wrong in-memory hbase:meta location causing SSH failure

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782286#comment-15782286
 ] 

Stephen Yuan Jiang commented on HBASE-17238:


The findBugs and javadoc issues are pre-existing and unrelated to this change.

> Wrong in-memory hbase:meta location causing SSH failure
> ---
>
> Key: HBASE-17238
> URL: https://issues.apache.org/jira/browse/HBASE-17238
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.1.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>Priority: Critical
> Attachments: HBASE-17238.v1-branch-1.1.patch, 
> HBASE-17238.v2-branch-1.1.patch
>
>

[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17379:
---
Attachment: 17379.v3.txt

Patch v3 addresses Ram's comments.

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt, 17379.v3.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> 

[jira] [Commented] (HBASE-16421) Introducing the CellChunkMap as a new additional index variant in the MemStore

2016-12-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782176#comment-15782176
 ] 

ramkrishna.s.vasudevan commented on HBASE-16421:


bq. I assume that in all experiments (except DefaultMemStore) you had MSLAB and 
Chunk Pool on, 
Yes.
bq.What is the "total time" that you are presenting? Total time of the 
experiment? Isn’t it 16 minutes?
Yes, it is 16 minutes.
bq.You can get specifically the scan latency from YCSB.
As I said, this does not complete, because every scan is a full scan from the 
randomly selected start row. I may have to reduce the record count, and maybe 
the operation count, to complete this experiment.
Also, since we don't know whether a scan is served from the memstore or from 
files, how do we ensure that the garbage generated on a memstore scan 
reduces/increases performance? I don't think that is possible unless we have 
the memstore-first scan approach. 
bq.Anyway, we can say that CellChunkMap is not decreasing the performance too 
much, and it worth it at least as off-heaping worth it. Do you agree?
Yes, I think so. But let us wait for some more feedback, or for cases that I 
may be missing. More tests can be done from the beginning of next week.

> Introducing the CellChunkMap as a new additional index variant in the MemStore
> --
>
> Key: HBASE-16421
> URL: https://issues.apache.org/jira/browse/HBASE-16421
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Anastasia Braginsky
> Attachments: CellChunkMapRevived.pdf, ChunkCell_creation.png, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Follow up for HBASE-14921. This is going to be the umbrella JIRA to include 
> all the parts of integration of the CellChunkMap to the MemStore.





[jira] [Commented] (HBASE-16421) Introducing the CellChunkMap as a new additional index variant in the MemStore

2016-12-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782169#comment-15782169
 ] 

ramkrishna.s.vasudevan commented on HBASE-16421:


Yes, this is the total GC time. My main aim is to show that CellChunkMap 
garbage is not a killer, but I can wait for some more input in order to test 
at a bigger scale.
As you said, the off-heap memstore runs with just 12G of space while the 
on-heap one runs with 30G. That 12G has to handle the garbage generated on the 
flush/compaction path anyway. If you look at the mixed GC average, it is lower 
for the off-heap memstore than in any other case, particularly the 
CellChunkMap case.
If I ran off-heap with 30G of space, I would end up with a much better GC 
profile too. 



> Introducing the CellChunkMap as a new additional index variant in the MemStore
> --
>
> Key: HBASE-16421
> URL: https://issues.apache.org/jira/browse/HBASE-16421
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Anastasia Braginsky
> Attachments: CellChunkMapRevived.pdf, ChunkCell_creation.png, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Follow up for HBASE-14921. This is going to be the umbrella JIRA to include 
> all the parts of integration of the CellChunkMap to the MemStore.





[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782160#comment-15782160
 ] 

ramkrishna.s.vasudevan commented on HBASE-17379:


Synchronization in removeLast(), addFirst() and swapSuffix() is not 
necessary; the caller is already synchronized.

In drain(), moving the size update under sync is harmless. I think it is fine 
even without it, because the updatesLock is held at that time.

In getTailSize(), getPipelineSize() and getScanners() we do need 
synchronization. 

validateSuffixList() is unused.  

Thanks for the patch, Ted.
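As a sketch (with assumed names, not the actual CompactionPipeline code), the pattern under discussion is: writers mutate the LinkedList under a lock, and the read path copies the list under that same lock, then iterates the snapshot lock-free.

```java
// Illustrative sketch of the synchronization pattern, with assumed names
// rather than the real CompactionPipeline API. Iterating a LinkedList
// while a writer mutates it throws ConcurrentModificationException, so
// the reader snapshots the list under the same lock the writers hold.
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class PipelineSketch {
    private final LinkedList<String> pipeline = new LinkedList<>();

    // Writers (addFirst/removeLast/swapSuffix in the real code) mutate
    // the list while holding the object lock.
    public synchronized void addFirst(String segment) {
        pipeline.addFirst(segment);
    }

    public synchronized String removeLast() {
        return pipeline.removeLast();
    }

    // Reader: take a snapshot under the lock, then iterate lock-free.
    public List<String> getScanners() {
        synchronized (this) {
            return new ArrayList<>(pipeline);
        }
    }
}
```

Copying keeps the critical section short; scanners then work against an immutable snapshot of the pipeline's segments.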




> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>

[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782130#comment-15782130
 ] 

Duo Zhang commented on HBASE-17320:
---

Fine. Will wait for you.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a 
> more powerful solution, but we need to be careful about atomicity there, and 
> I do not think we will expose that feature to end users. I think it is OK, 
> though, to provide an inclusive/exclusive option to end users.
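The boundary semantics can be sketched with a simple lexicographic range check. This is illustrative only, using plain strings rather than the actual HBase Scan API or byte[] row keys:

```java
// Illustrative sketch of inclusive/exclusive start/stop row semantics,
// using plain lexicographic string comparison rather than the actual
// HBase Scan API or byte[] row keys.
public class RangeSketch {
    static boolean inRange(String row,
                           String startRow, boolean startInclusive,
                           String stopRow, boolean stopInclusive) {
        int cs = row.compareTo(startRow);
        int ce = row.compareTo(stopRow);
        // Inclusive bounds admit the boundary row itself; exclusive
        // bounds require strictly greater (start) or smaller (stop).
        boolean afterStart = startInclusive ? cs >= 0 : cs > 0;
        boolean beforeStop = stopInclusive ? ce <= 0 : ce < 0;
        return afterStart && beforeStop;
    }
}
```

An exclusive start is what a reverse scan needs to resume "strictly before" a row without re-reading it.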





[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782081#comment-15782081
 ] 

Ted Yu commented on HBASE-17320:


I haven't gone through the latest patch yet.

I don't have other comments at the moment.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a 
> more powerful solution, but we need to be careful about atomicity there, and 
> I do not think we will expose that feature to end users. I think it is OK, 
> though, to provide an inclusive/exclusive option to end users.





[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15782074#comment-15782074
 ] 

ramkrishna.s.vasudevan commented on HBASE-17379:


[~anastas], [~eshcar] -FYI.

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> 

[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15782070#comment-15782070
 ] 

Duo Zhang commented on HBASE-17320:
---

Ping [~tedyu]. Any other concerns? I can add a release note before commit.

Thanks.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a 
> more powerful solution, but we need to be careful about its atomicity, and I 
> do not think we will expose that feature to end users. But I think it is OK 
> to provide an inclusive/exclusive option to end users.





[jira] [Commented] (HBASE-17291) Remove ImmutableSegment#getKeyValueScanner

2016-12-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15782050#comment-15782050
 ] 

ramkrishna.s.vasudevan commented on HBASE-17291:


Oh yes, it definitely counts. For any code in this area, your feedback is most 
valuable. If you give a -1 then it is a 'NO GO' even if another committer says 
+1. :)

> Remove ImmutableSegment#getKeyValueScanner
> --
>
> Key: HBASE-17291
> URL: https://issues.apache.org/jira/browse/HBASE-17291
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17291.patch, HBASE-17291_1.patch, 
> HBASE-17291_2.patch
>
>
> This is based on a discussion over [~anastas]'s patch. The MemstoreSnapshot 
> uses a KeyValueScanner, which seems redundant considering we already have a 
> SegmentScanner. The idea is that the snapshot scanner should be a simple 
> iterator-type scanner, but it lacks the capability to do reference counting 
> on the segment that is now used in the snapshot. With the snapshot holding 
> multiple segments in the latest implementation, it is better to hold on to 
> each segment by doing ref counting. 





[jira] [Commented] (HBASE-17068) Procedure v2 - inherit region locks

2016-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781941#comment-15781941
 ] 

Hudson commented on HBASE-17068:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2211/])
HBASE-17068 Procedure v2 - inherit region locks (Matteo Bertozzi) (stack: rev 
306ef83c9cde9730ae2268db3814d59b936de4c1)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureScheduler.java


> Procedure v2 - inherit region locks 
> 
>
> Key: HBASE-17068
> URL: https://issues.apache.org/jira/browse/HBASE-17068
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17068-v0.patch, HBASE-17068-v1.patch, 
> HBASE-17068-v1.patch
>
>
> Add support for inherited region locks, 
> e.g. Split will have Assign/Unassign as children, which will take the lock 
> on the same region that Split is running on.





[jira] [Commented] (HBASE-17238) Wrong in-memory hbase:meta location causing SSH failure

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781936#comment-15781936
 ] 

Hadoop QA commented on HBASE-17238:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 45s 
{color} | {color:red} hbase-server in branch-1.1 has 81 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 42s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 34s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8012383 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844848/HBASE-17238.v2-branch-1.1.patch
 |
| JIRA Issue | HBASE-17238 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6f06d9bee3e0 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1.1 / 1999c15 |
| Default Java | 1.7.0_80 |
| Multi-JDK 

[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781939#comment-15781939
 ] 

Hudson commented on HBASE-16524:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2211/])
HBASE-16524 Procedure v2 - Compute WALs cleanup on wal modification and (stack: 
rev 319ecd867a2903c4ce03c38f6ffec62ada1a6049)
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/ProcedureStoreTracker.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/TestProcedureStoreTracker.java


> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix the performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have. And since we want to delete the WALs in 
> order, we can keep track of what is "holding" each WAL, and take the hit of 
> scanning all the trackers only when we remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"
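The holding rule described above can be sketched in a few lines of plain Java (the class and method names here are hypothetical, not the actual WALProcedureStore code): track each procedure's latest WAL, and only remove leading WALs that no procedure's latest update still lives in.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the cleanup idea from the issue description: WAL-1 [1,2],
// WAL-2 [1] means proc 2 is still "holding" WAL-1; once WAL-3 [2] appears,
// WAL-1 can be removed.
public class WalCleanupDemo {
    private final Map<Integer, Integer> latestWal = new HashMap<>(); // procId -> wal id
    private final List<Integer> wals = new ArrayList<>();            // wal ids, oldest first
    private int nextWal = 1;

    // Roll a new WAL recording updates for the given procedure ids.
    public int appendWal(int... procIds) {
        int id = nextWal++;
        wals.add(id);
        for (int p : procIds) {
            latestWal.put(p, id); // this WAL now holds each proc's newest state
        }
        return id;
    }

    // Remove leading WALs that no procedure's latest update still lives in.
    // The scan over the tracker map happens only here, not on every sync.
    public List<Integer> cleanup() {
        List<Integer> removed = new ArrayList<>();
        while (!wals.isEmpty()) {
            int oldest = wals.get(0);
            if (latestWal.containsValue(oldest)) {
                break; // some procedure is still holding the oldest WAL
            }
            removed.add(wals.remove(0));
        }
        return removed;
    }
}
```

Replaying the example from the description: after WAL-1 [1,2] and WAL-2 [1], cleanup removes nothing (proc 2 holds WAL-1); after WAL-3 [2], cleanup removes WAL-1 and stops at WAL-2, which proc 1 still holds.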





[jira] [Commented] (HBASE-17371) Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter

2016-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781938#comment-15781938
 ] 

Hudson commented on HBASE-17371:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2211/])
HBASE-17371 Enhance 'HBaseContextSuite @ distributedScan to test HBase (tedyu: 
rev ccb8d671d590f4ea347fb85049f84620564ce1cb)
* (edit) 
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/HBaseContextSuite.scala


> Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter
> --
>
> Key: HBASE-17371
> URL: https://issues.apache.org/jira/browse/HBASE-17371
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: 17371.v1.txt
>
>
> Currently 'HBaseContextSuite @ distributedScan to test HBase client' uses a 
> Scan which doesn't utilize any Filter.
> This issue adds a FirstKeyOnlyFilter to the scan object to verify the case 
> where the number of cells returned is the same as the number of rows.





[jira] [Commented] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781940#comment-15781940
 ] 

Hudson commented on HBASE-17090:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2211/])
HBASE-17090 Procedure v2 - fast wake if nothing else is running (Matteo (stack: 
rev da97569eae662ad90fd3afd98ef148c94eee4ac1)
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPerformanceEvaluation.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/NoopProcedureStore.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/ProcedureStore.java


> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait N msec to see if we can batch more procedures, but the pattern that 
> we have allows us to wait only for what we know is running, avoiding a wait 
> for something that will never arrive. 





[jira] [Commented] (HBASE-16594) ROW_INDEX_V2 DBE

2016-12-27 Thread Chang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781916#comment-15781916
 ] 

Chang chen commented on HBASE-16594:


Hi Guys 

How does the ROW_INDEX_VX encoder compare to the prefix tree? 

Thanks
Chang

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, the ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version and has no storage optimization; 
> ROW_INDEX_V2 adds storage optimization: it stores every row key only once, 
> and stores the column family only once per HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 
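The point of the trailing row-offset index in both formats is that a seek can binary-search to a row instead of scanning the block linearly. A minimal sketch of that lookup (plain Java over an in-memory array, not the actual DataBlockEncoder; `seekRow` is a hypothetical name):

```java
import java.util.Arrays;

// Sketch of the ROW_INDEX lookup: given sorted row keys (recoverable from the
// block via the stored per-row offsets), binary-search to the first row >= a
// target key, the way a scanner seek positions itself.
public class RowIndexDemo {

    // Returns the index of the first row >= target (rowKeys.length if none).
    public static int seekRow(String[] rowKeys, String target) {
        int idx = Arrays.binarySearch(rowKeys, target);
        // On a miss, binarySearch returns -(insertionPoint) - 1.
        return idx >= 0 ? idx : -(idx + 1);
    }
}
```

In the real encoding the binary search compares keys read at `row0's offset`, `row1's offset`, and so on inside the block; the array here stands in for that.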





[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781891#comment-15781891
 ] 

Ted Yu commented on HBASE-17081:


TestHRegionWithInMemoryFlush still fails occasionally.
This is tracked by HBASE-17379.

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, 
> HBASE-17081-V02.patch, HBASE-17081-V03.patch, HBASE-17081-V04.patch, 
> HBASE-17081-V05.patch, HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBASE-17081-V07.patch, HBASE-17081-V10.patch, 
> HBaseMeetupDecember2016-V02.pptx, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Commented] (HBASE-17375) PrefixTreeArrayReversibleScanner#previousRowInternal doesn't work correctly

2016-12-27 Thread Chang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781871#comment-15781871
 ] 

Chang chen commented on HBASE-17375:


Thanks for the info.

Unfortunately it's too new, so I can't use it in prod.

> PrefixTreeArrayReversibleScanner#previousRowInternal doesn't work correctly
> ---
>
> Key: HBASE-17375
> URL: https://issues.apache.org/jira/browse/HBASE-17375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 0.98.24
>Reporter: Chang chen
>Assignee: Chang chen
> Fix For: 2.0.0
>
> Attachments: HBASE_17375_master_v1.patch, row trie example.PNG
>
>
> Recently, we found that our HBase compaction thread never ends. Assume we 
> have the following cells:
> {quote}
>  1
>  1
>  1
>  1
>  1
>  1
>  1
>  1
> {quote}
> If we encode the above data into a prefix tree block, it looks like:
> !row trie example.PNG!
> Assume the current row is {color:red}Abc{color} (i.e. the current row node 
> is 4); then the previous row should be *Aa* (i.e. 2). However, 
> previousRowInternal returns {color:red}A{color} (i.e. 1).
> After investigation, I believe this is a bug in 
> PrefixTreeArrayReversibleScanner#previousRowInternal.
> {code}
>   private boolean previousRowInternal() {
> //...
> while (!beforeFirst) {
>   //
>   // what if currentRowNode is nub?
>   if (currentRowNode.hasOccurrences()) {// escape clause
> currentRowNode.resetFanIndex();
> return true;// found some values
>   }
> }
> {code}
> currentRowNode.hasOccurrences() only tests whether the node has a cell or 
> not. But in the case where currentRowNode.isNub() is true, 
> previousRowInternal should follow the previous fan instead of returning.





[jira] [Updated] (HBASE-17018) Spooling BufferedMutator

2016-12-27 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated HBASE-17018:
-
Attachment: HBASE-17018.master.005.patch

[~enis] are you suggesting we don't do a double-write, but write wals to HDFS 
only, and then have a separate set of "readers" replay the WALs from HDFS to 
HBase?

In that case we'd be writing tons of little WAL files to the source cluster's 
HDFS (not just the one backing HBase) in all cases, not just when HBase is 
unhealthy. As Sangjin pointed out, that would introduce a delay before the 
writes are available, or else we would have to keep track of high- and 
low-watermarks, rotate WALs frequently, or do something else. I'm wondering if 
we are just shifting the complexity around.
The nice thing with the current approach is that under normal circumstances, 
the data written to HBase is ready in near-real time (only some writes are 
buffered, but we're talking about flushing once a minute).
HBase writing WALs to its own HDFS will be on a separately tuned cluster.

In any case, let me discuss that approach with other devs working on timeline 
service and see what they think.

In the meantime I'm attaching a new patch (version 5). This incorporates 
[~sjlee0]'s suggestion to ensure that accounting for flushCount and enqueueing 
are done in one synchronized block, so that we avoid out-of-order items in the 
outbound queue. This is now moved to the coordinator. I've also added a simple 
exception handler to the coordinator and a unit test for it.
I'm not sure how much fancier we need to get with the exception handler. 
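The ordering concern can be illustrated with a small sketch (class and method names here are hypothetical, not taken from the patch): if the counter bump and the enqueue are not under one lock, two writers can enqueue batches in an order that disagrees with their counter values.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the synchronized-block idea: bumping the flush counter and
// enqueueing the batch happen as one atomic step inside the coordinator,
// so the outbound queue order always matches the counter order.
public class FlushCoordinator {
    private final BlockingQueue<long[]> outbound = new LinkedBlockingQueue<>();
    private long flushCount = 0;

    // Counter update and enqueue are one atomic step under the object lock.
    public synchronized long flush(long payload) {
        long seq = ++flushCount;
        outbound.add(new long[] { seq, payload });
        return seq;
    }

    // Consumers drain batches; entries come out in sequence order.
    public long[] poll() {
        return outbound.poll();
    }
}
```

Splitting `++flushCount` and `outbound.add(...)` into two separately locked steps would reopen the race: a thread could be preempted between them and a later flush could enqueue first.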

> Spooling BufferedMutator
> 
>
> Key: HBASE-17018
> URL: https://issues.apache.org/jira/browse/HBASE-17018
> Project: HBase
>  Issue Type: New Feature
>Reporter: Joep Rottinghuis
> Attachments: HBASE-17018.master.001.patch, 
> HBASE-17018.master.002.patch, HBASE-17018.master.003.patch, 
> HBASE-17018.master.004.patch, HBASE-17018.master.005.patch, 
> HBASE-17018SpoolingBufferedMutatorDesign-v1.pdf, YARN-4061 HBase requirements 
> for fault tolerant writer.pdf
>
>
> For Yarn Timeline Service v2 we use HBase as a backing store.
> A big concern we would like to address is what to do if HBase is 
> (temporarily) down, for example in case of an HBase upgrade.
> Most of the high-volume writes will be on a best-effort basis, but 
> occasionally we do a flush. Mainly during application lifecycle events, 
> clients will call a flush on the timeline service API. In order to handle the 
> volume of writes we use a BufferedMutator. When flush gets called on our API, 
> we in turn call flush on the BufferedMutator.
> We would like our interface to HBase to be able to spool the mutations to a 
> filesystem in case of HBase errors. If we use the Hadoop filesystem 
> interface, this can then be HDFS, GCS, S3, or any other distributed storage. 
> The mutations can then later be re-played, for example through a MapReduce 
> job.
> https://reviews.apache.org/r/54882/
> For design of SpoolingBufferedMutatorImpl see 
> https://docs.google.com/document/d/1GTSk1Hd887gGJduUr8ZJ2m-VKrIXDUv9K3dr4u2YGls/edit?usp=sharing





[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17379:
---
Description: 
>From 
>https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
> :
{code}
java.io.IOException: java.util.ConcurrentModificationException
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.ConcurrentModificationException: null
at 
java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
at java.util.LinkedList$ListItr.next(LinkedList.java:888)
at 
org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
at 
org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
at 
org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
{code}
The cause is in CompactionPipeline#getScanners(), where there is no 
synchronization around iterating the pipeline.
The code causing ConcurrentModificationException:
{code}
for (Segment segment : this.pipeline) {
{code}
was introduced by HBASE-17081
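The failure mode and the usual fix can be shown in isolation (a sketch only, with hypothetical names, not the CompactionPipeline patch): iterating a LinkedList while another thread mutates it throws ConcurrentModificationException; taking a snapshot under the same lock that guards mutation avoids it.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Minimal model of the race: getScanners() iterates the pipeline while an
// in-memory compaction swaps segments in and out of it.
public class PipelineDemo {
    private final LinkedList<String> pipeline = new LinkedList<>(List.of("seg1", "seg2"));

    // Mutator (e.g. in-memory compaction) runs under the object lock.
    public synchronized void swap(String newSegment) {
        pipeline.clear();
        pipeline.add(newSegment);
    }

    // Unsafe: iterates the live list with no lock; a concurrent swap()
    // mid-iteration makes the fail-fast iterator throw
    // ConcurrentModificationException, as in the stack trace above.
    public List<String> getScannersUnsafe() {
        List<String> scanners = new ArrayList<>();
        for (String seg : pipeline) {
            scanners.add(seg);
        }
        return scanners;
    }

    // Safe: snapshot the list under the lock, then iterate the snapshot,
    // keeping the critical section short.
    public List<String> getScannersSafe() {
        List<String> snapshot;
        synchronized (this) {
            snapshot = new ArrayList<>(pipeline);
        }
        return snapshot;
    }
}
```

The snapshot approach also keeps reader and mutator from holding the lock across the (potentially slow) scanner construction.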

  was:
>From 

[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781806#comment-15781806
 ] 

Ted Yu commented on HBASE-17379:


{code}
Flaked tests: 
org.apache.hadoop.hbase.master.balancer.TestDefaultLoadBalancer.testBalanceClusterOverall(org.apache.hadoop.hbase.master.balancer.TestDefaultLoadBalancer)
  Run 1: TestDefaultLoadBalancer.testBalanceClusterOverall:152 null
  Run 2: TestDefaultLoadBalancer.testBalanceClusterOverall:152 null
  Run 3: PASS

org.apache.hadoop.hbase.regionserver.wal.TestAsyncLogRolling.testLogRollOnDatanodeDeath(org.apache.hadoop.hbase.regionserver.wal.TestAsyncLogRolling)
  Run 1: TestAsyncLogRolling.testLogRollOnDatanodeDeath:65 expected:<1> but 
was:<0>
  Run 2: PASS

org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString.testVisibilityReplication(org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString)
  Run 1: 
TestVisibilityLabelReplicationWithExpAsString>TestVisibilityLabelsReplication.testVisibilityReplication:265
 null
  Run 2: PASS
{code}
The failures from the above flaky tests were not related to the patch.

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   

[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781795#comment-15781795
 ] 

Hadoop QA commented on HBASE-17379:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 1s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 9s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844836/17379.v2.txt |
| JIRA Issue | HBASE-17379 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a2dea51305fa 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ccb8d67 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5065/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5065/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5065/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |



[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781780#comment-15781780
 ] 

Duo Zhang commented on HBASE-17320:
---

I've reverted the modification of the old scan related tests in the latest 
patch.

The 'incompatible change' is in StoreFileManager, so I need to modify 
TestStripeStoreFileManager; but StoreFileManager is marked as private.

{code:title=StoreFileManager.java}
@InterfaceAudience.Private
public interface StoreFileManager {
{code}

Thanks.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a 
> more powerful solution, but we need to be careful about the atomicity, and I 
> do not think we will provide that feature to end users. But I think it is OK 
> to provide an inclusive/exclusive option to the end user.
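In the committed HBase 2.0 API this surfaced as Scan.withStartRow(byte[], boolean) and Scan.withStopRow(byte[], boolean). The predicate below is only a toy stand-in for those semantics over raw byte[] row keys; it uses Arrays.compareUnsigned (Java 9+) to mirror HBase's unsigned-byte row ordering:

```java
import java.util.Arrays;

// Toy illustration of inclusive/exclusive row-range checks. Not HBase code;
// the real feature lives on the Scan builder methods mentioned above.
public class RangeDemo {
    static boolean inRange(byte[] row,
                           byte[] startRow, boolean startInclusive,
                           byte[] stopRow, boolean stopInclusive) {
        // Unsigned lexicographic comparison, like HBase's row ordering.
        int cmpStart = Arrays.compareUnsigned(row, startRow);
        int cmpStop = Arrays.compareUnsigned(row, stopRow);
        boolean afterStart = startInclusive ? cmpStart >= 0 : cmpStart > 0;
        boolean beforeStop = stopInclusive ? cmpStop <= 0 : cmpStop < 0;
        return afterStart && beforeStop;
    }

    public static void main(String[] args) {
        byte[] r1 = {1}, r3 = {3};
        System.out.println(inRange(r1, r1, true, r3, false));  // true: start inclusive
        System.out.println(inRange(r1, r1, false, r3, false)); // false: start exclusive
        System.out.println(inRange(r3, r1, true, r3, true));   // true: stop inclusive
    }
}
```

An exclusive stop row is what a forward scan already had; the new options matter most for reverse scans, where "start just after this row" previously required constructing an artificial key.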



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781759#comment-15781759
 ] 

Ted Yu commented on HBASE-17320:


{code}
@InterfaceAudience.Public
@InterfaceStability.Stable
public class Scan extends Query {
{code}
Yet new methods are added to Scan.
Shouldn't this be properly noted?

If the changes are compatible, why do certain tests need to be modified?

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a 
> more powerful solution, but we need to be careful about the atomicity, and I 
> do not think we will provide that feature to end users. But I think it is OK 
> to provide an inclusive/exclusive option to the end user.





[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781752#comment-15781752
 ] 

Ted Yu commented on HBASE-17379:


Addition of extra synchronization follows Anoop's comment.

Protection is needed in the following case:
{code}
  return new MemstoreSize(getSegmentsKeySize(pipeline), 
getSegmentsHeapOverhead(pipeline));
{code}
where pipeline is accessed twice in the same method.

Is there a particular method where you think synchronization is not needed?
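The double-read hazard described above can be shown with a stand-in class: if the pipeline reference is read twice, a concurrent swap between the two reads lets the key-size and heap-overhead computations observe different versions of the list. Reading the reference once into a local (or holding a lock across both reads) keeps them consistent. Names below are illustrative, not HBase's:

```java
import java.util.LinkedList;

// Stand-in for computing two aggregates over a swappable pipeline reference.
public class DoubleReadDemo {
    private volatile LinkedList<Integer> pipeline = new LinkedList<>();

    void swap(LinkedList<Integer> next) { pipeline = next; }

    // Reading `pipeline` twice could see two different lists if swap() runs
    // in between; a single read makes size and sum refer to the same list.
    int[] sizeAndSum() {
        LinkedList<Integer> snapshot = pipeline;  // single read of the field
        int sum = 0;
        for (int v : snapshot) sum += v;
        return new int[] { snapshot.size(), sum };
    }

    public static void main(String[] args) {
        DoubleReadDemo d = new DoubleReadDemo();
        LinkedList<Integer> l = new LinkedList<>();
        l.add(2); l.add(3);
        d.swap(l);
        int[] r = d.sizeAndSum();
        System.out.println(r[0] + " " + r[1]); // prints "2 5"
    }
}
```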

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:210)
>   at 
> 

[jira] [Updated] (HBASE-17238) Wrong in-memory hbase:meta location causing SSH failure

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-17238:
---
Attachment: HBASE-17238.v2-branch-1.1.patch

> Wrong in-memory hbase:meta location causing SSH failure
> ---
>
> Key: HBASE-17238
> URL: https://issues.apache.org/jira/browse/HBASE-17238
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.1.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>Priority: Critical
> Attachments: HBASE-17238.v1-branch-1.1.patch, 
> HBASE-17238.v2-branch-1.1.patch
>
>
> In HBase 1.x, if HMaster#assignMeta() assigns a non-DEFAULT_REPLICA_ID 
> hbase:meta region, it would wrongly update the DEFAULT_REPLICA_ID hbase:meta 
> region in-memory.  This caused the in-memory region state to have wrong RS 
> information for the default replica hbase:meta region.  One of the problems 
> we saw is that the wrong type of SSH could be chosen, causing failures.
> {code}
> void assignMeta(MonitoredTask status, Set<ServerName> 
> previouslyFailedMetaRSs, int replicaId)
>   throws InterruptedException, IOException, KeeperException {
> // Work on meta region
> ...
> if (replicaId == HRegionInfo.DEFAULT_REPLICA_ID) {
>   status.setStatus("Assigning hbase:meta region");
> } else {
>   status.setStatus("Assigning hbase:meta region, replicaId " + replicaId);
> }
> // Get current meta state from zk.
> RegionStates regionStates = assignmentManager.getRegionStates();
> RegionState metaState = 
> MetaTableLocator.getMetaRegionState(getZooKeeper(), replicaId);
> HRegionInfo hri = 
> RegionReplicaUtil.getRegionInfoForReplica(HRegionInfo.FIRST_META_REGIONINFO,
> replicaId);
> ServerName currentMetaServer = metaState.getServerName();
> ...
> boolean rit = this.assignmentManager.
>   processRegionInTransitionAndBlockUntilAssigned(hri);
> boolean metaRegionLocation = metaTableLocator.verifyMetaRegionLocation(
>   this.getConnection(), this.getZooKeeper(), timeout, replicaId);
> ...
> } else {
>   // Region already assigned. We didn't assign it. Add to in-memory state.
>   regionStates.updateRegionState(
> HRegionInfo.FIRST_META_REGIONINFO, State.OPEN, currentMetaServer); 
> <<--- Wrong region to update -->>
>   this.assignmentManager.regionOnline(
> HRegionInfo.FIRST_META_REGIONINFO, currentMetaServer); <<--- Wrong 
> region to update -->>
> }
> ...
> {code}
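The quoted snippet hard-codes HRegionInfo.FIRST_META_REGIONINFO in the "already assigned" branch, so assigning replica 1 clobbers the in-memory location of replica 0. A toy model (plain Java, not HBase code) makes the bug and the obvious fix concrete: key the in-memory update by the replica actually being assigned, i.e. the `hri` computed earlier in the method.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the HBASE-17238 bug: region-state updates keyed by replicaId.
public class MetaStateDemo {
    static final int DEFAULT_REPLICA_ID = 0;
    private final Map<Integer, String> metaLocation = new HashMap<>();

    // Buggy shape: always records the location under the default replica,
    // regardless of which replica was assigned.
    void assignMetaBuggy(int replicaId, String server) {
        metaLocation.put(DEFAULT_REPLICA_ID, server);
    }

    // Fixed shape: records the location under the replica being assigned.
    void assignMetaFixed(int replicaId, String server) {
        metaLocation.put(replicaId, server);
    }

    String locationOf(int replicaId) { return metaLocation.get(replicaId); }

    public static void main(String[] args) {
        MetaStateDemo buggy = new MetaStateDemo();
        buggy.assignMetaBuggy(0, "rs1");
        buggy.assignMetaBuggy(1, "rs2");      // clobbers the default replica
        System.out.println(buggy.locationOf(0)); // prints rs2 -- the bug

        MetaStateDemo fixed = new MetaStateDemo();
        fixed.assignMetaFixed(0, "rs1");
        fixed.assignMetaFixed(1, "rs2");
        System.out.println(fixed.locationOf(0)); // prints rs1 -- correct
    }
}
```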
> Here is the problem scenario:
> Step 1: master failovers (or starts could have the same issue) and find 
> default replica of hbase:meta is in rs1.
> {noformat}
> 2016-11-26 00:06:36,590 INFO org.apache.hadoop.hbase.master.ServerManager: 
> AssignmentManager hasn't finished failover cleanup; waiting
> 2016-11-26 00:06:36,591 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 0 assigned=0, rit=false, 
> location=rs1,60200,1480103147220
> {noformat}
> Step 2: master finds that replica 1 of hbase:meta is unassigned, therefore, 
> HMaster#assignMeta() is called and assign the replica 1 region to rs2, but at 
> the end, it wrongly updates the in-memory state of default replica to rs2
> {noformat}
> 2016-11-26 00:08:21,741 DEBUG org.apache.hadoop.hbase.master.RegionStates: 
> Onlined 1588230740 on rs2,60200,1480102993815 {ENCODED => 1588230740, NAME => 
> 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.RegionStates: 
> Offlined 1588230740 from rs1,60200,1480103147220
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 1 assigned=0, rit=false, 
> location=rs2,60200,1480102993815
> {noformat}
> Step 3: now rs1 is down, master needs to choose which SSH to call 
> (MetaServerShutdownHandler or normal ServerShutdownHandler) - in this case, 
> MetaServerShutdownHandler should be chosen; however, due to wrong in-memory 
> location, normal ServerShutdownHandler was chosen:
> {noformat}
> 2016-11-26 00:08:33,995 INFO 
> org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral 
> node deleted, processing expiration [rs1,60200,1480103147220]
> 2016-11-26 00:08:33,998 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current 
> region=hbase:meta,,1.1588230740 is on server=rs2,60200,1480102993815 server 
> being checked: rs1,60200,1480103147220
> 2016-11-26 00:08:34,001 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> Added=rs1,60200,1480103147220 to dead servers, submitted shutdown handler to 
> be executed meta=false
> {noformat}
> Step 4: Wrong SSH was chosen. Due to accessing hbase:meta failure, the SSH 
> failed after retries.  Now the dead server was not processed; regions in 

[jira] [Commented] (HBASE-17291) Remove ImmutableSegment#getKeyValueScanner

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781682#comment-15781682
 ] 

stack commented on HBASE-17291:
---

[~anastas] Of course it counts.

> Remove ImmutableSegment#getKeyValueScanner
> --
>
> Key: HBASE-17291
> URL: https://issues.apache.org/jira/browse/HBASE-17291
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17291.patch, HBASE-17291_1.patch, 
> HBASE-17291_2.patch
>
>
> This is based on a discussion over [~anastas]'s patch. The MemstoreSnapshot 
> uses a KeyValueScanner, which actually seems redundant considering we already 
> have a SegmentScanner. The idea is that the snapshot scanner should be a 
> simple iterator-type scanner, but it lacks the capability to do reference 
> counting on the segment that is now used in the snapshot. With the snapshot 
> having multiple segments in the latest impl, it is better we hold on to the 
> segment by doing ref counting.





[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781679#comment-15781679
 ] 

stack commented on HBASE-17379:
---

Do we now have an added synchronize on every get? Did we have this previously?

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> 

[jira] [Updated] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2016-12-27 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-17381:
--
Component/s: Replication

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the RegionServer is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}
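The resilience pattern proposed above can be sketched in a few lines (illustrative names only, not HBase's actual classes): keep the worker loop alive across exceptions it can recover from, and escalate errors it cannot handle to an abort hook on the server, rather than letting the thread die with nothing but an uncaught-exception log while the queue backs up.

```java
// Hedged sketch of a worker loop that survives recoverable exceptions and
// aborts the host process on unrecoverable errors (e.g. OutOfMemoryError).
public class ResilientWorker implements Runnable {
    // Minimal abort hook, standing in for HBase's server abort mechanism.
    interface Abortable { void abort(String why, Throwable cause); }

    private final Abortable server;
    volatile boolean running = true;

    ResilientWorker(Abortable server) { this.server = server; }

    @Override
    public void run() {
        while (running) {
            try {
                doOneBatch();
            } catch (Exception e) {
                // Recoverable: log and retry so the queue keeps draining.
                System.err.println("retrying after: " + e);
            } catch (Error e) {
                // Unrecoverable: surface loudly instead of dying silently.
                server.abort("replication worker failed", e);
                return;
            }
        }
    }

    // Placeholder for reading and shipping one batch of WAL entries.
    void doOneBatch() throws Exception { running = false; }

    public static void main(String[] args) {
        ResilientWorker worker = new ResilientWorker(
            (why, t) -> System.err.println("ABORT: " + why + " (" + t + ")"));
        worker.run();  // drains the placeholder batch, then exits cleanly
    }
}
```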





[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781623#comment-15781623
 ] 

Duo Zhang commented on HBASE-17320:
---

I do not see any incompatible changes in the latest patch... Which part do you 
think is an incompatible change, [~tedyu]? Thanks.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing reverse scans. HBASE-17168 may be a more 
> powerful solution, but we need to be careful about the atomicity, and I do not 
> think we will provide that feature to end users. But I think it is OK to 
> provide an inclusive/exclusive option to end users.
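For readers following along, the boundary semantics under discussion can be illustrated with a small plain-Java sketch. This models inclusive/exclusive row-range checks only; it is not the HBase Scan API itself, and the class and method names here are invented for illustration:

```java
import java.util.Arrays;

public class RowRange {
    // Returns true if `row` falls between `start` and `stop` under the given
    // inclusive/exclusive flags. Bytes are compared unsigned, lexicographically,
    // which matches how HBase orders row keys. Illustration only.
    static boolean contains(byte[] row,
                            byte[] start, boolean startInclusive,
                            byte[] stop, boolean stopInclusive) {
        int fromStart = Arrays.compareUnsigned(row, start);
        if (startInclusive ? fromStart < 0 : fromStart <= 0) {
            return false; // before the start row (or on it, if exclusive)
        }
        int toStop = Arrays.compareUnsigned(row, stop);
        if (stopInclusive ? toStop > 0 : toStop >= 0) {
            return false; // past the stop row (or on it, if exclusive)
        }
        return true;
    }
}
```

The interesting cases are the boundary rows themselves: flipping a flag only changes whether the exact start/stop key is admitted, which is why the option matters most for reverse scans that want to start *after* a known key.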



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17090) Procedure v2 - fast wake if nothing else is running

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17090:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.

> Procedure v2 - fast wake if nothing else is running
> ---
>
> Key: HBASE-17090
> URL: https://issues.apache.org/jira/browse/HBASE-17090
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17090-v0.patch
>
>
> We wait N msec to see if we can batch more procedures, but the pattern that we 
> have allows us to wait only for what we know is running, and avoid waiting for 
> something that will never arrive. 
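The "fast wake" idea — don't sit out a full timed wait once we already know nothing else is coming — boils down to signalling a condition the moment the in-flight count drops to zero. A minimal sketch under that assumption (hypothetical names; not the actual ProcedureExecutor code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class SyncWaiter {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition wake = lock.newCondition();
    private int running = 0;  // procedures we know are still in flight

    public void procStarted() {
        lock.lock();
        try { running++; } finally { lock.unlock(); }
    }

    public void procFinished() {
        lock.lock();
        try {
            if (--running == 0) {
                wake.signalAll();  // fast wake: nothing left to batch with
            }
        } finally { lock.unlock(); }
    }

    // Wait up to maxWaitMs for more work to batch, but return immediately
    // once nothing is running; returns true if the in-flight count is zero.
    public boolean awaitMoreWork(long maxWaitMs) throws InterruptedException {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(maxWaitMs);
            while (running > 0 && nanos > 0) {
                nanos = wake.awaitNanos(nanos);
            }
            return running == 0;
        } finally { lock.unlock(); }
    }
}
```

The design point is that the waiter only pays the timeout when something is genuinely still running; an idle executor is woken (or never sleeps) instead of burning the full batching window.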



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2016-12-27 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-17381:
-

 Summary: ReplicationSourceWorkerThread can die due to unhandled 
exceptions
 Key: HBASE-17381
 URL: https://issues.apache.org/jira/browse/HBASE-17381
 Project: HBase
  Issue Type: Bug
Reporter: Gary Helmling


If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
run() method (for example failure to allocate direct memory for the DFS 
client), the exception will be logged by the UncaughtExceptionHandler, but the 
thread will also die and the replication queue will back up indefinitely until 
the RegionServer is restarted.

We should make sure the worker thread is resilient to all exceptions that it 
can actually handle. For those that it really can't, it seems better to abort 
the RegionServer rather than just allow replication to stop with minimal signal.

Here is a sample exception:

{noformat}
ERROR regionserver.ReplicationSource: Unexpected exception in 
ReplicationSourceWorkerThread, 
currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at 
org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:96)
at 
org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:113)
at 
org.apache.hadoop.crypto.CryptoOutputStream.<init>(CryptoOutputStream.java:108)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
at 
org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
{noformat}
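One way to get the resilience described above — retry on recoverable exceptions, escalate hard failures instead of letting the thread die silently — is sketched below. This is a plain-Java toy with invented names, not the actual ReplicationSource implementation; `shipOneBatch()` merely simulates "read and ship one WAL batch":

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ResilientWorker implements Runnable {
    private final AtomicBoolean abortRequested = new AtomicBoolean(false);
    private int attempts = 0;

    // Simulated unit of work: fails twice with a recoverable IO error,
    // then succeeds. Stands in for reading and shipping one WAL batch.
    void shipOneBatch() throws IOException {
        if (++attempts < 3) {
            throw new IOException("transient read failure");
        }
    }

    public boolean aborted()  { return abortRequested.get(); }
    public int attempts()     { return attempts; }

    @Override
    public void run() {
        while (!abortRequested.get()) {
            try {
                shipOneBatch();
                return;  // batch shipped; done
            } catch (IOException e) {
                // Recoverable: log and retry instead of letting the thread
                // die and the replication queue silently back up.
            } catch (Throwable t) {
                // Unrecoverable (e.g. OutOfMemoryError): request an abort so
                // the failure is a loud signal rather than a stalled queue.
                abortRequested.set(true);
            }
        }
    }
}
```

The split between the two catch clauses is the whole point: transient IO errors stay inside the loop, while anything the worker genuinely cannot handle escalates to an abort that an operator will notice.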



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17068) Procedure v2 - inherit region locks

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17068:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master branch.

> Procedure v2 - inherit region locks 
> 
>
> Key: HBASE-17068
> URL: https://issues.apache.org/jira/browse/HBASE-17068
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17068-v0.patch, HBASE-17068-v1.patch, 
> HBASE-17068-v1.patch
>
>
> Add support for inherited region locks. 
> e.g. Split will have Assign/Unassign as children, which will take the lock on 
> the same region the split is running on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17068) Procedure v2 - inherit region locks

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781606#comment-15781606
 ] 

Hadoop QA commented on HBASE-17068:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 55s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 126m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844828/HBASE-17068-v1.patch |
| JIRA Issue | HBASE-17068 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 1a3a89e58d03 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5063/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5063/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2 - inherit region locks 
> 
>
> Key: HBASE-17068
> URL: https://issues.apache.org/jira/browse/HBASE-17068
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17068-v0.patch, HBASE-17068-v1.patch, 
> HBASE-17068-v1.patch
>

[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781605#comment-15781605
 ] 

Stephen Yuan Jiang commented on HBASE-16524:


Agree

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"
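The quoted WAL-1/WAL-2/WAL-3 example can be sketched as a small holder-tracking structure. This is a toy model of the idea only — class and method names are invented, and it is not the actual WALProcedureStore code:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WalTracker {
    // Oldest-first map of WAL name -> procedure ids it still "holds"
    // (ids whose latest state lives in that WAL).
    private final LinkedHashMap<String, Set<Long>> holders = new LinkedHashMap<>();

    // A procedure's state was appended to `wal`; any copy of that
    // procedure in an older WAL is now stale, so drop it there.
    public void appended(String wal, long procId) {
        for (Map.Entry<String, Set<Long>> e : holders.entrySet()) {
            if (!e.getKey().equals(wal)) {
                e.getValue().remove(procId);
            }
        }
        holders.computeIfAbsent(wal, k -> new HashSet<>()).add(procId);
    }

    // Remove leading WALs whose holder sets are empty and return them;
    // stop at the first WAL that still holds something.
    public List<String> removableWals() {
        List<String> removed = new ArrayList<>();
        Iterator<Map.Entry<String, Set<Long>>> it = holders.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Set<Long>> e = it.next();
            if (!e.getValue().isEmpty()) {
                break;
            }
            removed.add(e.getKey());
            it.remove();
        }
        return removed;
    }
}
```

Replaying the example: after procedure 1 is rewritten into WAL-2, only [2] still holds WAL-1; once procedure 2 lands in WAL-3, WAL-1's holder set is empty and it becomes removable, and the scan stops at WAL-2 (still held by [1]).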



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781608#comment-15781608
 ] 

Stephen Yuan Jiang commented on HBASE-16524:


Agree!




> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16524:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed the rebased v6 (called HBASE-16524.master.002.patch) to master branch.

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781598#comment-15781598
 ] 

stack commented on HBASE-16524:
---

Let's commit. It is smarter tracking of the WAL files and a prerequisite for 
what follows. I'm sure we'll get to know this code more intimately in the 
coming months.

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781553#comment-15781553
 ] 

Stephen Yuan Jiang commented on HBASE-16524:


[~stack], this is the patch I understand the least (of the 4 recent 
infrastructure improvement patches from [~mbertozzi]).  I tried to talk to 
[~mbertozzi] about it, but it was too late and he had left for a long vacation.

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17371) Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17371:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter
> --
>
> Key: HBASE-17371
> URL: https://issues.apache.org/jira/browse/HBASE-17371
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: 17371.v1.txt
>
>
> Currently 'HBaseContextSuite @ distributedScan to test HBase client' uses a 
> Scan which doesn't utilize any Filter.
> This issue adds a FirstKeyOnlyFilter to the scan object to verify the case 
> where the number of cells returned is the same as the number of rows.
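The invariant the enhanced test checks — with a first-key-only filter, cells returned equals rows returned — can be shown with a plain-Java stand-in. This models the filter's effect on scan results; it is not the HBase FirstKeyOnlyFilter class, and the names below are invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FirstKeyOnly {
    // Keep only the first cell of each row, mirroring what a
    // first-key-only filter does to the cells a scan returns.
    static List<String> filter(Map<String, List<String>> rowsToCells) {
        List<String> out = new ArrayList<>();
        for (List<String> cells : rowsToCells.values()) {
            if (!cells.isEmpty()) {
                out.add(cells.get(0));
            }
        }
        return out;
    }
}
```

Since each non-empty row contributes exactly one cell, the filtered cell count necessarily matches the row count — which is exactly what the enhanced test asserts.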



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17371) Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17371:
---
 Assignee: Ted Yu
Fix Version/s: 2.0.0

> Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter
> --
>
> Key: HBASE-17371
> URL: https://issues.apache.org/jira/browse/HBASE-17371
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: 17371.v1.txt
>
>
> Currently 'HBaseContextSuite @ distributedScan to test HBase client' uses 
> Scan which doesn't utilize any Filter.
> This issue adds a FirstKeyOnlyFilter to the scan object to ascertain the case 
> where the number of cells returned is the same as number of rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17379:
---
Attachment: 17379.v2.txt

Patch v2 adds synchronization for the other references to pipeline.
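The failure mode and the fix can be reproduced in miniature: iterating a LinkedList while it is structurally modified fails fast with ConcurrentModificationException, whereas taking a snapshot under the writers' lock does not. A hedged sketch of that pattern (invented names; not the HBase patch itself):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class PipelineSnapshot {
    private final Object lock = new Object();
    private final LinkedList<String> pipeline = new LinkedList<>();

    public void add(String segment) {
        synchronized (lock) { pipeline.add(segment); }
    }

    // Unsafe: hands out the live list; any structural modification during
    // iteration trips the fail-fast iterator.
    public List<String> segmentsUnsafe() {
        return pipeline;
    }

    // Safe: copy under the same lock the writers use, then iterate the
    // snapshot freely outside the lock.
    public List<String> segmentsSnapshot() {
        synchronized (lock) { return new ArrayList<>(pipeline); }
    }
}
```

The snapshot approach trades a short copy under the lock for iteration that can never race a concurrent flush or compaction touching the pipeline.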

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v2.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> 

[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781521#comment-15781521
 ] 

stack commented on HBASE-16524:
---

[~syuanjiang] You good w/ this going in sir?

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, taking the hit of scanning all the trackers only when we 
> remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781453#comment-15781453
 ] 

Stephen Yuan Jiang commented on HBASE-17149:


Ok, I will finish the backport in branch-1 and its child branch




> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> instead of having all the logic in submitProcedure(), split in 
> registerNonce() + submitProcedure().
> In this case we can avoid calling the coprocessor twice and having a clean 
> submit logic knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781430#comment-15781430
 ] 

Hadoop QA commented on HBASE-16524:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844826/HBASE-16524.master.002.patch
 |
| JIRA Issue | HBASE-16524 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux e0b543f80d10 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5064/testReport/ |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5064/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: 

[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781406#comment-15781406
 ] 

stack commented on HBASE-17149:
---

Ok. You know more about this than I do, [~syuanjiang]. Mind taking it home and 
finishing the backport? Thank you.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17068) Procedure v2 - inherit region locks

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17068:
--
Attachment: HBASE-17068-v1.patch

Retry hadoopqa.

+1 on the patch.

> Procedure v2 - inherit region locks 
> 
>
> Key: HBASE-17068
> URL: https://issues.apache.org/jira/browse/HBASE-17068
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17068-v0.patch, HBASE-17068-v1.patch, 
> HBASE-17068-v1.patch
>
>
> Add support for inherited region locks. 
> e.g. Split will have Assign/Unassign as children, which will take the lock on 
> the same region the split is running on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781350#comment-15781350
 ] 

Stephen Yuan Jiang edited comment on HBASE-17149 at 12/27/16 10:01 PM:
---

The latch is already in branch-1 - the latch is for backwards compatibility (if 
an old client, i.e. 1.0 or earlier, calls a new master, i.e. 1.1 or later, the 
old client expects certain checks to complete synchronously; that is what the 
latch is for).  Here is an example of latch code in branch-1.1:
{code}
  ProcedurePrepareLatch latch = ProcedurePrepareLatch.createLatch();
  procId = this.procedureExecutor.submitProcedure(
    new CreateTableProcedure(
      procedureExecutor.getEnvironment(), hTableDescriptor, newRegions, latch),
    nonceGroup,
    nonce);
  latch.await();
{code}
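As a rough illustration of what the latch buys old clients — a simplified sketch using a plain CountDownLatch, with invented names (releaseLatch, awaitPrepare); the real ProcedurePrepareLatch differs in detail:

```java
import java.util.concurrent.CountDownLatch;

// Sketch: the submitting RPC thread blocks until the procedure's prepare
// step has run, so pre-1.1-style clients still see prepare-time failures
// (e.g. "table already exists") synchronously instead of an async procId.
class PrepareLatchSketch {
  private final CountDownLatch latch = new CountDownLatch(1);
  private volatile Exception prepareFailure;

  // Called by the executor once the prepare step finishes;
  // failure == null means the prepare checks passed.
  void releaseLatch(Exception failure) {
    prepareFailure = failure;
    latch.countDown();
  }

  // Called on the RPC thread right after submitProcedure();
  // returns the prepare-step failure, or null on success.
  Exception awaitPrepare() {
    try {
      latch.await();
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      return ie;
    }
    return prepareFailure;
  }
}
```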


was (Author: syuanjiang):
The latch is already in branch-1, just the code is a little different - the 
latch is for backwards compatibility (if an old client, i.e. 1.0 or earlier, 
calls a new master, i.e. 1.1 or later, the old client expects certain checks to 
complete synchronously; that is what the latch is for).  Here is an example of 
latch code in branch-1.1:
{code}
  ProcedurePrepareLatch latch = ProcedurePrepareLatch.createLatch();
  procId = this.procedureExecutor.submitProcedure(
    new CreateTableProcedure(
      procedureExecutor.getEnvironment(), hTableDescriptor, newRegions, latch),
    nonceGroup,
    nonce);
  latch.await();
{code}

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781350#comment-15781350
 ] 

Stephen Yuan Jiang commented on HBASE-17149:


The latch is already in branch-1, just the code is a little different - the 
latch is for backwards compatibility (if an old client, i.e. 1.0 or earlier, 
calls a new master, i.e. 1.1 or later, the old client expects certain checks to 
complete synchronously; that is what the latch is for).  Here is an example of 
latch code in branch-1.1:
{code}
  ProcedurePrepareLatch latch = ProcedurePrepareLatch.createLatch();
  procId = this.procedureExecutor.submitProcedure(
    new CreateTableProcedure(
      procedureExecutor.getEnvironment(), hTableDescriptor, newRegions, latch),
    nonceGroup,
    nonce);
  latch.await();
{code}

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781343#comment-15781343
 ] 

stack commented on HBASE-16524:
---

I pushed a rebased patch.

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix the performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, and take the hit of scanning all the trackers only when 
> we remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"
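The "holding" relation in the example above can be sketched as a toy model — class and method names are invented for illustration, not HBase's ProcedureStoreTracker API:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Toy model: a WAL in the queue can be deleted once every procedure last
// written to it also appears in a newer WAL. We only rescan the trackers
// when asking about the oldest WAL, matching the "take the hit only on
// removal of the first log" idea.
class WalQueueSketch {
  static final class Wal {
    final String name;
    final Set<Long> procIds; // procedures whose state lives in this WAL
    Wal(String name, Set<Long> procIds) {
      this.name = name;
      this.procIds = procIds;
    }
  }

  final Deque<Wal> queue = new ArrayDeque<>();

  void append(String name, Long... procIds) {
    queue.addLast(new Wal(name, new HashSet<>(Arrays.asList(procIds))));
  }

  // The oldest WAL is removable iff no procedure is "holding" it,
  // i.e. every procId it contains has an entry in some newer WAL.
  boolean oldestRemovable() {
    Wal oldest = queue.peekFirst();
    Set<Long> newer = new HashSet<>();
    for (Wal w : queue) {
      if (w != oldest) {
        newer.addAll(w.procIds);
      }
    }
    return newer.containsAll(oldest.procIds);
  }
}
```

With the WALs from the example: after WAL-1 [1, 2] and WAL-2 [1], procedure 2 still holds WAL-1; once WAL-3 [2] arrives, WAL-1 becomes removable.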



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16524:
--
Attachment: HBASE-16524.master.002.patch

> Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
> -
>
> Key: HBASE-16524
> URL: https://issues.apache.org/jira/browse/HBASE-16524
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Appy
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16524-v2.patch, HBASE-16524-v3.patch, 
> HBASE-16524-v4.patch, HBASE-16524-v5.patch, HBASE-16524-v6.patch, 
> HBASE-16524.master.001.patch, HBASE-16524.master.002.patch, flame1.svg
>
>
> Fix the performance regression introduced by HBASE-16094.
> Instead of scanning all the WALs every time, we can rely on the 
> insert/update/delete events we have.
> And since we want to delete the WALs in order, we can keep track of what is 
> "holding" each WAL, and take the hit of scanning all the trackers only when 
> we remove the first log in the queue.
> e.g.
> WAL-1 [1, 2] 
> WAL-2 [1] -> "[2] is holding WAL-1"
> WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781328#comment-15781328
 ] 

stack commented on HBASE-17149:
---

HBASE-16618 adds the latch support this patch seems to depend on. We do this 
all over:

ProcedurePrepareLatch latch = ProcedurePrepareLatch.createLatch();

The backport was good because it was making me learn this stuff (smile).

If there is no latch, how do we stop concurrent running of DDL operations?

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781182#comment-15781182
 ] 

Stephen Yuan Jiang commented on HBASE-17149:


[~stack], the backport to branch-1 is needed because the nonce patch is in 
branch-1 (and its child branches).  On RPC retry of table DDLs, we would 
execute coprocessors unnecessarily.  We don't need split/merge/dispatchMerge in 
branch-1; only table DDLs and namespace DDLs (I know we had some refactoring in 
namespace code and latching code in the master branch).  As for HBASE-16618, it 
is not necessary in branch-1 (and I am not sure adding HBASE-16618 would help 
the backporting).

I can do the backporting today if you have other stuff to worry about.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781157#comment-15781157
 ] 

stack commented on HBASE-17149:
---

[~syuanjiang] The description is not clear on why this is needed in branch-1. 
Is it to get the noncing of DDLs such as create table, modify, etc.? Do we have 
to backport this to branch-1?

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17149:
--
Attachment: 17149.branch-1.incomplete.txt

The patch took a bit of work to backport, and it is not done. It needs 
HBASE-16618 for the latching stuff (the base class does the latching). The 
split/merge is not in HMaster in branch-1, so that is missing. The 
SnapshotManager and TableNamespaceManager are not nonced in branch-1.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149.master.001.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781138#comment-15781138
 ] 

stack commented on HBASE-17149:
---

The backport depends on HBASE-16618, which is not in branch-1. It doesn't go 
back easily either.  Let me post what I have, [~syuanjiang]. Things look pretty 
clean. I got as far as HMaster; that is the main obstacle. What do you think?



> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781110#comment-15781110
 ] 

Hadoop QA commented on HBASE-17338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 11s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 55s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.TestCompactingToCellArrayMapMemStore |
|   | hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush |
|   | hadoop.hbase.regionserver.TestDefaultMemStore |
|   | hadoop.hbase.regionserver.TestCompactingMemStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844806/HBASE-17338_V2.patch |
| JIRA Issue | HBASE-17338 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 401a0c475989 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5062/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5062/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5062/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5062/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


[jira] [Comment Edited] (HBASE-17371) Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter

2016-12-27 Thread Weiqing Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781081#comment-15781081
 ] 

Weiqing Yang edited comment on HBASE-17371 at 12/27/16 7:18 PM:


+1 also, passed the build and related tests locally.


was (Author: weiqingyang):
+1 also, pass the build and related tests locally.

> Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter
> --
>
> Key: HBASE-17371
> URL: https://issues.apache.org/jira/browse/HBASE-17371
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 17371.v1.txt
>
>
> Currently 'HBaseContextSuite @ distributedScan to test HBase client' uses a 
> Scan which doesn't utilize any Filter.
> This issue adds a FirstKeyOnlyFilter to the scan object to verify that the 
> number of cells returned is the same as the number of rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17371) Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter

2016-12-27 Thread Weiqing Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781081#comment-15781081
 ] 

Weiqing Yang commented on HBASE-17371:
--

+1 also, pass the build and related tests locally.

> Enhance 'HBaseContextSuite @ distributedScan to test HBase client' with filter
> --
>
> Key: HBASE-17371
> URL: https://issues.apache.org/jira/browse/HBASE-17371
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 17371.v1.txt
>
>
> Currently 'HBaseContextSuite @ distributedScan to test HBase client' uses a 
> Scan which doesn't utilize any Filter.
> This issue adds a FirstKeyOnlyFilter to the scan object to verify that the 
> number of cells returned is the same as the number of rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781059#comment-15781059
 ] 

Stephen Yuan Jiang commented on HBASE-17149:


Thanks, [~stack].  I just started the backport yesterday; it is indeed a lot of 
work.  Thanks for your help.  I will stop the duplicate effort and wait for 
your patch.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17149) Procedure v2 - Fix nonce submission

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781047#comment-15781047
 ] 

stack commented on HBASE-17149:
---

[~syuanjiang] I am back on the backport. Almost done.

> Procedure v2 - Fix nonce submission
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.003.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean submit 
> logic, knowing that there will only be one submission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780841#comment-15780841
 ] 

Hadoop QA commented on HBASE-17379:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m 38s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844796/17379.v1.txt |
| JIRA Issue | HBASE-17379 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a2b290bc73f8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5060/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5060/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Lack of synchronization in CompactionPipeline#getScanners()
> 

[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17379:
---
Affects Version/s: 2.0.0

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> 

[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780815#comment-15780815
 ] 

Anoop Sam John commented on HBASE-17379:


Now, after this fix, most usages of the pipeline state are synchronized. 
There are still some left. Should we make those safe as well?
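The race in the stack trace above — one thread iterating the pipeline's LinkedList while another mutates it — is the kind of problem this synchronization addresses. A minimal illustrative sketch (class and method names are hypothetical, not the actual CompactionPipeline API) is to hand readers an immutable snapshot taken under the lock, so they never iterate the live list:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

// Hypothetical sketch: readers get an immutable copy made under the lock,
// so later mutations cannot trigger ConcurrentModificationException.
public class PipelineSketch {
  private final LinkedList<String> segments = new LinkedList<>();

  public synchronized void addSegment(String s) {
    segments.addFirst(s);
  }

  // Copy under the lock; callers iterate the snapshot safely.
  public synchronized List<String> snapshot() {
    return Collections.unmodifiableList(new ArrayList<>(segments));
  }

  public static void main(String[] args) {
    PipelineSketch p = new PipelineSketch();
    p.addSegment("seg1");
    List<String> snap = p.snapshot();
    p.addSegment("seg2");        // mutation after the snapshot was taken...
    for (String s : snap) {      // ...does not affect iteration of the copy
      System.out.println(s);
    }
  }
}
```

The trade-off of this approach is an allocation per read, which is why the real fix may instead synchronize only the state it needs.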

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 

[jira] [Commented] (HBASE-17291) Remove ImmutableSegment#getKeyValueScanner

2016-12-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780804#comment-15780804
 ] 

Anoop Sam John commented on HBASE-17291:


Will review it tomorrow Ram.. Sorry for the delay.

> Remove ImmutableSegment#getKeyValueScanner
> --
>
> Key: HBASE-17291
> URL: https://issues.apache.org/jira/browse/HBASE-17291
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17291.patch, HBASE-17291_1.patch, 
> HBASE-17291_2.patch
>
>
> This is based on a discussion over [~anastas]'s patch. The MemstoreSnapshot 
> uses a KeyValueScanner which actually seems redundant considering we already 
> have a SegmentScanner. The idea is that the snapshot scanner should be a 
> simple iterator type of scanner but it lacks the capability to do the 
> reference counting on that segment that is now used in snapshot. With 
> snapshot having mulitple segments in the latest impl it is better we hold on 
> to the segment by doing ref counting. 





[jira] [Updated] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2016-12-27 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17338:
---
Attachment: HBASE-17338_V2.patch

> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch, 
> HBASE-17338_V2.patch
>
>
> We have only data size and heap overhead being tracked globally.  Off heap 
> memstore works with off heap backed MSLAB pool.  But a cell, when added to 
> memstore, not always getting copied to MSLAB.  Append/Increment ops doing an 
> upsert, dont use MSLAB.  Also based on the Cell size, we sometimes avoid 
> MSLAB copy.  But now we track these cell data size also under the global 
> memstore data size which indicated off heap size in case of off heap 
> memstore.  For global checks for flushes (against lower/upper watermark 
> levels), we check this size against max off heap memstore size.  We do check 
> heap overhead against global heap memstore size (Defaults to 40% of xmx)  But 
> for such cells the data size also should be accounted under the heap overhead.





[jira] [Comment Edited] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780751#comment-15780751
 ] 

Phil Yang edited comment on HBASE-17339 at 12/27/16 4:44 PM:
-

bq. In particular, replication support was deemed to be a blocker at the time.

Now that we have HBASE-9465, the order can be guaranteed. But if clients also Put 
cells into the peer cluster, we have two region servers generating timestamps. 
Unless we change the TS in the entries to the local TS when the peer cluster's RS 
receives the cells, we still cannot keep monotonicity. And replication is 
asynchronous, which means it may lag by a long time. So the only solutions are 
synchronous replication or preventing writes to the peer cluster. We may also need 
to distinguish normal client writes from ReplicationSink client writes: the latter 
can carry a TS while the former cannot.


was (Author: yangzhe1991):
bq.
In particular, replication support was deemed to be a blocker at the time.

Now we have HBASE-9465, the order can be guaranteed. But if clients also Put 
Cells into peer cluster, we have two region servers generating timestamp. 
Unless we change the TS in entries to local TS when peer cluster's RS receives 
the Cells, we still can not keep monotonicity. But replication is asynchronous 
which means maybe delay a lot time. So the only solution is synchronous 
replication or prevent writing to peer cluster. And we may need to distinguish 
normal client writing and ReplicationSink client writing. The second one can 
contains a TS while the first one can't.

> Scan-Memory-First Optimization for Get Operation
> 
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.





[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780751#comment-15780751
 ] 

Phil Yang commented on HBASE-17339:
---

bq.
In particular, replication support was deemed to be a blocker at the time.

Now that we have HBASE-9465, the order can be guaranteed. But if clients also Put 
cells into the peer cluster, we have two region servers generating timestamps. 
Unless we change the TS in the entries to the local TS when the peer cluster's RS 
receives the cells, we still cannot keep monotonicity. And replication is 
asynchronous, which means it may lag by a long time. So the only solutions are 
synchronous replication or preventing writes to the peer cluster. We may also need 
to distinguish normal client writes from ReplicationSink client writes: the latter 
can carry a TS while the former cannot.

> Scan-Memory-First Optimization for Get Operation
> 
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.





[jira] [Commented] (HBASE-16421) Introducing the CellChunkMap as a new additional index variant in the MemStore

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780737#comment-15780737
 ] 

stack commented on HBASE-16421:
---

Maybe the total time is GC time [~ram_krish]?

> Introducing the CellChunkMap as a new additional index variant in the MemStore
> --
>
> Key: HBASE-16421
> URL: https://issues.apache.org/jira/browse/HBASE-16421
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Anastasia Braginsky
> Attachments: CellChunkMapRevived.pdf, ChunkCell_creation.png, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Follow up for HBASE-14921. This is going to be the umbrella JIRA to include 
> all the parts of integration of the CellChunkMap to the MemStore.





[jira] [Commented] (HBASE-16421) Introducing the CellChunkMap as a new additional index variant in the MemStore

2016-12-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780733#comment-15780733
 ] 

stack commented on HBASE-16421:
---

Do you have total pause times too as well as GC count [~ram_krish]?

What recommends offheap memstore [~ram_krish]? We use less heap? (But we are 
doing more GC work?).

> Introducing the CellChunkMap as a new additional index variant in the MemStore
> --
>
> Key: HBASE-16421
> URL: https://issues.apache.org/jira/browse/HBASE-16421
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Anastasia Braginsky
> Attachments: CellChunkMapRevived.pdf, ChunkCell_creation.png, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Follow up for HBASE-14921. This is going to be the umbrella JIRA to include 
> all the parts of integration of the CellChunkMap to the MemStore.





[jira] [Commented] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780723#comment-15780723
 ] 

Hadoop QA commented on HBASE-17338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-17338 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844802/HBASE-17338_V2.patch |
| JIRA Issue | HBASE-17338 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5061/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |





> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch
>
>
> We have only data size and heap overhead being tracked globally.  Off heap 
> memstore works with off heap backed MSLAB pool.  But a cell, when added to 
> memstore, not always getting copied to MSLAB.  Append/Increment ops doing an 
> upsert, dont use MSLAB.  Also based on the Cell size, we sometimes avoid 
> MSLAB copy.  But now we track these cell data size also under the global 
> memstore data size which indicated off heap size in case of off heap 
> memstore.  For global checks for flushes (against lower/upper watermark 
> levels), we check this size against max off heap memstore size.  We do check 
> heap overhead against global heap memstore size (Defaults to 40% of xmx)  But 
> for such cells the data size also should be accounted under the heap overhead.





[jira] [Updated] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB

2016-12-27 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17338:
---
Attachment: HBASE-17338_V2.patch

New patch with a changed approach.
Now MemstoreSize itself tracks the cell data size and the heapSize, rather than 
tracking heap overhead separately. Previously we did that special overhead 
tracking, and in the on-heap MSLAB case we had to add both numbers together for 
the checks and so on.
Now heapSize is accounted directly: for on-heap MSLAB cells, and for cells not in 
the MSLAB area, the cell data size is also included in the heapSize accounting.
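The accounting change described above might be sketched roughly as follows (the class, field, and method names are illustrative only, not the actual MemstoreSize API): the cell data size is always tracked, and it is additionally folded into the heap size whenever the cell is not copied to an off-heap MSLAB chunk.

```java
// Hypothetical sketch of dual data-size / heap-size accounting.
public class MemstoreSizeSketch {
  private long dataSize;  // cell data bytes (may live off-heap)
  private long heapSize;  // on-heap footprint, including per-cell overhead

  public void addCell(long cellDataSize, long cellOverhead, boolean copiedToOffHeapMslab) {
    dataSize += cellDataSize;
    // Cells that stay on heap (no off-heap MSLAB copy) count against heapSize too.
    heapSize += copiedToOffHeapMslab ? cellOverhead : cellOverhead + cellDataSize;
  }

  public long getDataSize() { return dataSize; }
  public long getHeapSize() { return heapSize; }
}
```

With this shape, the global flush checks can compare dataSize against the off-heap limit and heapSize against the on-heap limit without adding two counters together first.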

> Treat Cell data size under global memstore heap size only when that Cell can 
> not be copied to MSLAB
> ---
>
> Key: HBASE-17338
> URL: https://issues.apache.org/jira/browse/HBASE-17338
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17338.patch, HBASE-17338_V2.patch
>
>
> We have only data size and heap overhead being tracked globally.  Off heap 
> memstore works with off heap backed MSLAB pool.  But a cell, when added to 
> memstore, not always getting copied to MSLAB.  Append/Increment ops doing an 
> upsert, dont use MSLAB.  Also based on the Cell size, we sometimes avoid 
> MSLAB copy.  But now we track these cell data size also under the global 
> memstore data size which indicated off heap size in case of off heap 
> memstore.  For global checks for flushes (against lower/upper watermark 
> levels), we check this size against max off heap memstore size.  We do check 
> heap overhead against global heap memstore size (Defaults to 40% of xmx)  But 
> for such cells the data size also should be accounted under the heap overhead.





[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780707#comment-15780707
 ] 

Phil Yang commented on HBASE-17339:
---

Yes, we may put or get more than one CF, so CF level is not enough; we should 
make it table-level if we need this feature.

> Scan-Memory-First Optimization for Get Operation
> 
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.





[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780688#comment-15780688
 ] 

Hadoop QA commented on HBASE-17374:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hbase-server generated 1 new + 1 unchanged - 0 fixed = 2 
total (was 1) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 10s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestAsyncTableBatch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844786/0001-fix-for-HBASE-17374.patch
 |
| JIRA Issue | HBASE-17374 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 01694496746a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5059/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5059/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5059/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5059/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Created] (HBASE-17380) Allow wait timeout and number of RS's to be configurable in RSGroups tests.

2016-12-27 Thread Josh Elser (JIRA)
Josh Elser created HBASE-17380:
--

 Summary: Allow wait timeout and number of RS's to be configurable 
in RSGroups tests.
 Key: HBASE-17380
 URL: https://issues.apache.org/jira/browse/HBASE-17380
 Project: HBase
  Issue Type: Improvement
  Components: test
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial
 Fix For: 2.0.0


While trying to help debug the RSGroup tests, I found that the wait timeout being 
a fixed value can cause test failures due to general slowness. It would be nice 
if the wait timeout were configurable.

Using more regionservers (than the default of 4) could also be configurable.

Finally, it also appears that IntegrationTestRSGroups doesn't function with a 
minicluster (only against a distributed cluster).





[jira] [Commented] (HBASE-17375) PrefixTreeArrayReversibleScanner#previousRowInternal doesn't work correctly

2016-12-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780655#comment-15780655
 ] 

Anoop Sam John commented on HBASE-17375:


JFYI:
ROW_INDEX_V1 was introduced in trunk (branch-1 also?) to provide better random 
access. This encoding helps; you can have a look. It is done in a very simple 
way. Just saying.

> PrefixTreeArrayReversibleScanner#previousRowInternal doesn't work correctly
> ---
>
> Key: HBASE-17375
> URL: https://issues.apache.org/jira/browse/HBASE-17375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 0.98.24
>Reporter: Chang chen
>Assignee: Chang chen
> Fix For: 2.0.0
>
> Attachments: HBASE_17375_master_v1.patch, row trie example.PNG
>
>
> Recently, we find our hbase compaction thread never end.  Assume we have 
> following cells:
> {quote}
>  1
>  1
>  1
>  1
>  1
>  1
>  1
>  1
> {quote}
> If we encode above datas into prefix tree block, then it looks like:
> !row trie example.PNG!
> Assume the current row is {color:red}Abc{color} (e.g. the current row node is 
> 4), then the previous row should be *Aa* (e.g. 2). However 
> previousRowInternal return {color:red}A{color}(e.g. 1)
> After investigation, I believe it's the bug of 
> PrefixTreeArrayReversibleScanner#previousRowInternal.
> {code}
>   private boolean previousRowInternal() {
> //...
> while (!beforeFirst) {
>   //
>   // what if currentRowNode is nub?
>   if (currentRowNode.hasOccurrences()) {// escape clause
> currentRowNode.resetFanIndex();
> return true;// found some values
>   }
> }
> {code}
> currentRowNode.hasOccurrences() only tests whether the node has a cell. But when 
> currentRowNode.isNub() is true, previousRowInternal should 
> follow the previous fan instead of returning.
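To illustrate the point, here is a minimal, self-contained sketch of backwards traversal over a toy row trie (all names here are hypothetical, not the actual PrefixTree internals): when popping up from a child, the scanner must first try the fan *before* the one it came from, and only treat the parent's own cell (the nub case) as the predecessor when no earlier fan exists.

```java
import java.util.Map;
import java.util.TreeMap;

public class RowTrie {
    static class Node {
        final String row;                 // full row key at this node
        final boolean hasCell;            // a cell ends exactly here
        final TreeMap<String, Node> fans = new TreeMap<>(); // child branches
        Node parent;
        String edge;                      // label of the edge from the parent

        Node(String row, boolean hasCell) {
            this.row = row;
            this.hasCell = hasCell;
        }

        Node addFan(String edge, boolean hasCell) {
            Node child = new Node(row + edge, hasCell);
            child.parent = this;
            child.edge = edge;
            fans.put(edge, child);
            return child;
        }
    }

    // Lexicographically last row stored under this subtree: follow last fans.
    static Node lastRowIn(Node n) {
        while (!n.fans.isEmpty()) {
            n = n.fans.lastEntry().getValue();
        }
        return n; // in this toy model every leaf stores a cell
    }

    // Previous row before 'current'. When popping up to the parent we must
    // first try the fan *before* the one we came from; only when no earlier
    // fan exists may the parent's own cell (the nub case) be the answer.
    static Node previousRow(Node current) {
        Node child = current;
        Node parent = current.parent;
        while (parent != null) {
            Map.Entry<String, Node> prevFan = parent.fans.lowerEntry(child.edge);
            if (prevFan != null) {
                return lastRowIn(prevFan.getValue()); // descend; do not stop at the nub
            }
            if (parent.hasCell) {
                return parent; // nub with no earlier fan really is the predecessor
            }
            child = parent;
            parent = parent.parent;
        }
        return null; // before the first row
    }

    // Rows "A", "Aa", "Abc" as in the issue's example; returns the "Abc" node.
    static Node buildExample() {
        Node root = new Node("", false);
        Node a = root.addFan("A", true);   // row "A": a nub (cell + children)
        a.addFan("a", true);               // row "Aa"
        Node ab = a.addFan("b", false);    // interior node "Ab"
        return ab.addFan("c", true);       // row "Abc"
    }

    public static void main(String[] args) {
        Node abc = buildExample();
        System.out.println(previousRow(abc).row);              // Aa, not A
        System.out.println(previousRow(previousRow(abc)).row); // A
    }
}
```

With the example trie, the predecessor of "Abc" comes out as "Aa" because the earlier fan is descended before the nub "A" is considered.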



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780639#comment-15780639
 ] 

Liu Junhong commented on HBASE-17374:
-

I notice that refreshAuthManager will be called when a new TableAuthManager is 
created. I will avoid the RejectedExecutionException in the next patch.

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
> It happened many times that I granted some permissions, but on a few 
> regionservers they did not take effect and the servers had to be restarted. 
> When I looked up the logs, I found:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
> 2016-12-08 15:00:26,328 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 

[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780648#comment-15780648
 ] 

Liu Junhong commented on HBASE-17374:
-

Actually, I think it may be better for TableAuthManager to use a singleton pattern.
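As a rough sketch of that idea (all names hypothetical, not the actual HBase classes): one shared instance per cluster, handed out with a reference count, so that closing a single region releases its reference instead of tearing down state that other regions and the ZK watcher still depend on.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical reference-counted shared auth manager. The point of the
// pattern: the last release(), not any individual region close, is what
// shuts the instance down.
public class SharedAuthManager {
    private static final Map<String, SharedAuthManager> INSTANCES = new HashMap<>();

    private final String clusterKey;
    private int refCount = 0;
    private boolean closed = false;

    private SharedAuthManager(String clusterKey) {
        this.clusterKey = clusterKey;
    }

    public static synchronized SharedAuthManager acquire(String clusterKey) {
        SharedAuthManager m =
            INSTANCES.computeIfAbsent(clusterKey, SharedAuthManager::new);
        m.refCount++;
        return m;
    }

    public static synchronized void release(SharedAuthManager m) {
        if (--m.refCount == 0) {
            INSTANCES.remove(m.clusterKey);
            m.closed = true; // only now is it safe to stop shared executors/watchers
        }
    }

    public synchronized boolean isClosed() {
        return closed;
    }

    public static void main(String[] args) {
        SharedAuthManager a = acquire("cluster-1"); // e.g. region 1 opens
        SharedAuthManager b = acquire("cluster-1"); // e.g. region 2 opens
        release(a);                                 // region 1 closes
        System.out.println(b.isClosed());           // false: still in use
        release(b);                                 // last user gone
        System.out.println(b.isClosed());           // true
    }
}
```

This is only a sketch of the lifecycle question raised in the comment; a real implementation would also have to deal with the ZK watcher registration itself.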


> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>

[jira] [Commented] (HBASE-13300) Fix casing in getTimeStamp() and setTimestamp() for Mutations

2016-12-27 Thread Jan Hentschel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780623#comment-15780623
 ] 

Jan Hentschel commented on HBASE-13300:
---

OK, should I change the patch to *timeStamp* or do we stick with *timestamp*? 
Either way, once a decision is made I would open additional tickets to unify the 
casing.

> Fix casing in getTimeStamp() and setTimestamp() for Mutations
> -
>
> Key: HBASE-13300
> URL: https://issues.apache.org/jira/browse/HBASE-13300
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.0.0
>Reporter: Lars George
>Assignee: Jan Hentschel
> Attachments: HBASE-13300.master.001.patch, 
> HBASE-13300.master.002.patch, HBASE-13300.xlsx
>
>
> For some reason we have two ways of writing this method. It should be 
> consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780619#comment-15780619
 ] 

Liu Junhong commented on HBASE-17374:
-

data-probe-test is a table used by a cron job to check whether the HBase service is 
healthy; the check calls create, put, flush, disable and drop every 10 minutes.
When a regionserver starts up at peak time we disable the balancer, so 
data-probe-test's region may be opened and closed on the new regionserver, and 
the bug occurs.
I will fix it as you suggested tomorrow, thank you.

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
[jira] [Commented] (HBASE-17320) Add inclusive/exclusive support for startRow and endRow of scan

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780588#comment-15780588
 ] 

Ted Yu commented on HBASE-17320:


The unit test failure should be fixed by HBASE-17379.

w.r.t. Duo's patch, please mark as Incompatible change and add release note.

> Add inclusive/exclusive support for startRow and endRow of scan
> ---
>
> Key: HBASE-17320
> URL: https://issues.apache.org/jira/browse/HBASE-17320
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17320-v1.patch, HBASE-17320-v2.patch, 
> HBASE-17320-v3.patch, HBASE-17320-v4.patch, HBASE-17320.patch
>
>
> This is especially useful when doing a reverse scan. HBASE-17168 may be a more 
> powerful solution, but we need to be careful about the atomicity, and I do not 
> think we will expose that feature to the end user. But I think it is OK to 
> provide an inclusive/exclusive option to the end user.
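A minimal sketch of the proposed boundary semantics, over an in-memory sorted set rather than a real table (the method signature is illustrative, not the actual client API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Toy model of a scan with inclusive/exclusive start and stop rows, using
// NavigableSet.subSet, which takes exactly these two boundary flags.
public class RangeScan {
    static List<String> scan(TreeSet<String> rows,
                             String startRow, boolean startInclusive,
                             String stopRow, boolean stopInclusive) {
        List<String> out = new ArrayList<>();
        for (String r : rows.subSet(startRow, startInclusive,
                                    stopRow, stopInclusive)) {
            out.add(r);
        }
        return out;
    }

    public static void main(String[] args) {
        TreeSet<String> rows = new TreeSet<>(List.of("a", "b", "c", "d"));
        // Classic scan semantics: start inclusive, stop exclusive.
        System.out.println(scan(rows, "a", true, "c", false));  // [a, b]
        // An exclusive start is handy for resuming after the last row seen,
        // without appending a trailing zero byte to the key.
        System.out.println(scan(rows, "a", false, "d", true));  // [b, c, d]
    }
}
```

The second call shows why the option matters for reverse scans and for resuming: "everything strictly after this row" cannot be expressed cleanly with inclusive-only boundaries.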



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17379:
---
Status: Patch Available  (was: Open)

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> 

[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780586#comment-15780586
 ] 

Ted Yu commented on HBASE-17379:


Also fixed a potential bug in drain(): pipeline.size() should be read 
inside the synchronization block; otherwise an ImmutableSegment may be left 
behind.
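The fix pattern can be sketched as follows (class and field names are illustrative, not the actual CompactionPipeline code): readers snapshot the shared list, and any size they rely on, inside the same lock writers use, instead of iterating the live LinkedList.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Sketch: never iterate the shared pipeline list outside the lock writers use.
public class Pipeline {
    private final LinkedList<String> segments = new LinkedList<>();

    public void push(String segment) {
        synchronized (segments) {
            segments.addFirst(segment);
        }
    }

    // Unsafe variant: iterating while another thread calls push() can throw
    // ConcurrentModificationException, as in the reported stack trace.
    public List<String> scannersUnsafe() {
        List<String> out = new ArrayList<>();
        for (String s : segments) { // no lock: racy
            out.add(s);
        }
        return out;
    }

    // Safe variant: copy under the lock. Likewise, a drain() should read
    // segments.size() inside the same synchronized block, or a concurrent
    // push can leave a segment behind.
    public List<String> scannersSafe() {
        synchronized (segments) {
            return new ArrayList<>(segments);
        }
    }

    public static void main(String[] args) {
        Pipeline p = new Pipeline();
        p.push("seg-1");
        p.push("seg-2");
        System.out.println(p.scannersSafe()); // [seg-2, seg-1]
    }
}
```

The snapshot costs one shallow copy per read, but it keeps the iteration entirely outside the critical section.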

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt
>
>

[jira] [Updated] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17379:
---
Attachment: 17379.v1.txt

> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt
>
>

[jira] [Created] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2016-12-27 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17379:
--

 Summary: Lack of synchronization in 
CompactionPipeline#getScanners()
 Key: HBASE-17379
 URL: https://issues.apache.org/jira/browse/HBASE-17379
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


From 
https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
 :
{code}
java.io.IOException: java.util.ConcurrentModificationException
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
at 
org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.ConcurrentModificationException: null
at 
java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
at java.util.LinkedList$ListItr.next(LinkedList.java:888)
at 
org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
at 
org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:210)
at 
org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1880)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5842)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5819)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
{code}
The cause is in CompactionPipeline#getScanners(), where there is no 
synchronization around iterating the pipeline.
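The failure mode above can be reproduced outside HBase. The following is a minimal, hypothetical illustration (not HBase code): a LinkedList iterator is fail-fast, so any structural modification made by anything other than the iterator itself makes the next `next()` call throw ConcurrentModificationException — exactly the exception in the trace.

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;

public class ComodificationDemo {
    /** Returns true when iterating while modifying throws CME. */
    static boolean triggersCme() {
        List<String> pipeline = new LinkedList<>();
        pipeline.add("segment-1");
        pipeline.add("segment-2");
        try {
            for (String segment : pipeline) {
                // Stands in for another thread (e.g. an in-memory flush)
                // swapping segments in the pipeline mid-iteration.
                pipeline.add("segment-" + segment.length());
            }
        } catch (ConcurrentModificationException e) {
            return true; // the iterator's fail-fast modCount check fired
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("CME triggered: " + triggersCme());
    }
}
```

The usual fixes are to synchronize the iteration on the same lock the writers use, or to iterate over an immutable snapshot of the list.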




[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780556#comment-15780556
 ] 

Ted Yu commented on HBASE-17374:


Was the target of grant command the 'data-probe-test' table ?

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
> It has occurred many times that I granted some permissions, but on a few 
> regionservers the grant did not take effect and they had to be restarted. 
> When I looked up the logs, I found the following:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
> 2016-12-08 15:00:26,328 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1279)) - Closed 
> 

[jira] [Comment Edited] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15760715#comment-15760715
 ] 

Ted Yu edited comment on HBASE-17290 at 12/27/16 2:57 PM:
--

The latest patch for HBASE-14417 is on reviewboard and attached to JIRA. 


was (Author: yuzhih...@gmail.com):
The latest patch for HBASE-14417 is on reviewboard. 

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Currently the support for replication of bulk loaded hfiles relies on bulk 
> load marker written in the WAL.
> The move of bulk loaded hfile(s) (into region directory) may succeed but the 
> write of bulk load marker may fail.
> This means that although bulk loaded hfile is being served in source cluster, 
> the replication wouldn't happen.
> Normally operator is supposed to retry the bulk load. But relying on human 
> retry is not robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780543#comment-15780543
 ] 

Ted Yu commented on HBASE-17374:


{code}
790   if (ref-1 <= 0 && shouldClose) {
{code}
When ref-1 < 0, care should be taken: the next time release() is called, we 
would get into the if block, which triggers the abort.
If there is no release call in the future, we have a memory leak.
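The hazard described above can be sketched as follows. This is a hypothetical, illustrative reference-counted resource (the names are not the HBase API): if release() lets the counter go negative, a later unmatched release would re-enter the close path, so the decrement is guarded to fail loudly instead.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedResource {
    private final AtomicInteger ref = new AtomicInteger(1);
    private volatile boolean closed = false;

    public void retain() {
        if (ref.getAndIncrement() <= 0) {
            throw new IllegalStateException("retain after close");
        }
    }

    /** Returns true exactly once, when the last reference is released. */
    public boolean release() {
        int remaining = ref.decrementAndGet();
        if (remaining < 0) {
            // Clamp and fail loudly instead of re-entering the close path.
            ref.incrementAndGet();
            throw new IllegalStateException("release without matching retain");
        }
        if (remaining == 0 && !closed) {
            closed = true; // close the underlying resource exactly once
            return true;
        }
        return false;
    }
}
```

With this shape, a double release surfaces immediately as an exception rather than silently triggering a second close or abort later.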

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
> It has occurred many times that I granted some permissions, but on a few 
> regionservers the grant did not take effect and they had to be restarted. 
> When I looked up the logs, I found the following:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> 

[jira] [Commented] (HBASE-17372) Make AsyncTable thread safe

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780530#comment-15780530
 ] 

Hadoop QA commented on HBASE-17372:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 3s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 36s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844760/HBASE-17372-v1.patch |
| JIRA Issue | HBASE-17372 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 20faf4c367b1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5ffbd4a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5058/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5058/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5058/console |
| Powered by | Apache Yetus 0.3.0   

[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780507#comment-15780507
 ] 

Dave Latham commented on HBASE-17339:
-

Some previous discussion of restrictions on client-provided timestamps occurred 
at HBASE-10247.
In particular, replication support was deemed to be a blocker at the time.

> Scan-Memory-First Optimization for Get Operation
> 
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.
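The speculative get described in the issue can be sketched as below. All names here are hypothetical, for illustration only — they are not the actual HBase internals: scan only the in-memory components first, and fall back to the regular memory-plus-disk scan when the speculative result may be incomplete.

```java
import java.util.Optional;

interface StoreView {
    Optional<String> scanMemoryOnly(String key);    // memstore segments only
    boolean isMemoryResultComplete(String key);     // newest version known to be in memory
    Optional<String> scanMemoryAndDisk(String key); // memstore segments + hfiles
}

class MemoryFirstGet {
    static Optional<String> get(StoreView store, String key) {
        Optional<String> fromMemory = store.scanMemoryOnly(key);
        if (fromMemory.isPresent() && store.isMemoryResultComplete(key)) {
            return fromMemory; // fast path: no disk I/O at all
        }
        // Speculation failed: do the regular scan over memory and disk.
        return store.scanMemoryAndDisk(key);
    }
}
```

The win depends on how often the fast path hits, which is why this pairs well with in-memory flush and compaction (HBASE-14918) keeping more recent data in memory.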



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong updated HBASE-17374:

Description: 
It has occurred many times that I granted some permissions, but on a few 
regionservers the grant did not take effect and they had to be restarted. When 
I looked up the logs, I found the following:

2016-12-08 15:00:26,238 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
handler.CloseRegionHandler (CloseRegionHandler.java:process(128)) - Processing 
close of data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
{color:red} 2016-12-08 15:00:26,242 DEBUG 
[RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
(HRegion.java:doClose(1163)) - Closing 
data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
compactions & flushes {color}
2016-12-08 15:00:26,242 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HRegion (HRegion.java:doClose(1190)) - Updates disabled for region 
data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
2016-12-08 15:00:26,242 INFO  [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HRegion (HRegion.java:internalFlushcache(1753)) - Started memstore 
flush for data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., 
current region memstore size 160
2016-12-08 15:00:26,284 INFO  [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) - 
Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
2016-12-08 15:00:26,303 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) - 
Committing store file 
hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
 as 
hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
2016-12-08 15:00:26,318 INFO  [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HStore (HStore.java:commitFile(877)) - Added 
hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
 entries=1, sequenceid=6, filesize=985
2016-12-08 15:00:26,319 INFO  [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HRegion (HRegion.java:internalFlushcache(1920)) - Finished 
memstore flush of ~160/160, currentsize=0/0 for region 
data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
sequenceid=6, compaction requested=false
2016-12-08 15:00:26,323 INFO  
[StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
 regionserver.HStore (HStore.java:close(774)) - Closed cf1
2016-12-08 15:00:26,325 INFO  
[StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
 regionserver.HStore (HStore.java:close(774)) - Closed cf2
2016-12-08 15:00:26,326 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
coprocessor.CoprocessorHost (CoprocessorHost.java:shutdown(292)) - Stop 
coprocessor org.apache.hadoop.hbase.security.token.TokenProvider
{color:red}  2016-12-08 15:00:26,326 DEBUG 
[RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
(CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
org.apache.hadoop.hbase.security.access.AccessController  {color}
2016-12-08 15:00:26,327 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
coprocessor.CoprocessorHost (CoprocessorHost.java:shutdown(292)) - Stop 
coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
2016-12-08 15:00:26,327 DEBUG [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
coprocessor.CoprocessorHost (CoprocessorHost.java:shutdown(292)) - Stop 
coprocessor org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
2016-12-08 15:00:26,328 INFO  [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
regionserver.HRegion (HRegion.java:doClose(1279)) - Closed 
data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
{color:red}  2016-12-08 15:00:27,590 ERROR [regionserver60020-EventThread] 
zookeeper.ClientCnxn (ClientCnxn.java:processEvent(524)) - Error while calling 
watcher
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@1851ab3a rejected from 
java.util.concurrent.ThreadPoolExecutor@19c0794f[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 1]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at 

[jira] [Commented] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780428#comment-15780428
 ] 

Liu Junhong commented on HBASE-17374:
-

It is late now in Beijing, so I am submitting a patch for discussion.
I will run the UTs tomorrow.

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
> It has occurred many times that I granted some permissions, but on a few 
> regionservers the grant did not take effect and they had to be restarted. 
> When I looked up the logs, I found the following:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
> 2016-12-08 15:00:26,328 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1279)) - Closed 

[jira] [Updated] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong updated HBASE-17374:

Status: Patch Available  (was: Open)

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>
> It has happened many times that after I granted permissions, the grant did 
> not take effect on a few regionservers, which then had to be restarted. When 
> I looked at the logs, I found:
> 2016-12-08 15:00:26,238 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] handler.CloseRegionHandler 
> (CloseRegionHandler.java:process(128)) - Processing close of 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red} 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1163)) - Closing 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.: disabling 
> compactions & flushes {color}
> 2016-12-08 15:00:26,242 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1190)) - Updates disabled for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> 2016-12-08 15:00:26,242 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1753)) - Started memstore flush for 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14., current 
> region memstore size 160
> 2016-12-08 15:00:26,284 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.DefaultStoreFlusher (DefaultStoreFlusher.java:flushSnapshot(95)) 
> - Flushed, sequenceid=6, memsize=160, hasBloomFilter=true, into tmp file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,303 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] 
> regionserver.HRegionFileSystem (HRegionFileSystem.java:commitStoreFile(370)) 
> - Committing store file 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/.tmp/8d734ce3d93e40628d8f82111e754cb3
>  as 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3
> 2016-12-08 15:00:26,318 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HStore 
> (HStore.java:commitFile(877)) - Added 
> hdfs://dx-data-hbase-watcher/hbase/data/default/data-probe-test/5f06cb6447343b602e05996bfd87ce14/cf2/8d734ce3d93e40628d8f82111e754cb3,
>  entries=1, sequenceid=6, filesize=985
> 2016-12-08 15:00:26,319 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:internalFlushcache(1920)) - Finished memstore flush of 
> ~160/160, currentsize=0/0 for region 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14. in 77ms, 
> sequenceid=6, compaction requested=false
> 2016-12-08 15:00:26,323 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf1
> 2016-12-08 15:00:26,325 INFO  
> [StoreCloserThread-data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.-1]
>  regionserver.HStore (HStore.java:close(774)) - Closed cf2
> 2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.token.TokenProvider
> {color:red}  2016-12-08 15:00:26,326 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.AccessController  {color}
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> 2016-12-08 15:00:26,327 DEBUG 
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] coprocessor.CoprocessorHost 
> (CoprocessorHost.java:shutdown(292)) - Stop coprocessor 
> org.apache.hadoop.hbase.regionserver.ExternalMetricObserver
> 2016-12-08 15:00:26,328 INFO  
> [RS_CLOSE_REGION-dx-data-hbase-watcher05:60020-0] regionserver.HRegion 
> (HRegion.java:doClose(1279)) - Closed 
> data-probe-test,,1481180420784.5f06cb6447343b602e05996bfd87ce14.
> {color:red}  

[jira] [Updated] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong updated HBASE-17374:

Attachment: 0001-fix-for-HBASE-17374.patch

> ZKPermissionWatcher crashed when grant after close region 
> --
>
> Key: HBASE-17374
> URL: https://issues.apache.org/jira/browse/HBASE-17374
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.15
>Reporter: Liu Junhong
>Priority: Critical
> Attachments: 0001-fix-for-HBASE-17374.patch
>
>

[jira] [Updated] (HBASE-17374) ZKPermissionWatcher crashed when grant after close region

2016-12-27 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong updated HBASE-17374:

Description: 
It has happened many times that after I granted permissions, the grant did not 
take effect on a few regionservers, which then had to be restarted. When I 
looked at the logs, I found:

{color:red}  2016-12-08 15:00:27,590 ERROR [regionserver60020-EventThread] 
zookeeper.ClientCnxn (ClientCnxn.java:processEvent(524)) - Error while calling 
watcher
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@1851ab3a rejected from 
java.util.concurrent.ThreadPoolExecutor@19c0794f[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 1]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at 
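The stack trace above shows a ZooKeeper watcher callback being handed to an
executor that has already terminated. A minimal standalone sketch of that
failure mode (plain `java.util.concurrent`, not HBase code) is:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class TerminatedPoolDemo {
    // Returns true if submitting to the already-shut-down pool is rejected --
    // the same RejectedExecutionException the ZK watcher callback hit above.
    static boolean submitAfterShutdownRejected() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown();               // pool transitions toward Terminated
        try {
            pool.submit(() -> {});     // a late watcher event would land here
            return false;
        } catch (RejectedExecutionException e) {
            return true;               // default AbortPolicy rejects the task
        }
    }

    public static void main(String[] args) {
        System.out.println(submitAfterShutdownRejected());
    }
}
```

This is why the watcher keeps failing until the regionserver is restarted:
once its executor is terminated, every subsequent permission-change event is
rejected rather than applied.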

[jira] [Commented] (HBASE-17336) get/update replication peer config requests should be routed through master

2016-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780398#comment-15780398
 ] 

Hadoop QA commented on HBASE-17336:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
2s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 11s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 18m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
1s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 52 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 4m 
7s {color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 7s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 33s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 117m 38s 
{color} | {color:green} root in the patch passed. {color} |
| 

[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operation

2016-12-27 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15780373#comment-15780373
 ] 

Eshcar Hillel commented on HBASE-17339:
---

Thanks [~yangzhe1991] for your suggestion.
I agree that a server-level configuration is not appropriate; I used it only 
because it made benchmarking the optimization easier. Your suggestion to 
verify that in-memory timestamps are larger than flushed timestamps is also 
reasonable.
However, I think this should be a table-level property rather than a CF-level 
property, given the current implementation.

This is how the get operation is currently implemented at the region level:
1. In all relevant CFs, open all relevant scanners (both memory-segment 
scanners and HFile scanners); this includes initializing each scanner and 
seeking to the key;
2. Get the result as defined by the scan object.

Already in the seek step of phase 1, the operation accesses HFile blocks, 
which may have side effects on the block cache.

We aim to change this into 
{code}
if the optimization is applicable 
 1. open all relevant  *memory* scanners
 2. get results
 ONLY if result is not complete
  3. open all scanners
  4. get results
else
 1. open all scanners
 2. get results
{code}
This way the get operation can avoid unnecessary HFile access, and there is a 
single point where we decide which steps to execute.
This optimization is a best-effort heuristic: even when all timestamps are 
generated by the server, the operation may still need to run a full scan after 
the memory-only scan if the results might be incomplete.
The store level (CF level) only provides scanners as requested; it is not 
aware of which step of the optimization is running.
Therefore it is reasonable to make this a table-level property.
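As a rough sketch of the control flow described above (hypothetical names, not 
the actual patch), the region-level decision could look like:

```java
import java.util.List;

public class MemoryFirstGetSketch {
    // Hypothetical result holder: retrieved cells plus a completeness flag.
    record Result(List<String> cells, boolean complete) {}

    interface Scanner { Result scan(); }

    // Speculatively scan memory-only components first; fall back to a full
    // scan (memory + HFiles) only when the memory result may be incomplete.
    static Result get(boolean optimizationApplicable, Scanner memoryOnly, Scanner full) {
        if (optimizationApplicable) {
            Result r = memoryOnly.scan();
            if (r.complete()) {
                return r;        // HFile blocks were never touched
            }
        }
        return full.scan();      // open all scanners, memory and disk
    }

    public static void main(String[] args) {
        Scanner mem  = () -> new Result(List.of("v1"), true);
        Scanner full = () -> new Result(List.of("v1", "v0"), true);
        System.out.println(get(true, mem, full).cells());   // memory-only path
        System.out.println(get(false, mem, full).cells());  // full-scan path
    }
}
```

The single `if` mirrors the single decision point in the pseudocode: the store 
level just serves whatever scanners are requested, while the region level owns 
the memory-first heuristic.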
 

> Scan-Memory-First Optimization for Get Operation
> 
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
  1   2   >