[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209470#comment-14209470
 ] 

Hadoop QA commented on HBASE-12457:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681269/HBASE-12457.patch
  against trunk revision .
  ATTACHMENT ID: 12681269

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3787 checkstyle errors (more than the trunk's current 3786 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRegionReplicas

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
at 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:183)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11659//console

This message is automatically generated.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far, the time between the requested CLOSE and 
 the abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.

[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209485#comment-14209485
 ] 

Hudson commented on HBASE-12457:


FAILURE: Integrated in HBase-TRUNK #5772 (See 
[https://builds.apache.org/job/HBase-TRUNK/5772/])
HBASE-12457 Regions in transition for a long time when CLOSE interleaves with a 
slow compaction. (larsh: rev 231d3ee2adbfc32dfe4f7d7cd7a96ac33968520e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209520#comment-14209520
 ] 

Hudson commented on HBASE-12457:


SUCCESS: Integrated in HBase-0.98 #674 (See 
[https://builds.apache.org/job/HBase-0.98/674/])
HBASE-12457 Regions in transition for a long time when CLOSE interleaves with a 
slow compaction. (larsh: rev 56af34831fc854c177697aefaf80d535996f87e8)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java







[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209531#comment-14209531
 ] 

Hudson commented on HBASE-12457:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #642 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/642/])
HBASE-12457 Regions in transition for a long time when CLOSE interleaves with a 
slow compaction. (larsh: rev 56af34831fc854c177697aefaf80d535996f87e8)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java







[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBase-12394 Document.pdf

Attaching an introduction document.

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0, 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394.patch, HBase-12394 Document.pdf


 Review Board: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 For a Hadoop cluster, a job with a large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch supports 
 one mapper using multiple regions as input.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as input 
 for one mapper. For example, if we have an HBase table with 300 regions and 
 we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning 
 the table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (total number of regions) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration:
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
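As a rough illustration of the mapper-count formula above (a hypothetical standalone helper, not code from the patch; the method name is made up for this sketch), rounding up so leftover regions still get a mapper:

```java
public class RegionsPerMapperSketch {
  // Mirrors "Number of Mappers = total regions / regionspermapper",
  // using ceiling division so a remainder of regions is not dropped.
  static int numberOfMappers(int totalRegions, int regionsPerMapper) {
    if (regionsPerMapper <= 1) {
      return totalRegions; // default behavior: one region per mapper
    }
    return (totalRegions + regionsPerMapper - 1) / regionsPerMapper;
  }

  public static void main(String[] args) {
    // The example from the description: 300 regions, 3 per mapper -> 100 mappers.
    System.out.println(numberOfMappers(300, 3));
    System.out.println(numberOfMappers(1000, 1));
  }
}
```

With 300 regions and regionspermapper = 3 this yields the 100 mappers quoted in the description; 301 regions would yield 101, since the last mapper takes the single leftover region.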
  
   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v5.patch

In the new patch:
1. Added some tests to demonstrate that the new code actually works.
2. Abstracted some duplicated code into a method so that the if branch and 
else branch can share it.
3. Added some new comments in the code.

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0, 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394.patch, HBase-12394 
 Document.pdf







[jira] [Updated] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2014-11-13 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-12451:

Status: Patch Available  (was: Open)

 IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits 
 in rolling update of cluster
 

 Key: HBASE-12451
 URL: https://issues.apache.org/jira/browse/HBASE-12451
 Project: HBase
  Issue Type: Bug
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-12451-v1.diff


 Currently IncreasingToUpperBoundRegionSplitPolicy is the default region split 
 policy. In this policy, the split size is the number of regions of the same 
 table on this server, cubed, times 2x the region flush size.
 But when unloading regions from a regionserver in a cluster using 
 region_mover.rb, the number of regions of the same table on this server 
 decreases, and the split size decreases with it, which may cause the 
 remaining regions on the regionserver to split. Region splits also happen 
 when loading regions onto a regionserver in a cluster.
 An improvement may be to set a minimum split size in 
 IncreasingToUpperBoundRegionSplitPolicy.
 Suggestions are welcome. Thanks~
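The sizing rule described above, plus the proposed floor, can be sketched as follows. This is an illustrative model only: the method and parameter names (sizeToCheck, minSplitSize) are assumptions for this sketch, not the actual policy code.

```java
public class SplitSizeSketch {
  // count = regions of the same table on this server. The policy described
  // above uses min(maxFileSize, 2 * flushSize * count^3).
  static long sizeToCheck(int count, long flushSize, long maxFileSize) {
    if (count == 0) {
      return maxFileSize;
    }
    return Math.min(maxFileSize, 2L * flushSize * count * count * count);
  }

  // With a floor, draining a server (count shrinking toward 1) can no longer
  // drop the threshold below minSplitSize and trigger spurious splits.
  static long sizeToCheckWithFloor(int count, long flushSize, long maxFileSize,
                                   long minSplitSize) {
    return Math.max(minSplitSize, sizeToCheck(count, flushSize, maxFileSize));
  }

  public static void main(String[] args) {
    long flush = 128L << 20; // 128 MB flush size
    long max = 10L << 30;    // 10 GB max file size
    // One region left on the server: threshold collapses to 2 * flushSize.
    System.out.println(sizeToCheck(1, flush, max));
    // A 1 GB floor keeps the threshold up during a rolling restart.
    System.out.println(sizeToCheckWithFloor(1, flush, max, 1L << 30));
  }
}
```

This shows the failure mode: with a 128 MB flush size, a server down to its last region of a table splits anything over 256 MB, which is exactly the rolling-update scenario the issue describes.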





[jira] [Updated] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2014-11-13 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-12451:

Attachment: HBASE-12451-v1.diff

Patch for trunk






[jira] [Commented] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2014-11-13 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209626#comment-14209626
 ] 

Liu Shaohui commented on HBASE-12451:
-

[~Apache9] [~tianq]
Please help to review at https://reviews.apache.org/r/27983/. Thanks.






[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209660#comment-14209660
 ] 

Dima Spivak commented on HBASE-12457:
-

[~lhofhansl], this commit looks to be [breaking test-compile on 
branch-1|https://builds.apache.org/job/HBase-1.0/462/console] and is [causing 5 
tests from TestRegionReplicas to fail on 
master|https://builds.apache.org/job/HBase-TRUNK/5772/testReport/] :(. FWIW, I 
reran on my local build machines and got the same errors.






[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209704#comment-14209704
 ] 

ramkrishna.s.vasudevan commented on HBASE-12457:


[~larsh]
{code}
writestate.wait(millis);
if (millis > 0 && EnvironmentEdgeManager.currentTime() - start >= millis) {
  // if we waited once for compactions to finish, interrupt them,
  // and try again
  if (LOG.isDebugEnabled()) {
    LOG.debug("Waited for " + millis
      + " ms for compactions to finish on close. Interrupting "
      + currentCompactions.size() + " compactions.");
  }
  for (Thread t : currentCompactions.keySet()) {
    // interrupt any current IO in the currently running compactions.
    t.interrupt();
  }
  millis = 0;
}
{code}
In this code we interrupt all the threads and set millis = 0. So the code goes 
back to the outer loop and will once again wait in writestate.wait(0), 
expecting a notify to happen. But what if, by this time, all the threads have 
already been interrupted and notifyAll has already been called?
{code}
finally {
  if (wasStateSet) {
    synchronized (writestate) {
      --writestate.compacting;
      if (writestate.compacting <= 0) {
        writestate.notifyAll();
      }
    }
  }
}
{code}
Won't we end up waiting forever?
I may be wrong here, please correct me.
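The lost-notify concern raised above is usually avoided with the guarded-wait pattern: re-check the predicate in a loop around wait(), so a notifyAll that fires before the waiter re-enters wait() is simply observed as the predicate already being satisfied. The sketch below is a simplified standalone model, not the actual HBase class; the names (writestate, compacting) merely mirror the quoted HRegion code.

```java
public class CompactionWaitSketch {
  static final Object writestate = new Object();
  static int compacting = 0;

  // Close side: wait until all compactions are done, or until the timeout.
  // Because the predicate is re-checked inside the loop, a notifyAll that
  // happened before we (re-)enter wait() cannot strand us forever.
  static void waitForCompactions(long millis) throws InterruptedException {
    long start = System.currentTimeMillis();
    synchronized (writestate) {
      while (compacting > 0) {
        long remaining = millis - (System.currentTimeMillis() - start);
        if (remaining <= 0) {
          break; // timed out; the caller would now interrupt compactions
        }
        writestate.wait(remaining);
      }
    }
  }

  // Compaction side: decrement and notify, as in the finally block quoted above.
  static void finishCompaction() {
    synchronized (writestate) {
      --compacting;
      if (compacting <= 0) {
        writestate.notifyAll();
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    compacting = 1;
    Thread compactor = new Thread(() -> {
      try { Thread.sleep(50); } catch (InterruptedException ignored) { }
      finishCompaction();
    });
    compactor.start();
    waitForCompactions(5000); // returns once compacting drops to 0
    compactor.join();
  }
}
```

The key difference from an unconditional writestate.wait(0) is that the waiter never blocks when the count is already zero, which is exactly the window the comment above worries about.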






[jira] [Commented] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209708#comment-14209708
 ] 

Hadoop QA commented on HBASE-12394:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681309/HBASE-12394-v5.patch
  against trunk revision .
  ATTACHMENT ID: 12681309

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRegionReplicas

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
at 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:183)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11661//console

This message is automatically generated.

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0, 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394.patch, HBase-12394 
 Document.pdf



[jira] [Updated] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12457:
---
Attachment: HBASE-12457_addendum.patch

This should solve the compilation issue, if I am right.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch







[jira] [Commented] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209754#comment-14209754
 ] 

Hadoop QA commented on HBASE-12451:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681313/HBASE-12451-v1.diff
  against trunk revision .
  ATTACHMENT ID: 12681313

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3792 checkstyle errors (more than the trunk's current 3787 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  public static List<TableStatistics> 
toTableStatisticsList(List<RegionServerStatusProtos.TableStatistics> protos) {
+// Get average count of regions that have the same common table as 
this.region and are on same server

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.quotas.TestQuotaAdmin
  
org.apache.hadoop.hbase.replication.TestReplicationKillMasterRS
  org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildHole
  org.apache.hadoop.hbase.regionserver.TestRegionReplicas
  org.apache.hadoop.hbase.quotas.TestQuotaTableUtil
  org.apache.hadoop.hbase.master.TestRollingRestart
  org.apache.hadoop.hbase.replication.TestReplicationSyncUpTool
  org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildBase
  org.apache.hadoop.hbase.master.TestRestartCluster
  org.apache.hadoop.hbase.replication.TestReplicationEndpoint
  org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
  
org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed
  org.apache.hadoop.hbase.regionserver.TestClusterId
  org.apache.hadoop.hbase.replication.TestReplicationSmallTests
  
org.apache.hadoop.hbase.replication.TestReplicationChangingPeerRegionservers
  
org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing
  org.apache.hadoop.hbase.quotas.TestQuotaThrottle
  org.apache.hadoop.hbase.client.TestAdmin1
  
org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildOverlap

 {color:red}-1 core zombie tests{color}.  There are 8 zombie test(s):   
at 
org.apache.hadoop.hbase.master.TestMasterNoCluster.testNotPullingDeadRegionServerFromZK(TestMasterNoCluster.java:306)
at 
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas.testCreateTableWithMultipleReplicas(TestMasterOperationsForRegionReplicas.java:155)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSplitRegionWithNoStoreFiles(TestSplitTransactionOnCluster.java:762)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testExistingZnodeBlocksSplitAndWeRollback(TestSplitTransactionOnCluster.java:336)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testRSSplitDaughtersAreOnlinedAfterShutdownHandling(TestSplitTransactionOnCluster.java:291)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSplitHooksBeforeAndAfterPONR(TestSplitTransactionOnCluster.java:891)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSplitAndRestartingMaster(TestSplitTransactionOnCluster.java:845)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testTableExistsIfTheSpecifiedTableRegionIsSplitParent(TestSplitTransactionOnCluster.java:626)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testRITStateForRollback(TestSplitTransactionOnCluster.java:180)
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSplitFailedCompactionAndSplit(TestSplitTransactionOnCluster.java:229)
at 

[jira] [Created] (HBASE-12468) AUTHORIZATIONS should be part of Visibility Label Docs

2014-11-13 Thread Kevin Odell (JIRA)
Kevin Odell created HBASE-12468:
---

 Summary: AUTHORIZATIONS should be part of Visibility Label Docs
 Key: HBASE-12468
 URL: https://issues.apache.org/jira/browse/HBASE-12468
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.98.6.1
Reporter: Kevin Odell
Assignee: Misty Stanley-Jones


Per https://issues.apache.org/jira/browse/HBASE-12346, you need to use 
AUTHORIZATIONS or setAuthorizations to see your labels. We may want to update 
http://hbase.apache.org/book/ch08s03.html with that information.





[jira] [Commented] (HBASE-12463) MemstoreLAB reduce #objects created

2014-11-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209825#comment-14209825
 ] 

Anoop Sam John commented on HBASE-12463:


Ok, I will test.
We have a multi-threaded test on HeapMemstoreLAB; I will use that test itself 
(with more ops per thread).

 MemstoreLAB reduce #objects created
 ---

 Key: HBASE-12463
 URL: https://issues.apache.org/jira/browse/HBASE-12463
 Project: HBase
  Issue Type: Improvement
  Components: Performance
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12463.patch


 By default the Memstore uses MSLAB. For each Cell added to the memstore, we 
 allocate an area in the MSLAB and return that area in a BR wrapper, so a new 
 BR object is created each time. Instead, we can keep a ThreadLocal BR 
 instance, and each time the allocate() API returns the BR, set the byte[], 
 offset and length on that ThreadLocal instance. In total that means only as 
 many objects as there are threads (the max handler count).
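The ThreadLocal reuse idea can be sketched as follows. This is a hedged illustration of the proposal, not the actual patch; `ByteSlice` and `ThreadLocalSliceAllocator` are hypothetical names standing in for the BR wrapper and the MSLAB allocate() path.

```java
// Hypothetical stand-in for the BR wrapper: mutable, so it can be reset
// instead of reallocated on every call.
final class ByteSlice {
    byte[] buf; int offset; int length;
    ByteSlice set(byte[] buf, int offset, int length) {
        this.buf = buf; this.offset = offset; this.length = length;
        return this;
    }
}

final class ThreadLocalSliceAllocator {
    // One reusable ByteSlice per handler thread, created lazily.
    private static final ThreadLocal<ByteSlice> SLICE =
        ThreadLocal.withInitial(ByteSlice::new);

    // Returns the calling thread's slice, repointed at the allocated region.
    static ByteSlice allocate(byte[] chunk, int offset, int length) {
        return SLICE.get().set(chunk, offset, length);
    }
}
```

The tradeoff is that the returned wrapper is only valid until the same thread's next allocate() call; callers must copy out whatever they need before allocating again.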





[jira] [Created] (HBASE-12469) Way to view current labels

2014-11-13 Thread Kevin Odell (JIRA)
Kevin Odell created HBASE-12469:
---

 Summary: Way to view current labels
 Key: HBASE-12469
 URL: https://issues.apache.org/jira/browse/HBASE-12469
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell


There is currently no way to get the available labels for a system even if you 
are the super user.  You have to run a scan of hbase:labels and then interpret 
the output.





[jira] [Created] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Kevin Odell (JIRA)
Kevin Odell created HBASE-12470:
---

 Summary: Way to determine which labels are applied to a cell in a 
table
 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell


There is currently no way to determine which labels are applied to a cell 
without using the HFile tool to dump each HFile and then translating the output 
back to the hbase:labels table.  This is quite tedious on larger tables.  Since 
this could be a security risk, perhaps we could make it tunable with 
hbase.superuser.can.view.cells or something along those lines?





[jira] [Commented] (HBASE-12469) Way to view current labels

2014-11-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209849#comment-14209849
 ] 

Anoop Sam John commented on HBASE-12469:


You planning for a patch [~kevin.odell]?

 Way to view current labels
 --

 Key: HBASE-12469
 URL: https://issues.apache.org/jira/browse/HBASE-12469
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to get the available labels for a system even if 
 you are the super user.  You have to run a scan of hbase:labels and then 
 interpret the output.





[jira] [Updated] (HBASE-12413) Mismatch in the equals and hashcode methods of KeyValue

2014-11-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12413:
---
Status: Patch Available  (was: Open)

 Mismatch in the equals and hashcode methods of KeyValue
 ---

 Key: HBASE-12413
 URL: https://issues.apache.org/jira/browse/HBASE-12413
 Project: HBase
  Issue Type: Bug
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12413-V2.diff, HBASE-12413.diff


 In the equals method of KeyValue only the row key is compared, while the 
 hashcode method is computed over all backing bytes. This breaks the Java 
 equals/hashCode contract.
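A minimal illustration of the contract violation (not HBase's actual KeyValue; `BrokenKey` is a hypothetical class): equals() looks at only one field while hashCode() mixes in more state, so objects that compare equal can produce different hash codes and land in different hash buckets.

```java
import java.util.Objects;

// Hypothetical class mirroring the bug shape: equals() and hashCode()
// are computed over different subsets of the object's state.
class BrokenKey {
    final String row;      // the only field equals() compares
    final long timestamp;  // ignored by equals(), but folded into hashCode()

    BrokenKey(String row, long timestamp) {
        this.row = row;
        this.timestamp = timestamp;
    }

    @Override public boolean equals(Object o) {
        return o instanceof BrokenKey && ((BrokenKey) o).row.equals(row);
    }

    @Override public int hashCode() {
        return Objects.hash(row, timestamp); // uses more state than equals()
    }
}
```

With this class, a HashMap lookup with an "equal" key can silently miss, because the probe hashes to a different bucket than the stored entry.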





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209924#comment-14209924
 ] 

Andrew Purtell commented on HBASE-12457:


+1 on the addendum for fixing test annotation import paths 

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests that the region be closed on this RS
 # The compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I have tracked down so far, the time between the requested CLOSE 
 and the abort of the compaction is almost exactly 20 minutes, which is 
 suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v6.patch

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0, 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
 HBASE-12394.patch, HBase-12394 Document.pdf


 Review Board: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 On a Hadoop cluster, a job with a large HBase table as input always consumes a 
 large amount of computing resources. For example, we need a job with 1000 
 mappers to scan a table with 1000 regions. This patch supports one mapper 
 using multiple regions as input.
 In order to support multiple regions per mapper, we need a new configuration 
 property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 regions 
 and we set hbase.mapreduce.scan.regionspermapper = 3, a job scanning the 
 table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (Total number of regions) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration:
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
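The arithmetic behind the formula above can be sketched as follows. This is a hypothetical helper for illustration, not part of the patch; it also shows why the division should round up so a remainder of regions still gets a mapper.

```java
// Hypothetical helper computing the mapper count implied by
// hbase.mapreduce.scan.regionspermapper, with ceiling division.
class RegionsPerMapperMath {
    static int mapperCount(int totalRegions, int regionsPerMapper) {
        // Each mapper takes up to regionsPerMapper regions; round up so
        // that, e.g., 301 regions with 3 per mapper yields 101 mappers.
        return (totalRegions + regionsPerMapper - 1) / regionsPerMapper;
    }
}
```

For the 300-region example with regionspermapper = 3, this gives the 100 mappers described above.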





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209952#comment-14209952
 ] 

Andrew Purtell commented on HBASE-12457:


I pushed the addendum to branch-1 and master.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests that the region be closed on this RS
 # The compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I have tracked down so far, the time between the requested CLOSE 
 and the abort of the compaction is almost exactly 20 minutes, which is 
 suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209985#comment-14209985
 ] 

Andrew Purtell commented on HBASE-12457:


I can see a TestRegionReplicas hang. We are getting hung up waiting for an 
HTable thread pool to terminate:
{noformat}
Thread-2297 prio=10 tid=0x7feee0d1c800 nid=0x6173 waiting on condition 
[0x7fee508c6000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00078e04d4c8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1490)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.afterClass(TestRegionReplicas.java:107)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.restartRegionServer(TestRegionReplicas.java:220)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
{noformat}

A worker thread in the HTable thread pool is hung up trying to get table state:

{noformat}
htable-pool53-t2 daemon prio=10 tid=0x7feea454c000 nid=0x566e waiting on 
condition [0x7feec0365000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1487)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1522)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1727)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTableState(ConnectionManager.java:2504)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:894)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:289)
at 
org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:135)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Not sure how this relates to any compaction changes. At first glance it 
doesn't seem to.


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests that the region be closed on this RS
 # The compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I have tracked down so far, the time between the requested CLOSE 
 and the abort of the compaction is almost exactly 20 minutes, which is 
 suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but 

[jira] [Updated] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12404:
--
Attachment: 12404v3.txt

Fix unit test failures.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: https://issues.apache.org/jira/browse/HBASE-12404
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 12404.txt, 12404v2.txt, 12404v3.txt


 Do step 5 from the [~ndimiduk] list in the parent issue: go through the 
 source code and change all new HTable uses to connection.getTable.





[jira] [Comment Edited] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209985#comment-14209985
 ] 

Andrew Purtell edited comment on HBASE-12457 at 11/13/14 4:37 PM:
--

I can see a TestRegionReplicas hang. It looks like a minicluster shutdown 
sequencing problem.

We are getting hung up waiting for an HTable thread pool to terminate:
{noformat}
Thread-2297 prio=10 tid=0x7feee0d1c800 nid=0x6173 waiting on condition 
[0x7fee508c6000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00078e04d4c8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1490)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.afterClass(TestRegionReplicas.java:107)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.restartRegionServer(TestRegionReplicas.java:220)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
{noformat}

A worker thread in the HTable thread pool is hung up trying to get table state:

{noformat}
htable-pool53-t2 daemon prio=10 tid=0x7feea454c000 nid=0x566e waiting on 
condition [0x7feec0365000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1487)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1522)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1727)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTableState(ConnectionManager.java:2504)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:894)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:289)
at 
org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:135)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Not sure how this relates to any compaction changes. At first glance it doesn't 
seem to.

I also see a regionserver trying to send a status report to the master; see 
below. What these threads have in common is that there is no longer a running 
master: there are no master threads in the stack dump. This looks like a 
minicluster shutdown sequencing problem. 

{noformat}
RS:0;localhost:54421 prio=10 tid=0x7feea4549000 nid=0x55a9 waiting on 
condition [0x7fee605c3000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1186)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub(HRegionServer.java:2081)
- locked 0x00078c6768a8 (a 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1074)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:866)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
at 

[jira] [Comment Edited] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209985#comment-14209985
 ] 

Andrew Purtell edited comment on HBASE-12457 at 11/13/14 4:36 PM:
--

I can see a TestRegionReplicas hang. We are getting hung up on waiting for a 
HTable thread pool to terminate:
{noformat}
Thread-2297 prio=10 tid=0x7feee0d1c800 nid=0x6173 waiting on condition 
[0x7fee508c6000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00078e04d4c8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1490)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.afterClass(TestRegionReplicas.java:107)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.restartRegionServer(TestRegionReplicas.java:220)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
{noformat}

A worker thread in the HTable thread pool is hung up trying to get table state:

{noformat}
htable-pool53-t2 daemon prio=10 tid=0x7feea454c000 nid=0x566e waiting on 
condition [0x7feec0365000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1487)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1522)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1727)
- locked 0x00078cc03140 (a java.lang.Object)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTableState(ConnectionManager.java:2504)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:894)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:289)
at 
org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:135)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Not sure how this relates to any compaction changes. At first glance it doesn't 
seem to.

I also see a regionserver trying to send a status report to the master. See 
below. What these have in common is there is no longer a running master. There 
are no master threads in the stack dump. Looks like a minicluster shutdown 
sequencing problem. 

{noformat}
RS:0;localhost:54421 prio=10 tid=0x7feea4549000 nid=0x55a9 waiting on 
condition [0x7fee605c3000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1186)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub(HRegionServer.java:2081)
- locked 0x00078c6768a8 (a 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1074)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:866)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
at 

[jira] [Comment Edited] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209985#comment-14209985
 ] 

Andrew Purtell edited comment on HBASE-12457 at 11/13/14 4:37 PM:
--

I can see a TestRegionReplicas hang. It looks like a minicluster shutdown 
sequencing problem.

We are getting hung up on waiting for a HTable thread pool to terminate:
{noformat}
Thread-2297 prio=10 tid=0x7feee0d1c800 nid=0x6173 waiting on condition 
[0x7fee508c6000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00078e04d4c8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1490)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.afterClass(TestRegionReplicas.java:107)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.restartRegionServer(TestRegionReplicas.java:220)
at 
org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
{noformat}

A worker thread in the HTable thread pool is hung up trying to get table state:

{noformat}
htable-pool53-t2 daemon prio=10 tid=0x7feea454c000 nid=0x566e waiting on condition [0x7feec0365000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1487)
    - locked 0x00078cc03140 (a java.lang.Object)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1522)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1727)
    - locked 0x00078cc03140 (a java.lang.Object)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTableState(ConnectionManager.java:2504)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:894)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1064)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:289)
    at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:135)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{noformat}

Not sure how this relates to any compaction changes. At first glance it doesn't 
seem to.
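For reference, the close() path shown above is parked indefinitely in ThreadPoolExecutor.awaitTermination. A rough sketch, and only a sketch, not HTable's actual close() logic, of a bounded shutdown that interrupts stragglers instead of parking forever:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch only -- not HTable's actual close() logic. The idea: bound the
// wait on the pool, then interrupt stragglers instead of parking forever.
public class BoundedShutdown {

    // Returns true if the pool terminated within the (possibly extended) bound.
    public static boolean shutdownPool(ExecutorService pool, long timeoutSeconds) {
        pool.shutdown(); // reject new tasks, let queued tasks finish
        try {
            if (pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                return true;
            }
            pool.shutdownNow(); // interrupt workers that are still running
            // short grace period for interrupted tasks to unwind
            return pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException ie) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
    }
}
```

A test teardown built on this pattern fails fast with a false return instead of hanging the whole suite on one stuck worker.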

I also see a regionserver trying to send a status report to the master; see 
below. What these threads have in common is that there is no longer a running 
master: there are no master threads in the stack dump.

{noformat}
RS:0;localhost:54421 prio=10 tid=0x7feea4549000 nid=0x55a9 waiting on condition [0x7fee605c3000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1186)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionServerStatusStub(HRegionServer.java:2081)
    - locked 0x00078c6768a8 (a org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1074)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:866)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:156)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:108)
at 

[jira] [Updated] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12457:
---
Attachment: TestRegionReplicas-jstack.txt

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12413) Mismatch in the equals and hashcode methods of KeyValue

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209997#comment-14209997
 ] 

Hadoop QA commented on HBASE-12413:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12679176/HBASE-12413-V2.diff
  against trunk revision .
  ATTACHMENT ID: 12679176

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRegionReplicas

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):
at org.apache.hadoop.hbase.regionserver.TestRegionReplicas.testVerifySecondaryAbilityToReadWithOnFiles(TestRegionReplicas.java:421)
at org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:183)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11663//console

This message is automatically generated.

 Mismatch in the equals and hashcode methods of KeyValue
 ---

 Key: HBASE-12413
 URL: https://issues.apache.org/jira/browse/HBASE-12413
 Project: HBase
  Issue Type: Bug
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12413-V2.diff, HBASE-12413.diff


 In the equals method of KeyValue only the row key is compared, while the 
 hashcode method is computed over all backing bytes. This breaks the Java 
 equals/hashCode contract.
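 The Java rule in question is the equals/hashCode contract: objects that 
 compare equal must produce equal hash codes. A minimal hypothetical 
 illustration of the fix (not the real org.apache.hadoop.hbase.KeyValue, 
 whose fields and comparators differ), deriving hashCode from the same 
 state equals compares:

```java
import java.util.Arrays;

// Hypothetical illustration of the equals/hashCode contract -- not the real
// org.apache.hadoop.hbase.KeyValue. If equals() compares only the row key,
// hashCode() must be derived from that same state (and nothing more), or two
// "equal" cells can land in different hash buckets.
public class RowKeyedCell {
    private final byte[] row;
    private final byte[] value;

    public RowKeyedCell(byte[] row, byte[] value) {
        this.row = row;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RowKeyedCell)) return false;
        return Arrays.equals(row, ((RowKeyedCell) o).row); // row key only
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(row); // same field equals() looks at
    }
}
```

 Two cells with the same row but different values are now equal and hash 
 identically, so a HashSet keeps only one of them, which is what the 
 contract requires of this equals() definition.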





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210020#comment-14210020
 ] 

Hudson commented on HBASE-12457:


FAILURE: Integrated in HBase-TRUNK #5773 (See 
[https://builds.apache.org/job/HBase-TRUNK/5773/])
Amend HBASE-12457 Regions in transition for a long time when CLOSE interleaves 
with a slow compaction; Test import fix (apurtell: rev 
f6d8cde1e4f67390a936e7bc9f8c70b65a808450)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12413) Mismatch in the equals and hashcode methods of KeyValue

2014-11-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210019#comment-14210019
 ] 

Ted Yu commented on HBASE-12413:


lgtm
This is an incompatible change, right?

 Mismatch in the equals and hashcode methods of KeyValue
 ---

 Key: HBASE-12413
 URL: https://issues.apache.org/jira/browse/HBASE-12413
 Project: HBase
  Issue Type: Bug
Reporter: Jingcheng Du
Assignee: Jingcheng Du
Priority: Minor
 Attachments: HBASE-12413-V2.diff, HBASE-12413.diff


 In the equals method of KeyValue only the row key is compared, while the 
 hashcode method is computed over all backing bytes. This breaks the Java 
 equals/hashCode contract.





[jira] [Commented] (HBASE-8607) Allow custom filters and coprocessors to be updated for a region server without requiring a restart

2014-11-13 Thread Julian Wissmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210032#comment-14210032
 ] 

Julian Wissmann commented on HBASE-8607:


Hi,
Sorry about the late reply. Time is a little scarce at the moment.
It'll take me a few weeks to get this going as it will really just be a little 
side project, but sure, I'll be happy to work on this and provide a patch.

 Allow custom filters and coprocessors to be updated for a region server 
 without requiring a restart
 ---

 Key: HBASE-8607
 URL: https://issues.apache.org/jira/browse/HBASE-8607
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: James Taylor

 One solution to allowing custom filters and coprocessors to be updated for a 
 region server without requiring a restart might be to run the HBase server in 
 an OSGi container (maybe there are other approaches as well?). Typically, 
 applications that use coprocessors and custom filters also have shared 
 classes underneath, so putting the burden on the user to include some kind of 
 version name in the class is not adequate. Including the version name in the 
 package might work in some cases (at least until dependent jars start to 
 change as well), but is cumbersome and overburdens the app developer.
 Regardless of what approach is taken, we'd need to define the life cycle of 
 the coprocessors and custom filters when a new version is loaded. For 
 example, in-flight invocations could continue to use the old version while 
 new invocations would use the new ones. Once the in-flight invocations are 
 complete, the old code/jar could be unloaded.





[jira] [Assigned] (HBASE-8607) Allow custom filters and coprocessors to be updated for a region server without requiring a restart

2014-11-13 Thread Julian Wissmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Wissmann reassigned HBASE-8607:
--

Assignee: Julian Wissmann

 Allow custom filters and coprocessors to be updated for a region server 
 without requiring a restart
 ---

 Key: HBASE-8607
 URL: https://issues.apache.org/jira/browse/HBASE-8607
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: James Taylor
Assignee: Julian Wissmann

 One solution to allowing custom filters and coprocessors to be updated for a 
 region server without requiring a restart might be to run the HBase server in 
 an OSGi container (maybe there are other approaches as well?). Typically, 
 applications that use coprocessors and custom filters also have shared 
 classes underneath, so putting the burden on the user to include some kind of 
 version name in the class is not adequate. Including the version name in the 
 package might work in some cases (at least until dependent jars start to 
 change as well), but is cumbersome and overburdens the app developer.
 Regardless of what approach is taken, we'd need to define the life cycle of 
 the coprocessors and custom filters when a new version is loaded. For 
 example, in-flight invocations could continue to use the old version while 
 new invocations would use the new ones. Once the in-flight invocations are 
 complete, the old code/jar could be unloaded.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210037#comment-14210037
 ] 

Andrew Purtell commented on HBASE-12457:


Well, for whatever reason, this change does trigger the above condition, 
presumably due to some kind of timing change: if I go back two commits, to 
before this patch and the addendum, the test makes progress and completes.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210036#comment-14210036
 ] 

Lars Hofhansl commented on HBASE-12457:
---

Sorry about the build break on branch-1. I cherry-picked the patch. Usually I 
do a compile and run the relevant tests, but I spaced it this time.

The hang will not happen since we only notify *after* we set 
writestate.compacting (or writestate.flushing) back to false, so there is no 
race. I looked at that part :)

In the face of the test failures I am going to roll this back anyway, though.
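The ordering described above can be sketched as follows; the field and method names here are hypothetical, not HRegion's actual writestate code. Because the flag is cleared and notifyAll() is called under the same monitor the waiter re-checks, the waiter either sees compacting == false or is still inside wait() when the notification arrives, so there is no window to miss it:

```java
// Sketch of the notify-after-state-change pattern; names are hypothetical,
// not HRegion's actual writestate. Clearing the flag and calling notifyAll()
// while holding the monitor the waiter re-checks under closes the race.
public class WriteState {
    private boolean compacting = false;
    private final Object lock = new Object();

    public void startCompaction() {
        synchronized (lock) {
            compacting = true;
        }
    }

    public void finishCompaction() {
        synchronized (lock) {
            compacting = false; // state change first...
            lock.notifyAll();   // ...then wake waiters, same monitor held
        }
    }

    public boolean isCompacting() {
        synchronized (lock) {
            return compacting;
        }
    }

    // Close path: re-check the flag in a loop, never trust a bare wakeup.
    public void waitForCompactionToFinish(long timeoutMs) throws InterruptedException {
        synchronized (lock) {
            while (compacting) {
                lock.wait(timeoutMs);
            }
        }
    }
}
```

With this shape, a spurious wakeup or a timed-out wait simply re-tests the flag and parks again, while a genuine finishCompaction() is always observed.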


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Updated] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12457:
---
Fix Version/s: (was: 0.98.8)
   0.98.9

Moving out of .8. Sigh

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Reopened] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-12457:


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210044#comment-14210044
 ] 

Lars Hofhansl commented on HBASE-12457:
---

reverted from all branches... sorry about the noise

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210046#comment-14210046
 ] 

Lars Hofhansl commented on HBASE-12457:
---

[~apurtell], you mean the test condition, right? Or did you see it hanging 
specifically on that writestate.wait(...)?

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210054#comment-14210054
 ] 

Andrew Purtell commented on HBASE-12457:


I meant the minicluster shutdown sequencing issue. Thanks for trying to get 
this in for .8, Lars.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210058#comment-14210058
 ] 

Hadoop QA commented on HBASE-12394:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681344/HBASE-12394-v6.patch
  against trunk revision .
  ATTACHMENT ID: 12681344

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestRegionReplicas

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11664//console

This message is automatically generated.

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0, 0.98.6.1
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
 HBASE-12394.patch, HBase-12394 Document.pdf


 Welcome to the ReviewBoard :https://reviews.apache.org/r/27519/   
 The Latest Patch is Diff Revision 2 (Latest)
 On a Hadoop cluster, a job with a large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch supports 
 one mapper taking multiple regions as input.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 
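The truncated example above can be modeled roughly as follows. This is an illustrative sketch only; the helper name mapperCount is invented, and the assumption that regions are grouped by simple ceiling division is not confirmed by the patch:

```java
// Illustrative model of hbase.mapreduce.scan.regionspermapper's effect:
// instead of one mapper per region, the job launches one mapper per group
// of regionsPerMapper regions (assumed grouping: ceiling division).
class RegionsPerMapperSketch {
    static int mapperCount(int regions, int regionsPerMapper) {
        // ceil(regions / regionsPerMapper) without floating point
        return (regions + regionsPerMapper - 1) / regionsPerMapper;
    }
}
```

Under this model a 1000-region table with the property unset (effectively 1) still needs 1000 mappers, while setting it to 4 would cut that to 250.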

[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210070#comment-14210070
 ] 

Hudson commented on HBASE-12457:


FAILURE: Integrated in HBase-1.0 #463 (See 
[https://builds.apache.org/job/HBase-1.0/463/])
Amend HBASE-12457 Regions in transition for a long time when CLOSE interleaves 
with a slow compaction; Test import fix (apurtell: rev 
9d2ad55cfa6108718d785b5e71ab10e9fb75a988)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.
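The fix direction here, having a long-running compaction notice a requested close and abort promptly rather than after a timeout, can be sketched as follows. The class and field names (CompactionSketch, abortRequested) are hypothetical stand-ins, not HBase's actual Store or compaction code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Cooperative cancellation: the compaction checks an abort flag between
// units of work, so a CLOSE request takes effect within one iteration
// instead of only after the whole compaction (or a long timeout) elapses.
class CompactionSketch {
    final AtomicBoolean abortRequested = new AtomicBoolean(false);
    int cellsCompacted = 0;

    // Returns true if the compaction completed, false if it aborted early.
    boolean compact(int totalCells) {
        for (int i = 0; i < totalCells; i++) {
            if (abortRequested.get()) {
                return false;      // close requested: give up promptly
            }
            cellsCompacted++;      // stand-in for writing one cell
        }
        return true;
    }
}
```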



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210073#comment-14210073
 ] 

stack commented on HBASE-12457:
---

Thanks for backing out the breaking change promptly.  Feel free to retry given 
you are watching the build results.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12469) Way to view current labels

2014-11-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210082#comment-14210082
 ] 

Jerry He commented on HBASE-12469:
--

Dup of HBASE-12373?

 Way to view current labels
 --

 Key: HBASE-12469
 URL: https://issues.apache.org/jira/browse/HBASE-12469
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to get the available labels for a system even if 
 you are the super user.  You have to run a scan of hbase:labels and then 
 interpret the output.





[jira] [Commented] (HBASE-12468) AUTHORIZATIONS should be part of Visibility Label Docs

2014-11-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210087#comment-14210087
 ] 

Jerry He commented on HBASE-12468:
--

This could be combined with HBASE-12466

 AUTHORIZATIONS should be part of Visibility Label Docs
 --

 Key: HBASE-12468
 URL: https://issues.apache.org/jira/browse/HBASE-12468
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.98.6.1
Reporter: Kevin Odell
Assignee: Misty Stanley-Jones

 Per https://issues.apache.org/jira/browse/HBASE-12346 you need to use 
 AUTHORIZATIONS or setAuthorizations to see your labels. We may want to update 
 http://hbase.apache.org/book/ch08s03.html with that information





[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210096#comment-14210096
 ] 

Jerry He commented on HBASE-12470:
--

This can be seen as to be related to HBASE-12441.
We could have the client scan result contain the labels.  Maybe only for the 
hbase superuser, upon the user's request, by setting an attribute on the client scan? 

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we could make it tunable 
 with hbase.superuser.can.view.cells or something along those lines?





[jira] [Work started] (HBASE-12464) meta table region assignment stuck in the FAILED_OPEN state due to region server not fully ready to serve

2014-11-13 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-12464 started by Stephen Yuan Jiang.
--
 meta table region assignment stuck in the FAILED_OPEN state due to region 
 server not fully ready to serve
 -

 Key: HBASE-12464
 URL: https://issues.apache.org/jira/browse/HBASE-12464
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 1.0.0, 2.0.0, 0.99.1
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 1.0.0, 2.0.0

 Attachments: HBASE-12464.v1-2.0.patch

   Original Estimate: 3h
  Remaining Estimate: 3h

 Meta table region assignment can reach the 'FAILED_OPEN' state, which 
 makes the region unavailable unless the target region server shuts down or 
 an operator intervenes.  This is an undesirable state for the meta table region.
 Here is the sequence of how this can happen (the code is in 
 AssignmentManager::assign()):
 Step 1: Master detects that a region server (RS1) hosting a meta table region 
 is down; it changes the meta region state from 'online' to 'offline'.
 Step 2: In a loop (with a configurable maximumAttempts count, default 10, 
 minimum 1), AssignmentManager tries to find an RS to host the meta table 
 region.  If there is no RS available, it loops forever by resetting the 
 loop count (!!BUG#1 from this logic - a small bug!!) 
    if (region.isMetaRegion()) {
      try {
        Thread.sleep(this.sleepTimeBeforeRetryingMetaAssignment);
        if (i == maximumAttempts) i = 1; // == BUG: if maximumAttempts is 1, then the loop will end.
        continue;
      } catch (InterruptedException e) {
        ...
      }
    }
 Step 3: Once a new RS is found (RS2), inside the same loop as Step 2, 
 AssignmentManager tries to assign the meta region to RS2, moving the state 
 from (OFFLINE, RS1) to (PENDING_OPEN, RS2).  If for some reason opening the 
 region on RS2 fails (e.g. the target RS2 is not ready to serve - 
 ServerNotRunningYetException), AssignmentManager changes the state from 
 (PENDING_OPEN, RS2) to (FAILED_OPEN, RS2), then retries (and may even change 
 the target RS).  The retry is capped at maximumAttempts.  Once maximumAttempts 
 is reached, the meta region stays in the 'FAILED_OPEN' state unless either 
 (1) RS2 shuts down, triggering region assignment again, or (2) it is 
 reassigned by an operator via HBase Shell.  
 Based on the documentation ( http://hbase.apache.org/book/regions.arch.html ), 
 this is by design: 17. For regions in FAILED_OPEN or FAILED_CLOSE states, 
 the master tries to close them again when they are reassigned by an operator 
 via HBase Shell.  
 However, this is a bad design, especially for the meta table region (it is 
 arguable that the design is fine for regular tables; in this ticket I am 
 focused on fixing the meta region availability issue).  
 I propose 2 possible fixes:
 Fix#1 (band-aid change): in Step 3, just like Step 2, if the region is a meta 
 table region, reset the loop count so that the loop is never left with the 
 meta table region in FAILED_OPEN state.
 Fix#2 (more involved): if a region is in FAILED_OPEN state, we should provide 
 a way to automatically trigger AssignmentManager::assign() after a short 
 period of time (leaving any region in FAILED_OPEN or other states like 
 'FAILED_CLOSE' is undesirable; there should be some way to retry and 
 auto-heal the region).
 I think at least for 1.0.0, Fix#1 is good enough.  We can open a task-type 
 JIRA for Fix#2 in a future release.
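The counter-reset idea in Fix#1 can be sketched as a toy model of the retry loop. All names here (assignAttempts, failuresBeforeSuccess) are illustrative, not AssignmentManager's real code; note the sketch resets the counter to 0 rather than 1, so it also behaves when maximumAttempts is 1 (BUG#1 above):

```java
// Toy model of the assign() retry loop with Fix#1 applied: a meta region
// resets the loop counter so it can never fall out into FAILED_OPEN.
class RetryLoopSketch {
    // Returns the number of attempts made before success, or -1 if the
    // region was given up on (i.e. left in FAILED_OPEN).
    static int assignAttempts(boolean isMetaRegion, int maximumAttempts,
                              int failuresBeforeSuccess) {
        int attempts = 0;
        for (int i = 1; i <= maximumAttempts; i++) {
            attempts++;
            if (attempts > failuresBeforeSuccess) {
                return attempts;                  // the open finally succeeded
            }
            if (isMetaRegion && i == maximumAttempts) {
                i = 0;                            // Fix#1: keep retrying forever
            }
        }
        return -1;                                // non-meta: FAILED_OPEN
    }
}
```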





[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210135#comment-14210135
 ] 

Anoop Sam John commented on HBASE-12470:


bq.We could use client scan result to contain the labels. Maybe with hbase 
superuser and upon user's request by setting an attribute in the client scan?
We would have to use a different Codec at the RPC layer then. That is the 
difficult part of the change.

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we could make it tunable 
 with hbase.superuser.can.view.cells or something along those lines?





[jira] [Resolved] (HBASE-12469) Way to view current labels

2014-11-13 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John resolved HBASE-12469.

Resolution: Duplicate

Yes dup of HBASE-12373. Closing it.

 Way to view current labels
 --

 Key: HBASE-12469
 URL: https://issues.apache.org/jira/browse/HBASE-12469
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to get the available labels for a system even if 
 you are the super user.  You have to run a scan of hbase:labels and then 
 interpret the output.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210159#comment-14210159
 ] 

Hudson commented on HBASE-12457:


SUCCESS: Integrated in HBase-1.0 #464 (See 
[https://builds.apache.org/job/HBase-1.0/464/])
Revert Amend HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction; Test import fix (larsh: rev 
880c7c35fc50f28ec3e072a4c62a348fc964e9e0)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
Revert HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction. (larsh: rev 
1861f9ce25bc8609629928a670fdf3566486ca25)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210160#comment-14210160
 ] 

Hudson commented on HBASE-12457:


FAILURE: Integrated in HBase-0.98 #675 (See 
[https://builds.apache.org/job/HBase-0.98/675/])
Revert HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction. (larsh: rev 
7f5f1570ce83c62ce9408701677994415b127b36)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210168#comment-14210168
 ] 

Jerry He commented on HBASE-12470:
--

Yes, after going through your original HBASE-10322, we need to give it some thought.

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we could make it tunable 
 with hbase.superuser.can.view.cells or something along those lines?





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14210184#comment-14210184
 ] 

Hudson commented on HBASE-12457:


SUCCESS: Integrated in HBase-TRUNK #5774 (See 
[https://builds.apache.org/job/HBase-TRUNK/5774/])
Revert Amend HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction; Test import fix (larsh: rev 
9d634772fa12e16b86b0218802b2e38cacdfd528)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
Revert HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction. (larsh: rev 
c29318c038f0f310562dc8194506b504eae72c1b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211269#comment-14211269
 ] 

Hudson commented on HBASE-12457:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #643 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/643/])
Revert HBASE-12457 Regions in transition for a long time when CLOSE 
interleaves with a slow compaction. (larsh: rev 
7f5f1570ce83c62ce9408701677994415b127b36)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DefaultCompactor.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionIO.java


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far the time between the requested CLOSE and 
 abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211325#comment-14211325
 ] 

Andrew Purtell commented on HBASE-12470:


This is also an issue for cell ACLs.

As Anoop mentioned we strip security tags in the RPC layer so we don't leak 
sensitive information to users, untrusted or otherwise. We can vary the codec 
but only globally by configuration.

In the run up to 0.98.0, while we were still at 0.97-SNAPSHOT, I proposed a 
couple of variations on per connection codec negotiation that didn't go 
anywhere on account of lack of time, interest, and community will. 
Per-connection negotiation is probably the best answer here. Might be worth it 
for you to reconsider the idea. After we authenticate a user as privileged (we 
can start with belonging to the superuser group) we could use the RPC codec 
which does not strip security tags, thus giving higher level APIs / policy 
monitoring / policy validation tools direct access to cell tags, and therefore 
ACL and visibility label metadata stored with them. 
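The per-connection negotiation could look roughly like the sketch below. The interfaces are invented for illustration (CellCodec and CodecNegotiator are not HBase's real RPC classes); the point is only that the codec choice is made once per connection, after authentication:

```java
// Two codecs: the default strips security tags before cells leave the
// server; the privileged one keeps them. Which one a connection uses is
// decided after the auth handshake.
interface CellCodec {
    byte[] encode(byte[] cell, byte[] tags);
}

class StrippingCodec implements CellCodec {
    public byte[] encode(byte[] cell, byte[] tags) {
        return cell;                              // tags never reach the client
    }
}

class TagPreservingCodec implements CellCodec {
    public byte[] encode(byte[] cell, byte[] tags) {
        byte[] out = new byte[cell.length + tags.length];
        System.arraycopy(cell, 0, out, 0, cell.length);
        System.arraycopy(tags, 0, out, cell.length, tags.length);
        return out;                               // cell bytes followed by tags
    }
}

class CodecNegotiator {
    // Start simple: only superusers get the tag-preserving codec.
    static CellCodec codecFor(boolean isSuperuser) {
        return isSuperuser ? new TagPreservingCodec() : new StrippingCodec();
    }
}
```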

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we could make it tunable 
 with hbase.superuser.can.view.cells or something along those lines?





[jira] [Comment Edited] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211325#comment-14211325
 ] 

Andrew Purtell edited comment on HBASE-12470 at 11/13/14 9:33 PM:
--

This is also an issue for cell ACLs.

As Anoop mentioned we strip security tags in the RPC layer so we don't leak 
sensitive information to users, untrusted or otherwise. We can vary the codec 
but only globally by configuration.

In the run up to 0.98.0, while we were still at 0.97-SNAPSHOT, I proposed a 
couple of variations on per connection codec negotiation that didn't go 
anywhere on account of lack of time, interest, and community will. 
Per-connection negotiation is probably the best answer here. Might be worth it 
for you to reconsider the idea. After we authenticate a user as privileged (we 
can start with belonging to the superuser group) we could use the RPC codec 
which does not strip security tags, thus giving higher level APIs / policy 
monitoring / policy validation tools direct access to cell tags, and therefore 
ACL and visibility label metadata stored with them. This requires the ability 
to swap RPC codecs on a per connection basis, after the authorization 
handshake, so some sort of negotiation...


was (Author: apurtell):
This is also an issue for cell ACLs.

As Anoop mentioned we strip security tags in the RPC layer so we don't leak 
sensitive information to users, untrusted or otherwise. We can vary the codec 
but only globally by configuration.

In the run up to 0.98.0, while we were still at 0.97-SNAPSHOT, I proposed a 
couple of variations on per connection codec negotiation that didn't go 
anywhere on account of lack of time, interest, and community will. 
Per-connection negotiation is probably the best answer here. Might be worth it 
for you to reconsider the idea. After we authenticate a user as privileged (we 
can start with belonging to the superuser group) we could use the RPC codec 
which does not strip security tags, thus giving higher level APIs / policy 
monitoring / policy validation tools direct access to cell tags, and therefore 
ACL and visibility label metadata stored with them. 

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we could make it tunable 
 with hbase.superuser.can.view.cells or something along those lines?





[jira] [Updated] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12404:
--
Attachment: 12404v5.txt

Trying hadoopqa again.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: https://issues.apache.org/jira/browse/HBASE-12404
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 12404.txt, 12404v2.txt, 12404v3.txt, 12404v5.txt


 Do the step 5. from the [~ndimiduk] list in parent issue.  Go through src 
 code and change all new HTable to instead be connection.getTable.





[jira] [Commented] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211399#comment-14211399
 ] 

Hadoop QA commented on HBASE-12404:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681405/12404v5.txt
  against trunk revision .
  ATTACHMENT ID: 12681405

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 81 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ * <a 
href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html">Hadoop
 Interface Classification</a>
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}, and
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+   {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}
+   or {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+method's argument.  Calling {@link 
org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], byte[], 
org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}
+  final Connection connection, final List<Get> gets, final KeyFromRow<K> 
kfr) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestCatalogJanitor

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11666//console

This message is automatically generated.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: 

[jira] [Updated] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12404:
--
Attachment: 0001-HBASE-12404-Task-5-from-parent-Replace-internal-HTab.patch

Here is a patch ready for review.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: https://issues.apache.org/jira/browse/HBASE-12404
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 
 0001-HBASE-12404-Task-5-from-parent-Replace-internal-HTab.patch, 12404.txt, 
 12404v2.txt, 12404v3.txt, 12404v5.txt


 Do the step 5. from the [~ndimiduk] list in parent issue.  Go through src 
 code and change all new HTable to instead be connection.getTable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211411#comment-14211411
 ] 

stack commented on HBASE-12404:
---

Posted rb here: https://reviews.apache.org/r/28009/
Here are a few notes on the patch:

Replaced HTable under hbase-*/src/main/java. Skipped tests. Would take
till end of time to do all and some cases are cryptic. Also skipped
some mapreduce where HTable comes through in API. Can do both of
these stragglers in another issue.

Generally, if a utility class or standalone class, tried to pass in a
Connection rather than have the utility or standalone create its own
connection on each invocation; e.g. the Quota stuff. Where not possible,
noted where invocation comes from... if test or hbck, didn't worry about it.
Some classes are just standalone and nothing to be done to avoid
a Connection setup per invocation (this is probably how it worked
in the new HTable...days anyways). Some classes are not used:
AggregationClient, FavoredNodes... we should just purge this stuff.

Doc on what the short-circuit connection does (I can just use it...
I thought it was just for short-circuit reads but no, it switches depending
on where you are connecting).

Changed HConnection to its subinterface ClusterConnection where safe
(internal usage by private classes only).

Doc cleanup in example usage so we do new mode rather than the old
fashion.

Used the java7 idiom that allows you to avoid writing out finally to call
close on implementations of Closeable.

Added a RegistryFactory... moved it out from being an inner class.

Added a utility createGetClosestRowOrBeforeReverseScan method to Scan
to create a Scan that can ...

Renamed getShortCircuitConnection as getConnection -- users don't need
to know what implementation does (that it can short-circuit RPC).
The old name gave pause. I was frightened to use it thinking it only
for short-circuit reading -- that it would not do remote too.
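The new-mode Table usage and the java7 try-with-resources idiom mentioned in the notes above can be sketched with toy stand-ins — Conn and Tab here are hypothetical placeholders for the real org.apache.hadoop.hbase.client Connection and Table (both Closeable), so the sketch runs without a cluster:

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
  static final List<String> events = new ArrayList<>();

  // Toy stand-in for Connection: hands out lightweight Tables and records close().
  static class Conn implements Closeable {
    Tab getTable(String name) { return new Tab(name); }
    @Override public void close() { events.add("connection closed"); }
  }
  // Toy stand-in for Table.
  static class Tab implements Closeable {
    final String name;
    Tab(String name) { this.name = name; }
    @Override public void close() { events.add("table " + name + " closed"); }
  }

  public static void main(String[] args) {
    // Old fashion: new HTable(conf, "t") plus a hand-written finally { close(); }.
    // New mode: obtain the Table from a shared Connection; try-with-resources
    // closes both Closeables for you, in reverse order of acquisition.
    try (Conn connection = new Conn();
         Tab table = connection.getTable("t")) {
      events.add("using " + table.name);
    }
    System.out.println(events); // [using t, table t closed, connection closed]
  }
}
```

The reverse close order matters: the Table is closed before the Connection it came from, which is exactly what the hand-written finally blocks being replaced had to get right.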

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: https://issues.apache.org/jira/browse/HBASE-12404
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 
 0001-HBASE-12404-Task-5-from-parent-Replace-internal-HTab.patch, 12404.txt, 
 12404v2.txt, 12404v3.txt, 12404v5.txt


 Do the step 5. from the [~ndimiduk] list in parent issue.  Go through src 
 code and change all new HTable to instead be connection.getTable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12471) Step 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)
stack created HBASE-12471:
-

 Summary: Step 4. replace internal 
ConnectionManager#{delete,get}Connection use with #close, #createConnection 
(0.98, 0.99)
 Key: HBASE-12471
 URL: https://issues.apache.org/jira/browse/HBASE-12471
 Project: HBase
  Issue Type: Sub-task
Reporter: stack


Let me do this. A bunch of this was done in HBASE-12404. Let me see if I can 
find more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12471:
--
Summary: Task 4. replace internal ConnectionManager#{delete,get}Connection 
use with #close, #createConnection (0.98, 0.99)  (was: Step 4. replace internal 
ConnectionManager#{delete,get}Connection use with #close, #createConnection 
(0.98, 0.99))

 Task 4. replace internal ConnectionManager#{delete,get}Connection use with 
 #close, #createConnection (0.98, 0.99)
 -

 Key: HBASE-12471
 URL: https://issues.apache.org/jira/browse/HBASE-12471
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
 Fix For: 0.99.2


 Let me do this. A bunch of this was done in HBASE-12404. Let me see if I can 
 find more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12471:
--
Assignee: stack

 Task 4. replace internal ConnectionManager#{delete,get}Connection use with 
 #close, #createConnection (0.98, 0.99)
 -

 Key: HBASE-12471
 URL: https://issues.apache.org/jira/browse/HBASE-12471
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2


 Let me do this. A bunch of this was done in HBASE-12404. Let me see if I can 
 find more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211483#comment-14211483
 ] 

Hadoop QA commented on HBASE-12404:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12681421/0001-HBASE-12404-Task-5-from-parent-Replace-internal-HTab.patch
  against trunk revision .
  ATTACHMENT ID: 12681421

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 103 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ * <a 
href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html">Hadoop
 Interface Classification</a>
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}, and
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+   {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}
+   or {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+method's argument.  Calling {@link 
org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], byte[], 
org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}
+  final Connection connection, final List<Get> gets, final KeyFromRow<K> 
kfr) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestCatalogJanitor

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11667//console

This message is automatically generated.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 

[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Affects Version/s: (was: 0.98.6.1)

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
 HBASE-12394.patch, HBase-12394 Document.pdf


 Welcome to the ReviewBoard: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 For Hadoop cluster, a job with large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 regions 
 and we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning the 
 table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (total number of regions) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration.
 <property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
 </property>
 This is an example for Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
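The mapper-count formula from the description above can be sketched as a toy calculation (numMappers is a hypothetical helper, not the patch's actual InputFormat code; rounding up for an uneven leftover group is an assumption, since the issue's example divides evenly):

```java
public class RegionsPerMapper {
  // Number of Mappers = (total number of regions) /
  // hbase.mapreduce.scan.regionspermapper, using ceiling division so a
  // leftover group with fewer regions still gets its own mapper.
  static int numMappers(int totalRegions, int regionsPerMapper) {
    return (totalRegions + regionsPerMapper - 1) / regionsPerMapper;
  }

  public static void main(String[] args) {
    // The description's example: 300 regions, 3 regions per mapper.
    System.out.println(numMappers(300, 3)); // 100
    // regionspermapper = 1 recovers the old one-mapper-per-region behavior.
    System.out.println(numMappers(300, 1)); // 300
  }
}
```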
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: (was: HBASE-12394-v6.patch)

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394.patch, HBase-12394 
 Document.pdf


 Welcome to the ReviewBoard: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 For Hadoop cluster, a job with large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 regions 
 and we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning the 
 table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (total number of regions) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration.
 <property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
 </property>
 This is an example for Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v6.patch

 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
 HBASE-12394.patch, HBase-12394 Document.pdf


 Welcome to the ReviewBoard: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 For Hadoop cluster, a job with large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch is to support 
 one mapper using multiple regions as input.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper.
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 regions 
 and we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning the 
 table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (total number of regions) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration.
 <property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
 </property>
 This is an example for Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-12472:


 Summary: Improve debuggability of IntegrationTestBulkLoad
 Key: HBASE-12472
 URL: https://issues.apache.org/jira/browse/HBASE-12472
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor


Debugging failures in the above test is very difficult, particularly while 
using a test harness that collects logs but does not preserve data. Let's add 
some more information about breaks in the chain when they happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12449) Use the max timestamp of current or old cell's timestamp in HRegion.append()

2014-11-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211573#comment-14211573
 ] 

Enis Soztutar commented on HBASE-12449:
---

bq. For sure oldcell ts was set by the system and not the user?
For normal append, it seems that we are not using the TS coming from the 
append, but always using the current TS: 
When there is previous data:
 {code}
newKV = new KeyValue(row.length, kv.getFamilyLength(),
kv.getQualifierLength(), now, KeyValue.Type.Put,
oldKv.getValueLength() + kv.getValueLength(),
oldKv.getTagsLengthUnsigned() + kv.getTagsLengthUnsigned());
{code}

When there is no previous data: 
{code}
// Append's KeyValue.Type==Put and 
ts==HConstants.LATEST_TIMESTAMP,
// so only need to update the timestamp to 'now'
newKV.updateLatestStamp(Bytes.toBytes(now));
{code}


bq. Increment has the same issue, right?
I did not check increment, but let me do that. 
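The fix the issue title suggests — take the max of the old cell's timestamp and "now" — can be sketched as follows (chooseAppendTimestamp is a hypothetical helper for illustration, not the actual HRegion code):

```java
public class AppendTimestamp {
  // Timestamp choice HBASE-12449 proposes for HRegion.append(): if the wall
  // clock went backwards, keep the old cell's timestamp so the appended cell
  // still sorts as the newest version instead of being shadowed by old data.
  static long chooseAppendTimestamp(long oldCellTs, long now) {
    return Math.max(oldCellTs, now);
  }

  public static void main(String[] args) {
    // Clock went back: old cell at t=1000, "now" reads 990 -> keep 1000.
    System.out.println(chooseAppendTimestamp(1000L, 990L));  // 1000
    // Normal case: "now" is ahead -> use it.
    System.out.println(chooseAppendTimestamp(1000L, 1005L)); // 1005
  }
}
```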

 Use the max timestamp of current or old cell's timestamp in HRegion.append()
 

 Key: HBASE-12449
 URL: https://issues.apache.org/jira/browse/HBASE-12449
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: hbase-12449-0.98.patch, hbase-12449.patch


 We have observed an issue in SLES clusters where the system timestamp 
 regularly goes back in time. This happens frequently enough to cause test 
 failures when LTT is used with updater. 
 Every time a mutation is performed, the updater creates a string in the form 
 #column:mutation_type and appends it to the column mutate_info. 
 It seems that when the test fails, it is always the case that the mutate_info 
 information for that particular column reported is not there in the column 
 mutate_info. However, according to the MultiThreadedUpdater source code, if a 
 row gets updated, all the columns will be mutated. So if a row contains 15 
 columns, all 15 should appear in mutate_info. 
 When the test fails though, we get an exception like: 
 {code}
 2014-11-02 04:31:12,018 ERROR [HBaseReaderThread_7] util.MultiThreadedAction: 
 Error checking data for key [b0485292cde20d8a76cca37410a9f115-23787], column 
 family [test_cf], column [8], mutation [null]; value of length 818
 {code}
 For the same row, the mutate info DOES NOT contain columns 8 (and 9) while it 
 should: 
 {code}
  test_cf:mutate_info timestamp=1414902651388, 
 value=#increment:1#0:0#1:0#10:3#11:0#12:3#13:0#14:0#15:0#16:2#2:3#3:0#4:2#5:3#6:0#7:0
  
 {code}
 Further debugging led to finding the root cause: it seems that on SUSE, 
 System.currentTimeMillis() can go back in time freely (especially when run in 
 a virtualized env like EC2), and this actually happens very frequently. 
 This is from a debug log that was put in place: 
 {code}
 2014-11-04 01:16:05,025 INFO  
 [B.DefaultRpcServer.handler=27,queue=0,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765025/Put/mvcc=8239/#increment:1
 2014-11-04 01:16:05,038 INFO  
 [B.DefaultRpcServer.handler=19,queue=1,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765038/Put/mvcc=8255/#increment:1#0:3
 2014-11-04 01:16:05,047 INFO  
 [B.DefaultRpcServer.handler=21,queue=0,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765047/Put/mvcc=8265/#increment:1#0:3#1:3
 2014-11-04 01:16:05,057 INFO  
 [B.DefaultRpcServer.handler=27,queue=0,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765056/Put/mvcc=8274/#increment:1#0:3#1:3#10:2
 2014-11-04 01:16:05,061 INFO  
 [B.DefaultRpcServer.handler=6,queue=0,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765061/Put/mvcc=8278/#increment:1#0:3#1:3#10:2#11:0
 2014-11-04 01:16:05,070 INFO  
 [B.DefaultRpcServer.handler=20,queue=2,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765070/Put/mvcc=8285/#increment:1#0:3#1:3#10:2#11:0#12:3
 2014-11-04 01:16:05,076 INFO  
 [B.DefaultRpcServer.handler=3,queue=0,port=60020] regionserver.MemStore: 
 upserting: 
 193002e668758ea9762904da1a22337c-1268/test_cf:mutate_info/1415063765076/Put/mvcc=8289/#increment:1#0:3#1:3#10:2#11:0#12:3#13:0
 2014-11-04 01:16:05,084 INFO  
 [B.DefaultRpcServer.handler=2,queue=2,port=60020] regionserver.MemStore: 
 upserting: 
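One generic way to guard against the backwards-moving System.currentTimeMillis() described above is a monotonically non-decreasing clock wrapper — this is a sketch of the general technique, not HBase's actual clock code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Never returns a value smaller than one it has already returned: if the
// wall clock goes backwards (as observed on SLES/EC2 in this issue), the
// last value seen is returned again instead of the smaller reading.
public class MonotonicClock {
  private final AtomicLong last = new AtomicLong(Long.MIN_VALUE);

  public long currentTimeMillis() {
    return advance(System.currentTimeMillis());
  }

  // Separated out so the backwards-clock case is easy to exercise directly.
  long advance(long wallClock) {
    return last.accumulateAndGet(wallClock, Math::max);
  }
}
```

The AtomicLong makes the max-update atomic across RPC handler threads, which matters here since the upserts in the log above come from many handlers concurrently.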
 

[jira] [Updated] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12472:
-
Fix Version/s: 0.99.2
   0.98.9
   2.0.0
   Status: Patch Available  (was: Open)

 Improve debuggability of IntegrationTestBulkLoad
 

 Key: HBASE-12472
 URL: https://issues.apache.org/jira/browse/HBASE-12472
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: HBASE-12472.00.patch


 Debugging failures in the above test is very difficult, particularly while 
 using a test harness that collects logs but does not preserve data. Let's add 
 some more information about breaks in the chain when they happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12472:
-
Attachment: HBASE-12472.00.patch

 Improve debuggability of IntegrationTestBulkLoad
 

 Key: HBASE-12472
 URL: https://issues.apache.org/jira/browse/HBASE-12472
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: HBASE-12472.00.patch


 Debugging failures in the above test is very difficult, particularly while 
 using a test harness that collects logs but does not preserve data. Let's add 
 some more information about breaks in the chain when they happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12404:
--
Attachment: 12404v6.txt

Fixed failing test. Fixed a few long lines. The remainder are in doc and refer 
to methods... hard to avoid.

 Task 5 from parent: Replace internal HTable constructor use with 
 HConnection#getTable (0.98, 0.99)
 --

 Key: HBASE-12404
 URL: https://issues.apache.org/jira/browse/HBASE-12404
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 
 0001-HBASE-12404-Task-5-from-parent-Replace-internal-HTab.patch, 12404.txt, 
 12404v2.txt, 12404v3.txt, 12404v5.txt, 12404v6.txt


 Do the step 5. from the [~ndimiduk] list in parent issue.  Go through src 
 code and change all new HTable to instead be connection.getTable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12472:
-
Attachment: HBASE-12472.00-0.98.patch

 Improve debuggability of IntegrationTestBulkLoad
 

 Key: HBASE-12472
 URL: https://issues.apache.org/jira/browse/HBASE-12472
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: HBASE-12472.00-0.98.patch, HBASE-12472.00.patch


 Debugging failures in the above test is very difficult, particularly while 
 using a test harness that collects logs but does not preserve data. Let's add 
 some more information about breaks in the chain when they happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211637#comment-14211637
 ] 

Hadoop QA commented on HBASE-12472:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12681457/HBASE-12472.00-0.98.patch
  against trunk revision .
  ATTACHMENT ID: 12681457

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11671//console

This message is automatically generated.

 Improve debuggability of IntegrationTestBulkLoad
 

 Key: HBASE-12472
 URL: https://issues.apache.org/jira/browse/HBASE-12472
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: HBASE-12472.00-0.98.patch, HBASE-12472.00.patch


 Debugging failures in the above test is very difficult, particularly while 
 using a test harness that collects logs but does not preserve data. Let's add 
 some more information about breaks in the chain when they happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211683#comment-14211683
 ] 

Hadoop QA commented on HBASE-12394:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681437/HBASE-12394-v6.patch
  against trunk revision .
  ATTACHMENT ID: 12681437

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11668//console

This message is automatically generated.


[jira] [Commented] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211693#comment-14211693
 ] 

stack commented on HBASE-12472:
---

Go for it [~ndimiduk]. LGTM.



[jira] [Commented] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211698#comment-14211698
 ] 

Weichen Ye commented on HBASE-12394:


Please review the latest diff:
https://reviews.apache.org/r/27519/diff/#



 Support multiple regions as input to each mapper in map/reduce jobs
 ---

 Key: HBASE-12394
 URL: https://issues.apache.org/jira/browse/HBASE-12394
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 2.0.0
Reporter: Weichen Ye
 Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
 HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
 HBASE-12394.patch, HBase-12394 Document.pdf


 Review Board: https://reviews.apache.org/r/27519/
 The latest patch is Diff Revision 2.
 On a Hadoop cluster, a job with a large HBase table as input always consumes a 
 large amount of computing resources. For example, we need to create a job 
 with 1000 mappers to scan a table with 1000 regions. This patch supports 
 using multiple regions as input to one mapper.
 In order to support multiple regions for one mapper, we need a new 
 configuration property: hbase.mapreduce.scan.regionspermapper
 hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
 input for one mapper. For example, if we have an HBase table with 300 regions 
 and we set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning the 
 table will use only 300/3 = 100 mappers.
 In this way, we can control the number of mappers using the following formula:
 Number of Mappers = (Total region number) / 
 hbase.mapreduce.scan.regionspermapper
 This is an example of the configuration:
 <property>
   <name>hbase.mapreduce.scan.regionspermapper</name>
   <value>3</value>
 </property>
 This is an example of the Java code:
 TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class, 
 Text.class, job);
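The mapper-count formula above can be sketched in plain Java; the class and method names here are illustrative and not part of the patch:

```java
public class RegionsPerMapperExample {
    // Number of Mappers = (total region number) /
    // hbase.mapreduce.scan.regionspermapper, using integer division
    static int mapperCount(int totalRegions, int regionsPerMapper) {
        return totalRegions / regionsPerMapper;
    }

    public static void main(String[] args) {
        // 300 regions with regionspermapper = 3 -> 100 mappers
        System.out.println(mapperCount(300, 3)); // prints 100
    }
}
```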



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211708#comment-14211708
 ] 

Anoop Sam John commented on HBASE-12470:


Agree with Andy.
Per-connection codec usage is the best solution. The copy/export table issue 
can also be solved then.

 Way to determine which labels are applied to a cell in a table
 --

 Key: HBASE-12470
 URL: https://issues.apache.org/jira/browse/HBASE-12470
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.6.1
Reporter: Kevin Odell

 There is currently no way to determine which labels are applied to a cell 
 without using the HFile tool to dump each HFile and then translating the 
 output back to the hbase:labels table.  This is quite tedious on larger 
 tables.  Since this could be a security risk, perhaps we make it tunable with 
 hbase.superuser.can.view.cells or something along those lines?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12472) Improve debuggability of IntegrationTestBulkLoad

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211726#comment-14211726
 ] 

Hadoop QA commented on HBASE-12472:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681450/HBASE-12472.00.patch
  against trunk revision .
  ATTACHMENT ID: 12681450

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11670//console

This message is automatically generated.



[jira] [Commented] (HBASE-12404) Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211728#comment-14211728
 ] 

Hadoop QA commented on HBASE-12404:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681452/12404v6.txt
  against trunk revision .
  ATTACHMENT ID: 12681452

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 103 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ * a 
href=https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html;Hadoop
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}, and
+{@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], 
byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+   {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}
+   or {@link org.apache.hadoop.hbase.client.Table#coprocessorService(Class, 
byte[], byte[], org.apache.hadoop.hbase.client.coprocessor.Batch.Call, 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)}
+method's argument.  Calling {@link 
org.apache.hadoop.hbase.client.Table#coprocessorService(Class, byte[], byte[], 
org.apache.hadoop.hbase.client.coprocessor.Batch.Call)}

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestNamespace
  org.apache.hadoop.hbase.regionserver.wal.TestLogRollPeriod
  
org.apache.hadoop.hbase.master.handler.TestTableDescriptorModification
  org.apache.hadoop.hbase.client.TestScannerTimeout
  org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
  
org.apache.hadoop.hbase.master.TestMasterRestartAfterDisablingTable
  org.apache.hadoop.hbase.regionserver.wal.TestWALReplay
  org.apache.hadoop.hbase.regionserver.TestJoinedScanners
  
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient
  
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithACL
  
org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook
  org.apache.hadoop.hbase.mapred.TestTableMapReduceUtil
  org.apache.hadoop.hbase.master.TestMaster
  org.apache.hadoop.hbase.regionserver.TestTags
  
org.apache.hadoop.hbase.security.access.TestCellACLWithMultipleVersions
  org.apache.hadoop.hbase.quotas.TestQuotaAdmin
  
org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster
  org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.TestMultiVersions
  org.apache.hadoop.hbase.client.TestTimestampsFilter
  org.apache.hadoop.hbase.TestMetaTableAccessorNoCluster
  
org.apache.hadoop.hbase.replication.TestReplicationKillMasterRS
  org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat
  
org.apache.hadoop.hbase.coprocessor.TestDoubleColumnInterpreter
  org.apache.hadoop.hbase.security.access.TestAccessController2
  org.apache.hadoop.hbase.trace.TestHTraceHooks
  org.apache.hadoop.hbase.master.TestRegionPlacement
  org.apache.hadoop.hbase.regionserver.TestCompactionState
  org.apache.hadoop.hbase.fs.TestBlockReorder
  org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
  
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
  

[jira] [Updated] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12471:
--
Attachment: 0001-HBASE-12471-Task-4.-replace-internal-ConnectionManag.patch

First cut.

Use ConnectionFactory instead of ConnectionManager.  There is some overlap 
between this patch and that of HBASE-12404.

This fixes everything under hbase-*/src/main/java. It does not do tests; I 
figure we can do that in the next round.

No calls to delete anymore.  We are probably leaking a Connection or two; let's 
see what HadoopQA says.

Any chance of a review?
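The replacement pattern under discussion might look like the following sketch. This is an assumption of the general shape only (the table name and configuration are illustrative, and it requires hbase-client on the classpath plus a running cluster, so it is not runnable standalone):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class ConnectionFactorySketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Instead of ConnectionManager#getConnection / #deleteConnection,
        // create the connection explicitly and close it when done.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("table1"))) {
            // ... use the table ...
        } // both are closed here; nothing left to "delete"
    }
}
```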

 Task 4. replace internal ConnectionManager#{delete,get}Connection use with 
 #close, #createConnection (0.98, 0.99)
 -

 Key: HBASE-12471
 URL: https://issues.apache.org/jira/browse/HBASE-12471
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 
 0001-HBASE-12471-Task-4.-replace-internal-ConnectionManag.patch


 Let me do this. A bunch of this was done in HBASE-12404. Let me see if I can 
 find more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12471:
--
Fix Version/s: 2.0.0
   Status: Patch Available  (was: Open)



[jira] [Commented] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211793#comment-14211793
 ] 

Hadoop QA commented on HBASE-12471:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12681474/0001-HBASE-12471-Task-4.-replace-internal-ConnectionManag.patch
  against trunk revision .
  ATTACHMENT ID: 12681474

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 13 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestHBaseAdminNoCluster.testMasterMonitorCollableRetries(TestHBaseAdminNoCluster.java:80)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11672//console

This message is automatically generated.



[jira] [Commented] (HBASE-12346) Scan's default auths behavior under Visibility labels

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211798#comment-14211798
 ] 

ramkrishna.s.vasudevan commented on HBASE-12346:


@apurtell
Is it good to commit now?

 Scan's default auths behavior under Visibility labels
 -

 Key: HBASE-12346
 URL: https://issues.apache.org/jira/browse/HBASE-12346
 Project: HBase
  Issue Type: Bug
  Components: API, security
Affects Versions: 0.98.7, 0.99.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: HBASE-12346-master-v2.patch, 
 HBASE-12346-master-v3.patch, HBASE-12346-master-v4.patch, 
 HBASE-12346-master.patch


 In Visibility Labels security, a set of labels (auths) is administered and 
 associated with a user.
 During a scan, a user can normally only see cell data that are part of the 
 user's label set (auths).
 A Scan uses setAuthorizations to indicate which auths it wants to use to 
 access the cells.
 Similarly in the shell:
 {code}
 scan 'table1', AUTHORIZATIONS => ['private']
 {code}
 But it is a surprise to find that setAuthorizations seems to be 'mandatory' 
 in the default visibility label security setting.  Every scan needs to call 
 setAuthorizations before it can get any cells, even when the cells are under 
 labels the requesting user is part of.
 The following steps will illustrate the issue:
 Run as superuser.
 {code}
 1. create a visibility label called 'private'
 2. create 'table1'
 3. put into 'table1' data and label the data as 'private'
 4. set_auths 'user1', 'private'
 5. grant 'user1', 'RW', 'table1'
 {code}
 Run as 'user1':
 {code}
 1. scan 'table1'
 This shows no cells.
 2. scan 'table1', AUTHORIZATIONS => ['private']
 This will show all the data.
 {code}
 I am not sure if this is expected by design or a bug.
 But a more reasonable, more backward-compatible, and less surprising default 
 behavior should probably look like this:
 A scan's default auths, if its Authorizations attribute is not set 
 explicitly, should be all the auths the requesting user is administered and 
 allowed on the server.
 If scan.setAuthorizations is used, then the server further filters the auths 
 during the scan: use the input auths minus whatever is not in the user's 
 label set on the server.
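The client-side counterpart of the shell example above might look like this sketch (an assumption for illustration: the table name and label are placeholders, and it requires hbase-client plus a running cluster with visibility labels enabled, so it is not runnable standalone):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.visibility.Authorizations;

public class ScanWithAuths {
    public static void main(String[] args) throws Exception {
        Scan scan = new Scan();
        // Without this call, the surprising default described above returns
        // no cells, even for labels the user actually holds.
        scan.setAuthorizations(new Authorizations("private"));
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("table1"));
             ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
                System.out.println(r);
            }
        }
    }
}
```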



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211803#comment-14211803
 ] 

ramkrishna.s.vasudevan commented on HBASE-12470:


Yes, connection negotiation was the suggestion given in HBASE-12441 also.
https://issues.apache.org/jira/browse/HBASE-12441?focusedCommentId=14201557page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14201557

HBASE-9681 is the JIRA for connection negotiation.  I have some patches that I 
worked on (need to check them).  The main problem is that supporting this 
negotiation may require introducing a two-way handshake mechanism, and that may 
have backward-compatibility issues between clients and servers with/without 
this negotiation support.



[jira] [Commented] (HBASE-12470) Way to determine which labels are applied to a cell in a table

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211805#comment-14211805
 ] 

ramkrishna.s.vasudevan commented on HBASE-12470:


Also, the server should be able to determine whether the client is really a 
legitimate user. A malicious client could claim to be a legitimate user and ask 
for a codec that sends across the tags.  The server should be able to clearly 
identify such cases.



[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211809#comment-14211809
 ] 

Lars Hofhansl commented on HBASE-12457:
---

OK... What caused TestRegionReplicas to hang was the change that moved 
{{this.parent.writestate.writesEnabled = true;}} from SplitTransaction to 
HRegion.initializeRegionInternals.

That part is not needed anyway; it just looked like it would be more correct. 
Here's a patch for trunk that passes TestRegionReplicas.


 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457.interrupt-v2.txt, 
 12457.interrupt.txt, HBASE-12457.patch, HBASE-12457_addendum.patch, 
 TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I tracked down so far, the time between the requested CLOSE and 
 the abort of the compaction is almost exactly 20 minutes, which is suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12457:
--
Attachment: 12457-trunk-v3.txt

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457-trunk-v3.txt, 
 12457.interrupt-v2.txt, 12457.interrupt.txt, HBASE-12457.patch, 
 HBASE-12457_addendum.patch, TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I have tracked down so far, the time between the requested CLOSE 
 and the abort of the compaction is almost exactly 20 minutes, which is 
 suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Updated] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12457:
--
Status: Patch Available  (was: Reopened)

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457-trunk-v3.txt, 
 12457.interrupt-v2.txt, 12457.interrupt.txt, HBASE-12457.patch, 
 HBASE-12457_addendum.patch, TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.
 In every case I have tracked down so far, the time between the requested CLOSE 
 and the abort of the compaction is almost exactly 20 minutes, which is 
 suspicious.
 Of course part of the issue is having compactions that take over 20 minutes, 
 but maybe we can do better here.





[jira] [Commented] (HBASE-12462) Support deleting all columns of the specified family of a row in hbase shell

2014-11-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211812#comment-14211812
 ] 

Lars Hofhansl commented on HBASE-12462:
---

Unfortunately this is not backwards compatible, so we could not add it to 
0.98 (and maybe not 1.0 either).
Maybe we can add a new command, delete_family (or something like it), for this.

That way we could later also add delete_version (again, or something like it) 
to delete a specific version of a cell (which is possible through the API, but 
not through the shell).
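The family-wide delete being proposed has simple semantics. Here is an illustrative sketch in plain Java (no HBase dependency, all names hypothetical): given one row's cells keyed by "family:qualifier", drop every cell under a family in a single operation instead of deleting qualifiers one by one:

```java
import java.util.Map;
import java.util.TreeMap;

public class FamilyDeleteSketch {
    /** Removes all cells of the given column family from a row map. */
    public static void deleteFamily(Map<String, byte[]> row, String family) {
        // Cells are keyed "family:qualifier"; drop every key with this prefix.
        row.keySet().removeIf(k -> k.startsWith(family + ":"));
    }

    /** Builds a small sample row spanning two families. */
    public static Map<String, byte[]> sampleRow() {
        Map<String, byte[]> row = new TreeMap<>();
        row.put("cf1:a", new byte[] {1});
        row.put("cf1:b", new byte[] {2});
        row.put("cf2:a", new byte[] {3});
        return row;
    }
}
```

In HBase itself this maps to the existing family-level delete on the Delete class that the comment above refers to; the shell change only needs to expose it.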


 Support deleting all columns of the specified family of a row in hbase shell
 

 Key: HBASE-12462
 URL: https://issues.apache.org/jira/browse/HBASE-12462
 Project: HBase
  Issue Type: New Feature
  Components: shell
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-12462-v1.diff


 Currently, the HBase shell only supports deleting a single column of a row in 
 a table. In some scenarios, we want to delete all the columns under a column 
 family of a row, but there may be many columns there. It's difficult to 
 delete the columns one by one in the shell.
 It's easy to add this feature to the shell since the Delete class already has 
 an API for deleting a family.





[jira] [Commented] (HBASE-12457) Regions in transition for a long time when CLOSE interleaves with a slow compaction

2014-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211906#comment-14211906
 ] 

Hadoop QA commented on HBASE-12457:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681485/12457-trunk-v3.txt
  against trunk revision .
  ATTACHMENT ID: 12681485

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3787 checkstyle errors (more than the trunk's current 3786 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1488)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11673//console

This message is automatically generated.

 Regions in transition for a long time when CLOSE interleaves with a slow 
 compaction
 ---

 Key: HBASE-12457
 URL: https://issues.apache.org/jira/browse/HBASE-12457
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.7
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 2.0.0, 0.98.9, 0.99.2

 Attachments: 12457-combined-0.98-v2.txt, 12457-combined-0.98.txt, 
 12457-combined-trunk.txt, 12457-minifix.txt, 12457-trunk-v3.txt, 
 12457.interrupt-v2.txt, 12457.interrupt.txt, HBASE-12457.patch, 
 HBASE-12457_addendum.patch, TestRegionReplicas-jstack.txt


 Under heavy load we have observed regions remaining in transition for 20 
 minutes when the master requests a close while a slow compaction is running.
 The pattern is always something like this:
 # RS starts a compaction
 # HM requests the region to be closed on this RS
 # Compaction is not aborted for another 20 minutes
 # The region is in transition and not usable.

[jira] [Commented] (HBASE-6913) Implement new data block encoding algorithm that combines the advantages of FAST_DIFF and DIFF_KEY

2014-11-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14211948#comment-14211948
 ] 

ramkrishna.s.vasudevan commented on HBASE-6913:
---

Currently in FAST_DIFF we don't repeat a value if it is exactly the same as 
the previous one. But we do not write only the non-repeating part of the 
value while indicating the common repeating part, the way we do for the key 
part.

Doing so would have a problem: we would lose the optimization done in 
HBASE-10801, where we currently don't copy the value part when KVs are passed 
upstream for comparison during a seek, or when fetching a KV to be sent to 
the client.
Once we start encoding the value part as well, we may have to copy the value 
before moving on to the next KV.
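The trade-off described above can be made concrete with a small sketch (illustrative only, not HBase's encoder): encode each value as the length of its common prefix with the previous value plus the differing suffix. Decoding then has to materialize a full copy of the value, which is exactly the HBASE-10801 concern:

```java
public class ValuePrefixSketch {
    /** Length of the common prefix of two byte arrays. */
    public static int commonPrefixLength(byte[] prev, byte[] cur) {
        int n = Math.min(prev.length, cur.length);
        int i = 0;
        while (i < n && prev[i] == cur[i]) {
            i++;
        }
        return i;
    }

    /**
     * Decode: rebuild cur from prev, the shared-prefix length, and the
     * stored suffix. Note the full copy of the value bytes; a prefix-encoded
     * value cannot be handed upstream as a zero-copy slice of the block.
     */
    public static byte[] decode(byte[] prev, int prefixLen, byte[] suffix) {
        byte[] cur = new byte[prefixLen + suffix.length];
        System.arraycopy(prev, 0, cur, 0, prefixLen);
        System.arraycopy(suffix, 0, cur, prefixLen, suffix.length);
        return cur;
    }
}
```

So the space saved by sharing value prefixes is paid for with an extra allocation and copy on every decode, even for reads that only needed to compare keys.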

 Implement new data block encoding algorithm that combines the advantages of 
 FAST_DIFF and DIFF_KEY
 --

 Key: HBASE-6913
 URL: https://issues.apache.org/jira/browse/HBASE-6913
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We have noticed that both FAST_DIFF and DIFF_KEY encoding algorithms have 
 some drawbacks in that they don't take advantage of certain types of 
 redundancies in keys/values. We need to implement a new algorithm that 
 combines the most useful properties of these two algorithms, and specifically 
 unit-test that various types of redundancies are removed.





[jira] [Updated] (HBASE-12471) Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99)

2014-11-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12471:
--
Attachment: 12471v2.txt

Fix unit test failure and javadoc warning.

 Task 4. replace internal ConnectionManager#{delete,get}Connection use with 
 #close, #createConnection (0.98, 0.99)
 -

 Key: HBASE-12471
 URL: https://issues.apache.org/jira/browse/HBASE-12471
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 
 0001-HBASE-12471-Task-4.-replace-internal-ConnectionManag.patch, 12471v2.txt


 Let me do this. A bunch of this was done in HBASE-12404. Let me see if I can 
 find more.


