[jira] [Commented] (HBASE-17434) New Synchronization Scheme for Compaction Pipeline

2017-01-07 Thread Manjunath Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808912#comment-15808912
 ] 

Manjunath Anand commented on HBASE-17434:
-

If you are fine with the above discussion and can give me contributor access, 
I can send the modified patch file. Anyway, for now I have attached it along 
with this comment:

{code}
From 9c40770283175a293a168da92450b7d09aed363c Mon Sep 17 00:00:00 2001
From: Manjunath Anand 
Date: Sun, 8 Jan 2017 12:06:14 +0530
Subject: [PATCH] Use read write lock as new synchronization scheme for
 compaction pipeline

---
 .../hbase/regionserver/CompactingMemStore.java |  6 +--
 .../hbase/regionserver/CompactionPipeline.java | 55 +++---
 2 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
index e1289f8..99c1685 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
@@ -217,8 +217,8 @@ public class CompactingMemStore extends AbstractMemStore {
   @VisibleForTesting
   @Override
   protected List<Segment> getSegments() {
-    List<Segment> pipelineList = pipeline.getSegments();
-    List<Segment> list = new ArrayList<Segment>(pipelineList.size() + 2);
+    List<? extends Segment> pipelineList = pipeline.getSegments();
+    List<Segment> list = new ArrayList<>(pipelineList.size() + 2);
     list.add(this.active);
     list.addAll(pipelineList);
     list.add(this.snapshot);
@@ -264,7 +264,7 @@ public class CompactingMemStore extends AbstractMemStore {
    * Scanners are ordered from 0 (oldest) to newest in increasing order.
    */
   public List<KeyValueScanner> getScanners(long readPt) throws IOException {
-    List<Segment> pipelineList = pipeline.getSegments();
+    List<? extends Segment> pipelineList = pipeline.getSegments();
     long order = pipelineList.size();
     // The list of elements in pipeline + the active element + the snapshot segment
     // TODO : This will change when the snapshot is made of more than one element
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
index 9d5df77..132e8d6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
@@ -22,6 +22,7 @@ import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.LinkedList;
 import java.util.List;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -49,36 +50,49 @@ public class CompactionPipeline {
 
   private final RegionServicesForStores region;
   private LinkedList<ImmutableSegment> pipeline;
+  private volatile LinkedList<ImmutableSegment> readOnlyCopy;
   private long version;
+  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
 
   public CompactionPipeline(RegionServicesForStores region) {
     this.region = region;
     this.pipeline = new LinkedList<>();
+    this.readOnlyCopy = new LinkedList<>();
     this.version = 0;
   }
 
   public boolean pushHead(MutableSegment segment) {
     ImmutableSegment immutableSegment = SegmentFactory.instance().
         createImmutableSegment(segment);
-    synchronized (pipeline){
-      return addFirst(immutableSegment);
+    lock.writeLock().lock();
+    try {
+      boolean res = addFirst(immutableSegment);
+      readOnlyCopy = new LinkedList<>(pipeline);
+      return res;
+    } finally {
+      lock.writeLock().unlock();
     }
   }
 
   public VersionedSegmentsList getVersionedList() {
-    synchronized (pipeline){
-      List<ImmutableSegment> segmentList = new ArrayList<>(pipeline);
-      return new VersionedSegmentsList(segmentList, version);
+    lock.readLock().lock();
+    try {
+      return new VersionedSegmentsList(readOnlyCopy, version);
+    } finally {
+      lock.readLock().unlock();
     }
   }
 
   public VersionedSegmentsList getVersionedTail() {
-    synchronized (pipeline){
+    lock.readLock().lock();
+    try {
       List<ImmutableSegment> segmentList = new ArrayList<>();
       if(!pipeline.isEmpty()) {
         segmentList.add(0, pipeline.getLast());
       }
       return new VersionedSegmentsList(segmentList, version);
+    } finally {
+      lock.readLock().unlock();
     }
   }
 
@@ -99,7 +113,9 @@ public class CompactionPipeline {
       return false;
     }
     List<ImmutableSegment> suffix;
-    synchronized (pipeline){
+    // A write lock as pipeline is modified
+    lock.writeLock().lock();
+    try {
       if(versionedList.getVersion() != version) {
         return false;
       }
@@ -115,6 +131,9 @@

[jira] [Comment Edited] (HBASE-17434) New Synchronization Scheme for Compaction Pipeline

2017-01-07 Thread Manjunath Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808895#comment-15808895
 ] 

Manjunath Anand edited comment on HBASE-17434 at 1/8/17 7:11 AM:
-

Hi [~eshcar], I went through this patch and the patch submitted for 
HBASE-17081, and felt that using ReentrantReadWriteLock would be more advisable 
than synchronized blocks for the following two reasons:
1) It allows readers to access the pipeline concurrently, and allows for 
fine-grained read and write locks where required.
2) It also allows locking on something other than the pipeline itself. This 
matters most when you compare the drain method implemented in the HBASE-17081 
patch (which has O(N) complexity) against the one below (which has O(1) 
complexity) using the new locking mechanism:
{code}
  public List<ImmutableSegment> drain() {
    List<ImmutableSegment> result = null;
    lock.writeLock().lock();
    try {
      version++;
      result = this.pipeline;
      this.pipeline = new LinkedList<>();
      this.readOnlyCopy = new LinkedList<>();
    } finally {
      lock.writeLock().unlock();
    }
    return result;
  }
{code}
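
Note that the O(1) bound comes from swapping the list reference: drain() hands 
back the existing pipeline and installs fresh empty lists, instead of copying 
all N segments into a new list while holding the lock, as the O(N) variant does.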



was (Author: manju_hadoop):
Hi [~eshcar], I went through this patch and the patch submitted for 
HBASE-17081, and felt that using ReentrantReadWriteLock would be more advisable 
than synchronized blocks for the following two reasons:
1) It allows readers to access the pipeline concurrently, and allows for 
fine-grained read and write locks where required.
2) It also allows locking on something other than the pipeline itself. This 
matters most when you compare the drain method implemented in the HBASE-17081 
patch (which has O(n) complexity) against the one below (which has O(1) 
complexity) using the new locking mechanism:
{code}
  public List<ImmutableSegment> drain() {
    List<ImmutableSegment> result = null;
    lock.writeLock().lock();
    try {
      version++;
      result = this.pipeline;
      this.pipeline = new LinkedList<>();
      this.readOnlyCopy = new LinkedList<>();
    } finally {
      lock.writeLock().unlock();
    }
    return result;
  }
{code}


> New Synchronization Scheme for Compaction Pipeline
> --
>
> Key: HBASE-17434
> URL: https://issues.apache.org/jira/browse/HBASE-17434
> Project: HBase
>  Issue Type: Bug
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17434-V01.patch
>
>
> A new copyOnWrite synchronization scheme is introduced for the compaction 
> pipeline.
> The new scheme is better since it removes the lock from getSegments() which 
> is invoked in every get and scan operation, and it reduces the number of 
> LinkedList objects that are created at runtime, thus can reduce GC (not by 
> much, but still...).
> In addition, it fixes the method getTailSize() in compaction pipeline. This 
> method creates a MemstoreSize object which comprises the data size and the 
> overhead size of the segment and needs to be atomic.
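
The quoted patch in the first message above is truncated before it reaches 
CompactionPipeline.getSegments(). For illustration, a minimal sketch of the 
lock-free read path this description refers to, assuming the volatile 
readOnlyCopy field from the patch (a sketch, not the attached patch's exact code):
{code}
// No lock needed on the read path: the volatile read publishes a fully
// built list, and writers replace the reference instead of mutating it.
public List<? extends Segment> getSegments() {
  return readOnlyCopy;
}
{code}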



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17434) New Synchronization Scheme for Compaction Pipeline

2017-01-07 Thread Manjunath Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808895#comment-15808895
 ] 

Manjunath Anand commented on HBASE-17434:
-

Hi [~eshcar], I went through this patch and the patch submitted for 
HBASE-17081, and felt that using ReentrantReadWriteLock would be more advisable 
than synchronized blocks for the following two reasons:
1) It allows readers to access the pipeline concurrently, and allows for 
fine-grained read and write locks where required.
2) It also allows locking on something other than the pipeline itself. This 
matters most when you compare the drain method implemented in the HBASE-17081 
patch (which has O(n) complexity) against the one below (which has O(1) 
complexity) using the new locking mechanism:
{code}
  public List<ImmutableSegment> drain() {
    List<ImmutableSegment> result = null;
    lock.writeLock().lock();
    try {
      version++;
      result = this.pipeline;
      this.pipeline = new LinkedList<>();
      this.readOnlyCopy = new LinkedList<>();
    } finally {
      lock.writeLock().unlock();
    }
    return result;
  }
{code}


> New Synchronization Scheme for Compaction Pipeline
> --
>
> Key: HBASE-17434
> URL: https://issues.apache.org/jira/browse/HBASE-17434
> Project: HBase
>  Issue Type: Bug
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17434-V01.patch
>
>
> A new copyOnWrite synchronization scheme is introduced for the compaction 
> pipeline.
> The new scheme is better since it removes the lock from getSegments() which 
> is invoked in every get and scan operation, and it reduces the number of 
> LinkedList objects that are created at runtime, thus can reduce GC (not by 
> much, but still...).
> In addition, it fixes the method getTailSize() in compaction pipeline. This 
> method creates a MemstoreSize object which comprises the data size and the 
> overhead size of the segment and needs to be atomic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-07 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808590#comment-15808590
 ] 

Phil Yang commented on HBASE-15995:
---

I think we should find a way to keep the global entries size limiter working 
in the new model. The queue size is configurable, so the footprint may grow to 
more than double the limit, and even if we have HBASE-17432 there are still 
failover peers, so there is still a risk of OOM. Maybe we can check the 
AtomicLong each time we want to read the next entry; right now we only check it 
after reading an entry, and push even if that exceeds the limit. Double-checking 
may be much safer?
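
For illustration, a minimal sketch of such a pre-read check (every name here is 
a placeholder, not an actual ReplicationSource field):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: check the shared quota BEFORE reading each entry, instead of
// only after pushing one, so a burst of large edits cannot overshoot it.
class QuotaCheckedReader {
  static final AtomicLong totalBufferUsed = new AtomicLong();
  static final long totalBufferQuota = 256L * 1024 * 1024; // configurable cap

  interface EntrySource { byte[] next(); } // stands in for a WAL reader

  List<byte[]> readBatch(EntrySource source) {
    List<byte[]> batch = new ArrayList<>();
    while (totalBufferUsed.get() < totalBufferQuota) { // pre-read check
      byte[] entry = source.next();
      if (entry == null) break;                 // no more edits for now
      totalBufferUsed.addAndGet(entry.length);  // post-read accounting
      batch.add(entry);
    }
    return batch;
  }
}
{code}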

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> HBASE-15995.master.v2.patch, replicationV1_100ms_delay.png, 
> replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.
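
To make the intended split concrete, a minimal sketch of the model (class and 
variable names are illustrative, not the patch's):
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: one thread reads WAL edits into a bounded queue, a second thread
// ships them to the peer; the bounded queue applies back-pressure.
class ReaderShipperSketch {
  private final BlockingQueue<String> batches = new ArrayBlockingQueue<>(16);

  Thread reader = new Thread(() -> {
    try {
      for (int i = 0; ; i++) {
        batches.put("batch-" + i); // blocks when the shipper falls behind
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  });

  Thread shipper = new Thread(() -> {
    try {
      while (true) {
        String batch = batches.take(); // blocks until a batch is ready
        System.out.println("replicating " + batch); // ship to peer here
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  });
}
{code}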



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808515#comment-15808515
 ] 

Hadoop QA commented on HBASE-15042:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
5s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 95 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 30s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 53s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 180m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846203/HBASE-15042.master.001.patch
 |
| JIRA Issue | HBASE-15042 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
xml  compile  mvnsite  |
| uname | Linux f2399345155f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6fecf55 |
| Default Java | 1.8.0_111 |
| shellcheck | v0.4.5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5183/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5183/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5183/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5183/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |



[jira] [Commented] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808301#comment-15808301
 ] 

Hadoop QA commented on HBASE-15042:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/5183/console in case of 
problems.


> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15042.master.001.patch
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the Maven-prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-15042 started by Jan Hentschel.
-
> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15042.master.001.patch
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the Maven-prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-15042:
--
Status: Patch Available  (was: In Progress)

> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15042.master.001.patch
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the Maven-prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-15042:
--
Attachment: HBASE-15042.master.001.patch

> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15042.master.001.patch
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the Maven-prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2017-01-07 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12148:
--
Assignee: huaxiang sun  (was: Walter Koetke)
  Status: In Progress  (was: Patch Available)

> Remove TimeRangeTracker as point of contention when many threads writing a 
> Store
> 
>
> Key: HBASE-12148
> URL: https://issues.apache.org/jira/browse/HBASE-12148
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 0.99.1, 2.0.0
>Reporter: stack
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 
> 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
> 12148.addendum.txt, 12148.min_and_max_run_independent.patch, 12148.txt, 
> 12148.txt, 12148v2.txt, 12148v2.txt, 12148v4.patch, HBASE-12148-V3.patch, 
> HBASE-12148-V3.patch, HBASE-12148-master-v6.patch, 
> HBASE-12148.branch-1.v5.patch, HBASE-12148.branch-1.v5.patch, 
> HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, 
> Screen Shot 2014-10-01 at 3.41.07 PM.png, Screen Shot 2016-04-13 at 1.49.30 
> PM.png, Screen Shot 2016-04-13 at 2.02.22 PM.png, Screen Shot 2016-05-18 at 
> 10.21.53 PM.png, TimeRangeTracker.tiff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-01-07 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel reassigned HBASE-15042:
-

Assignee: Jan Hentschel

> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the Maven-prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17435) Call to preCommitStoreFile() hook encounters SaslException in secure deployment

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808168#comment-15808168
 ] 

Hadoop QA commented on HBASE-17435:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
50s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 50s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 45s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestMetaWithReplicas |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846194/17435.branch-1.v2.txt 
|
| 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808102#comment-15808102
 ] 

Yu Li commented on HBASE-14061:
---

Thanks for the review [~tedyu] and [~ashish singhi], will commit this soon.

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, 
> HBASE-14061.v3.patch, HBASE-14061.v4.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which is usually located in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one for each cf) in 
> the region's .tmp directory and set the storage policy for them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed 
> this api to finish my unit test.
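
To make the mechanism concrete, a minimal sketch of setting a policy on a 
per-CF directory against the Hadoop 2.6.0 API (paths and the policy name are 
illustrative only):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: pin a storage policy on a per-CF directory so that new hfiles
// created under it inherit the policy. Assumes the default FS is HDFS.
public class StoragePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path cfTmpDir = new Path("/hbase/data/default/TABLE_NAME/region/.tmp/CF_NAME");
    DistributedFileSystem dfs = (DistributedFileSystem) cfTmpDir.getFileSystem(conf);
    dfs.mkdirs(cfTmpDir);
    // setStoragePolicy(Path, String) is available since Hadoop 2.6.0 and
    // applies to files created under the directory afterwards.
    dfs.setStoragePolicy(cfTmpDir, "ONE_SSD");
  }
}
{code}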



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17435) Call to preCommitStoreFile() hook encounters SaslException in secure deployment

2017-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17435:
---
Attachment: 17435.branch-1.v2.txt

> Call to preCommitStoreFile() hook encounters SaslException in secure 
> deployment
> ---
>
> Key: HBASE-17435
> URL: https://issues.apache.org/jira/browse/HBASE-17435
> Project: HBase
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17435.branch-1.v2.txt, 17435.v1.txt, 17435.v2.txt
>
>
> [~romil.choksi] was testing bulk load in secure cluster where 
> LoadIncrementalHFiles failed.
> Looking at region server log, we saw the following:
> {code}
> at 
> org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:185)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1257)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1163)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
> at 
> org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupContexts(BackupSystemTable.java:540)
> at 
> org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupHistory(BackupSystemTable.java:517)
> at 
> org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getTablesForBackupType(BackupSystemTable.java:589)
> at 
> org.apache.hadoop.hbase.backup.BackupObserver.preCommitStoreFile(BackupObserver.java:89)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$61.call(RegionCoprocessorHost.java:1494)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCommitStoreFile(RegionCoprocessorHost.java:1490)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5512)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:293)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:276)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4721)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:679)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:637)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:745)
> ... 16 more
> Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> 

[jira] [Commented] (HBASE-17437) Support specifying a WAL directory outside of the root directory

2017-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807936#comment-15807936
 ] 

Ted Yu commented on HBASE-17437:


The following is repeated for HBASE_DIR_PERMS and HBASE_LOG_DIR_PERMS:
{code}
+  /** Parameter name for HBase instance log directory permission*/
{code}
Please differentiate between the two.
{code}
+  <property>
+    <name>hbase.regionserver.hlog.dir.perms</name>
+    <value>700</value>
+    <description>FS Permissions for the log directory in a secure(kerberos) setup.
+      When master starts, it creates the logdir with this permissions or sets the permissions
{code}
Permission for the region server hlog dir is set by master?

For MasterFileSystem.java :
{code}
+  // root log directory on the FS
   private final Path rootdir;

+  // root hbase directory on the FS
+  private final Path logRootDir;
{code}
The comments don't seem to match the variable names. Consider putting 
declaration of logRootDir close to that for logFs.

For TestHRegionServerBulkLoadWithLogDir, add license header
It should be annotated with LargeTests
{code}
+assertEquals("Expect 2 logs in oldWALs dir", 2, getWALFiles(logFs, new 
Path(logRootDir, HConstants.HREGION_OLDLOGDIR_NAME)).size());
+assertEquals("Expect 1 logs in WALs dir", 1, getWALFiles(logFs, new 
Path(logRootDir, HConstants.HREGION_LOGDIR_NAME)).size());
{code}
Wrap long lines. "1 logs" -> "1 log"


> Support specifying a WAL directory outside of the root directory
> 
>
> Key: HBASE-17437
> URL: https://issues.apache.org/jira/browse/HBASE-17437
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.4
>Reporter: Yishan Yang
>  Labels: patch
> Fix For: 2.0.0, 1.2.5
>
> Attachments: hbase-17437-branch-1.2.patch, hbase-17437-master.patch
>
>
> Currently, the WAL and the StoreFiles need to be on the same FileSystem. Some 
> FileSystems (such as Amazon S3) don’t support append or consistent writes. 
> These two properties are imperative for the WAL in order to avoid loss of 
> writes. However, StoreFiles don’t necessarily need the same consistency 
> guarantees (since writes are cached locally and if writes fail, they can 
> always be replayed from the WAL).
>  
> This JIRA aims to allow users to configure a log directory (for WALs) that is 
> outside of the root directory or even in a different FileSystem. The default 
> value will still put the log directory under the root directory.
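
For example, a configuration along these lines (hbase.wal.dir is the key 
eventually adopted upstream for this feature; the attached patches may use a 
different name):
{code}
<!-- Root dir on an eventually consistent store, WALs on HDFS -->
<property>
  <name>hbase.rootdir</name>
  <value>s3a://my-bucket/hbase</value>
</property>
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://namenode:8020/hbasewal</value>
</property>
{code}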



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17435) Call to preCommitStoreFile() hook encounters SaslException in secure deployment

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807870#comment-15807870
 ] 

Hadoop QA commented on HBASE-17435:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 2s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 121m 7s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846166/17435.v2.txt |
| JIRA Issue | HBASE-17435 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4b1c874cafe5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6fecf55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5181/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5181/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Call to preCommitStoreFile() hook encounters SaslException in secure 
> deployment
> 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807677#comment-15807677
 ] 

Ted Yu commented on HBASE-14061:


+1

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, 
> HBASE-14061.v3.patch, HBASE-14061.v4.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which is usually located in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one for each cf) in 
> the region's .tmp directory and set the storage policy for them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed 
> this api to finish my unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2017-01-07 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807655#comment-15807655
 ] 

huaxiang sun commented on HBASE-12148:
--

Thanks [~stack]. I will add another unit test case to verify the correctness of 
the change (there is already one in the current TestTimeRangeTracker).

> Remove TimeRangeTracker as point of contention when many threads writing a 
> Store
> 
>
> Key: HBASE-12148
> URL: https://issues.apache.org/jira/browse/HBASE-12148
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 2.0.0, 0.99.1
>Reporter: stack
>Assignee: Walter Koetke
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 
> 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
> 12148.addendum.txt, 12148.min_and_max_run_independent.patch, 12148.txt, 
> 12148.txt, 12148v2.txt, 12148v2.txt, 12148v4.patch, HBASE-12148-V3.patch, 
> HBASE-12148-V3.patch, HBASE-12148-master-v6.patch, 
> HBASE-12148.branch-1.v5.patch, HBASE-12148.branch-1.v5.patch, 
> HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, 
> Screen Shot 2014-10-01 at 3.41.07 PM.png, Screen Shot 2016-04-13 at 1.49.30 
> PM.png, Screen Shot 2016-04-13 at 2.02.22 PM.png, Screen Shot 2016-05-18 at 
> 10.21.53 PM.png, TimeRangeTracker.tiff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807447#comment-15807447
 ] 

Ashish Singhi commented on HBASE-14061:
---

lgtm

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, 
> HBASE-14061.v3.patch, HBASE-14061.v4.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which is usually located in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one for each cf) in 
> the region's .tmp directory and set the storage policy for them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed 
> this api to finish my unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13652) Case-insensitivity of file system affects table creation

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807438#comment-15807438
 ] 

Hadoop QA commented on HBASE-13652:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 30s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846171/HBASE-13652.master.003.patch
 |
| JIRA Issue | HBASE-13652 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux fdbc199776ff 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6fecf55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5180/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5180/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Case-insensitivity of file system affects table creation
> 
>
> Key: HBASE-13652
> URL: https://issues.apache.org/jira/browse/HBASE-13652
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.0.0
> Environment: HBase standalone mode on Mac OS X.
>Reporter: Lars George
>Assignee: Xuesen Liang
>   

[jira] [Commented] (HBASE-17337) list replication peers request should be routed through master

2017-01-07 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807417#comment-15807417
 ] 

Ashish Singhi commented on HBASE-17337:
---

bq. No, this is a listing operation, as opposed to a get with a peer_id. We 
should not throw an exception even if there are no peers.
Ya, makes sense. Thank you.

> list replication peers request should be routed through master
> --
>
> Key: HBASE-17337
> URL: https://issues.apache.org/jira/browse/HBASE-17337
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17337-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13652) Case-insensitivity of file system affects table creation

2017-01-07 Thread Xuesen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-13652:
-
Attachment: HBASE-13652.master.003.patch

Attaching HBASE-13652.master.003.patch.

Only use LocalFileSystem for Mac.

> Case-insensitivity of file system affects table creation
> 
>
> Key: HBASE-13652
> URL: https://issues.apache.org/jira/browse/HBASE-13652
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.0.0
> Environment: HBase standalone mode on Mac OS X.
>Reporter: Lars George
>Assignee: Xuesen Liang
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-13652.master.001.patch, 
> HBASE-13652.master.002.patch, HBASE-13652.master.003.patch
>
>
> I noticed this on my Mac OS machine:
> {noformat}
> hbase(main):003:0> list
> TABLE 
>   
>
> 0 row(s) in 0.0260 seconds
> => []
> hbase(main):004:0> create 'TestTable', 'info'
> 0 row(s) in 0.2830 seconds
> => Hbase::Table - TestTable
> hbase(main):005:0> create 'testtable', 'colfam1'
> 0 row(s) in 0.1750 seconds
> => Hbase::Table - testtable
> hbase(main):006:0> list
> TABLE 
>   
>
> TestTable 
>   
>
> TestTable 
>   
>
> 2 row(s) in 0.0170 seconds
> => ["TestTable", "TestTable"]
> hbase(main):007:0> status 'detailed'
> version 1.0.0
> 0 regionsInTransition
> master coprocessors: []
> 1 live servers
> de1-app-mba-1.internal.larsgeorge.com:58824 1431124680152
> requestsPerSecond=0.0, numberOfOnlineRegions=4, usedHeapMB=47, 
> maxHeapMB=4062, numberOfStores=4, numberOfStorefiles=0, 
> storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, 
> storefileIndexSizeMB=0, readRequestsCount=16, writeRequestsCount=8, 
> rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, 
> totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
> coprocessors=[]
> "TestTable,,1431124762491.cb30b003b192aa1d429e5a8e908a2f7d."
> numberOfStores=1, numberOfStorefiles=0, 
> storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, 
> storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
> rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, 
> totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
> completeSequenceId=-1, dataLocality=0.0
> "hbase:meta,,1"
> numberOfStores=1, numberOfStorefiles=0, 
> storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, 
> storefileIndexSizeMB=0, readRequestsCount=10, writeRequestsCount=6, 
> rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, 
> totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
> completeSequenceId=-1, dataLocality=0.0
> "hbase:namespace,,1431124686536.42728f3a3411b0d8ff21c7a2622d9378."
> numberOfStores=1, numberOfStorefiles=0, 
> storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, 
> storefileIndexSizeMB=0, readRequestsCount=6, writeRequestsCount=2, 
> rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, 
> totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
> completeSequenceId=-1, dataLocality=0.0
> "testtable,,1431124773352.8d461d5069d1c88bc3b26562ab30e333."
> numberOfStores=1, numberOfStorefiles=0, 
> storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, 
> storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
> rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, 
> totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
> completeSequenceId=-1, dataLocality=0.0
> 0 dead servers
> {noformat}
> and on the file system there is 
> {noformat}
> larsgeorge:~$ ls -l hbase/data/default/
> total 0
> drwxr-xr-x  7 larsgeorge  staff  238 May  8 15:39 TestTable
> {noformat}
> This should not happen on a case-sensitive file system (I presume), so it is of 
> marginal importance. But a simple extra check during directory creation could 
> fix this.
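
For illustration, a sketch of such a check against a generic Hadoop FileSystem 
(method and placement are hypothetical; the attached patches take a different, 
Mac-specific approach per the latest comment):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: before creating a table directory, scan the parent for an entry
// that differs only in case, and fail instead of silently reusing it.
class CaseCollisionCheck {
  static void check(FileSystem fs, Path tableDir) throws IOException {
    String name = tableDir.getName();
    for (FileStatus st : fs.listStatus(tableDir.getParent())) {
      String existing = st.getPath().getName();
      if (existing.equalsIgnoreCase(name) && !existing.equals(name)) {
        throw new IOException("Table directory " + tableDir
            + " collides with existing " + existing
            + " on a case-insensitive filesystem");
      }
    }
  }
}
{code}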



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17435) Call to preCommitStoreFile() hook encounters SaslException in secure deployment

2017-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17435:
---
Attachment: 17435.v2.txt

In patch v2, a check for security support is added before calling 
TokenUtil.obtainToken().
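
For reference, a hedged sketch of the kind of guard described, using the
public HBase security API; the authoritative change is in the attached
17435.v2.txt and may differ in detail:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.token.AuthenticationTokenIdentifier;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.security.token.Token;

public class TokenGuard {
  // Only attempt to obtain a delegation token when the cluster actually runs
  // with Kerberos security; on an insecure cluster TokenUtil.obtainToken()
  // fails with the SaslException shown in the issue below.
  static Token<AuthenticationTokenIdentifier> maybeObtainToken(Connection connection)
      throws IOException {
    if (!User.isHBaseSecurityEnabled(connection.getConfiguration())) {
      return null; // insecure deployment: skip SASL token acquisition
    }
    return TokenUtil.obtainToken(connection);
  }
}
{code}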

> Call to preCommitStoreFile() hook encounters SaslException in secure 
> deployment
> ---
>
> Key: HBASE-17435
> URL: https://issues.apache.org/jira/browse/HBASE-17435
> Project: HBase
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17435.v1.txt, 17435.v2.txt
>
>
> [~romil.choksi] was testing bulk load in a secure cluster where 
> LoadIncrementalHFiles failed.
> Looking at the region server log, we saw the following:
> {code}
> at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:185)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1257)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1163)
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
> at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
> at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
> at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
> at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
> at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupContexts(BackupSystemTable.java:540)
> at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupHistory(BackupSystemTable.java:517)
> at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getTablesForBackupType(BackupSystemTable.java:589)
> at org.apache.hadoop.hbase.backup.BackupObserver.preCommitStoreFile(BackupObserver.java:89)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$61.call(RegionCoprocessorHost.java:1494)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCommitStoreFile(RegionCoprocessorHost.java:1490)
> at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5512)
> at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:293)
> at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:276)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
> at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:276)
> at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4721)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:679)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:637)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:745)
> ... 16 more
> Caused by: 

[jira] [Updated] (HBASE-17435) Call to preCommitStoreFile() hook encounters SaslException in secure deployment

2017-01-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17435:
---
Description: 
[~romil.choksi] was testing bulk load in a secure cluster where 
LoadIncrementalHFiles failed.

Looking at the region server log, we saw the following:
{code}
at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:185)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1257)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1163)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupContexts(BackupSystemTable.java:540)
at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getBackupHistory(BackupSystemTable.java:517)
at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.getTablesForBackupType(BackupSystemTable.java:589)
at org.apache.hadoop.hbase.backup.BackupObserver.preCommitStoreFile(BackupObserver.java:89)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$61.call(RegionCoprocessorHost.java:1494)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCommitStoreFile(RegionCoprocessorHost.java:1490)
at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5512)
at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:293)
at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:276)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:276)
at org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4721)
...
Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:679)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:637)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:745)
... 16 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Attempt to obtain new INITIATE credentials failed! (null))]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
{code}
The cause was that in the coprocessor hook, security 

[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807178#comment-15807178
 ] 

Ted Yu commented on HBASE-17408:


Created HBASE-17438 for the server side work.

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to enforce such a limit in 
> their application code to avoid OOMEs on the server side.
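
A minimal sketch of such an application-side workaround, assuming an
illustrative MAX_MUTATIONS_PER_BATCH constant (it is not an HBase
configuration key):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;

public class ChunkedPuts {
  static final int MAX_MUTATIONS_PER_BATCH = 1000; // illustrative value

  // Split a large list of Puts into bounded chunks so that no single batch
  // RPC carries more mutations than the server can comfortably absorb.
  static void putInChunks(Table table, List<Put> puts)
      throws IOException, InterruptedException {
    for (int from = 0; from < puts.size(); from += MAX_MUTATIONS_PER_BATCH) {
      int to = Math.min(from + MAX_MUTATIONS_PER_BATCH, puts.size());
      List<Put> chunk = puts.subList(from, to);
      table.batch(chunk, new Object[chunk.size()]);
    }
  }
}
{code}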



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17438) Server side change to accommodate limit by number of mutations

2017-01-07 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17438:
--

 Summary: Server side change to accommodate limit by number of 
mutations
 Key: HBASE-17438
 URL: https://issues.apache.org/jira/browse/HBASE-17438
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


HBASE-17408 introduced a per-request limit by number of mutations for the client.
This JIRA is to add support on the server side, in a similar way to HBASE-14946.

Server-side support would keep a counter for the mutations. When the counter 
exceeds the configured limit on the number of mutations, an exception would be 
returned to the client so that the client retries the remaining mutations.
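
A hedged sketch of the counting scheme described, with illustrative names
(applyUpTo, maxPerRequest); the eventual implementation may differ:

{code}
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;

public class MutationLimiter {
  // Apply mutations until the per-request limit is reached and report how
  // many were consumed; the caller then signals the client to retry the
  // remainder, analogous to the size-based limit from HBASE-14946.
  static int applyUpTo(List<? extends Mutation> mutations, int maxPerRequest) {
    int applied = 0;
    for (Mutation m : mutations) {
      if (applied >= maxPerRequest) {
        break; // mutations[applied..] go back to the client for retry
      }
      // ... apply m to the region (placeholder for the real write path) ...
      applied++;
    }
    return applied;
  }
}
{code}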




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13652) Case-insensitivity of file system affects table creation

2017-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807148#comment-15807148
 ] 

Hadoop QA commented on HBASE-13652:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 1s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 50s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s {color} | {color:red} The patch generated 14 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 121m 21s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestEnableTableProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12846162/HBASE-13652.master.002.patch |
| JIRA Issue | HBASE-13652 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7829ef213cc5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 6fecf55 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5178/artifact/patchprocess/patch-unit-hbase-server.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5178/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5178/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HBASE-Build/5178/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5178/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Case-insensitivity 

[jira] [Commented] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807061#comment-15807061
 ] 

Hudson commented on HBASE-17424:


FAILURE: Integrated in Jenkins build HBase-1.1-JDK8 #1918 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1918/])
HBASE-17424 Disable external entity parsing in RemoteAdmin (elserj: rev 
ca72bb2860bcfe8264e91924a2a3e07fe72238aa)
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java
* (add) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java


> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch, HBASE-17424.002.patch
>
>
> If, by some means, an unsuspecting REST client received a malformed 
> response from the REST server, the XML parsing could cause the client to 
> perform some unintended action.
> We should disable external entity resolution and related parser features to 
> prevent this possibility.
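
For illustration, a sketch of the standard XXE defense: disabling DTDs and
external entities on a StAX reader before JAXB unmarshalling. This is not the
exact RemoteAdmin patch; StorageClusterVersionModel is simply one of the
hbase-rest model classes used here as an example:

{code}
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;

public class SafeXmlParsing {
  // Harden the XML parser so a malicious response cannot trigger external
  // entity resolution during unmarshalling.
  static StorageClusterVersionModel parseResponse(String xml) throws Exception {
    XMLInputFactory xif = XMLInputFactory.newFactory();
    xif.setProperty(XMLInputFactory.SUPPORT_DTD, false);
    xif.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);
    XMLStreamReader reader = xif.createXMLStreamReader(new StringReader(xml));
    JAXBContext ctx = JAXBContext.newInstance(StorageClusterVersionModel.class);
    return (StorageClusterVersionModel) ctx.createUnmarshaller().unmarshal(reader);
  }
}
{code}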



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)