hbase git commit: Add hbasecon asia and next weeks visa meetup

2017-04-17 Thread stack
Repository: hbase
Updated Branches:
  refs/heads/master 5eda5fb9d -> b35121d90


Add hbasecon asia and next weeks visa meetup


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b35121d9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b35121d9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b35121d9

Branch: refs/heads/master
Commit: b35121d904e7e16a04e60a6471d05fb15d598acf
Parents: 5eda5fb
Author: Michael Stack 
Authored: Mon Apr 17 22:19:49 2017 -0700
Committer: Michael Stack 
Committed: Mon Apr 17 22:20:04 2017 -0700

--
 src/main/site/xdoc/index.xml | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b35121d9/src/main/site/xdoc/index.xml
--
diff --git a/src/main/site/xdoc/index.xml b/src/main/site/xdoc/index.xml
index 83c9f01..1848d40 100644
--- a/src/main/site/xdoc/index.xml
+++ b/src/main/site/xdoc/index.xml
@@ -83,7 +83,9 @@ Apache HBase is an open-source, distributed, versioned, 
non-relational database
 
 
  
+   August 4th, 2017 <a href="https://easychair.org/cfp/HBaseConAsia2017">HBaseCon Asia 2017</a> @ the Huawei Campus in Shenzhen, China
June 12th, 2017 <a href="https://easychair.org/cfp/hbasecon2017">HBaseCon2017</a> at the Crittenden Buildings on the Google Mountain View Campus
+   April 25th, 2017 <a href="https://www.meetup.com/hbaseusergroup/events/239291716/">Meetup @ Visa</a> in Palo Alto
 December 8th, 2016 <a href="https://www.meetup.com/hbaseusergroup/events/235542241/">Meetup@Splice</a> in San Francisco
September 26th, 2016 <a href="http://www.meetup.com/HBase-NYC/events/233024937/">HBaseConEast2016</a> at Google in Chelsea, NYC
 May 24th, 2016 <a href="http://www.hbasecon.com/">HBaseCon2016</a> at The Village, 969 Market, San Francisco



hbase git commit: HBASE-17912 - Avoid major compactions on region server startup

2017-04-17 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/branch-1 cb2f2a7d1 -> a26de9b51


HBASE-17912 - Avoid major compactions on region server startup

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a26de9b5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a26de9b5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a26de9b5

Branch: refs/heads/branch-1
Commit: a26de9b51e5adb461c20b197235b5777300776a9
Parents: cb2f2a7
Author: gjacoby 
Authored: Mon Apr 17 16:08:25 2017 -0700
Committer: tedyu 
Committed: Mon Apr 17 19:43:03 2017 -0700

--
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a26de9b5/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 3544757..6a93606 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1580,7 +1580,9 @@ public class HRegionServer extends HasThread implements
 private final HRegionServer instance;
 private final int majorCompactPriority;
 private final static int DEFAULT_PRIORITY = Integer.MAX_VALUE;
-private long iteration = 0;
+//Iteration is 1-based rather than 0-based so we don't check for compaction
+// immediately upon region server startup
+private long iteration = 1;
 
 CompactionChecker(final HRegionServer h, final int sleepTime,
 final Stoppable stopper) {
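The hunk above changes only the field initializer; the periodic check that motivates it lives elsewhere in `HRegionServer` and is not shown. A minimal, self-contained sketch of the arithmetic, assuming the chore considers a major compaction whenever `iteration % multiplier == 0` on each wakeup — the class name and `MULTIPLIER` constant here are illustrative stand-ins, not the real configuration:

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionCheckerSketch {
    // Stand-in for hbase.server.compactchecker.interval.multiplier.
    static final long MULTIPLIER = 1000;

    // Returns the iteration numbers, among the first n wakeups starting at
    // `start`, on which the periodic major-compaction check would fire.
    static List<Long> firingIterations(long start, long n) {
        List<Long> fired = new ArrayList<>();
        for (long iteration = start; iteration < start + n; iteration++) {
            if (iteration % MULTIPLIER == 0) { // the periodic check
                fired.add(iteration);
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // 0-based counter: the check fires on the very first wakeup after startup.
        System.out.println(firingIterations(0, 1000)); // [0]
        // 1-based counter (the fix): the first check waits a full cycle.
        System.out.println(firingIterations(1, 1000)); // [1000]
    }
}
```

With `iteration = 0`, the modulus is zero on the first wakeup, so a freshly started region server could immediately queue major compactions; starting at 1 defers the first check by one full multiplier cycle.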



hbase git commit: HBASE-17912 - Avoid major compactions on region server startup

2017-04-17 Thread tedyu
Repository: hbase
Updated Branches:
  refs/heads/master 3c32032f5 -> 5eda5fb9d


HBASE-17912 - Avoid major compactions on region server startup

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5eda5fb9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5eda5fb9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5eda5fb9

Branch: refs/heads/master
Commit: 5eda5fb9d7d7fd5ae77d862c2e1666787e72ead0
Parents: 3c32032
Author: gjacoby 
Authored: Mon Apr 17 16:08:25 2017 -0700
Committer: tedyu 
Committed: Mon Apr 17 19:41:19 2017 -0700

--
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5eda5fb9/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index b3b5113..d14571b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1628,7 +1628,9 @@ public class HRegionServer extends HasThread implements
 private final HRegionServer instance;
 private final int majorCompactPriority;
 private final static int DEFAULT_PRIORITY = Integer.MAX_VALUE;
-private long iteration = 0;
+//Iteration is 1-based rather than 0-based so we don't check for compaction
+// immediately upon region server startup
+private long iteration = 1;
 
 CompactionChecker(final HRegionServer h, final int sleepTime,
 final Stoppable stopper) {



hbase git commit: HBASE-17929 Add more options for PE tool

2017-04-17 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/master ecdfb8232 -> 3c32032f5


HBASE-17929 Add more options for PE tool


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3c32032f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3c32032f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3c32032f

Branch: refs/heads/master
Commit: 3c32032f5ce935eedd2b6d471f20b030c857acbc
Parents: ecdfb82
Author: zhangduo 
Authored: Mon Apr 17 16:20:45 2017 +0800
Committer: zhangduo 
Committed: Tue Apr 18 09:52:34 2017 +0800

--
 .../hadoop/hbase/PerformanceEvaluation.java | 37 ++--
 1 file changed, 26 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3c32032f/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
index 40e50cf..96ee515 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -636,6 +636,8 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 MemoryCompactionPolicy inMemoryCompaction =
 MemoryCompactionPolicy.valueOf(
 CompactingMemStore.COMPACTING_MEMSTORE_TYPE_DEFAULT);
+boolean asyncPrefetch = false;
+boolean cacheBlocks = true;
 
 public TestOptions() {}
 
@@ -1246,8 +1248,9 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 
 @Override
 void testRow(final int i) throws IOException {
-  Scan scan = new Scan(getRandomRow(this.rand, opts.totalRows));
-  scan.setCaching(opts.caching);
+  Scan scan =
+  new Scan().withStartRow(getRandomRow(this.rand, 
opts.totalRows)).setCaching(opts.caching)
+  
.setCacheBlocks(opts.cacheBlocks).setAsyncPrefetch(opts.asyncPrefetch);
   FilterList list = new FilterList();
   if (opts.addColumns) {
 scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
@@ -1282,8 +1285,9 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 @Override
 void testRow(final int i) throws IOException {
   Pair startAndStopRow = getStartAndStopRow();
-  Scan scan = new Scan(startAndStopRow.getFirst(), 
startAndStopRow.getSecond());
-  scan.setCaching(opts.caching);
+  Scan scan = new Scan().withStartRow(startAndStopRow.getFirst())
+  .withStopRow(startAndStopRow.getSecond()).setCaching(opts.caching)
+  
.setCacheBlocks(opts.cacheBlocks).setAsyncPrefetch(opts.asyncPrefetch);
   if (opts.filterAll) {
 scan.setFilter(new FilterAllFilter());
   }
@@ -1477,8 +1481,8 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 @Override
 void testRow(final int i) throws IOException {
   if (this.testScanner == null) {
-Scan scan = new Scan(format(opts.startRow));
-scan.setCaching(opts.caching);
+Scan scan = new 
Scan().withStartRow(format(opts.startRow)).setCaching(opts.caching)
+
.setCacheBlocks(opts.cacheBlocks).setAsyncPrefetch(opts.asyncPrefetch);
 if (opts.addColumns) {
   scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
 } else {
@@ -1487,7 +1491,7 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 if (opts.filterAll) {
   scan.setFilter(new FilterAllFilter());
 }
-   this.testScanner = table.getScanner(scan);
+this.testScanner = table.getScanner(scan);
   }
   Result r = testScanner.next();
   updateValueSize(r);
@@ -1687,8 +1691,8 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
   if(opts.filterAll) {
 list.addFilter(new FilterAllFilter());
   }
-  Scan scan = new Scan();
-  scan.setCaching(opts.caching);
+  Scan scan = new 
Scan().setCaching(opts.caching).setCacheBlocks(opts.cacheBlocks)
+  .setAsyncPrefetch(opts.asyncPrefetch);
   if (opts.addColumns) {
 scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
   } else {
@@ -2138,8 +2142,8 @@ public class PerformanceEvaluation extends Configured 
implements Tool {
 
   final String inMemoryCompaction = "--inmemoryCompaction=";
   if (cmd.startsWith(inMemoryCompaction)) {
-opts.inMemoryCompaction = opts.inMemoryCompaction.valueOf(cmd.substring
-(inMemoryCompaction.length()));
+opts.inMemoryCompaction =
+

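The patch moves from the older `new Scan(startRow)` constructor style to fluent chaining. That style works because each setter returns `this`, so all options compose in one expression. A toy stand-in (field and method names mirror `Scan`, but this class is illustrative only and has no HBase dependency):

```java
public class ScanSketch {
    private byte[] startRow;
    private int caching = -1;
    private boolean cacheBlocks = true;
    private boolean asyncPrefetch = false;

    // Each setter returns `this`, enabling the chained style used in the patch.
    ScanSketch withStartRow(byte[] row) { this.startRow = row; return this; }
    ScanSketch setCaching(int n) { this.caching = n; return this; }
    ScanSketch setCacheBlocks(boolean b) { this.cacheBlocks = b; return this; }
    ScanSketch setAsyncPrefetch(boolean b) { this.asyncPrefetch = b; return this; }

    int getCaching() { return caching; }
    boolean getCacheBlocks() { return cacheBlocks; }
    boolean isAsyncPrefetch() { return asyncPrefetch; }

    public static void main(String[] args) {
        // Mirrors the shape of the patched PerformanceEvaluation code.
        ScanSketch scan = new ScanSketch()
            .withStartRow(new byte[] { 0 })
            .setCaching(100)
            .setCacheBlocks(false)
            .setAsyncPrefetch(true);
        System.out.println(scan.getCaching()); // 100
    }
}
```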
[01/50] [abbrv] hbase git commit: HBASE-17859 ByteBufferUtils#compareTo is wrong

2017-04-17 Thread syuanjiang
Repository: hbase
Updated Branches:
  refs/heads/hbase-12439 1c4d9c896 -> ecdfb8232


HBASE-17859 ByteBufferUtils#compareTo is wrong


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/73e1bcd3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/73e1bcd3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/73e1bcd3

Branch: refs/heads/hbase-12439
Commit: 73e1bcd33515061be2dc2e51e6ad19d9798a8ef6
Parents: 9facfa5
Author: CHIA-PING TSAI 
Authored: Fri Mar 31 19:45:10 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Sat Apr 1 13:42:36 2017 +0800

--
 .../hadoop/hbase/util/ByteBufferUtils.java  |  9 +++--
 .../hadoop/hbase/util/TestByteBufferUtils.java  | 39 
 2 files changed, 45 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/73e1bcd3/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
index 760afd4..4bed97c 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
@@ -16,6 +16,7 @@
  */
 package org.apache.hadoop.hbase.util;
 
+import com.google.common.annotations.VisibleForTesting;
 import java.io.ByteArrayOutputStream;
 import java.io.DataInput;
 import java.io.DataInputStream;
@@ -49,8 +50,10 @@ public final class ByteBufferUtils {
   public final static int VALUE_MASK = 0x7f;
   public final static int NEXT_BIT_SHIFT = 7;
   public final static int NEXT_BIT_MASK = 1 << 7;
-  private static final boolean UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
-  private static final boolean UNSAFE_UNALIGNED = 
UnsafeAvailChecker.unaligned();
+  @VisibleForTesting
+  static boolean UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
+  @VisibleForTesting
+  static boolean UNSAFE_UNALIGNED = UnsafeAvailChecker.unaligned();
 
   private ByteBufferUtils() {
   }
@@ -668,7 +671,7 @@ public final class ByteBufferUtils {
 int end2 = o2 + l2;
 for (int i = o1, j = o2; i < end1 && j < end2; i++, j++) {
   int a = buf1[i] & 0xFF;
-  int b = buf2.get(i) & 0xFF;
+  int b = buf2.get(j) & 0xFF;
   if (a != b) {
 return a - b;
   }
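The one-character fix above (reading `buf2` at `j` instead of `i`) matters whenever the two ranges start at different offsets. A self-contained sketch of the corrected loop — the signature mirrors `ByteBufferUtils#compareTo`, but this is an illustrative reimplementation, not the HBase code:

```java
import java.nio.ByteBuffer;

public class CompareToFix {
    // Unsigned lexicographic comparison of buf1[o1, o1+l1) against
    // buf2[o2, o2+l2). buf2 must be read at its own cursor j; the original
    // bug read buf2.get(i), i.e. at buf1's cursor.
    static int compareTo(byte[] buf1, int o1, int l1,
                         ByteBuffer buf2, int o2, int l2) {
        int end1 = o1 + l1;
        int end2 = o2 + l2;
        for (int i = o1, j = o2; i < end1 && j < end2; i++, j++) {
            int a = buf1[i] & 0xFF;
            int b = buf2.get(j) & 0xFF; // the fix: index j, not i
            if (a != b) {
                return a - b;
            }
        }
        // Equal common prefix: shorter range sorts first.
        return l1 - l2;
    }
}
```

With equal payloads at different offsets, e.g. `{1,2,3}` against a buffer holding `{9,1,2,3}` compared from offset 1, the corrected loop returns 0, while the buggy `get(i)` variant would compare `1` against `9` and report a spurious difference.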

http://git-wip-us.apache.org/repos/asf/hbase/blob/73e1bcd3/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
--
diff --git 
a/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
 
b/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
index b78574a..053fb24 100644
--- 
a/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
+++ 
b/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
@@ -28,8 +28,10 @@ import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.List;
 import java.util.Set;
 import java.util.SortedSet;
 import java.util.TreeSet;
@@ -38,15 +40,44 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.testclassification.MiscTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.io.WritableUtils;
+import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
 
 @Category({MiscTests.class, SmallTests.class})
+@RunWith(Parameterized.class)
 public class TestByteBufferUtils {
 
   private byte[] array;
 
+  @AfterClass
+  public static void afterClass() throws Exception {
+ByteBufferUtils.UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
+ByteBufferUtils.UNSAFE_UNALIGNED = UnsafeAvailChecker.unaligned();
+  }
+
+  @Parameterized.Parameters
+  public static Collection parameters() {
+List paramList = new ArrayList<>(2);
+{
+  paramList.add(new Object[] { false });
+  paramList.add(new Object[] { true });
+}
+return paramList;
+  }
+
+  public TestByteBufferUtils(boolean useUnsafeIfPossible) {
+if (useUnsafeIfPossible) {
+  ByteBufferUtils.UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
+  ByteBufferUtils.UNSAFE_UNALIGNED = UnsafeAvailChecker.unaligned();
+} else {
+  ByteBufferUtils.UNSAFE_AVAIL = false;
+  

[05/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java
index 86f8d4b..cfd28c9 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Option.java
@@ -94,10 +94,13 @@ public  final class Option extends
   private volatile java.lang.Object name_;
   /**
* 
-   * The option's name. For example, `"java_package"`.
+   * The option's name. For protobuf built-in options (options defined in
+   * descriptor.proto), this is the short name. For example, `"map_entry"`.
+   * For custom options, it should be the fully-qualified name. For example,
+   * `"google.api.http"`.
* 
*
-   * optional string name = 1;
+   * string name = 1;
*/
   public java.lang.String getName() {
 java.lang.Object ref = name_;
@@ -113,10 +116,13 @@ public  final class Option extends
   }
   /**
* 
-   * The option's name. For example, `"java_package"`.
+   * The option's name. For protobuf built-in options (options defined in
+   * descriptor.proto), this is the short name. For example, `"map_entry"`.
+   * For custom options, it should be the fully-qualified name. For example,
+   * `"google.api.http"`.
* 
*
-   * optional string name = 1;
+   * string name = 1;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString
   getNameBytes() {
@@ -136,30 +142,39 @@ public  final class Option extends
   private org.apache.hadoop.hbase.shaded.com.google.protobuf.Any value_;
   /**
* 
-   * The option's value. For example, 
`"org.apache.hadoop.hbase.shaded.com.google.protobuf"`.
+   * The option's value packed in an Any message. If the value is a primitive,
+   * the corresponding wrapper type defined in google/protobuf/wrappers.proto
+   * should be used. If the value is an enum, it should be stored as an int32
+   * value using the google.protobuf.Int32Value type.
* 
*
-   * optional .google.protobuf.Any value = 2;
+   * .google.protobuf.Any value = 2;
*/
   public boolean hasValue() {
 return value_ != null;
   }
   /**
* 
-   * The option's value. For example, 
`"org.apache.hadoop.hbase.shaded.com.google.protobuf"`.
+   * The option's value packed in an Any message. If the value is a primitive,
+   * the corresponding wrapper type defined in google/protobuf/wrappers.proto
+   * should be used. If the value is an enum, it should be stored as an int32
+   * value using the google.protobuf.Int32Value type.
* 
*
-   * optional .google.protobuf.Any value = 2;
+   * .google.protobuf.Any value = 2;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.Any getValue() {
 return value_ == null ? 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Any.getDefaultInstance() : 
value_;
   }
   /**
* 
-   * The option's value. For example, 
`"org.apache.hadoop.hbase.shaded.com.google.protobuf"`.
+   * The option's value packed in an Any message. If the value is a primitive,
+   * the corresponding wrapper type defined in google/protobuf/wrappers.proto
+   * should be used. If the value is an enum, it should be stored as an int32
+   * value using the google.protobuf.Int32Value type.
* 
*
-   * optional .google.protobuf.Any value = 2;
+   * .google.protobuf.Any value = 2;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.AnyOrBuilder 
getValueOrBuilder() {
 return getValue();
@@ -229,7 +244,7 @@ public  final class Option extends
   return memoizedHashCode;
 }
 int hash = 41;
-hash = (19 * hash) + getDescriptorForType().hashCode();
+hash = (19 * hash) + getDescriptor().hashCode();
 hash = (37 * hash) + NAME_FIELD_NUMBER;
 hash = (53 * hash) + getName().hashCode();
 if (hasValue()) {
@@ -472,10 +487,13 @@ public  final class Option extends
 private java.lang.Object name_ = "";
 /**
  * 
- * The option's name. For example, `"java_package"`.
+ * The option's name. For protobuf built-in options (options defined in
+ * descriptor.proto), this is the short name. For example, `"map_entry"`.
+ * For custom options, it should be the fully-qualified name. For example,
+ * `"google.api.http"`.
  * 
  *
- * optional string name = 1;
+ * string name = 1;
  */
 public java.lang.String getName() {
   java.lang.Object ref = name_;
@@ -491,10 +509,13 @@ public  final class Option extends
 }
 /**
  * 
- * The option's name. For example, `"java_package"`.
+ * The option's name. For protobuf built-in 

[31/50] [abbrv] hbase git commit: HBASE-17905: [hbase-spark] bulkload does not work when table not exist - revert due to misspelling

2017-04-17 Thread syuanjiang
HBASE-17905: [hbase-spark] bulkload does not work when table not exist - revert 
due to misspelling


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/02da5a61
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/02da5a61
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/02da5a61

Branch: refs/heads/hbase-12439
Commit: 02da5a6104d413570472ae4621e44fa7e75c0ca6
Parents: 22f602c
Author: tedyu 
Authored: Tue Apr 11 17:18:37 2017 -0700
Committer: tedyu 
Committed: Tue Apr 11 17:18:37 2017 -0700

--
 .../hadoop/hbase/spark/BulkLoadPartitioner.scala  | 13 +
 .../apache/hadoop/hbase/spark/HBaseContext.scala  | 18 +-
 2 files changed, 6 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/02da5a61/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
--
diff --git 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
index 022c933..ab4fc41 100644
--- 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
+++ 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
@@ -33,8 +33,8 @@ import org.apache.spark.Partitioner
 @InterfaceAudience.Public
 class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
   extends Partitioner {
-  // when table not exist, startKeys = Byte[0][]
-  override def numPartitions: Int = if (startKeys.length == 0) 1 else 
startKeys.length
+
+  override def numPartitions: Int = startKeys.length
 
   override def getPartition(key: Any): Int = {
 
@@ -53,11 +53,8 @@ class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
 case _ =>
   key.asInstanceOf[Array[Byte]]
   }
-var partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
-if (partition < 0)
-  partition = partition * -1 + -2
-if (partition < 0)
-  partition = 0
-partition
+val partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
+if (partition < 0) partition * -1 + -2
+else partition
   }
 }
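The revert restores the bare `binarySearch` mapping. On a miss, `Arrays.binarySearch` returns `-(insertionPoint) - 1`, and `partition * -1 + -2` recovers `insertionPoint - 1`: the region whose start key is the greatest one not exceeding the row. A sketch of that arithmetic in Java with `int` keys for brevity (the Scala code uses `byte[]` plus an unsigned comparator); note that with empty `startKeys` — the missing-table case the reverted patch had guarded against — the result is a negative partition index:

```java
import java.util.Arrays;

public class RegionPartitionSketch {
    // Maps a row key to the index of the region whose start-key range
    // contains it, the way BulkLoadPartitioner.getPartition does.
    static int getPartition(int[] startKeys, int rowKey) {
        int partition = Arrays.binarySearch(startKeys, rowKey);
        if (partition < 0) {
            // binarySearch miss: -(insertionPoint) - 1  ->  insertionPoint - 1
            partition = partition * -1 + -2;
        }
        return partition;
    }

    public static void main(String[] args) {
        int[] startKeys = { 0, 10, 20 };
        System.out.println(getPartition(startKeys, 5));  // 0
        System.out.println(getPartition(startKeys, 10)); // 1
        System.out.println(getPartition(startKeys, 25)); // 2
        // Empty start keys (table not found): yields -1.
        System.out.println(getPartition(new int[] {}, 7)); // -1
    }
}
```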

http://git-wip-us.apache.org/repos/asf/hbase/blob/02da5a61/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
--
diff --git 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
index 8c4e0f4..e2891db 100644
--- 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
+++ 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
@@ -48,7 +48,7 @@ import org.apache.spark.streaming.dstream.DStream
 import java.io._
 import org.apache.hadoop.security.UserGroupInformation
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod
-import org.apache.hadoop.fs.{Path, FileAlreadyExistsException, FileSystem}
+import org.apache.hadoop.fs.{Path, FileSystem}
 import scala.collection.mutable
 
 /**
@@ -620,17 +620,9 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
-val stagingPath = new Path(stagingDir)
-val fs = stagingPath.getFileSystem(config)
-if (fs.exists(stagingPath)) {
-  throw new FileAlreadyExistsException("Path " + stagingDir + " already 
exist")
-}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
-if (startKeys.length == 0) {
-  logInfo("Table " + tableName.toString + " was not found")
-}
 val defaultCompressionStr = config.get("hfile.compression",
   Compression.Algorithm.NONE.getName)
 val hfileCompression = HFileWriterImpl
@@ -751,17 +743,9 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
-val stagingPath = new Path(stagingDir)
-val fs = stagingPath.getFileSystem(config)
-if (fs.exists(stagingPath)) {
-  throw new FileAlreadyExistsException("Path " + stagingDir + " already 
exist")
-}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
-if (startKeys.length == 0) {
-  logInfo("Table " + tableName.toString + " was not found")
-}
 val defaultCompressionStr = 

[18/50] [abbrv] hbase git commit: HBASE-17858 Update refguide about the IS annotation if necessary

2017-04-17 Thread syuanjiang
HBASE-17858 Update refguide about the IS annotation if necessary


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/17737b27
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/17737b27
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/17737b27

Branch: refs/heads/hbase-12439
Commit: 17737b2710a2a1271eb791478eb99f7a573ecac1
Parents: 029fa29
Author: zhangduo 
Authored: Fri Mar 31 18:20:45 2017 +0800
Committer: zhangduo 
Committed: Thu Apr 6 09:48:18 2017 +0800

--
 src/main/asciidoc/_chapters/upgrading.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/17737b27/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index df5bbfe..46f637d 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -128,7 +128,7 @@ for warning about incompatible changes). All effort will be 
made to provide a de
 HBase has a lot of API points, but for the compatibility matrix above, we 
differentiate between Client API, Limited Private API, and Private API. HBase 
uses a version of 
link:https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html[Hadoop's
 Interface classification]. HBase's Interface classification classes can be 
found 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/classification/package-summary.html[here].
 
 * InterfaceAudience: captures the intended audience, possible values are 
Public (for end users and external projects), LimitedPrivate (for other 
Projects, Coprocessors or other plugin points), and Private (for internal use).
-* InterfaceStability: describes what types of interface changes are permitted. 
Possible values are Stable, Evolving, Unstable, and Deprecated.
+* InterfaceStability: describes what types of interface changes are permitted. 
Possible values are Stable, Evolving, Unstable, and Deprecated. Notice that 
this annotation is only valid for classes which are marked as 
IA.LimitedPrivate. The stability of IA.Public classes is only related to the 
upgrade type(major, minor or patch). And for IA.Private classes, there is no 
guarantee on the stability between releases. Refer to the Compatibility Matrix 
above for more details.
 
 [[hbase.client.api]]
 HBase Client API::



[45/50] [abbrv] hbase git commit: HBASE-17366 Run TestHFile#testReaderWithoutBlockCache fails

2017-04-17 Thread syuanjiang
HBASE-17366 Run TestHFile#testReaderWithoutBlockCache fails

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c1ac3f77
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c1ac3f77
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c1ac3f77

Branch: refs/heads/hbase-12439
Commit: c1ac3f7739f8c9e20f6aed428558128339467d04
Parents: 363f627
Author: huaxiang sun 
Authored: Mon Apr 17 10:32:17 2017 +0800
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:34:17 2017 +0800

--
 .../apache/hadoop/hbase/regionserver/StoreFileWriter.java   | 9 +
 .../java/org/apache/hadoop/hbase/io/hfile/TestHFile.java| 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c1ac3f77/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
index ccfd735..88cba75 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
@@ -384,6 +384,15 @@ public class StoreFileWriter implements CellSink, 
ShipperListener {
 }
 
 /**
+ * Creates Builder with cache configuration disabled
+ */
+public Builder(Configuration conf, FileSystem fs) {
+  this.conf = conf;
+  this.cacheConf = CacheConfig.DISABLED;
+  this.fs = fs;
+}
+
+/**
  * @param trt A premade TimeRangeTracker to use rather than build one per 
append (building one
  * of these is expensive so good to pass one in if you have one).
  * @return this (for chained invocation)

http://git-wip-us.apache.org/repos/asf/hbase/blob/c1ac3f77/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
index 7074c9d..4db459a 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
@@ -115,7 +115,7 @@ public class TestHFile  {
 Path storeFileParentDir = new Path(TEST_UTIL.getDataTestDir(), 
"TestHFile");
 HFileContext meta = new HFileContextBuilder().withBlockSize(64 * 
1024).build();
 StoreFileWriter sfw =
-new StoreFileWriter.Builder(conf, cacheConf, 
fs).withOutputDir(storeFileParentDir)
+new StoreFileWriter.Builder(conf, fs).withOutputDir(storeFileParentDir)
 
.withComparator(CellComparator.COMPARATOR).withFileContext(meta).build();
 
 final int rowLen = 32;



[46/50] [abbrv] hbase git commit: HBASE-16875 Changed try-with-resources in the docs to recommended way

2017-04-17 Thread syuanjiang
HBASE-16875 Changed try-with-resources in the docs to recommended way

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c8cd921b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c8cd921b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c8cd921b

Branch: refs/heads/hbase-12439
Commit: c8cd921bededa67b2b0de823005830d750534d93
Parents: c1ac3f7
Author: Jan Hentschel 
Authored: Sat Mar 4 10:04:02 2017 +0100
Committer: Chia-Ping Tsai 
Committed: Mon Apr 17 10:59:46 2017 +0800

--
 src/main/asciidoc/_chapters/architecture.adoc |  7 +++---
 src/main/asciidoc/_chapters/security.adoc | 29 --
 2 files changed, 13 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c8cd921b/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 27aebd9..7f9ba07 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -219,10 +219,9 @@ For applications which require high-end multithreaded 
access (e.g., web-servers
 
 // Create a connection to the cluster.
 Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-// use table as needed, the table returned is lightweight
-  }
+try (Connection connection = ConnectionFactory.createConnection(conf);
+ Table table = connection.getTable(TableName.valueOf(tablename))) {
+  // use table as needed, the table returned is lightweight
 }
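The recommended single-header form behaves like the nested form it replaces: resources declared in one `try` header are closed in reverse declaration order, so the `Table` closes before the `Connection` either way. A runnable sketch with hypothetical stand-in resources (no HBase dependency):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static final List<String> closed = new ArrayList<>();

    // Trivial AutoCloseable that records when it is closed, standing in
    // for Connection and Table in the doc snippet.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Resource connection = new Resource("connection");
             Resource table = new Resource("table")) {
            // use table as needed
        }
        // Reverse declaration order: table first, then connection.
        System.out.println(closed); // [table, connection]
    }
}
```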
 
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c8cd921b/src/main/asciidoc/_chapters/security.adoc
--
diff --git a/src/main/asciidoc/_chapters/security.adoc 
b/src/main/asciidoc/_chapters/security.adoc
index 0ed9ba2..ccb5adb 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -202,10 +202,9 @@ Set it in the `Configuration` supplied to `Table`:
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 conf.set("hbase.rpc.protection", "privacy");
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
+try (Connection connection = ConnectionFactory.createConnection(conf);
+ Table table = connection.getTable(TableName.valueOf(tablename))) {
    do your stuff
-  }
 }
 
 
@@ -1014,24 +1013,16 @@ public static void grantOnTable(final HBaseTestingUtility util, final String use
   SecureTestUtil.updateACLs(util, new Callable() {
 @Override
 public Void call() throws Exception {
-  Configuration conf = HBaseConfiguration.create();
-  Connection connection = ConnectionFactory.createConnection(conf);
-  try (Connection connection = ConnectionFactory.createConnection(conf)) {
-try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-  AccessControlLists.ACL_TABLE_NAME);
-  try {
-BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
-AccessControlService.BlockingInterface protocol =
-AccessControlService.newBlockingStub(service);
-ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
-  } finally {
-acl.close();
-  }
-  return null;
-}
+  try (Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
+   Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME)) {
+BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
+AccessControlService.BlockingInterface protocol =
+  AccessControlService.newBlockingStub(service);
+AccessControlUtil.grant(null, protocol, user, table, family, qualifier, false, actions);
   }
+  return null;
 }
-  }
+  });
 }
 
 



[43/50] [abbrv] hbase git commit: HBASE-17903 Corrected the alias for the link of HBASE-6580

2017-04-17 Thread syuanjiang
HBASE-17903 Corrected the alias for the link of HBASE-6580

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/918aa465
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/918aa465
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/918aa465

Branch: refs/heads/hbase-12439
Commit: 918aa4655c4109159f27b6d78460bd3681c11f06
Parents: 8db9760
Author: Jan Hentschel 
Authored: Sun Apr 16 17:02:47 2017 +0200
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:22:25 2017 +0800

--
 src/main/asciidoc/_chapters/architecture.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/918aa465/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 773d237..27aebd9 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -230,7 +230,7 @@ try (Connection connection = ConnectionFactory.createConnection(conf)) {
 .`HTablePool` is Deprecated
 [WARNING]
 
-Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
+Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
 Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection] instead.
 
 



[20/50] [abbrv] hbase git commit: HBASE-17871 scan#setBatch(int) call leads wrong result of VerifyReplication

2017-04-17 Thread syuanjiang
HBASE-17871 scan#setBatch(int) call leads wrong result of VerifyReplication

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ec5188df
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ec5188df
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ec5188df

Branch: refs/heads/hbase-12439
Commit: ec5188df3090d42088b6f4cb8f0c2fd49425f8c1
Parents: d7e3116
Author: Tomu Tsuruhara 
Authored: Wed Apr 5 21:42:28 2017 +0900
Committer: tedyu 
Committed: Thu Apr 6 07:00:13 2017 -0700

--
 .../mapreduce/replication/VerifyReplication.java | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ec5188df/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
index ba5966b..3f8317b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
@@ -77,7 +77,7 @@ public class VerifyReplication extends Configured implements Tool {
   private final static String PEER_CONFIG_PREFIX = NAME + ".peer.";
   static long startTime = 0;
   static long endTime = Long.MAX_VALUE;
-  static int batch = Integer.MAX_VALUE;
+  static int batch = -1;
   static int versions = -1;
   static String tableName = null;
   static String families = null;
@@ -110,6 +110,7 @@ public class VerifyReplication extends Configured implements Tool {
 private int sleepMsBeforeReCompare;
 private String delimiter = "";
 private boolean verbose = false;
+private int batch = -1;
 
 /**
  * Map method that compares every scanned row with the equivalent from
@@ -128,8 +129,11 @@ public class VerifyReplication extends Configured implements Tool {
 sleepMsBeforeReCompare = conf.getInt(NAME +".sleepMsBeforeReCompare", 0);
 delimiter = conf.get(NAME + ".delimiter", "");
 verbose = conf.getBoolean(NAME +".verbose", false);
+batch = conf.getInt(NAME + ".batch", -1);
 final Scan scan = new Scan();
-scan.setBatch(batch);
+if (batch > 0) {
+  scan.setBatch(batch);
+}
 scan.setCacheBlocks(false);
 scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
 long startTime = conf.getLong(NAME + ".startTime", 0);
@@ -329,6 +333,7 @@ public class VerifyReplication extends Configured implements Tool {
 conf.setLong(NAME+".endTime", endTime);
 conf.setInt(NAME +".sleepMsBeforeReCompare", sleepMsBeforeReCompare);
 conf.set(NAME + ".delimiter", delimiter);
+conf.setInt(NAME + ".batch", batch);
 conf.setBoolean(NAME +".verbose", verbose);
 conf.setBoolean(NAME +".includeDeletedCells", includeDeletedCells);
 if (families != null) {
@@ -356,6 +361,10 @@ public class VerifyReplication extends Configured implements Tool {
 Scan scan = new Scan();
 scan.setTimeRange(startTime, endTime);
 scan.setRaw(includeDeletedCells);
+scan.setCacheBlocks(false);
+if (batch > 0) {
+  scan.setBatch(batch);
+}
 if (versions >= 0) {
   scan.setMaxVersions(versions);
   LOG.info("Number of versions set to " + versions);
@@ -503,7 +512,7 @@ public class VerifyReplication extends Configured implements Tool {
   private static void restoreDefaults() {
 startTime = 0;
 endTime = Long.MAX_VALUE;
-batch = Integer.MAX_VALUE;
+batch = -1;
 versions = -1;
 tableName = null;
 families = null;
@@ -521,13 +530,15 @@ public class VerifyReplication extends Configured implements Tool {
 }
 System.err.println("Usage: verifyrep [--starttime=X]" +
 " [--endtime=Y] [--families=A] [--row-prefixes=B] [--delimiter=] 
[--recomparesleep=] " +
-"[--verbose]  ");
+"[--batch=] [--verbose]  ");
 System.err.println();
 System.err.println("Options:");
System.err.println(" starttime    beginning of the time range");
System.err.println("   without endtime means from starttime to forever");
 System.err.println(" endtime  end of the time range");
 System.err.println(" versions number of cell versions to verify");
+System.err.println(" batch    batch count for scan, " +
+"note that result row counts will no longer be actual number of rows when you use this option");
 
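The patch above uses -1 as an "unset" sentinel for `batch` and only applies it to the scan when a positive value was supplied, so the default scan behavior is untouched. A small self-contained sketch of that pattern (`ScanStub` and the `--batch=` parsing are illustrative stand-ins, not HBase API):

```java
// Sentinel pattern from HBASE-17871: batch defaults to -1 ("unset") and is
// applied only when a positive value was given, leaving the scan's default
// (no batching limit) intact otherwise.
public class BatchConfigDemo {
    static class ScanStub {
        int batch = Integer.MAX_VALUE; // stand-in default: no batching limit
        void setBatch(int b) { batch = b; }
    }

    /** Parses "--batch=N" style args; returns -1 when the option is absent. */
    public static int parseBatch(String[] args) {
        for (String a : args) {
            if (a.startsWith("--batch=")) {
                return Integer.parseInt(a.substring("--batch=".length()));
            }
        }
        return -1;
    }

    public static int effectiveBatch(String[] args) {
        ScanStub scan = new ScanStub();
        int batch = parseBatch(args);
        if (batch > 0) {   // guard from the patch: only set when explicitly given
            scan.setBatch(batch);
        }
        return scan.batch;
    }

    public static void main(String[] args) {
        System.out.println(effectiveBatch(new String[] {"--batch=100"})); // 100
        System.out.println(effectiveBatch(new String[] {}));
    }
}
```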

[02/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
index bb6b40e..0071bef 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
@@ -440,7 +440,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasTableName()) {
 hash = (37 * hash) + TABLE_NAME_FIELD_NUMBER;
 hash = (53 * hash) + getTableName().hashCode();
@@ -1245,7 +1245,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasProcId()) {
 hash = (37 * hash) + PROC_ID_FIELD_NUMBER;
 hash = (53 * hash) + 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.hashLong(
@@ -1867,7 +1867,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasTableName()) {
 hash = (37 * hash) + TABLE_NAME_FIELD_NUMBER;
 hash = (53 * hash) + getTableName().hashCode();
@@ -2577,7 +2577,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasProcId()) {
 hash = (37 * hash) + PROC_ID_FIELD_NUMBER;
 hash = (53 * hash) + 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.hashLong(
@@ -3220,7 +3220,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasTableName()) {
 hash = (37 * hash) + TABLE_NAME_FIELD_NUMBER;
 hash = (53 * hash) + getTableName().hashCode();
@@ -4025,7 +4025,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasProcId()) {
 hash = (37 * hash) + PROC_ID_FIELD_NUMBER;
 hash = (53 * hash) + 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.hashLong(
@@ -4582,7 +4582,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRegion()) {
 hash = (37 * hash) + REGION_FIELD_NUMBER;
 hash = (53 * hash) + getRegion().hashCode();
@@ -5250,7 +5250,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   hash = (29 * hash) + unknownFields.hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -5843,7 +5843,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (getRegionCount() > 0) {
 hash = (37 * hash) + REGION_FIELD_NUMBER;
 hash = (53 * hash) + getRegionList().hashCode();
@@ -6698,7 +6698,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasProcId()) {
 hash = (37 * hash) + PROC_ID_FIELD_NUMBER;
 hash = (53 * hash) + 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.hashLong(
@@ -7190,7 +7190,7 @@ public final class MasterProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRegion()) {
 hash = (37 * hash) + REGION_FIELD_NUMBER;
 hash = (53 * hash) + 

[41/50] [abbrv] hbase git commit: HBASE-17904 Get runs into NoSuchElementException when using Read Replica, with hbase.ipc.client.specificThreadForWriting to be true and hbase.rpc.client.impl to be org.apache.hadoop.hbase.ipc.RpcClientImpl

2017-04-17 Thread syuanjiang
HBASE-17904 Get runs into NoSuchElementException when using Read Replica, with hbase.ipc.client.specificThreadForWriting
set to true and hbase.rpc.client.impl set to org.apache.hadoop.hbase.ipc.RpcClientImpl (Huaxiang Sun)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7678855f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7678855f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7678855f

Branch: refs/heads/hbase-12439
Commit: 7678855fac011a9c02e5d6a42470c0178482a4ce
Parents: 0cd4cec
Author: Michael Stack 
Authored: Sun Apr 16 11:00:57 2017 -0700
Committer: Michael Stack 
Committed: Sun Apr 16 11:01:06 2017 -0700

--
 .../hadoop/hbase/ipc/BlockingRpcConnection.java |  2 +-
 .../hbase/client/TestReplicaWithCluster.java| 50 
 2 files changed, 51 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7678855f/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
index 15eb10c..1012ad0 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
@@ -156,7 +156,7 @@ class BlockingRpcConnection extends RpcConnection implements Runnable {
 }
 
 public void remove(Call call) {
-  callsToWrite.remove();
+  callsToWrite.remove(call);
  // By removing the call from the expected call list, we make the list smaller, but
   // it means as well that we don't know how many calls we cancelled.
   calls.remove(call.id);

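The one-argument fix above matters because `java.util.Queue#remove()` with no argument removes the *head* of the queue, while `remove(Object)` removes the given element wherever it sits. Calling `remove()` when cancelling an arbitrary call could drop a different, still-pending call. A plain-Java illustration of the difference (the string "calls" stand in for HBase's `Call` objects):

```java
// Queue.remove() vs Queue.remove(Object): the former always removes the head,
// the latter removes exactly the element you pass. This is the bug fixed in
// BlockingRpcConnection.remove(Call) above.
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueRemoveDemo {
    public static Queue<String> buggyRemove(String toCancel) {
        Queue<String> callsToWrite = new ArrayDeque<>();
        callsToWrite.add("call-1");
        callsToWrite.add("call-2");
        callsToWrite.remove();          // bug: removes the head ("call-1"), not toCancel
        return callsToWrite;
    }

    public static Queue<String> fixedRemove(String toCancel) {
        Queue<String> callsToWrite = new ArrayDeque<>();
        callsToWrite.add("call-1");
        callsToWrite.add("call-2");
        callsToWrite.remove(toCancel);  // fix: removes exactly the cancelled call
        return callsToWrite;
    }

    public static void main(String[] args) {
        System.out.println(buggyRemove("call-2")); // [call-2] -- wrong call was dropped
        System.out.println(fixedRemove("call-2")); // [call-1]
    }
}
```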
http://git-wip-us.apache.org/repos/asf/hbase/blob/7678855f/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
index becb2eb..2c77541 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.Waiter;
+
 import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -515,7 +516,56 @@ public class TestReplicaWithCluster {
 
   Assert.assertTrue(r.isStale());
 } finally {
+  HTU.getAdmin().disableTable(hdt.getTableName());
+  HTU.deleteTable(hdt.getTableName());
+}
+  }
+
+  @Test
+  public void testReplicaGetWithRpcClientImpl() throws IOException {
+
HTU.getConfiguration().setBoolean("hbase.ipc.client.specificThreadForWriting", true);
+HTU.getConfiguration().set("hbase.rpc.client.impl", 
"org.apache.hadoop.hbase.ipc.RpcClientImpl");
+// Create table then get the single region for our new table.
+HTableDescriptor hdt = HTU.createTableDescriptor("testReplicaGetWithRpcClientImpl");
+hdt.setRegionReplication(NB_SERVERS);
+hdt.addCoprocessor(SlowMeCopro.class.getName());
+
+try {
+  Table table = HTU.createTable(hdt, new byte[][] { f }, null);
+
+  Put p = new Put(row);
+  p.addColumn(f, row, row);
+  table.put(p);
 
+  // Flush so it can be picked by the replica refresher thread
+  HTU.flush(table.getName());
+
+  // Sleep for some time until data is picked up by replicas
+  try {
+Thread.sleep(2 * REFRESH_PERIOD);
+  } catch (InterruptedException e1) {
+LOG.error(e1);
+  }
+
+  try {
+// Create the new connection so new config can kick in
+Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration());
+Table t = connection.getTable(hdt.getTableName());
+
+// But if we ask for stale we will get it
+SlowMeCopro.cdl.set(new CountDownLatch(1));
+Get g = new Get(row);
+g.setConsistency(Consistency.TIMELINE);
+Result r = t.get(g);
+Assert.assertTrue(r.isStale());
+SlowMeCopro.cdl.get().countDown();
+  } finally {
+SlowMeCopro.cdl.get().countDown();
+SlowMeCopro.sleepTime.set(0);
+  }
+} 

[06/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java
index 878e46a..9bf452a 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/ExtensionRegistryLite.java
@@ -105,9 +105,9 @@ public class ExtensionRegistryLite {
 
   /**
* Construct a new, empty instance.
-   * 
-   * 
-   * This may be an {@code ExtensionRegistry} if the full (non-Lite) proto 
libraries are available.
+   *
+   * This may be an {@code ExtensionRegistry} if the full (non-Lite) proto 
libraries are
+   * available.
*/
   public static ExtensionRegistryLite newInstance() {
 return ExtensionRegistryFactory.create();
@@ -121,6 +121,7 @@ public class ExtensionRegistryLite {
 return ExtensionRegistryFactory.createEmpty();
   }
 
+
   /** Returns an unmodifiable view of the registry. */
   public ExtensionRegistryLite getUnmodifiable() {
 return new ExtensionRegistryLite(this);

http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Field.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Field.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Field.java
index 15951b3..d33fd75 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Field.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/Field.java
@@ -709,7 +709,7 @@ public  final class Field extends
* The field type.
* 
*
-   * optional .google.protobuf.Field.Kind kind = 1;
+   * .google.protobuf.Field.Kind kind = 1;
*/
   public int getKindValue() {
 return kind_;
@@ -719,7 +719,7 @@ public  final class Field extends
* The field type.
* 
*
-   * optional .google.protobuf.Field.Kind kind = 1;
+   * .google.protobuf.Field.Kind kind = 1;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Kind 
getKind() {
 org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Kind result = 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Kind.valueOf(kind_);
@@ -733,7 +733,7 @@ public  final class Field extends
* The field cardinality.
* 
*
-   * optional .google.protobuf.Field.Cardinality cardinality = 2;
+   * .google.protobuf.Field.Cardinality cardinality = 2;
*/
   public int getCardinalityValue() {
 return cardinality_;
@@ -743,7 +743,7 @@ public  final class Field extends
* The field cardinality.
* 
*
-   * optional .google.protobuf.Field.Cardinality cardinality = 2;
+   * .google.protobuf.Field.Cardinality cardinality = 2;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Cardinality 
getCardinality() {
 org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Cardinality 
result = 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Field.Cardinality.valueOf(cardinality_);
@@ -757,7 +757,7 @@ public  final class Field extends
* The field number.
* 
*
-   * optional int32 number = 3;
+   * int32 number = 3;
*/
   public int getNumber() {
 return number_;
@@ -770,7 +770,7 @@ public  final class Field extends
* The field name.
* 
*
-   * optional string name = 4;
+   * string name = 4;
*/
   public java.lang.String getName() {
 java.lang.Object ref = name_;
@@ -789,7 +789,7 @@ public  final class Field extends
* The field name.
* 
*
-   * optional string name = 4;
+   * string name = 4;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString
   getNameBytes() {
@@ -813,7 +813,7 @@ public  final class Field extends
* types. Example: 
`"type.googleapis.org.apache.hadoop.hbase.shaded.com.google.protobuf.Timestamp"`.
* 
*
-   * optional string type_url = 6;
+   * string type_url = 6;
*/
   public java.lang.String getTypeUrl() {
 java.lang.Object ref = typeUrl_;
@@ -833,7 +833,7 @@ public  final class Field extends
* types. Example: 
`"type.googleapis.org.apache.hadoop.hbase.shaded.com.google.protobuf.Timestamp"`.
* 
*
-   * optional string type_url = 6;
+   * string type_url = 6;
*/
   public org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString
   getTypeUrlBytes() {
@@ -857,7 +857,7 @@ public  

[32/50] [abbrv] hbase git commit: HBASE-17905 [hbase-spark] bulkload does not work when table not exist

2017-04-17 Thread syuanjiang
HBASE-17905 [hbase-spark] bulkload does not work when table not exist

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d7ddc791
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d7ddc791
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d7ddc791

Branch: refs/heads/hbase-12439
Commit: d7ddc79198679d8c642e7d8ad5141ba518f8d9f3
Parents: 02da5a6
Author: Yi Liang 
Authored: Tue Apr 11 17:04:40 2017 -0700
Committer: tedyu 
Committed: Tue Apr 11 17:18:49 2017 -0700

--
 .../hadoop/hbase/spark/BulkLoadPartitioner.scala  | 13 -
 .../apache/hadoop/hbase/spark/HBaseContext.scala  | 18 +-
 2 files changed, 25 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d7ddc791/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
--
diff --git 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
index ab4fc41..022c933 100644
--- 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
+++ 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
@@ -33,8 +33,8 @@ import org.apache.spark.Partitioner
 @InterfaceAudience.Public
 class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
   extends Partitioner {
-
-  override def numPartitions: Int = startKeys.length
+  // when table not exist, startKeys = Byte[0][]
+  override def numPartitions: Int = if (startKeys.length == 0) 1 else startKeys.length
 
   override def getPartition(key: Any): Int = {
 
@@ -53,8 +53,11 @@ class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
 case _ =>
   key.asInstanceOf[Array[Byte]]
   }
-val partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
-if (partition < 0) partition * -1 + -2
-else partition
+var partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
+if (partition < 0)
+  partition = partition * -1 + -2
+if (partition < 0)
+  partition = 0
+partition
   }
 }

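The partition arithmetic above leans on the contract of `Arrays.binarySearch`: on a miss it returns `-(insertionPoint) - 1`, so `partition * -1 + -2` recovers `insertionPoint - 1`, i.e. the region whose start key precedes the row key. The patch adds a clamp to 0 so keys below the first start key (or an empty start-key array from a missing table) no longer yield a negative partition. A Java sketch with `int` keys standing in for `byte[]` row keys (the arithmetic is identical):

```java
// How BulkLoadPartitioner maps a row key to a region index, including the
// clamp added by HBASE-17905. Int keys stand in for byte[] row keys.
import java.util.Arrays;

public class PartitionDemo {
    public static int getPartition(int[] startKeys, int rowKey) {
        int partition = Arrays.binarySearch(startKeys, rowKey);
        if (partition < 0)
            partition = partition * -1 + -2;  // miss: region containing the key
        if (partition < 0)
            partition = 0;                    // clamp: key before first start key
        return partition;
    }

    public static void main(String[] args) {
        int[] startKeys = {10, 20, 30};
        System.out.println(getPartition(startKeys, 20));   // exact hit -> 1
        System.out.println(getPartition(startKeys, 25));   // between 20 and 30 -> 1
        System.out.println(getPartition(startKeys, 5));    // before first key -> 0
        System.out.println(getPartition(new int[] {}, 7)); // empty (table missing) -> 0
    }
}
```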
http://git-wip-us.apache.org/repos/asf/hbase/blob/d7ddc791/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
--
diff --git 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
index e2891db..1948bd3 100644
--- 
a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
+++ 
b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
@@ -48,7 +48,7 @@ import org.apache.spark.streaming.dstream.DStream
 import java.io._
 import org.apache.hadoop.security.UserGroupInformation
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod
-import org.apache.hadoop.fs.{Path, FileSystem}
+import org.apache.hadoop.fs.{Path, FileAlreadyExistsException, FileSystem}
 import scala.collection.mutable
 
 /**
@@ -620,9 +620,17 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
+val stagingPath = new Path(stagingDir)
+val fs = stagingPath.getFileSystem(config)
+if (fs.exists(stagingPath)) {
+  throw new FileAlreadyExistsException("Path " + stagingDir + " already exists")
+}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
+if (startKeys.length == 0) {
+  logInfo("Table " + tableName.toString + " was not found")
+}
 val defaultCompressionStr = config.get("hfile.compression",
   Compression.Algorithm.NONE.getName)
 val hfileCompression = HFileWriterImpl
@@ -743,9 +751,17 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
+val stagingPath = new Path(stagingDir)
+val fs = stagingPath.getFileSystem(config)
+if (fs.exists(stagingPath)) {
+  throw new FileAlreadyExistsException("Path " + stagingDir + " already exists")
+}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
+if (startKeys.length == 0) {
+  logInfo("Table " + tableName.toString + " was not found")
+}
 val 

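The staging-directory guard added above is a fail-fast check: refuse to reuse an existing staging path rather than silently writing into it. A minimal Java sketch of the same idea, with `java.nio.file` standing in for Hadoop's `FileSystem`/`Path` API:

```java
// Fail-fast staging-dir check, as in the HBaseContext bulk-load patch:
// reject an existing path up front instead of discovering a conflict mid-load.
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StagingDirCheckDemo {
    public static Path prepareStagingDir(Path stagingDir) throws IOException {
        if (Files.exists(stagingDir)) {
            throw new FileAlreadyExistsException("Path " + stagingDir + " already exists");
        }
        return Files.createDirectory(stagingDir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo").resolve("staging");
        prepareStagingDir(dir);          // first call succeeds
        try {
            prepareStagingDir(dir);      // second call fails fast
        } catch (FileAlreadyExistsException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```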
[24/50] [abbrv] hbase git commit: HBASE-17836 CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-04-17 Thread syuanjiang
HBASE-17836 CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1a701ce4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1a701ce4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1a701ce4

Branch: refs/heads/hbase-12439
Commit: 1a701ce44484f45a8a07ea9826b84f0df6f1518e
Parents: 48b2502
Author: Chia-Ping Tsai 
Authored: Sat Apr 1 13:50:01 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Fri Apr 7 09:30:15 2017 +0800

--
 .../src/main/java/org/apache/hadoop/hbase/CellUtil.java  | 8 +++-
 .../src/main/java/org/apache/hadoop/hbase/SplitLogTask.java  | 2 +-
 2 files changed, 4 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1a701ce4/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
index 6585173..e1bc969 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
@@ -1385,12 +1385,10 @@ public final class CellUtil {
* @return Estimate of the cell size in bytes.
*/
   public static int estimatedSerializedSizeOf(final Cell cell) {
-// If a KeyValue, we can give a good estimate of size.
-if (cell instanceof KeyValue) {
-  return ((KeyValue)cell).getLength() + Bytes.SIZEOF_INT;
+if (cell instanceof ExtendedCell) {
+  return ((ExtendedCell) cell).getSerializedSize(true) + Bytes.SIZEOF_INT;
 }
-// TODO: Should we add to Cell a sizeOf?  Would it help? Does it make sense if Cell is
-// prefix encoded or compressed?
+
 return getSumOfCellElementLengths(cell) +
  // Use the KeyValue's infrastructure size presuming that another implementation would have
   // same basic cost.

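The shape of the change above: replace a `KeyValue`-only fast path with an interface-based one, so every cell type that can report its own serialized size (`ExtendedCell` in HBase) skips the slow field-by-field summation. A self-contained sketch with stand-in types (not the real HBase classes):

```java
// Interface-based fast path for size estimation, mirroring the move from
// "instanceof KeyValue" to "instanceof ExtendedCell" in CellUtil.
// SizedCell stands in for ExtendedCell; the slow path is a placeholder.
public class SizeEstimateDemo {
    public interface Cell { /* field accessors elided */ }

    /** Cells that know their exact serialized length, like HBase's ExtendedCell. */
    public interface SizedCell extends Cell {
        int getSerializedSize();
    }

    static final int SIZEOF_INT = 4; // length prefix, as Bytes.SIZEOF_INT in HBase

    public static int estimatedSerializedSizeOf(Cell cell) {
        if (cell instanceof SizedCell) {
            // fast path: one virtual call instead of summing every field length
            return ((SizedCell) cell).getSerializedSize() + SIZEOF_INT;
        }
        return slowSumOfElementLengths(cell) + SIZEOF_INT;
    }

    static int slowSumOfElementLengths(Cell cell) {
        return 100; // placeholder for the per-field summation fallback
    }

    public static void main(String[] args) {
        SizedCell fast = () -> 42;
        System.out.println(estimatedSerializedSizeOf(fast)); // 46
    }
}
```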
http://git-wip-us.apache.org/repos/asf/hbase/blob/1a701ce4/hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java
index 03d5108..3ecaa86 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java
@@ -86,7 +86,7 @@ public class SplitLogTask {
   public ServerName getServerName() {
 return this.originServer;
   }
-  
+
   public ZooKeeperProtos.SplitLogTask.RecoveryMode getMode() {
 return this.mode;
   }



[28/50] [abbrv] hbase git commit: HBASE-16477 Remove Writable interface and related code from WALEdit/WALKey

2017-04-17 Thread syuanjiang
HBASE-16477 Remove Writable interface and related code from WALEdit/WALKey


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/82d554e3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/82d554e3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/82d554e3

Branch: refs/heads/hbase-12439
Commit: 82d554e3783372cc6b05489452c815b57c06f6cd
Parents: df96d32
Author: Enis Soztutar 
Authored: Mon Apr 10 02:31:42 2017 -0700
Committer: Enis Soztutar 
Committed: Mon Apr 10 02:31:42 2017 -0700

--
 .../regionserver/wal/KeyValueCompression.java   | 133 --
 .../hadoop/hbase/regionserver/wal/WALEdit.java  | 136 +--
 .../java/org/apache/hadoop/hbase/wal/WAL.java   |   1 -
 .../org/apache/hadoop/hbase/wal/WALKey.java |  95 ++---
 .../wal/TestKeyValueCompression.java| 116 
 5 files changed, 14 insertions(+), 467 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/82d554e3/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
deleted file mode 100644
index a33ff9e..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/KeyValueCompression.java
+++ /dev/null
@@ -1,133 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.regionserver.wal;
-
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.io.WritableUtils;
-
-/**
- * DO NOT USE. This class is deprecated and should only be used in pre-PB WAL.
- * 
- * Compression class for {@link KeyValue}s written to the WAL. This is not
- * synchronized, so synchronization should be handled outside.
- * 
- * Class only compresses and uncompresses row keys, family names, and the
- * qualifier. More may be added depending on use patterns.
- */
-@Deprecated
-@InterfaceAudience.Private
-class KeyValueCompression {
-  /**
-   * Uncompresses a KeyValue from a DataInput and returns it.
-   * 
-   * @param in the DataInput
-   * @param readContext the compressionContext to use.
-   * @return an uncompressed KeyValue
-   * @throws IOException
-   */
-
-  public static KeyValue readKV(DataInput in, CompressionContext readContext)
-  throws IOException {
-int keylength = WritableUtils.readVInt(in);
-int vlength = WritableUtils.readVInt(in);
-int tagsLength = WritableUtils.readVInt(in);
-int length = (int) KeyValue.getKeyValueDataStructureSize(keylength, vlength, tagsLength);
-
-byte[] backingArray = new byte[length];
-int pos = 0;
-pos = Bytes.putInt(backingArray, pos, keylength);
-pos = Bytes.putInt(backingArray, pos, vlength);
-
-// the row
-int elemLen = Compressor.uncompressIntoArray(backingArray,
-pos + Bytes.SIZEOF_SHORT, in, readContext.rowDict);
-checkLength(elemLen, Short.MAX_VALUE);
-pos = Bytes.putShort(backingArray, pos, (short)elemLen);
-pos += elemLen;
-
-// family
-elemLen = Compressor.uncompressIntoArray(backingArray,
-pos + Bytes.SIZEOF_BYTE, in, readContext.familyDict);
-checkLength(elemLen, Byte.MAX_VALUE);
-pos = Bytes.putByte(backingArray, pos, (byte)elemLen);
-pos += elemLen;
-
-// qualifier
-elemLen = Compressor.uncompressIntoArray(backingArray, pos, in,
-readContext.qualifierDict);
-pos += elemLen;
-
-// the rest
-in.readFully(backingArray, pos, length - pos);
-
-return new KeyValue(backingArray, 0, length);
-  }
-
-  private static void checkLength(int len, int max) throws 
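The dictionary scheme the removed reader relied on is easy to illustrate: repeated elements (rows, families, qualifiers) are replaced by a small integer index into a shared dictionary. Below is a toy sketch; the class and method names are invented for illustration, and the real `CompressionContext` dictionaries operate on byte arrays with a bounded size, not on strings.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

/** Toy dictionary codec: maps repeated values to small integer ids
 *  and back. Illustrative only, not the real CompressionContext API. */
public class DictSketch {
    private final HashMap<String, Integer> index = new HashMap<>();
    private final List<String> entries = new ArrayList<>();

    /** Returns the dictionary id for a value, adding it if unseen. */
    public int compress(String value) {
        Integer id = index.get(value);
        if (id != null) {
            return id; // already in the dictionary: emit the short id
        }
        entries.add(value);
        index.put(value, entries.size() - 1);
        return entries.size() - 1;
    }

    /** Reverses compress(): looks an id back up to the original value. */
    public String uncompress(int id) {
        return entries.get(id);
    }
}
```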

[09/50] [abbrv] hbase git commit: HBASE-14141 HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backed up tables (Vladimir Rodionov)

2017-04-17 Thread syuanjiang
HBASE-14141 HBase Backup/Restore Phase 3: Filter WALs on backup to include only 
edits from backed up tables (Vladimir Rodionov)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/910b6808
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/910b6808
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/910b6808

Branch: refs/heads/hbase-12439
Commit: 910b68082c8f200f0ba6395a76b7ee1c8917e401
Parents: e916b79
Author: tedyu 
Authored: Tue Apr 4 18:20:11 2017 -0700
Committer: tedyu 
Committed: Tue Apr 4 18:20:11 2017 -0700

--
 .../hadoop/hbase/backup/impl/BackupManager.java |   2 +-
 .../backup/impl/IncrementalBackupManager.java   |  89 ++--
 .../impl/IncrementalTableBackupClient.java  | 211 +++
 .../hbase/backup/impl/RestoreTablesClient.java  |   5 +-
 .../hbase/backup/impl/TableBackupClient.java|   4 -
 .../backup/mapreduce/HFileSplitterJob.java  |   2 +-
 .../backup/mapreduce/MapReduceRestoreJob.java   |  14 +-
 .../hadoop/hbase/backup/util/RestoreTool.java   | 134 ++--
 .../hadoop/hbase/mapreduce/WALInputFormat.java  | 119 +++
 .../hadoop/hbase/mapreduce/WALPlayer.java   |  10 +-
 .../hadoop/hbase/wal/AbstractFSWALProvider.java | 101 +
 11 files changed, 410 insertions(+), 281 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/910b6808/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java
index c09ce48..f09310f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java
@@ -466,7 +466,7 @@ public class BackupManager implements Closeable {
 
   /**
   * Saves list of WAL files after incremental backup operation. These files will be stored until
-   * TTL expiration and are used by Backup Log Cleaner plugin to determine which WAL files can be
+   * TTL expiration and are used by Backup Log Cleaner plug-in to determine which WAL files can be
* safely purged.
*/
   public void recordWALFiles(List files) throws IOException {

http://git-wip-us.apache.org/repos/asf/hbase/blob/910b6808/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.java
index 0f1453e..6330899 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalBackupManager.java
@@ -21,8 +21,10 @@ package org.apache.hadoop.hbase.backup.impl;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -33,7 +35,6 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.backup.BackupInfo;
 import org.apache.hadoop.hbase.backup.impl.BackupSystemTable.WALItem;
 import org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager;
 import org.apache.hadoop.hbase.backup.util.BackupUtils;
@@ -59,12 +60,10 @@ public class IncrementalBackupManager extends BackupManager {
   /**
* Obtain the list of logs that need to be copied out for this incremental backup. The list is set
* in BackupInfo.
-   * @param conn the Connection
-   * @param backupInfo backup info
-   * @return The new HashMap of RS log timestamps after the log roll for this incremental backup.
+   * @return The new HashMap of RS log time stamps after the log roll for this incremental backup.
* @throws IOException exception
*/
-  public HashMap getIncrBackupLogFileList(Connection conn, BackupInfo backupInfo)
+  public HashMap getIncrBackupLogFileMap()
   throws IOException {
 List logList;
 HashMap newTimestamps;
@@ -105,40 +104,84 @@ public class IncrementalBackupManager extends BackupManager {
 List logFromSystemTable =
 getLogFilesFromBackupSystem(previousTimestampMins, newTimestamps, 
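The core idea behind the incremental log list above — copy only the WAL files written after the previous backup's per-region-server timestamp — can be sketched as follows. The map-based model and all names here are assumptions for illustration, not the real `BackupManager` API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative sketch: select WAL files an incremental backup still
 *  needs, i.e. those newer than the last backed-up timestamp for the
 *  region server that wrote them. Not the real HBase backup code. */
public class WalFilterSketch {
    public static List<String> filterNewWals(Map<String, Long> walTimestamps,
                                             Map<String, Long> previousBackupTs,
                                             Map<String, String> walServer) {
        List<String> toCopy = new ArrayList<>();
        for (Map.Entry<String, Long> e : walTimestamps.entrySet()) {
            String server = walServer.get(e.getKey());
            long lastBackedUp = previousBackupTs.getOrDefault(server, 0L);
            if (e.getValue() > lastBackedUp) {
                toCopy.add(e.getKey()); // written since the last backup
            }
        }
        return toCopy;
    }
}
```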

[37/50] [abbrv] hbase git commit: HBASE-17906 When a huge amount of data writing to hbase through thrift2, there will be a deadlock error. (Albert Lee)

2017-04-17 Thread syuanjiang
HBASE-17906 When a huge amount of data writing to hbase through thrift2, there 
will be a deadlock error. (Albert Lee)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9dd5cda0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9dd5cda0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9dd5cda0

Branch: refs/heads/hbase-12439
Commit: 9dd5cda01747ffb91ac084792fa4a8670859e810
Parents: da5fb27
Author: Michael Stack 
Authored: Thu Apr 13 21:59:11 2017 -0700
Committer: Michael Stack 
Committed: Thu Apr 13 21:59:11 2017 -0700

--
 .../main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java   | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9dd5cda0/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
--
diff --git a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
index 560ae64..8f56b10 100644
--- a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
@@ -432,9 +432,6 @@ public class ThriftServer extends Configured implements Tool {
   throw new RuntimeException("Could not parse the value provided for the port option", e);
 }
 }
 
-// Thrift's implementation uses '0' as a placeholder for 'use the default.'
-int backlog = conf.getInt(BACKLOG_CONF_KEY, 0);
-
 // Local hostname and user name,
 // used only if QOP is configured.
 String host = null;



[33/50] [abbrv] hbase git commit: HBASE-17896 The FIXED_OVERHEAD of Segment is incorrect

2017-04-17 Thread syuanjiang
HBASE-17896 The FIXED_OVERHEAD of Segment is incorrect


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3aadc675
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3aadc675
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3aadc675

Branch: refs/heads/hbase-12439
Commit: 3aadc675b0f02c3c13be625b40d72fbf6a844964
Parents: d7ddc79
Author: CHIA-PING TSAI 
Authored: Tue Apr 11 16:31:20 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Wed Apr 12 11:26:43 2017 +0800

--
 .../java/org/apache/hadoop/hbase/util/ClassSize.java |  7 +++
 .../apache/hadoop/hbase/regionserver/Segment.java|  7 ---
 .../org/apache/hadoop/hbase/io/TestHeapSize.java | 15 +--
 3 files changed, 20 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3aadc675/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
index 465bd9c..e1690c0 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
@@ -20,6 +20,7 @@
 
 package org.apache.hadoop.hbase.util;
 
+import com.google.common.annotations.VisibleForTesting;
 import java.lang.reflect.Field;
 import java.lang.reflect.Modifier;
 import java.util.concurrent.ConcurrentHashMap;
@@ -235,6 +236,12 @@ public class ClassSize {
   }
 
   private static final MemoryLayout memoryLayout = getMemoryLayout();
+  private static final boolean USE_UNSAFE_LAYOUT = (memoryLayout instanceof UnsafeLayout);
+
+  @VisibleForTesting
+  public static boolean useUnsafeLayout() {
+return USE_UNSAFE_LAYOUT;
+  }
 
   /**
* Method for reading the arc settings and setting overheads according

http://git-wip-us.apache.org/repos/asf/hbase/blob/3aadc675/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
index 6f431c9..8f43fa8 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
@@ -48,12 +48,13 @@ import com.google.common.annotations.VisibleForTesting;
 @InterfaceAudience.Private
 public abstract class Segment {
 
-  final static long FIXED_OVERHEAD = ClassSize.align(ClassSize.OBJECT
-  + 5 * ClassSize.REFERENCE // cellSet, comparator, memStoreLAB, size, timeRangeTracker
+  public final static long FIXED_OVERHEAD = ClassSize.align(ClassSize.OBJECT
+  + 6 * ClassSize.REFERENCE // cellSet, comparator, memStoreLAB, dataSize,
+// heapSize, and timeRangeTracker
   + Bytes.SIZEOF_LONG // minSequenceId
   + Bytes.SIZEOF_BOOLEAN); // tagsPresent
   public final static long DEEP_OVERHEAD = FIXED_OVERHEAD + ClassSize.ATOMIC_REFERENCE
-  + ClassSize.CELL_SET + ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER;
+  + ClassSize.CELL_SET + 2 * ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER;
 
   private AtomicReference cellSet= new AtomicReference<>();
   private final CellComparator comparator;

http://git-wip-us.apache.org/repos/asf/hbase/blob/3aadc675/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
index 6b943a7..bf74a9e 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
@@ -344,7 +344,7 @@ public class TestHeapSize  {
 cl = Segment.class;
 actual = Segment.DEEP_OVERHEAD;
 expected = ClassSize.estimateBase(cl, false);
-expected += ClassSize.estimateBase(AtomicLong.class, false);
+expected += 2 * ClassSize.estimateBase(AtomicLong.class, false);
 expected += ClassSize.estimateBase(AtomicReference.class, false);
 expected += ClassSize.estimateBase(CellSet.class, false);
 expected += ClassSize.estimateBase(TimeRangeTracker.class, false);
@@ -361,7 +361,7 @@ public class TestHeapSize  {
 cl = MutableSegment.class;
 actual = MutableSegment.DEEP_OVERHEAD;
 expected = ClassSize.estimateBase(cl, false);
-expected += 

[44/50] [abbrv] hbase git commit: HBASE-15535 Correct link to Trafodion

2017-04-17 Thread syuanjiang
HBASE-15535 Correct link to Trafodion

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/363f6275
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/363f6275
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/363f6275

Branch: refs/heads/hbase-12439
Commit: 363f62751c760cc8056a2b1be40a410281e634f7
Parents: 918aa46
Author: Gábor Lipták 
Authored: Sat Apr 15 11:43:38 2017 -0400
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:26:28 2017 +0800

--
 src/main/asciidoc/_chapters/sql.adoc  | 2 +-
 src/main/site/xdoc/supportingprojects.xml | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/363f6275/src/main/asciidoc/_chapters/sql.adoc
--
diff --git a/src/main/asciidoc/_chapters/sql.adoc b/src/main/asciidoc/_chapters/sql.adoc
index b47104c..b1ad063 100644
--- a/src/main/asciidoc/_chapters/sql.adoc
+++ b/src/main/asciidoc/_chapters/sql.adoc
@@ -37,6 +37,6 @@ link:http://phoenix.apache.org[Apache Phoenix]
 
 === Trafodion
 
-link:https://wiki.trafodion.org/[Trafodion: Transactional SQL-on-HBase]
+link:http://trafodion.incubator.apache.org/[Trafodion: Transactional SQL-on-HBase]
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/363f6275/src/main/site/xdoc/supportingprojects.xml
--
diff --git a/src/main/site/xdoc/supportingprojects.xml b/src/main/site/xdoc/supportingprojects.xml
index f349c7f..f949a57 100644
--- a/src/main/site/xdoc/supportingprojects.xml
+++ b/src/main/site/xdoc/supportingprojects.xml
@@ -46,9 +46,9 @@ under the License.
 for HBase.
https://github.com/juwi/HBase-TAggregator;>HBase TAggregator
An HBase coprocessor for timeseries-based aggregations.
-   http://www.trafodion.org;>Trafodion
-   Trafodion is an HP-sponsored Apache-licensed open source SQL on HBase
-DBMS with full-ACID distributed transaction support.
+   http://trafodion.incubator.apache.org/;>Apache Trafodion
+   Apache Trafodion is a webscale SQL-on-Hadoop solution enabling
+transactional or operational workloads on Hadoop.
http://phoenix.apache.org/;>Apache Phoenix
Apache Phoenix is a relational database layer over HBase delivered as a
 client-embedded JDBC driver targeting low latency queries over HBase 
data.



[30/50] [abbrv] hbase git commit: HBASE-17905: [hbase-spark] bulkload does not work when table not exist

2017-04-17 Thread syuanjiang
HBASE-17905: [hbase-spark] bulkload does not work when table not exist

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/22f602ca
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/22f602ca
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/22f602ca

Branch: refs/heads/hbase-12439
Commit: 22f602cab5e9739a650fc962f4b08a0ccc51a972
Parents: 0b5bd78
Author: Yi Liang 
Authored: Tue Apr 11 15:30:13 2017 -0700
Committer: tedyu 
Committed: Tue Apr 11 17:01:07 2017 -0700

--
 .../hadoop/hbase/spark/BulkLoadPartitioner.scala  | 13 -
 .../apache/hadoop/hbase/spark/HBaseContext.scala  | 18 +-
 2 files changed, 25 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/22f602ca/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
--
diff --git a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
index ab4fc41..022c933 100644
--- a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
+++ b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/BulkLoadPartitioner.scala
@@ -33,8 +33,8 @@ import org.apache.spark.Partitioner
 @InterfaceAudience.Public
 class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
   extends Partitioner {
-
-  override def numPartitions: Int = startKeys.length
+  // when table not exist, startKeys = Byte[0][]
+  override def numPartitions: Int = if (startKeys.length == 0) 1 else startKeys.length
 
   override def getPartition(key: Any): Int = {
 
@@ -53,8 +53,11 @@ class BulkLoadPartitioner(startKeys:Array[Array[Byte]])
 case _ =>
   key.asInstanceOf[Array[Byte]]
   }
-val partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
-if (partition < 0) partition * -1 + -2
-else partition
+var partition = util.Arrays.binarySearch(startKeys, rowKey, comparator)
+if (partition < 0)
+  partition = partition * -1 + -2
+if (partition < 0)
+  partition = 0
+partition
   }
 }
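The patched getPartition logic above can be demonstrated with a self-contained Java analogue, using int keys in place of byte[] row keys purely for brevity. The binarySearch convention (a negative result encodes the insertion point as `-(insertionPoint) - 1`) and the clamping to partition 0 are the same as in the Scala patch.

```java
import java.util.Arrays;

public class PartitionSketch {
    /** Mirrors the patch: map a row key to a region index, turning a
     *  negative binarySearch result into "insertion point minus one"
     *  and clamping to 0 (empty table, or key before the first start key). */
    static int getPartition(int[] startKeys, int rowKey) {
        if (startKeys.length == 0) {
            return 0; // table not found: a single partition
        }
        int partition = Arrays.binarySearch(startKeys, rowKey);
        if (partition < 0) {
            partition = partition * -1 + -2; // == insertionPoint - 1
        }
        if (partition < 0) {
            partition = 0; // key sorts before the first region
        }
        return partition;
    }
}
```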

http://git-wip-us.apache.org/repos/asf/hbase/blob/22f602ca/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
--
diff --git a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
index e2891db..8c4e0f4 100644
--- a/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
+++ b/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
@@ -48,7 +48,7 @@ import org.apache.spark.streaming.dstream.DStream
 import java.io._
 import org.apache.hadoop.security.UserGroupInformation
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod
-import org.apache.hadoop.fs.{Path, FileSystem}
+import org.apache.hadoop.fs.{Path, FileAlreadyExistsException, FileSystem}
 import scala.collection.mutable
 
 /**
@@ -620,9 +620,17 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
+val stagingPath = new Path(stagingDir)
+val fs = stagingPath.getFileSystem(config)
+if (fs.exists(stagingPath)) {
+  throw new FileAlreadyExistsException("Path " + stagingDir + " already exist")
+}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
+if (startKeys.length == 0) {
+  logInfo("Table " + tableName.toString + " was not found")
+}
 val defaultCompressionStr = config.get("hfile.compression",
   Compression.Algorithm.NONE.getName)
 val hfileCompression = HFileWriterImpl
@@ -743,9 +751,17 @@ class HBaseContext(@transient sc: SparkContext,
   compactionExclude: Boolean = false,
   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):
   Unit = {
+val stagingPath = new Path(stagingDir)
+val fs = stagingPath.getFileSystem(config)
+if (fs.exists(stagingPath)) {
+  throw new FileAlreadyExistsException("Path " + stagingDir + " already exist")
+}
 val conn = HBaseConnectionCache.getConnection(config)
 val regionLocator = conn.getRegionLocator(tableName)
 val startKeys = regionLocator.getStartKeys
+if (startKeys.length == 0) {
+  logInfo("Table " + tableName.toString + " was not found")
+}
 val 
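Both hunks above add the same fail-fast guard: refuse to reuse a staging directory that already exists before starting the bulk load. A minimal Java analogue, with `java.io.File` standing in for the Hadoop FileSystem/Path pair used in the real code:

```java
import java.io.File;
import java.nio.file.FileAlreadyExistsException;

public class StagingCheckSketch {
    /** Pre-flight check: a bulk-load staging directory must not already
     *  exist, otherwise stale files could be mixed into the load. */
    public static void ensureFreshStagingDir(File stagingDir)
            throws FileAlreadyExistsException {
        if (stagingDir.exists()) {
            throw new FileAlreadyExistsException(
                "Path " + stagingDir + " already exist");
        }
    }
}
```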

[36/50] [abbrv] hbase git commit: HBASE-16775 Fix flaky TestExportSnapshot#testExportRetry.

2017-04-17 Thread syuanjiang
HBASE-16775 Fix flaky TestExportSnapshot#testExportRetry.

Reason for flakiness: the current test is probability-based fault injection and triggers failure 3% of the time. Earlier, when the test used LocalJobRunner, which didn't honor "mapreduce.map.maxattempts", it would pass 97% of the time (when no fault was injected) and fail 3% of the time (when a fault was injected). Point being, even when the test was completely wrong, we couldn't catch it because it was probability-based.

This change will inject fault in a deterministic manner.
On design side, it encapsulates all testing hooks in ExportSnapshot.java into 
single inner class.

Change-Id: Icba866e1d56a5281748df89f4dd374bc45bad249
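The deterministic injection pattern described above — fail exactly the first N attempts instead of failing with probability p, so retry handling is exercised on every run — can be sketched as below. The class and method names are illustrative, not the actual Testing hooks in ExportSnapshot.

```java
import java.io.IOException;

/** Sketch of deterministic fault injection: throws on exactly the first
 *  N calls, then always succeeds, so a retrying caller is tested every run. */
public class DeterministicFaultInjector {
    private final int failuresToInject;
    private int injectedSoFar = 0;

    public DeterministicFaultInjector(int failuresToInject) {
        this.failuresToInject = failuresToInject;
    }

    /** Call at the fault-injection point; deterministic, unlike Random-based injection. */
    public void maybeFail() throws IOException {
        if (injectedSoFar < failuresToInject) {
            injectedSoFar++;
            throw new IOException("injected failure " + injectedSoFar);
        }
    }
}
```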


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/da5fb27e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/da5fb27e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/da5fb27e

Branch: refs/heads/hbase-12439
Commit: da5fb27eabed4a4b4d251be973ee945fb52895bf
Parents: cf3215d
Author: Apekshit Sharma 
Authored: Thu Oct 6 14:20:58 2016 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 12 11:11:31 2017 -0700

--
 .../hadoop/hbase/snapshot/ExportSnapshot.java   | 58 +++---
 .../hbase/snapshot/TestExportSnapshot.java  | 84 +++-
 .../snapshot/TestExportSnapshotNoCluster.java   |  2 +-
 .../hbase/snapshot/TestMobExportSnapshot.java   |  7 +-
 .../snapshot/TestMobSecureExportSnapshot.java   |  7 +-
 .../snapshot/TestSecureExportSnapshot.java  |  7 +-
 6 files changed, 93 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/da5fb27e/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
index e2086e9..e3ad951 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -29,7 +29,6 @@ import java.util.Collections;
 import java.util.Comparator;
 import java.util.LinkedList;
 import java.util.List;
-import java.util.Random;
 
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.Option;
@@ -110,9 +109,12 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
   private static final String CONF_BANDWIDTH_MB = "snapshot.export.map.bandwidth.mb";
   protected static final String CONF_SKIP_TMP = "snapshot.export.skip.tmp";
 
-  static final String CONF_TEST_FAILURE = "test.snapshot.export.failure";
-  static final String CONF_TEST_RETRY = "test.snapshot.export.failure.retry";
-
+  static class Testing {
+static final String CONF_TEST_FAILURE = "test.snapshot.export.failure";
+static final String CONF_TEST_FAILURE_COUNT = "test.snapshot.export.failure.count";
+int failuresCountToInject = 0;
+int injectedFailureCount = 0;
+  }
 
   // Command line options and defaults.
   static final class Options {
@@ -149,12 +151,10 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
 
   private static class ExportMapper extends Mapper {
+private static final Log LOG = LogFactory.getLog(ExportMapper.class);
 final static int REPORT_SIZE = 1 * 1024 * 1024;
 final static int BUFFER_SIZE = 64 * 1024;
 
-private boolean testFailures;
-private Random random;
-
 private boolean verifyChecksum;
 private String filesGroup;
 private String filesUser;
@@ -169,9 +169,12 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
 private Path inputArchive;
 private Path inputRoot;
 
+private static Testing testing = new Testing();
+
 @Override
 public void setup(Context context) throws IOException {
   Configuration conf = context.getConfiguration();
+
   Configuration srcConf = HBaseConfiguration.createClusterConf(conf, null, CONF_SOURCE_PREFIX);
   Configuration destConf = HBaseConfiguration.createClusterConf(conf, null, CONF_DEST_PREFIX);
 
@@ -186,8 +189,6 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
   inputArchive = new Path(inputRoot, HConstants.HFILE_ARCHIVE_DIRECTORY);
   outputArchive = new Path(outputRoot, HConstants.HFILE_ARCHIVE_DIRECTORY);
 
-  testFailures = conf.getBoolean(CONF_TEST_FAILURE, false);
-
   try {
 srcConf.setBoolean("fs." + inputRoot.toUri().getScheme() + ".impl.disable.cache", true);
 inputFs = 

[12/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshotException.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshotException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshotException.java
index 05f3556..f6817e7 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshotException.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshotException.java
@@ -18,13 +18,11 @@
 package org.apache.hadoop.hbase.snapshot;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 
 /**
  * Thrown when a snapshot could not be exported due to an error during the 
operation.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 @SuppressWarnings("serial")
 public class ExportSnapshotException extends HBaseSnapshotException {
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java
index 2fe58ed..bd185a1 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hbase.snapshot;
 
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.client.SnapshotDescription;
 
 /**
@@ -27,7 +26,6 @@ import org.apache.hadoop.hbase.client.SnapshotDescription;
  */
 @SuppressWarnings("serial")
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class HBaseSnapshotException extends DoNotRetryIOException {
 
   private SnapshotDescription description;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java
index 70e8d3b..de58077 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotException.java
@@ -19,7 +19,6 @@
 package org.apache.hadoop.hbase.snapshot;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.client.SnapshotDescription;
 
 /**
@@ -27,7 +26,6 @@ import org.apache.hadoop.hbase.client.SnapshotDescription;
  */
 @SuppressWarnings("serial")
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class RestoreSnapshotException extends HBaseSnapshotException {
   public RestoreSnapshotException(String msg, SnapshotDescription desc) {
 super(msg, desc);

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java
index 2738b3d..9cfe83a 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotCreationException.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hbase.snapshot;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.client.SnapshotDescription;
 
 /**
@@ -27,7 +26,6 @@ import org.apache.hadoop.hbase.client.SnapshotDescription;
  */
 @SuppressWarnings("serial")
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class SnapshotCreationException extends HBaseSnapshotException {
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDoesNotExistException.java
--
diff --git 

[15/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
HBASE-17857 Remove IS annotations from IA.Public classes


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a66d4918
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a66d4918
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a66d4918

Branch: refs/heads/hbase-12439
Commit: a66d491892514fd4a188d6ca87d6260d8ae46184
Parents: 910b680
Author: zhangduo 
Authored: Tue Apr 4 20:30:10 2017 +0800
Committer: zhangduo 
Committed: Wed Apr 5 15:34:06 2017 +0800

--
 .../hbase/classification/InterfaceAudience.java |   1 -
 .../classification/InterfaceStability.java  |   1 -
 .../hadoop/hbase/CallDroppedException.java  |   4 +-
 .../hadoop/hbase/CallQueueTooBigException.java  |   4 +-
 .../hadoop/hbase/ClockOutOfSyncException.java   |   2 -
 .../org/apache/hadoop/hbase/ClusterStatus.java  |   2 -
 .../hadoop/hbase/DoNotRetryIOException.java |   2 -
 .../hadoop/hbase/DroppedSnapshotException.java  |   2 -
 .../apache/hadoop/hbase/HColumnDescriptor.java  |   2 -
 .../org/apache/hadoop/hbase/HRegionInfo.java|   2 -
 .../apache/hadoop/hbase/HRegionLocation.java|   2 -
 .../apache/hadoop/hbase/HTableDescriptor.java   |   2 -
 .../hbase/InvalidFamilyOperationException.java  |   2 -
 .../apache/hadoop/hbase/KeepDeletedCells.java   |   2 -
 .../hadoop/hbase/MasterNotRunningException.java |   2 -
 .../hadoop/hbase/MemoryCompactionPolicy.java|   2 -
 .../hadoop/hbase/MultiActionResultTooLarge.java |   2 -
 .../hadoop/hbase/NamespaceExistException.java   |   2 -
 .../hbase/NamespaceNotFoundException.java   |   2 -
 .../hbase/NotAllMetaRegionsOnlineException.java |   2 -
 .../hadoop/hbase/NotServingRegionException.java |   2 -
 .../hadoop/hbase/PleaseHoldException.java   |   2 -
 .../apache/hadoop/hbase/RegionException.java|   2 -
 .../org/apache/hadoop/hbase/RegionLoad.java |   2 -
 .../hadoop/hbase/RegionTooBusyException.java|   2 -
 .../hbase/ReplicationPeerNotFoundException.java |   4 +-
 .../hadoop/hbase/RetryImmediatelyException.java |   2 -
 .../org/apache/hadoop/hbase/ServerLoad.java |   2 -
 .../hadoop/hbase/TableExistsException.java  |   2 -
 .../hadoop/hbase/TableInfoMissingException.java |   2 -
 .../hadoop/hbase/TableNotDisabledException.java |   2 -
 .../hadoop/hbase/TableNotEnabledException.java  |   2 -
 .../hadoop/hbase/TableNotFoundException.java|   2 -
 .../hadoop/hbase/UnknownRegionException.java|   2 -
 .../hadoop/hbase/UnknownScannerException.java   |   2 -
 .../hbase/ZooKeeperConnectionException.java |   2 -
 .../org/apache/hadoop/hbase/client/Admin.java   |   2 -
 .../org/apache/hadoop/hbase/client/Append.java  |   2 -
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |   2 -
 .../hadoop/hbase/client/AsyncConnection.java|   4 +-
 .../apache/hadoop/hbase/client/AsyncTable.java  |   2 -
 .../hadoop/hbase/client/AsyncTableBase.java |   2 -
 .../hadoop/hbase/client/AsyncTableBuilder.java  |   2 -
 .../hbase/client/AsyncTableRegionLocator.java   |   2 -
 .../apache/hadoop/hbase/client/Attributes.java  |   2 -
 .../hadoop/hbase/client/BufferedMutator.java|   3 -
 .../hbase/client/BufferedMutatorParams.java |   2 -
 .../apache/hadoop/hbase/client/CompactType.java |   4 +-
 .../hadoop/hbase/client/CompactionState.java|   2 -
 .../apache/hadoop/hbase/client/Connection.java  |   2 -
 .../hadoop/hbase/client/ConnectionFactory.java  |   2 -
 .../apache/hadoop/hbase/client/Consistency.java |   2 -
 .../org/apache/hadoop/hbase/client/Delete.java  |   2 -
 .../hbase/client/DoNotRetryRegionException.java |   2 -
 .../apache/hadoop/hbase/client/Durability.java  |   2 -
 .../org/apache/hadoop/hbase/client/Get.java |   2 -
 .../hadoop/hbase/client/HTableMultiplexer.java  |   3 -
 .../apache/hadoop/hbase/client/Increment.java   |   2 -
 .../hadoop/hbase/client/IsolationLevel.java |   2 -
 .../hadoop/hbase/client/MasterSwitchType.java   |   4 +-
 .../hbase/client/MobCompactPartitionPolicy.java |   2 -
 .../apache/hadoop/hbase/client/Mutation.java|   2 -
 .../client/NoServerForRegionException.java  |   2 -
 .../apache/hadoop/hbase/client/Operation.java   |   2 -
 .../hbase/client/OperationWithAttributes.java   |   2 -
 .../org/apache/hadoop/hbase/client/Put.java |   2 -
 .../org/apache/hadoop/hbase/client/Query.java   |   4 +-
 .../hadoop/hbase/client/RawAsyncTable.java  |   4 -
 .../hbase/client/RawScanResultConsumer.java |   6 +-
 .../hadoop/hbase/client/RegionLoadStats.java|   2 -
 .../hadoop/hbase/client/RegionLocator.java  |   2 -
 .../hbase/client/RegionOfflineException.java|   2 -
 .../hadoop/hbase/client/RequestController.java  |   4 -
 .../hbase/client/RequestControllerFactory.java  |   2 -
 .../org/apache/hadoop/hbase/client/Result.java  |   2 -
 .../hadoop/hbase/client/ResultScanner.java  |   2 -
 

[14/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
index eb1cbc5..179a566 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
@@ -29,7 +29,6 @@ import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.io.TimeRange;
 import org.apache.hadoop.hbase.security.access.Permission;
 import org.apache.hadoop.hbase.security.visibility.CellVisibility;
@@ -49,7 +48,6 @@ import org.apache.hadoop.hbase.util.ClassSize;
  * {@link #addColumn(byte[], byte[], long)} method.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public class Increment extends Mutation implements Comparable<Row> {
   private static final long HEAP_OVERHEAD = ClassSize.REFERENCE + ClassSize.TIMERANGE;
   private TimeRange tr = new TimeRange();

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/client/IsolationLevel.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/IsolationLevel.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/IsolationLevel.java
index 01aba6f..ad0897e 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/IsolationLevel.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/IsolationLevel.java
@@ -21,7 +21,6 @@
 package org.apache.hadoop.hbase.client;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 
 /**
  * Specify Isolation levels in Scan operations.
@@ -33,7 +32,6 @@ import org.apache.hadoop.hbase.classification.InterfaceStability;
  * not have been committed yet.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public enum IsolationLevel {
 
   READ_COMMITTED(1),

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterSwitchType.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterSwitchType.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterSwitchType.java
index 7e31b25..5fa9ec2 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterSwitchType.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterSwitchType.java
@@ -17,13 +17,11 @@
  */
 package org.apache.hadoop.hbase.client;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 /**
  * Represents the master switch type
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public enum MasterSwitchType {
   SPLIT,
   MERGE
-}
\ No newline at end of file
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MobCompactPartitionPolicy.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MobCompactPartitionPolicy.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MobCompactPartitionPolicy.java
index f550572..076ab6f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MobCompactPartitionPolicy.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MobCompactPartitionPolicy.java
@@ -19,13 +19,11 @@
 package org.apache.hadoop.hbase.client;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 
 /**
  * Enum describing the mob compact partition policy types.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public enum MobCompactPartitionPolicy {
   /**
* Compact daily mob files into one file

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
index fb55fdd..b010c2f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
+++ 

[10/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java
index dc2fc0d..3c90b59 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java
@@ -16,7 +16,6 @@ import java.util.Map;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -44,7 +43,6 @@ import com.google.common.annotations.VisibleForTesting;
  * 
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 @VisibleForTesting
 public class MultiHFileOutputFormat extends FileOutputFormat<ImmutableBytesWritable, Cell> {
   private static final Log LOG = LogFactory.getLog(MultiHFileOutputFormat.class);

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java
index 3099c0d..a8e6837 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java
@@ -22,7 +22,6 @@ import java.util.ArrayList;
 import java.util.List;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.client.Scan;
@@ -55,7 +54,6 @@ import org.apache.hadoop.hbase.client.Scan;
  * 
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class MultiTableInputFormat extends MultiTableInputFormatBase implements Configurable {
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
index 25ea047..e18b3aa 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
@@ -25,7 +25,6 @@ import java.util.List;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
@@ -54,7 +53,6 @@ import java.util.Iterator;
  * filters etc. Subclasses may use other TableRecordReader implementations.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public abstract class MultiTableInputFormatBase extends
 InputFormat<ImmutableBytesWritable, Result> {
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
index 7feb7a9..4cc784f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
@@ -25,7 +25,6 @@ import java.util.Map;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
@@ -61,7 +60,6 @@ import 

[26/50] [abbrv] hbase git commit: HBASE-17863-addendum: Reverted the order of updateStoreOnExec() and store.isRunning() in execProcedure()

2017-04-17 Thread syuanjiang
HBASE-17863-addendum: Reverted the order of updateStoreOnExec() and store.isRunning() in execProcedure()

Change-Id: I1c9d5ee264f4f593a6b2a09011853ab63693f677


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/59e8b8e2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/59e8b8e2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/59e8b8e2

Branch: refs/heads/hbase-12439
Commit: 59e8b8e2ba4d403d042fe4cc02f8f9f80aad67af
Parents: 18c5ecf
Author: Umesh Agashe 
Authored: Fri Apr 7 14:01:37 2017 -0700
Committer: Apekshit Sharma 
Committed: Fri Apr 7 16:13:37 2017 -0700

--
 .../hadoop/hbase/procedure2/ProcedureExecutor.java   | 11 ---
 .../hadoop/hbase/procedure2/ProcedureTestingUtility.java |  2 +-
 .../procedure2/store/wal/TestWALProcedureStore.java  |  3 +--
 3 files changed, 10 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/59e8b8e2/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java
--
diff --git a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java
index 8832637..43f5839 100644
--- a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java
+++ b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java
@@ -1373,12 +1373,17 @@ public class ProcedureExecutor {
 return;
   }
 
-  // if the store is not running we are aborting
-  if (!store.isRunning()) return;
-
+  // TODO: The code here doesn't check if store is running before persisting to the store as
+  // it relies on the method call below to throw RuntimeException to wind up the stack and
+  // executor thread to stop. The statement following the method call below seems to check if
+  // store is not running, to prevent scheduling children procedures, re-execution or yield
+  // of this procedure. This may need more scrutiny and subsequent cleanup in future
   // Commit the transaction
   updateStoreOnExec(procStack, procedure, subprocs);
 
+  // if the store is not running we are aborting
+  if (!store.isRunning()) return;
+
   // if the procedure is kind enough to pass the slot to someone else, yield
   if (procedure.isRunnable() && !suspended &&
   procedure.isYieldAfterExecutionStep(getEnvironment())) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/59e8b8e2/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java
--
diff --git a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java
index 1f4244a..dd3c8f4 100644
--- a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java
+++ b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java
@@ -408,7 +408,7 @@ public class ProcedureTestingUtility {
   addStackIndex(index);
 }
 
-public void setFinishedState() {
+public void setSuccessState() {
   setState(ProcedureState.SUCCESS);
 }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/59e8b8e2/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
--
diff --git a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
index f8c3486..525a663 100644
--- a/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
+++ b/hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
@@ -26,7 +26,6 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Comparator;
 import java.util.HashSet;
-import java.util.List;
 import java.util.Set;
 
 import org.apache.commons.logging.Log;
@@ -785,7 +784,7 @@ public class TestWALProcedureStore {
 
 // back to A
 a.addStackId(5);
-a.setFinishedState();
+a.setSuccessState();
 procStore.delete(a, new long[] { b.getProcId(), c.getProcId() });
 restartAndAssert(3, 0, 1, 0);
   }



[17/50] [abbrv] hbase git commit: HBASE-17785 RSGroupBasedLoadBalancer fails to assign new table regions when cloning snapshot

2017-04-17 Thread syuanjiang
HBASE-17785 RSGroupBasedLoadBalancer fails to assign new table regions when cloning snapshot


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/029fa297
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/029fa297
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/029fa297

Branch: refs/heads/hbase-12439
Commit: 029fa297129f7ced276d19c4877d19bf32dcfde0
Parents: cbcbcf4
Author: Andrew Purtell 
Authored: Wed Apr 5 16:25:56 2017 -0700
Committer: Andrew Purtell 
Committed: Wed Apr 5 16:25:56 2017 -0700

--
 .../hbase/rsgroup/RSGroupAdminEndpoint.java | 28 ++--
 .../hadoop/hbase/rsgroup/TestRSGroups.java  | 19 +
 2 files changed, 39 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/029fa297/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
--
diff --git a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
index 83389e4..14907ba 100644
--- a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
+++ b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
@@ -67,6 +67,7 @@ import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.MoveTablesR
 import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RSGroupAdminService;
 import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RemoveRSGroupRequest;
 import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RemoveRSGroupResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.SnapshotDescription;
 
 @InterfaceAudience.Private
 public class RSGroupAdminEndpoint implements MasterObserver, 
CoprocessorService {
@@ -267,14 +268,7 @@ public class RSGroupAdminEndpoint implements MasterObserver, CoprocessorService
 }
   }
 
-  /
-  // MasterObserver overrides
-  /
-
-  // Assign table to default RSGroup.
-  @Override
-  public void preCreateTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
-  HTableDescriptor desc, HRegionInfo[] regions) throws IOException {
+  void assignTableToGroup(HTableDescriptor desc) throws IOException {
 String groupName =
 
master.getClusterSchema().getNamespace(desc.getTableName().getNamespaceAsString())
 .getConfigurationValue(RSGroupInfo.NAMESPACE_DESC_PROP_GROUP);
@@ -292,6 +286,17 @@ public class RSGroupAdminEndpoint implements MasterObserver, CoprocessorService
 }
   }
 
+  /
+  // MasterObserver overrides
+  /
+
+  // Assign table to default RSGroup.
+  @Override
+  public void preCreateTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
+  HTableDescriptor desc, HRegionInfo[] regions) throws IOException {
+assignTableToGroup(desc);
+  }
+
   // Remove table from its RSGroup.
   @Override
   public void postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
@@ -322,5 +327,12 @@ public class RSGroupAdminEndpoint implements MasterObserver, CoprocessorService
  NamespaceDescriptor ns) throws IOException {
 preCreateNamespace(ctx, ns);
   }
+
+  @Override
+  public void preCloneSnapshot(ObserverContext<MasterCoprocessorEnvironment> ctx,
+  SnapshotDescription snapshot, HTableDescriptor desc) throws IOException {
+assignTableToGroup(desc);
+  }
+
   /
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/029fa297/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
--
diff --git a/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
index 3886684..d6bd43b 100644
--- a/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
+++ b/hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.client.ClusterConnection;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.master.ServerManager;
+import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
 import org.apache.hadoop.hbase.net.Address;
 import 

[50/50] [abbrv] hbase git commit: Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

2017-04-17 Thread syuanjiang
Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

This reverts commit c2c2178b2eebe4439eadec6b37fae2566944c16b.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ecdfb823
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ecdfb823
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ecdfb823

Branch: refs/heads/hbase-12439
Commit: ecdfb82326035ad8221940919bbeb3fe16ec2658
Parents: c2c2178
Author: Ramkrishna 
Authored: Tue Apr 18 00:00:12 2017 +0530
Committer: Ramkrishna 
Committed: Tue Apr 18 00:00:12 2017 +0530

--
 .../java/org/apache/hadoop/hbase/CellUtil.java  |  24 ++
 .../org/apache/hadoop/hbase/ExtendedCell.java   |  10 -
 .../org/apache/hadoop/hbase/master/HMaster.java |   2 -
 .../hbase/regionserver/ByteBufferChunkCell.java |  48 ---
 .../apache/hadoop/hbase/regionserver/Chunk.java |  60 +--
 .../hadoop/hbase/regionserver/ChunkCreator.java | 404 ---
 .../hbase/regionserver/HRegionServer.java   |  14 +-
 .../hbase/regionserver/MemStoreChunkPool.java   | 265 
 .../hadoop/hbase/regionserver/MemStoreLAB.java  |   4 +-
 .../hbase/regionserver/MemStoreLABImpl.java | 171 
 .../regionserver/NoTagByteBufferChunkCell.java  |  48 ---
 .../hadoop/hbase/regionserver/OffheapChunk.java |  31 +-
 .../hadoop/hbase/regionserver/OnheapChunk.java  |  32 +-
 .../hadoop/hbase/HBaseTestingUtility.java   |   3 -
 .../coprocessor/TestCoprocessorInterface.java   |   4 -
 .../TestRegionObserverScannerOpenHook.java  |   3 -
 .../coprocessor/TestRegionObserverStacking.java |   3 -
 .../io/hfile/TestScannerFromBucketCache.java|   3 -
 .../hadoop/hbase/master/TestCatalogJanitor.java |   7 -
 .../hadoop/hbase/regionserver/TestBulkLoad.java |   2 +-
 .../hbase/regionserver/TestCellFlatSet.java |   2 +-
 .../regionserver/TestCompactingMemStore.java|  37 +-
 .../TestCompactingToCellArrayMapMemStore.java   |  16 +-
 .../TestCompactionArchiveConcurrentClose.java   |   1 -
 .../TestCompactionArchiveIOException.java   |   1 -
 .../regionserver/TestCompactionPolicy.java  |   1 -
 .../hbase/regionserver/TestDefaultMemStore.java |  14 +-
 .../regionserver/TestFailedAppendAndSync.java   |   1 -
 .../hbase/regionserver/TestHMobStore.java   |   2 +-
 .../hadoop/hbase/regionserver/TestHRegion.java  |   2 -
 .../regionserver/TestHRegionReplayEvents.java   |   2 +-
 .../regionserver/TestMemStoreChunkPool.java |  48 +--
 .../hbase/regionserver/TestMemStoreLAB.java |  27 +-
 .../TestMemstoreLABWithoutPool.java | 168 
 .../hbase/regionserver/TestRecoveredEdits.java  |   1 -
 .../hbase/regionserver/TestRegionIncrement.java |   1 -
 .../hadoop/hbase/regionserver/TestStore.java|   1 -
 .../TestStoreFileRefresherChore.java|   1 -
 .../hbase/regionserver/TestWALLockup.java   |   1 -
 .../TestWALMonotonicallyIncreasingSeqId.java|   1 -
 .../hbase/regionserver/wal/TestDurability.java  |   3 -
 41 files changed, 479 insertions(+), 990 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
index 56de21b..e1bc969 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
@@ -3135,4 +3135,28 @@ public final class CellUtil {
   return Type.DeleteFamily.getCode();
 }
   }
+
+  /**
+   * Clone the passed cell by copying its data into the passed buf.
+   */
+  public static Cell copyCellTo(Cell cell, ByteBuffer buf, int offset, int len) {
+int tagsLen = cell.getTagsLength();
+if (cell instanceof ExtendedCell) {
+  ((ExtendedCell) cell).write(buf, offset);
+} else {
+  // Normally all Cell impls within Server will be of type ExtendedCell. Just considering the
+  // other case also. The data fragments within Cell is copied into buf as in KeyValue
+  // serialization format only.
+  KeyValueUtil.appendTo(cell, buf, offset, true);
+}
+if (tagsLen == 0) {
+  // When tagsLen is 0, make a NoTagsByteBufferKeyValue version. This is an optimized class
+  // which directly return tagsLen as 0. So we avoid parsing many length components in
+  // reading the tagLength stored in the backing buffer. The Memstore addition of every Cell
+  // call getTagsLength().
+  return new NoTagsByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
+} else {
+  return new 

[29/50] [abbrv] hbase git commit: HBASE-16469 Several log refactoring/improvement suggestions

2017-04-17 Thread syuanjiang
HBASE-16469 Several log refactoring/improvement suggestions

Signed-off-by: Sean Busbey 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0b5bd78d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0b5bd78d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0b5bd78d

Branch: refs/heads/hbase-12439
Commit: 0b5bd78d6e7c51a5c1b6b30a1f385eafcdba8f7b
Parents: 82d554e
Author: Nemo Chen 
Authored: Wed Apr 5 21:20:40 2017 -0400
Committer: Sean Busbey 
Committed: Tue Apr 11 14:16:12 2017 -0500

--
 .../hadoop/hbase/client/PreemptiveFastFailInterceptor.java | 2 +-
 .../test/java/org/apache/hadoop/hbase/HBaseClusterManager.java | 1 +
 .../java/org/apache/hadoop/hbase/regionserver/HRegion.java | 6 +++---
 .../hadoop/hbase/regionserver/handler/CloseRegionHandler.java  | 2 +-
 .../org/apache/hadoop/hbase/util/MultiThreadedUpdater.java | 2 +-
 5 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0b5bd78d/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
index a29a662..abac040 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
@@ -291,7 +291,7 @@ class PreemptiveFastFailInterceptor extends RetryingCallerInterceptor {
 // If we were able to connect to the server, reset the failure
 // information.
 if (couldNotCommunicate == false) {
-  LOG.info("Clearing out PFFE for server " + server.getServerName());
+  LOG.info("Clearing out PFFE for server " + server);
   repeatedFailuresMap.remove(server);
 } else {
   // update time of last attempt

http://git-wip-us.apache.org/repos/asf/hbase/blob/0b5bd78d/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
--
diff --git a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
index 07014e5..d358b9a 100644
--- a/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
+++ b/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
@@ -84,6 +84,7 @@ public class HBaseClusterManager extends Configured implements ClusterManager {
   sshOptions = StringUtils.join(new Object[] { sshOptions, extraSshOptions }, " ");
 }
 sshOptions = (sshOptions == null) ? "" : sshOptions;
+sshUserName = (sshUserName == null) ? "" : sshUserName;
 tunnelCmd = conf.get("hbase.it.clustermanager.ssh.cmd", DEFAULT_TUNNEL_CMD);
 // Print out ssh special config if any.
 if ((sshUserName != null && sshUserName.length() > 0) ||

http://git-wip-us.apache.org/repos/asf/hbase/blob/0b5bd78d/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index a87b679..78ce608 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -1390,12 +1390,12 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
*/
   public boolean isMergeable() {
 if (!isAvailable()) {
-  LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
+  LOG.debug("Region " + this
   + " is not mergeable because it is closing or closed");
   return false;
 }
 if (hasReferences()) {
-  LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
+  LOG.debug("Region " + this
   + " is not mergeable because it has references");
   return false;
 }
@@ -1559,7 +1559,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
 // the close flag?
 if (!abort && worthPreFlushing() && canFlush) {
   status.setStatus("Pre-flushing region before close");
-  LOG.info("Running close preflush of " + getRegionInfo().getRegionNameAsString());
+  LOG.info("Running close preflush of " + this);
   try {
 internalFlushcache(status);
   } catch (IOException ioe) {
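
The log refactoring above replaces getRegionInfo().getRegionNameAsString() with `this` at call sites, which works because HRegion's toString() renders the region name. A minimal illustration of the pattern (the Region class below is a made-up stand-in, not HBase's):

```java
// Sketch of the logging refactor: let toString() carry the identifying detail
// so call sites can log the object directly. Names here are illustrative.
class Region {
  private final String regionName;

  Region(String regionName) {
    this.regionName = regionName;
  }

  // One place defines how the region renders in logs.
  @Override
  public String toString() {
    return regionName;
  }

  String describeMergeable(boolean available) {
    if (!available) {
      // Before: "Region " + getRegionInfo().getRegionNameAsString() + ...
      // After:  "Region " + this + ... -- same output, less call-site noise.
      return "Region " + this + " is not mergeable because it is closing or closed";
    }
    return "Region " + this + " is mergeable";
  }

  public static void main(String[] args) {
    Region r = new Region("t1,,1492470000000.abcdef");
    System.out.println(r.describeMergeable(false));
  }
}
```

The tradeoff: every log line now depends on toString() staying informative, so that method becomes part of the logging contract.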


[23/50] [abbrv] hbase git commit: HBASE-17816 HRegion#mutateRowWithLocks should update writeRequestCount metric (Weizhan Zeng)

2017-04-17 Thread syuanjiang
HBASE-17816 HRegion#mutateRowWithLocks should update writeRequestCount metric (Weizhan Zeng)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/48b2502a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/48b2502a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/48b2502a

Branch: refs/heads/hbase-12439
Commit: 48b2502a5fcd4d3cd954c3abf6703422da7cdc2f
Parents: af604f0
Author: Jerry He 
Authored: Thu Apr 6 16:45:45 2017 -0700
Committer: Jerry He 
Committed: Thu Apr 6 16:45:45 2017 -0700

--
 .../hadoop/hbase/regionserver/HRegion.java  |  1 +
 .../hadoop/hbase/regionserver/TestHRegion.java  | 24 
 2 files changed, 25 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/48b2502a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 7f889ce..a87b679 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -6966,6 +6966,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
   @Override
   public void mutateRowsWithLocks(Collection<Mutation> mutations,
   Collection<byte[]> rowsToLock, long nonceGroup, long nonce) throws IOException {
+writeRequestsCount.add(mutations.size());
 MultiRowMutationProcessor proc = new MultiRowMutationProcessor(mutations, rowsToLock);
 processRowsWithLocks(proc, -1, nonceGroup, nonce);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/48b2502a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index eac3c77..d56d6ec 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -6391,4 +6391,28 @@ public class TestHRegion {
   this.region = null;
 }
   }
+
+  @Test
+  public void testMutateRow_WriteRequestCount() throws Exception {
+byte[] row1 = Bytes.toBytes("row1");
+byte[] fam1 = Bytes.toBytes("fam1");
+byte[] qf1 = Bytes.toBytes("qualifier");
+byte[] val1 = Bytes.toBytes("value1");
+
+RowMutations rm = new RowMutations(row1);
+Put put = new Put(row1);
+put.addColumn(fam1, qf1, val1);
+rm.add(put);
+
+this.region = initHRegion(tableName, method, CONF, fam1);
+try {
+  long wrcBeforeMutate = this.region.writeRequestsCount.longValue();
+  this.region.mutateRow(rm);
+  long wrcAfterMutate = this.region.writeRequestsCount.longValue();
+  Assert.assertEquals(wrcBeforeMutate + rm.getMutations().size(), 
wrcAfterMutate);
+} finally {
+  HBaseTestingUtility.closeRegionAndWAL(this.region);
+  this.region = null;
+}
+  }
 }
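The one-line fix above follows a common metrics pattern: bump a thread-safe counter by the batch size before processing the batch. A minimal standalone sketch of that pattern (plain Java; `Region`, `mutateRowsWithLocks`, and the `String` mutations are simplified stand-ins, not the real HBase types):

```java
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the HBASE-17816 pattern: count every mutation in a batch
// against a concurrent write-request counter before applying the batch.
public class Region {
    // LongAdder scales better than AtomicLong under heavy concurrent writes.
    final LongAdder writeRequestsCount = new LongAdder();

    public void mutateRowsWithLocks(List<String> mutations) {
        // Update the metric by the batch size, mirroring the one-line fix.
        writeRequestsCount.add(mutations.size());
        // ... acquire row locks and apply the mutations here ...
    }

    public long getWriteRequestsCount() {
        return writeRequestsCount.sum();
    }
}
```

The accompanying test then asserts the counter grew by exactly the batch size, as `testMutateRow_WriteRequestCount` does above.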



[38/50] [abbrv] hbase git commit: HBASE-17090 Redundant exclusion of jruby-complete in pom of hbase-spark

2017-04-17 Thread syuanjiang
HBASE-17090 Redundant exclusion of jruby-complete in pom of hbase-spark

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e2a74615
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e2a74615
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e2a74615

Branch: refs/heads/hbase-12439
Commit: e2a746152ca8c02c18214f0b5180ed8dcc84e947
Parents: 9dd5cda
Author: Xiang Li 
Authored: Fri Apr 14 16:15:42 2017 +0800
Committer: Michael Stack 
Committed: Fri Apr 14 08:08:42 2017 -0700

--
 hbase-spark/pom.xml | 24 
 1 file changed, 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e2a74615/hbase-spark/pom.xml
--
diff --git a/hbase-spark/pom.xml b/hbase-spark/pom.xml
index a7997f1..1afae85 100644
--- a/hbase-spark/pom.xml
+++ b/hbase-spark/pom.xml
@@ -290,10 +290,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -338,10 +334,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 
@@ -382,10 +374,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -430,10 +418,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 
@@ -460,10 +444,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -508,10 +488,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 



[16/50] [abbrv] hbase git commit: HBASE-17854 Use StealJobQueue in HFileCleaner after HBASE-17215

2017-04-17 Thread syuanjiang
HBASE-17854 Use StealJobQueue in HFileCleaner after HBASE-17215


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cbcbcf4d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cbcbcf4d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cbcbcf4d

Branch: refs/heads/hbase-12439
Commit: cbcbcf4dcd3401327cc36173f3ca8e5362da1e0c
Parents: a66d491
Author: Yu Li 
Authored: Wed Apr 5 17:53:21 2017 +0800
Committer: Yu Li 
Committed: Wed Apr 5 17:53:21 2017 +0800

--
 .../hbase/master/cleaner/HFileCleaner.java  | 98 +---
 .../apache/hadoop/hbase/util/StealJobQueue.java | 22 +
 .../hbase/master/cleaner/TestHFileCleaner.java  | 28 +++---
 .../hadoop/hbase/util/TestStealJobQueue.java|  2 +-
 4 files changed, 102 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/cbcbcf4d/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/HFileCleaner.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/HFileCleaner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/HFileCleaner.java
index 3a68252..8b3515a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/HFileCleaner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/HFileCleaner.java
@@ -22,7 +22,6 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.LinkedBlockingQueue;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -35,6 +34,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.Stoppable;
 import org.apache.hadoop.hbase.io.HFileLink;
 import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
+import org.apache.hadoop.hbase.util.StealJobQueue;
 
 import com.google.common.annotations.VisibleForTesting;
 /**
@@ -57,23 +57,23 @@ public class HFileCleaner extends 
CleanerChore impleme
   "hbase.regionserver.thread.hfilecleaner.throttle";
   public final static int DEFAULT_HFILE_DELETE_THROTTLE_THRESHOLD = 64 * 1024 
* 1024;// 64M
 
-  // Configuration key for large queue size
-  public final static String LARGE_HFILE_DELETE_QUEUE_SIZE =
+  // Configuration key for large queue initial size
+  public final static String LARGE_HFILE_QUEUE_INIT_SIZE =
   "hbase.regionserver.hfilecleaner.large.queue.size";
-  public final static int DEFAULT_LARGE_HFILE_DELETE_QUEUE_SIZE = 1048576;
+  public final static int DEFAULT_LARGE_HFILE_QUEUE_INIT_SIZE = 10240;
 
-  // Configuration key for small queue size
-  public final static String SMALL_HFILE_DELETE_QUEUE_SIZE =
+  // Configuration key for small queue initial size
+  public final static String SMALL_HFILE_QUEUE_INIT_SIZE =
   "hbase.regionserver.hfilecleaner.small.queue.size";
-  public final static int DEFAULT_SMALL_HFILE_DELETE_QUEUE_SIZE = 1048576;
+  public final static int DEFAULT_SMALL_HFILE_QUEUE_INIT_SIZE = 10240;
 
   private static final Log LOG = LogFactory.getLog(HFileCleaner.class);
 
-  BlockingQueue largeFileQueue;
+  StealJobQueue largeFileQueue;
   BlockingQueue smallFileQueue;
   private int throttlePoint;
-  private int largeQueueSize;
-  private int smallQueueSize;
+  private int largeQueueInitSize;
+  private int smallQueueInitSize;
   private List threads = new ArrayList();
   private boolean running;
 
@@ -94,12 +94,12 @@ public class HFileCleaner extends 
CleanerChore impleme
   directory, MASTER_HFILE_CLEANER_PLUGINS, params);
 throttlePoint =
 conf.getInt(HFILE_DELETE_THROTTLE_THRESHOLD, 
DEFAULT_HFILE_DELETE_THROTTLE_THRESHOLD);
-largeQueueSize =
-conf.getInt(LARGE_HFILE_DELETE_QUEUE_SIZE, 
DEFAULT_LARGE_HFILE_DELETE_QUEUE_SIZE);
-smallQueueSize =
-conf.getInt(SMALL_HFILE_DELETE_QUEUE_SIZE, 
DEFAULT_SMALL_HFILE_DELETE_QUEUE_SIZE);
-largeFileQueue = new 
LinkedBlockingQueue(largeQueueSize);
-smallFileQueue = new 
LinkedBlockingQueue(smallQueueSize);
+largeQueueInitSize =
+conf.getInt(LARGE_HFILE_QUEUE_INIT_SIZE, 
DEFAULT_LARGE_HFILE_QUEUE_INIT_SIZE);
+smallQueueInitSize =
+conf.getInt(SMALL_HFILE_QUEUE_INIT_SIZE, 
DEFAULT_SMALL_HFILE_QUEUE_INIT_SIZE);
+largeFileQueue = new StealJobQueue<>(largeQueueInitSize, 
smallQueueInitSize);
+smallFileQueue = largeFileQueue.getStealFromQueue();
 startHFileDeleteThreads();
   }
 
@@ -152,6 +152,7 @@ public class HFileCleaner extends 
CleanerChore impleme
   private boolean dispatch(HFileDeleteTask task) {
 if (task.fileLength >= this.throttlePoint) {
   if (!this.largeFileQueue.offer(task)) {
+// should never 
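The patch replaces two independent bounded queues with a `StealJobQueue` pair, so the small-file thread can take ("steal") large-file tasks when its own queue runs dry. The size-based dispatch rule itself is unchanged and can be sketched standalone; plain `LinkedBlockingQueue`s stand in here only to keep the sketch self-contained, not to reproduce the stealing behavior of HBase's `StealJobQueue`:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified sketch of the HFileCleaner dispatch rule: files at or above
// the throttle threshold go to the large-file queue, the rest to the
// small-file queue. File lengths stand in for the real HFileDeleteTask.
public class CleanerDispatch {
    static final long THROTTLE_POINT = 64L * 1024 * 1024; // 64M default

    final BlockingQueue<Long> largeFileQueue = new LinkedBlockingQueue<>();
    final BlockingQueue<Long> smallFileQueue = new LinkedBlockingQueue<>();

    public boolean dispatch(long fileLength) {
        if (fileLength >= THROTTLE_POINT) {
            return largeFileQueue.offer(fileLength);
        }
        return smallFileQueue.offer(fileLength);
    }
}
```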

[34/50] [abbrv] hbase git commit: HBASE-17895 TestAsyncProcess#testAction fails if unsafe support is false

2017-04-17 Thread syuanjiang
HBASE-17895 TestAsyncProcess#testAction fails if unsafe support is false

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/23249eb0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/23249eb0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/23249eb0

Branch: refs/heads/hbase-12439
Commit: 23249eb0f5466b3608d80847b398b38b698fcf95
Parents: 3aadc67
Author: AShiou 
Authored: Tue Apr 11 23:03:48 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Wed Apr 12 11:42:13 2017 +0800

--
 .../java/org/apache/hadoop/hbase/client/TestAsyncProcess.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/23249eb0/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
--
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
index 3139af1..6c5c1e4 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java
@@ -1185,8 +1185,8 @@ public class TestAsyncProcess {
 assertTrue(action_2.equals(action_3));
 assertFalse(action_0.equals(action_3));
 assertEquals(0, action_0.compareTo(action_0));
-assertEquals(-1, action_0.compareTo(action_1));
-assertEquals(1, action_1.compareTo(action_0));
+assertTrue(action_0.compareTo(action_1) < 0);
+assertTrue(action_1.compareTo(action_0) > 0);
 assertEquals(0, action_1.compareTo(action_2));
   }
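The assertion change above reflects the `Comparable` contract, which fixes only the sign of `compareTo`, never its magnitude. Pinning an exact `-1`/`1` couples the test to one implementation (here it broke when Unsafe support is unavailable and a different comparison path runs). A small illustration with `String#compareTo`, whose magnitude is data-dependent even though every result means "less than":

```java
// Sketch of the assertion style in HBASE-17895: tests should check the
// sign of compareTo, not an exact value, because the Comparable contract
// guarantees only negative / zero / positive.
public class SignOnlyAssertions {
    public static int sign(int v) {
        return Integer.signum(v);
    }
}
```

In a test, that means writing `assertTrue(a.compareTo(b) < 0)` rather than `assertEquals(-1, a.compareTo(b))`, exactly as the patch does.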
 



[25/50] [abbrv] hbase git commit: HBASE-17881 Remove the ByteBufferCellImpl

2017-04-17 Thread syuanjiang
HBASE-17881 Remove the ByteBufferCellImpl


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/18c5ecf6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/18c5ecf6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/18c5ecf6

Branch: refs/heads/hbase-12439
Commit: 18c5ecf6ed57e80b32568ca1a1a12c7af36bab46
Parents: 1a701ce
Author: Chia-Ping Tsai 
Authored: Wed Apr 5 21:11:29 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Fri Apr 7 21:14:19 2017 +0800

--
 .../hadoop/hbase/filter/TestComparators.java|  14 +-
 .../hadoop/hbase/filter/TestKeyOnlyFilter.java  |   4 +-
 .../apache/hadoop/hbase/TestCellComparator.java |   7 +-
 .../org/apache/hadoop/hbase/TestCellUtil.java   | 184 +--
 .../filter/TestSingleColumnValueFilter.java |  36 ++--
 5 files changed, 36 insertions(+), 209 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/18c5ecf6/hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
--
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
index d9e4033..0c69ece 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/filter/TestComparators.java
@@ -21,11 +21,11 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.nio.ByteBuffer;
+import org.apache.hadoop.hbase.ByteBufferKeyValue;
 
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
 import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.TestCellUtil.ByteBufferCellImpl;
 import org.apache.hadoop.hbase.testclassification.MiscTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -51,18 +51,18 @@ public class TestComparators {
 // Row compare
 KeyValue kv = new KeyValue(r1, f, q1, v1);
 ByteBuffer buffer = ByteBuffer.wrap(kv.getBuffer());
-Cell bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+Cell bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 ByteArrayComparable comparable = new BinaryComparator(r1);
 assertEquals(0, CellComparator.compareRow(bbCell, comparable));
 assertEquals(0, CellComparator.compareRow(kv, comparable));
 kv = new KeyValue(r0, f, q1, v1);
 buffer = ByteBuffer.wrap(kv.getBuffer());
-bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 assertTrue(CellComparator.compareRow(bbCell, comparable) > 0);
 assertTrue(CellComparator.compareRow(kv, comparable) > 0);
 kv = new KeyValue(r2, f, q1, v1);
 buffer = ByteBuffer.wrap(kv.getBuffer());
-bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 assertTrue(CellComparator.compareRow(bbCell, comparable) < 0);
 assertTrue(CellComparator.compareRow(kv, comparable) < 0);
 // Qualifier compare
@@ -71,12 +71,12 @@ public class TestComparators {
 assertEquals(0, CellComparator.compareQualifier(kv, comparable));
 kv = new KeyValue(r2, f, q2, v1);
 buffer = ByteBuffer.wrap(kv.getBuffer());
-bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 assertEquals(0, CellComparator.compareQualifier(bbCell, comparable));
 assertEquals(0, CellComparator.compareQualifier(kv, comparable));
 kv = new KeyValue(r2, f, q3, v1);
 buffer = ByteBuffer.wrap(kv.getBuffer());
-bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 assertTrue(CellComparator.compareQualifier(bbCell, comparable) < 0);
 assertTrue(CellComparator.compareQualifier(kv, comparable) < 0);
 // Value compare
@@ -85,7 +85,7 @@ public class TestComparators {
 assertEquals(0, CellComparator.compareValue(kv, comparable));
 kv = new KeyValue(r1, f, q1, v2);
 buffer = ByteBuffer.wrap(kv.getBuffer());
-bbCell = new ByteBufferCellImpl(buffer, 0, buffer.remaining());
+bbCell = new ByteBufferKeyValue(buffer, 0, buffer.remaining());
 assertTrue(CellComparator.compareValue(bbCell, comparable) < 0);
 assertTrue(CellComparator.compareValue(kv, comparable) < 0);
 // Family compare


[47/50] [abbrv] hbase git commit: HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index d56d6ec..095f4bd 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -116,6 +116,7 @@ import org.apache.hadoop.hbase.filter.BinaryComparator;
 import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterAllFilter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.NullComparator;
@@ -4931,6 +4932,7 @@ public class TestHRegion {
   String callingMethod, Configuration conf, boolean isReadOnly, byte[]... 
families)
   throws IOException {
 Path logDir = TEST_UTIL.getDataTestDirOnTestFS(callingMethod + ".log");
+ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 
0, null);
 HRegionInfo hri = new HRegionInfo(tableName, startKey, stopKey);
 final WAL wal = HBaseTestingUtility.createWal(conf, logDir, hri);
 return initHRegion(tableName, startKey, stopKey, isReadOnly,

http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
index 0054642..6eed7df 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
@@ -153,7 +153,7 @@ public class TestHRegionReplayEvents {
 }
 
 time = System.currentTimeMillis();
-
+ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 
0, null);
 primaryHri = new HRegionInfo(htd.getTableName(),
   HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW,
   false, time, 0);

http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
index 37a7664..1768801 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
@@ -48,30 +48,30 @@ import static org.junit.Assert.assertTrue;
 @Category({RegionServerTests.class, SmallTests.class})
 public class TestMemStoreChunkPool {
   private final static Configuration conf = new Configuration();
-  private static MemStoreChunkPool chunkPool;
+  private static ChunkCreator chunkCreator;
   private static boolean chunkPoolDisabledBeforeTest;
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 conf.setBoolean(MemStoreLAB.USEMSLAB_KEY, true);
 conf.setFloat(MemStoreLAB.CHUNK_POOL_MAXSIZE_KEY, 0.2f);
-chunkPoolDisabledBeforeTest = MemStoreChunkPool.chunkPoolDisabled;
-MemStoreChunkPool.chunkPoolDisabled = false;
+chunkPoolDisabledBeforeTest = ChunkCreator.chunkPoolDisabled;
+ChunkCreator.chunkPoolDisabled = false;
 long globalMemStoreLimit = (long) 
(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()
 .getMax() * MemorySizeUtil.getGlobalMemStoreHeapPercent(conf, false));
-chunkPool = MemStoreChunkPool.initialize(globalMemStoreLimit, 0.2f,
-MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, 
MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false);
-assertTrue(chunkPool != null);
+chunkCreator = ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, 
false,
+  globalMemStoreLimit, 0.2f, MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, null);
+assertTrue(chunkCreator != null);
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
-MemStoreChunkPool.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
+ChunkCreator.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
   }
 
   @Before
   public void tearDown() throws Exception {
-chunkPool.clearChunks();
+chunkCreator.clearChunksInPool();
   }
 
   @Test

[22/50] [abbrv] hbase git commit: HBASE-17869 UnsafeAvailChecker wrongly returns false on ppc

2017-04-17 Thread syuanjiang
HBASE-17869 UnsafeAvailChecker wrongly returns false on ppc


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/af604f0c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/af604f0c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/af604f0c

Branch: refs/heads/hbase-12439
Commit: af604f0c0cf3c40c56746150ffa860aad07f128a
Parents: 9109803
Author: Jerry He 
Authored: Thu Apr 6 16:04:47 2017 -0700
Committer: Jerry He 
Committed: Thu Apr 6 16:04:47 2017 -0700

--
 .../hadoop/hbase/util/UnsafeAvailChecker.java   | 24 
 1 file changed, 15 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/af604f0c/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAvailChecker.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAvailChecker.java
 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAvailChecker.java
index 90e6ec8..886cb3c 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAvailChecker.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAvailChecker.java
@@ -51,15 +51,21 @@ public class UnsafeAvailChecker {
 });
 // When Unsafe itself is not available/accessible consider unaligned as 
false.
 if (avail) {
-  try {
-// Using java.nio.Bits#unaligned() to check for unaligned-access 
capability
-Class clazz = Class.forName("java.nio.Bits");
-Method m = clazz.getDeclaredMethod("unaligned");
-m.setAccessible(true);
-unaligned = (Boolean) m.invoke(null);
-  } catch (Exception e) {
-LOG.warn("java.nio.Bits#unaligned() check failed."
-+ "Unsafe based read/write of primitive types won't be used", e);
+  String arch = System.getProperty("os.arch");
+  if ("ppc64".equals(arch) || "ppc64le".equals(arch)) {
+// java.nio.Bits.unaligned() wrongly returns false on ppc 
(JDK-8165231),
+unaligned = true;
+  } else {
+try {
+  // Using java.nio.Bits#unaligned() to check for unaligned-access 
capability
+  Class clazz = Class.forName("java.nio.Bits");
+  Method m = clazz.getDeclaredMethod("unaligned");
+  m.setAccessible(true);
+  unaligned = (Boolean) m.invoke(null);
+} catch (Exception e) {
+  LOG.warn("java.nio.Bits#unaligned() check failed."
+  + "Unsafe based read/write of primitive types won't be used", e);
+}
   }
 }
   }



[11/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawString.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawString.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawString.java
index 7e3b350..b70e103 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawString.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawString.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hbase.types;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Order;
 import org.apache.hadoop.hbase.util.PositionedByteRange;
@@ -32,7 +31,6 @@ import org.apache.hadoop.hbase.util.PositionedByteRange;
  * @see RawStringTerminated
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class RawString implements DataType {
 
   public static final RawString ASCENDING = new RawString(Order.ASCENDING);

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringFixedLength.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringFixedLength.java
 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringFixedLength.java
index d11bead..24a394c 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringFixedLength.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringFixedLength.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hbase.types;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.util.Order;
 
 /**
@@ -31,7 +30,6 @@ import org.apache.hadoop.hbase.util.Order;
  * @see RawString
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class RawStringFixedLength extends FixedLengthWrapper {
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java
 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java
index 4d89d5b..408b57a 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hbase.types;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.util.Order;
 
 /**
@@ -33,7 +32,6 @@ import org.apache.hadoop.hbase.util.Order;
  * @see OrderedString
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class RawStringTerminated extends TerminatedWrapper {
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-common/src/main/java/org/apache/hadoop/hbase/types/Struct.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/Struct.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/Struct.java
index 550088a..eea64d9 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/Struct.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/Struct.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.hbase.types;
 import java.util.Iterator;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.util.Order;
 import org.apache.hadoop.hbase.util.PositionedByteRange;
 
@@ -78,7 +77,6 @@ import org.apache.hadoop.hbase.util.PositionedByteRange;
  * @see DataType#isNullable()
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
 public class Struct implements DataType {
 
   @SuppressWarnings("rawtypes")

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-common/src/main/java/org/apache/hadoop/hbase/types/StructBuilder.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/StructBuilder.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/StructBuilder.java
index d73a17d..ad4f021 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/types/StructBuilder.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/types/StructBuilder.java

[39/50] [abbrv] hbase git commit: HBASE-17888: Added generic methods for updating metrics on submit and finish of a procedure execution

2017-04-17 Thread syuanjiang
HBASE-17888: Added generic methods for updating metrics on submit and finish of 
a procedure execution

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c8461456
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c8461456
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c8461456

Branch: refs/heads/hbase-12439
Commit: c8461456d0ae81b90d67d36e1e077ae1d01102e5
Parents: e2a7461
Author: Umesh Agashe 
Authored: Mon Apr 10 15:32:43 2017 -0700
Committer: Michael Stack 
Committed: Fri Apr 14 11:51:08 2017 -0700

--
 .../apache/hadoop/hbase/client/HBaseAdmin.java  |   2 +-
 .../org/apache/hadoop/hbase/ProcedureInfo.java  |  20 +-
 .../master/MetricsAssignmentManagerSource.java  |   9 +-
 .../MetricsAssignmentManagerSourceImpl.java |   9 +-
 .../hadoop/hbase/procedure2/Procedure.java  |  41 +-
 .../hbase/procedure2/ProcedureExecutor.java |  11 +
 .../hadoop/hbase/procedure2/ProcedureUtil.java  |  10 +-
 .../hbase/procedure2/TestProcedureMetrics.java  | 254 ++
 .../procedure2/TestStateMachineProcedure.java   |   1 -
 .../shaded/protobuf/generated/MasterProtos.java | 490 +--
 .../protobuf/generated/ProcedureProtos.java | 146 +++---
 .../src/main/protobuf/Master.proto  |   2 +-
 .../src/main/protobuf/Procedure.proto   |   2 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |   4 +-
 .../master/procedure/ServerCrashProcedure.java  |   2 +-
 .../hbase-webapps/master/procedures.jsp |   2 +-
 .../main/ruby/shell/commands/list_procedures.rb |   6 +-
 17 files changed, 652 insertions(+), 359 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c8461456/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
index 155a272..cadd6cc 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -2114,7 +2114,7 @@ public class HBaseAdmin implements Admin {
 procedureState, procProto.hasParentId() ? procProto.getParentId() : 
-1, nonceKey,
 procProto.hasException()?
 ForeignExceptionUtil.toIOException(procProto.getException()): 
null,
-procProto.getLastUpdate(), procProto.getStartTime(),
+procProto.getLastUpdate(), procProto.getSubmittedTime(),
 procProto.hasResult()? procProto.getResult().toByteArray() : null);
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c8461456/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
index bb8bb08..6104c22 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
@@ -39,7 +39,7 @@ public class ProcedureInfo implements Cloneable {
   private final NonceKey nonceKey;
   private final IOException exception;
   private final long lastUpdate;
-  private final long startTime;
+  private final long submittedTime;
   private final byte[] result;
 
   private long clientAckTime = -1;
@@ -54,7 +54,7 @@ public class ProcedureInfo implements Cloneable {
   final NonceKey nonceKey,
   final IOException exception,
   final long lastUpdate,
-  final long startTime,
+  final long submittedTime,
   final byte[] result) {
 this.procId = procId;
 this.procName = procName;
@@ -63,7 +63,7 @@ public class ProcedureInfo implements Cloneable {
 this.parentId = parentId;
 this.nonceKey = nonceKey;
 this.lastUpdate = lastUpdate;
-this.startTime = startTime;
+this.submittedTime = submittedTime;
 
 // If the procedure is completed, we should treat exception and result 
differently
 this.exception = exception;
@@ -74,7 +74,7 @@ public class ProcedureInfo implements Cloneable {
   justification="Intentional; calling super class clone doesn't make sense 
here.")
   public ProcedureInfo clone() {
 return new ProcedureInfo(procId, procName, procOwner, procState, parentId, 
nonceKey,
-  exception, lastUpdate, startTime, result);
+  exception, lastUpdate, submittedTime, result);
   }
 
   @Override
@@ -96,10 +96,10 @@ public class ProcedureInfo implements Cloneable {
 sb.append(procState);

[49/50] [abbrv] hbase git commit: Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index 095f4bd..d56d6ec 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -116,7 +116,6 @@ import org.apache.hadoop.hbase.filter.BinaryComparator;
 import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterAllFilter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.NullComparator;
@@ -4932,7 +4931,6 @@ public class TestHRegion {
   String callingMethod, Configuration conf, boolean isReadOnly, byte[]... 
families)
   throws IOException {
 Path logDir = TEST_UTIL.getDataTestDirOnTestFS(callingMethod + ".log");
-ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 
0, null);
 HRegionInfo hri = new HRegionInfo(tableName, startKey, stopKey);
 final WAL wal = HBaseTestingUtility.createWal(conf, logDir, hri);
 return initHRegion(tableName, startKey, stopKey, isReadOnly,

http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
index 6eed7df..0054642 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
@@ -153,7 +153,7 @@ public class TestHRegionReplayEvents {
 }
 
 time = System.currentTimeMillis();
-ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 
0, null);
+
 primaryHri = new HRegionInfo(htd.getTableName(),
   HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW,
   false, time, 0);

http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
index 1768801..37a7664 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
@@ -48,30 +48,30 @@ import static org.junit.Assert.assertTrue;
 @Category({RegionServerTests.class, SmallTests.class})
 public class TestMemStoreChunkPool {
   private final static Configuration conf = new Configuration();
-  private static ChunkCreator chunkCreator;
+  private static MemStoreChunkPool chunkPool;
   private static boolean chunkPoolDisabledBeforeTest;
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 conf.setBoolean(MemStoreLAB.USEMSLAB_KEY, true);
 conf.setFloat(MemStoreLAB.CHUNK_POOL_MAXSIZE_KEY, 0.2f);
-chunkPoolDisabledBeforeTest = ChunkCreator.chunkPoolDisabled;
-ChunkCreator.chunkPoolDisabled = false;
+chunkPoolDisabledBeforeTest = MemStoreChunkPool.chunkPoolDisabled;
+MemStoreChunkPool.chunkPoolDisabled = false;
 long globalMemStoreLimit = (long) 
(ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()
 .getMax() * MemorySizeUtil.getGlobalMemStoreHeapPercent(conf, false));
-chunkCreator = ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, 
false,
-  globalMemStoreLimit, 0.2f, MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, null);
-assertTrue(chunkCreator != null);
+chunkPool = MemStoreChunkPool.initialize(globalMemStoreLimit, 0.2f,
+MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, 
MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false);
+assertTrue(chunkPool != null);
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
-ChunkCreator.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
+MemStoreChunkPool.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
   }
 
   @Before
   public void tearDown() throws Exception {
-chunkCreator.clearChunksInPool();
+chunkPool.clearChunks();
   }
 
   @Test

[27/50] [abbrv] hbase git commit: HBASE-17872 The MSLABImpl generates the invalid cells when unsafe is not available

2017-04-17 Thread syuanjiang
HBASE-17872 The MSLABImpl generates the invalid cells when unsafe is not 
available


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/df96d328
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/df96d328
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/df96d328

Branch: refs/heads/hbase-12439
Commit: df96d328fb9fa11f04f84607e9a23f254f513202
Parents: 59e8b8e
Author: CHIA-PING TSAI 
Authored: Sat Apr 8 17:37:37 2017 +0800
Committer: Chia-Ping Tsai 
Committed: Sun Apr 9 23:28:34 2017 +0800

--
 .../hadoop/hbase/util/ByteBufferUtils.java  |  30 ++--
 .../hadoop/hbase/util/TestByteBufferUtils.java  | 165 ++-
 .../hbase/util/TestFromClientSide3WoUnsafe.java |  43 +
 3 files changed, 213 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/df96d328/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
index ff4c843..34a4e02 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
@@ -43,15 +43,14 @@ import sun.nio.ch.DirectBuffer;
 @SuppressWarnings("restriction")
 @InterfaceAudience.Public
 public final class ByteBufferUtils {
-
   // "Compressed integer" serialization helper constants.
   public final static int VALUE_MASK = 0x7f;
   public final static int NEXT_BIT_SHIFT = 7;
   public final static int NEXT_BIT_MASK = 1 << 7;
   @VisibleForTesting
-  static boolean UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
+  final static boolean UNSAFE_AVAIL = UnsafeAvailChecker.isAvailable();
   @VisibleForTesting
-  static boolean UNSAFE_UNALIGNED = UnsafeAvailChecker.unaligned();
+  final static boolean UNSAFE_UNALIGNED = UnsafeAvailChecker.unaligned();
 
   private ByteBufferUtils() {
   }
@@ -404,12 +403,11 @@ public final class ByteBufferUtils {
 } else if (UNSAFE_AVAIL) {
   UnsafeAccess.copy(in, sourceOffset, out, destinationOffset, length);
 } else {
-  int outOldPos = out.position();
-  out.position(destinationOffset);
+  ByteBuffer outDup = out.duplicate();
+  outDup.position(destinationOffset);
   ByteBuffer inDup = in.duplicate();
   inDup.position(sourceOffset).limit(sourceOffset + length);
-  out.put(inDup);
-  out.position(outOldPos);
+  outDup.put(inDup);
 }
 return destinationOffset + length;
   }
@@ -990,7 +988,7 @@ public final class ByteBufferUtils {
 
   /**
* Copies bytes from given array's offset to length part into the given 
buffer. Puts the bytes
-   * to buffer's given position.
+   * to buffer's given position. This doesn't affect the position of the buffer.
* @param out
* @param in
* @param inOffset
@@ -1003,16 +1001,15 @@ public final class ByteBufferUtils {
 } else if (UNSAFE_AVAIL) {
   UnsafeAccess.copy(in, inOffset, out, outOffset, length);
 } else {
-  int oldPos = out.position();
-  out.position(outOffset);
-  out.put(in, inOffset, length);
-  out.position(oldPos);
+  ByteBuffer outDup = out.duplicate();
+  outDup.position(outOffset);
+  outDup.put(in, inOffset, length);
 }
   }
 
   /**
* Copies specified number of bytes from given offset of 'in' ByteBuffer to
-   * the array.
+   * the array. This doesn't affect the position of the buffer.
* @param out
* @param in
* @param sourceOffset
@@ -1026,10 +1023,9 @@ public final class ByteBufferUtils {
 } else if (UNSAFE_AVAIL) {
   UnsafeAccess.copy(in, sourceOffset, out, destinationOffset, length);
 } else {
-  int oldPos = in.position();
-  in.position(sourceOffset);
-  in.get(out, destinationOffset, length);
-  in.position(oldPos);
+  ByteBuffer inDup = in.duplicate();
+  inDup.position(sourceOffset);
+  inDup.get(out, destinationOffset, length);
 }
   }
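
The non-unsafe fallbacks above switch from saving and restoring the buffer's position to operating on a `duplicate()` view, so the shared buffer's cursor is never mutated. A minimal sketch of the technique — the class and method names here are illustrative, not the HBase ones:

```java
import java.nio.ByteBuffer;

public class DuplicateCopyDemo {
    // Copies length bytes from in[srcOff..] into out[dstOff..] without
    // touching the source buffer's position/limit, by working on a
    // duplicate view that shares content but has independent cursors.
    static void copyToArray(ByteBuffer in, int srcOff, byte[] out, int dstOff, int length) {
        ByteBuffer inDup = in.duplicate();   // same backing data, separate position/limit
        inDup.position(srcOff);
        inDup.get(out, dstOff, length);
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap("hello world".getBytes());
        src.position(3);                     // pretend another reader is mid-scan
        byte[] dst = new byte[5];
        copyToArray(src, 6, dst, 0, 5);
        System.out.println(new String(dst)); // world
        System.out.println(src.position());  // still 3
    }
}
```

The earlier save/restore approach had a race: between `position(offset)` and the restoring `position(oldPos)`, any concurrent user of the same buffer observed the wrong cursor. `duplicate()` removes that window entirely.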
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/df96d328/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
--
diff --git 
a/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
 
b/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
index 053fb24..ee03c7b 100644
--- 
a/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
+++ 
b/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java
@@ -27,14 +27,22 @@ import 

[19/50] [abbrv] hbase git commit: HBASE-17886 Fix compatibility of ServerSideScanMetrics

2017-04-17 Thread syuanjiang
HBASE-17886 Fix compatibility of ServerSideScanMetrics


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d7e3116a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d7e3116a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d7e3116a

Branch: refs/heads/hbase-12439
Commit: d7e3116a1744057359ca48d94aa50d7fdf0db974
Parents: 17737b2
Author: Yu Li 
Authored: Thu Apr 6 17:29:22 2017 +0800
Committer: Yu Li 
Committed: Thu Apr 6 17:29:22 2017 +0800

--
 .../hadoop/hbase/client/metrics/ServerSideScanMetrics.java  | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d7e3116a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java
index 8a96aeb..03764ed 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java
@@ -49,6 +49,11 @@ public class ServerSideScanMetrics {
   public static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = 
"ROWS_SCANNED";
   public static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = 
"ROWS_FILTERED";
 
+  /** @deprecated Use {@link #COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME} instead */
+  public static final String COUNT_OF_ROWS_SCANNED_KEY = 
COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
+  /** @deprecated Use {@link #COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME} instead 
*/
+  public static final String COUNT_OF_ROWS_FILTERED_KEY = 
COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
+
   /**
* number of rows filtered during scan RPC
*/
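
The compatibility fix keeps the old constant names as deprecated aliases of the renamed ones, so existing call sites keep compiling against the same values. A stripped-down sketch of the pattern — the class name is illustrative, and the `@Deprecated` annotation is added here for completeness (the hunk shows only the javadoc tag):

```java
public class ScanMetricsCompat {
    // Current names, as renamed in HBASE-17886.
    public static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = "ROWS_SCANNED";
    public static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = "ROWS_FILTERED";

    /** @deprecated Use {@link #COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME} instead */
    @Deprecated
    public static final String COUNT_OF_ROWS_SCANNED_KEY =
        COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;

    /** @deprecated Use {@link #COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME} instead */
    @Deprecated
    public static final String COUNT_OF_ROWS_FILTERED_KEY =
        COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;

    public static void main(String[] args) {
        // Old call sites see the same string values as the new names.
        System.out.println(COUNT_OF_ROWS_SCANNED_KEY.equals(COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME));
    }
}
```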



[03/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
index 47ab440..a5f81e6 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
@@ -305,7 +305,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (getLabelCount() > 0) {
 hash = (37 * hash) + LABEL_FIELD_NUMBER;
 hash = (53 * hash) + getLabelList().hashCode();
@@ -884,7 +884,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasExpression()) {
 hash = (37 * hash) + EXPRESSION_FIELD_NUMBER;
 hash = (53 * hash) + getExpression().hashCode();
@@ -1474,7 +1474,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasFamily()) {
 hash = (37 * hash) + FAMILY_FIELD_NUMBER;
 hash = (53 * hash) + getFamily().hashCode();
@@ -2776,7 +2776,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRow()) {
 hash = (37 * hash) + ROW_FIELD_NUMBER;
 hash = (53 * hash) + getRow().hashCode();
@@ -5132,7 +5132,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (getCellCount() > 0) {
 hash = (37 * hash) + CELL_FIELD_NUMBER;
 hash = (53 * hash) + getCellList().hashCode();
@@ -6313,7 +6313,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRegion()) {
 hash = (37 * hash) + REGION_FIELD_NUMBER;
 hash = (53 * hash) + getRegion().hashCode();
@@ -7048,7 +7048,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasResult()) {
 hash = (37 * hash) + RESULT_FIELD_NUMBER;
 hash = (53 * hash) + getResult().hashCode();
@@ -7831,7 +7831,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRow()) {
 hash = (37 * hash) + ROW_FIELD_NUMBER;
 hash = (53 * hash) + getRow().hashCode();
@@ -9553,7 +9553,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasQualifier()) {
 hash = (37 * hash) + QUALIFIER_FIELD_NUMBER;
 hash = (53 * hash) + getQualifier().hashCode();
@@ -10176,7 +10176,7 @@ public final class ClientProtos {
   return memoizedHashCode;
 }
 int hash = 41;
-hash = (19 * hash) + getDescriptorForType().hashCode();
+hash = (19 * hash) + getDescriptor().hashCode();
 if (hasFamily()) {
   hash = (37 * hash) + FAMILY_FIELD_NUMBER;
   hash = (53 * hash) + getFamily().hashCode();
@@ -11150,7 +11150,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRow()) {
 hash = (37 * hash) + ROW_FIELD_NUMBER;
 hash = (53 * hash) + getRow().hashCode();
@@ -12875,7 +12875,7 @@ public final class ClientProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * 
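
Every hunk above makes the same one-line change inside the generated `hashCode()`: seeding with `getDescriptor()` instead of `getDescriptorForType()`. The surrounding mixing pattern is the standard protobuf-generated one; a sketch with placeholder field data (`ROW_FIELD_NUMBER = 1` and the descriptor hash are assumptions, not taken from the hunks):

```java
public class ProtoHashDemo {
    static final int ROW_FIELD_NUMBER = 1; // hypothetical field number

    // Mirrors the generated pattern: seed 41, mix the descriptor hash (x19),
    // then for each present field mix its field number (x37) and value (x53).
    static int hashWithRow(int descriptorHash, String row) {
        int hash = 41;
        hash = (19 * hash) + descriptorHash;  // was getDescriptorForType() before the fix
        hash = (37 * hash) + ROW_FIELD_NUMBER;
        hash = (53 * hash) + row.hashCode();
        return hash;
    }

    public static void main(String[] args) {
        // Equal inputs always hash equally; differing values diverge.
        System.out.println(hashWithRow(7, "r1") == hashWithRow(7, "r1"));
    }
}
```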

[13/50] [abbrv] hbase git commit: HBASE-17857 Remove IS annotations from IA.Public classes

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
index 7aa807c..ed95a7d 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
@@ -24,7 +24,6 @@ import java.util.ArrayList;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException;
 import org.apache.hadoop.hbase.shaded.com.google.protobuf.UnsafeByteOperations;
@@ -40,7 +39,6 @@ import com.google.common.base.Preconditions;
  * Use this filter to include the stop row, eg: [A,Z].
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public class InclusiveStopFilter extends FilterBase {
   private byte [] stopRowKey;
   private boolean done = false;

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
index 8eba03c..6410ab4 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
@@ -19,13 +19,11 @@
 package org.apache.hadoop.hbase.filter;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 
 /**
  * Used to indicate a filter incompatibility
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public class IncompatibleFilterException extends RuntimeException {
   private static final long serialVersionUID = 3236763276623198231L;
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
index 81aae0b..0406058 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
@@ -19,13 +19,11 @@
 package org.apache.hadoop.hbase.filter;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 
 /**
  * Used to indicate an invalid RowFilter.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public class InvalidRowFilterException extends RuntimeException {
   private static final long serialVersionUID = 2667894046345657865L;
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a66d4918/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
index adbf304..b082941 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
@@ -27,7 +27,6 @@ import org.apache.hadoop.hbase.ByteBufferCell;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.FilterProtos;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -43,7 +42,6 @@ import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferE
  * the values.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Stable
 public class KeyOnlyFilter extends FilterBase {
 
   boolean lenAsVal;


[21/50] [abbrv] hbase git commit: HBASE-17863: Procedure V2: Some cleanup around Procedure.isFinished() and procedure executor

2017-04-17 Thread syuanjiang
HBASE-17863: Procedure V2: Some cleanup around Procedure.isFinished() and 
procedure executor

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/91098038
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/91098038
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/91098038

Branch: refs/heads/hbase-12439
Commit: 9109803891e256f8c047af72572f07695e604a3f
Parents: ec5188d
Author: Umesh Agashe 
Authored: Mon Apr 3 17:37:41 2017 -0700
Committer: Michael Stack 
Committed: Thu Apr 6 12:05:23 2017 -0700

--
 .../org/apache/hadoop/hbase/ProcedureState.java |  2 +-
 .../hadoop/hbase/procedure2/Procedure.java  | 48 
 .../hbase/procedure2/ProcedureExecutor.java | 32 ++---
 .../hadoop/hbase/procedure2/ProcedureUtil.java  |  2 +-
 .../hbase/procedure2/store/ProcedureStore.java  |  2 +-
 .../store/wal/ProcedureWALFormatReader.java | 19 
 .../procedure2/ProcedureTestingUtility.java |  4 +-
 ...ProcedureWALLoaderPerformanceEvaluation.java |  2 +-
 .../protobuf/generated/ProcedureProtos.java | 40 +++-
 .../src/main/protobuf/Procedure.proto   |  3 +-
 .../master/procedure/TestProcedureAdmin.java|  5 +-
 11 files changed, 92 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/91098038/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureState.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureState.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureState.java
index 5d95add..0080baa 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureState.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureState.java
@@ -24,5 +24,5 @@ import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
  */
 @InterfaceAudience.Public
 public enum ProcedureState {
-  INITIALIZING, RUNNABLE, WAITING, WAITING_TIMEOUT, ROLLEDBACK, FINISHED;
+  INITIALIZING, RUNNABLE, WAITING, WAITING_TIMEOUT, ROLLEDBACK, SUCCESS, 
FAILED;
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/91098038/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java
--
diff --git 
a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java
 
b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java
index 2a7fa6e..761ab3a 100644
--- 
a/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java
+++ 
b/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/Procedure.java
@@ -216,9 +216,9 @@ public abstract class Procedure implements 
Comparable {
   }
 
   /**
-   * By default, the executor will try ro run procedures start to finish.
+   * By default, the executor will try to run procedures start to finish.
* Return true to make the executor yield between each execution step to
-   * give other procedures time to run their steps.
+   * give other procedures a chance to run.
* @param env the environment passed to the ProcedureExecutor
* @return Return true if the executor should yield on completion of an 
execution step.
* Defaults to return false.
@@ -271,7 +271,7 @@ public abstract class Procedure implements 
Comparable {
 toStringState(sb);
 
 if (hasException()) {
-  sb.append(", failed=" + getException());
+  sb.append(", exception=" + getException());
 }
 
 sb.append(", ");
@@ -506,6 +506,25 @@ public abstract class Procedure implements 
Comparable {
   // 
==
 
   /**
+   * A procedure has states which are defined in the proto file. At some places in the code, we
+   * need to determine more about those states. The following methods help determine:
+   *
+   * {@link #isFailed()} - A procedure has executed at least once and has 
failed. The procedure
+   *   may or may not have rolled back yet. Any procedure 
in FAILED state
+   *   will be eventually moved to ROLLEDBACK state.
+   *
+   * {@link #isSuccess()} - A procedure is completed successfully without any 
exception.
+   *
+   * {@link #isFinished()} - A procedure in FAILED state will be retried for rollback
+   *                         indefinitely; the scheduler/executor drops a procedure from
+   *                         further processing only when its state is ROLLEDBACK or
+   *                         isSuccess() returns true. This is the terminal state of the
+   *                         procedure.
+   *
+   * 
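
The javadoc above distinguishes three predicates against the reworked state enum, where FINISHED was split into SUCCESS and FAILED. A minimal sketch of the rules as documented — purely illustrative (the real `Procedure` also consults its stored exception, and these helper signatures are assumptions):

```java
public class ProcedureStateRules {
    enum State { INITIALIZING, RUNNABLE, WAITING, WAITING_TIMEOUT, ROLLEDBACK, SUCCESS, FAILED }

    // Has executed at least once and failed; may still be awaiting rollback.
    static boolean isFailed(State s)  { return s == State.FAILED || s == State.ROLLEDBACK; }

    // Completed without any exception.
    static boolean isSuccess(State s) { return s == State.SUCCESS; }

    // Terminal: the executor drops the procedure only once it has rolled
    // back or succeeded; FAILED alone is NOT terminal.
    static boolean isFinished(State s) { return s == State.ROLLEDBACK || isSuccess(s); }

    public static void main(String[] args) {
        System.out.println(isFinished(State.FAILED));     // false - rollback still pending
        System.out.println(isFinished(State.ROLLEDBACK)); // true
    }
}
```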

[04/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/compiler/PluginProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/compiler/PluginProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/compiler/PluginProtos.java
index 42627bd..71975c2 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/compiler/PluginProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/compiler/PluginProtos.java
@@ -14,6 +14,879 @@ public final class PluginProtos {
 registerAllExtensions(
 
(org.apache.hadoop.hbase.shaded.com.google.protobuf.ExtensionRegistryLite) 
registry);
   }
+  public interface VersionOrBuilder extends
+  // 
@@protoc_insertion_point(interface_extends:google.protobuf.compiler.Version)
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.MessageOrBuilder {
+
+/**
+ * optional int32 major = 1;
+ */
+boolean hasMajor();
+/**
+ * optional int32 major = 1;
+ */
+int getMajor();
+
+/**
+ * optional int32 minor = 2;
+ */
+boolean hasMinor();
+/**
+ * optional int32 minor = 2;
+ */
+int getMinor();
+
+/**
+ * optional int32 patch = 3;
+ */
+boolean hasPatch();
+/**
+ * optional int32 patch = 3;
+ */
+int getPatch();
+
+/**
+ * 
+ * A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It 
should
+ * be empty for mainline stable releases.
+ * 
+ *
+ * optional string suffix = 4;
+ */
+boolean hasSuffix();
+/**
+ * 
+ * A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It 
should
+ * be empty for mainline stable releases.
+ * 
+ *
+ * optional string suffix = 4;
+ */
+java.lang.String getSuffix();
+/**
+ * 
+ * A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It 
should
+ * be empty for mainline stable releases.
+ * 
+ *
+ * optional string suffix = 4;
+ */
+org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString
+getSuffixBytes();
+  }
+  /**
+   * 
+   * The version number of protocol compiler.
+   * 
+   *
+   * Protobuf type {@code google.protobuf.compiler.Version}
+   */
+  public  static final class Version extends
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3 
implements
+  // 
@@protoc_insertion_point(message_implements:google.protobuf.compiler.Version)
+  VersionOrBuilder {
+// Use Version.newBuilder() to construct.
+private 
Version(org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.Builder
 builder) {
+  super(builder);
+}
+private Version() {
+  major_ = 0;
+  minor_ = 0;
+  patch_ = 0;
+  suffix_ = "";
+}
+
+@java.lang.Override
+public final 
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private Version(
+org.apache.hadoop.hbase.shaded.com.google.protobuf.CodedInputStream 
input,
+
org.apache.hadoop.hbase.shaded.com.google.protobuf.ExtensionRegistryLite 
extensionRegistry)
+throws 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException
 {
+  this();
+  int mutable_bitField0_ = 0;
+  
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.Builder 
unknownFields =
+  
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+case 8: {
+  bitField0_ |= 0x0001;
+  major_ = input.readInt32();
+  break;
+}
+case 16: {
+  bitField0_ |= 0x0002;
+  minor_ = input.readInt32();
+  break;
+}
+case 24: {
+  bitField0_ |= 0x0004;
+  patch_ = input.readInt32();
+  break;
+}
+case 34: {
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString bs 
= input.readBytes();
+  bitField0_ |= 0x0008;
+  suffix_ = bs;
+  break;
+}
+  }
+}
+  } catch 
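
The generated parsing loop above switches on raw wire tags, where a tag packs `(field_number << 3) | wire_type`. That is why `major` (field 1, a varint) matches case 8, `minor` case 16, `patch` case 24, and `suffix` (field 4, length-delimited) case 34. A quick check of the arithmetic:

```java
public class WireTagDemo {
    // Protobuf wire tag: field number in the high bits, wire type in the low 3.
    static int tag(int fieldNumber, int wireType) {
        return (fieldNumber << 3) | wireType;
    }

    public static void main(String[] args) {
        final int VARINT = 0, LENGTH_DELIMITED = 2;
        System.out.println(tag(1, VARINT));            // 8  -> major
        System.out.println(tag(2, VARINT));            // 16 -> minor
        System.out.println(tag(3, VARINT));            // 24 -> patch
        System.out.println(tag(4, LENGTH_DELIMITED));  // 34 -> suffix
    }
}
```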

[40/50] [abbrv] hbase git commit: Revert "HBASE-17906 When a huge amount of data writing to hbase through thrift2, there will be a deadlock error. (Albert Lee)" Mistaken commit.

2017-04-17 Thread syuanjiang
Revert "HBASE-17906 When a huge amount of data writing to hbase through 
thrift2, there will be a deadlock error. (Albert Lee)"
Mistaken commit.

This reverts commit 9dd5cda01747ffb91ac084792fa4a8670859e810.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0cd4cec5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0cd4cec5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0cd4cec5

Branch: refs/heads/hbase-12439
Commit: 0cd4cec5d24b5e7194a903e4d900f5558ed8b9a7
Parents: c846145
Author: Michael Stack 
Authored: Fri Apr 14 12:07:40 2017 -0700
Committer: Michael Stack 
Committed: Fri Apr 14 12:07:40 2017 -0700

--
 .../main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java   | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0cd4cec5/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
index 8f56b10..560ae64 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
@@ -432,6 +432,9 @@ public class ThriftServer extends Configured implements 
Tool {
   throw new RuntimeException("Could not parse the value provided for the 
port option", e);
 }
 
+// Thrift's implementation uses '0' as a placeholder for 'use the default.'
+int backlog = conf.getInt(BACKLOG_CONF_KEY, 0);
+
 // Local hostname and user name,
 // used only if QOP is configured.
 String host = null;
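
The re-added lines read the backlog with a default of 0 because, as the comment notes, Thrift (like `java.net.ServerSocket`) treats a non-positive backlog as "use the implementation default." A tiny sketch of that convention, with a plain map standing in for the Hadoop `Configuration` (the key name and default value below are illustrative assumptions):

```java
import java.util.Map;

public class BacklogDemo {
    static final int IMPL_DEFAULT = 50; // hypothetical library default

    // 0 (or anything non-positive) defers to the implementation default.
    static int effectiveBacklog(Map<String, Integer> conf, String key) {
        int configured = conf.getOrDefault(key, 0);
        return configured > 0 ? configured : IMPL_DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(effectiveBacklog(Map.of(), "thrift.backlog"));                      // 50
        System.out.println(effectiveBacklog(Map.of("thrift.backlog", 128), "thrift.backlog")); // 128
    }
}
```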



[08/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous 
they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO 
REBUILD PBs


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e916b79d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e916b79d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e916b79d

Branch: refs/heads/hbase-12439
Commit: e916b79db58bb9be806a833b2c0e675f1136c15a
Parents: 73e1bcd
Author: Michael Stack 
Authored: Mon Apr 3 15:26:11 2017 -0700
Committer: Michael Stack 
Committed: Mon Apr 3 15:26:11 2017 -0700

--
 .../google/protobuf/AbstractMessageLite.java|1 -
 .../com/google/protobuf/AbstractParser.java |2 +-
 .../hbase/shaded/com/google/protobuf/Any.java   |   26 +-
 .../com/google/protobuf/AnyOrBuilder.java   |6 +-
 .../hbase/shaded/com/google/protobuf/Api.java   |   68 +-
 .../com/google/protobuf/ApiOrBuilder.java   |   18 +-
 .../shaded/com/google/protobuf/ApiProto.java|7 +-
 .../shaded/com/google/protobuf/BoolValue.java   |   10 +-
 .../com/google/protobuf/BoolValueOrBuilder.java |2 +-
 .../com/google/protobuf/ByteBufferWriter.java   |   50 +-
 .../shaded/com/google/protobuf/ByteString.java  |   18 +-
 .../shaded/com/google/protobuf/BytesValue.java  |   10 +-
 .../google/protobuf/BytesValueOrBuilder.java|2 +-
 .../com/google/protobuf/CodedInputStream.java   |7 +-
 .../com/google/protobuf/DescriptorProtos.java   |  701 +--
 .../shaded/com/google/protobuf/Descriptors.java |2 +-
 .../shaded/com/google/protobuf/DoubleValue.java |   10 +-
 .../google/protobuf/DoubleValueOrBuilder.java   |2 +-
 .../shaded/com/google/protobuf/Duration.java|   18 +-
 .../com/google/protobuf/DurationOrBuilder.java  |4 +-
 .../com/google/protobuf/DynamicMessage.java |2 +-
 .../hbase/shaded/com/google/protobuf/Empty.java |2 +-
 .../hbase/shaded/com/google/protobuf/Enum.java  |   54 +-
 .../com/google/protobuf/EnumOrBuilder.java  |   14 +-
 .../shaded/com/google/protobuf/EnumValue.java   |   24 +-
 .../com/google/protobuf/EnumValueOrBuilder.java |6 +-
 .../com/google/protobuf/ExtensionRegistry.java  |1 -
 .../google/protobuf/ExtensionRegistryLite.java  |7 +-
 .../hbase/shaded/com/google/protobuf/Field.java |  110 +-
 .../shaded/com/google/protobuf/FieldMask.java   |2 +-
 .../com/google/protobuf/FieldMaskProto.java |7 +-
 .../com/google/protobuf/FieldOrBuilder.java |   30 +-
 .../shaded/com/google/protobuf/FieldSet.java|1 +
 .../shaded/com/google/protobuf/FloatValue.java  |   10 +-
 .../google/protobuf/FloatValueOrBuilder.java|2 +-
 .../google/protobuf/GeneratedMessageLite.java   |  163 ++-
 .../com/google/protobuf/GeneratedMessageV3.java |   26 +-
 .../shaded/com/google/protobuf/Int32Value.java  |   10 +-
 .../google/protobuf/Int32ValueOrBuilder.java|2 +-
 .../shaded/com/google/protobuf/Int64Value.java  |   10 +-
 .../google/protobuf/Int64ValueOrBuilder.java|2 +-
 .../com/google/protobuf/LazyFieldLite.java  |   25 +-
 .../shaded/com/google/protobuf/ListValue.java   |2 +-
 .../shaded/com/google/protobuf/MapEntry.java|2 +-
 .../com/google/protobuf/MapFieldLite.java   |4 +-
 .../google/protobuf/MessageLiteToString.java|4 +-
 .../shaded/com/google/protobuf/Method.java  |   74 +-
 .../com/google/protobuf/MethodOrBuilder.java|   20 +-
 .../hbase/shaded/com/google/protobuf/Mixin.java |   30 +-
 .../com/google/protobuf/MixinOrBuilder.java |8 +-
 .../shaded/com/google/protobuf/NullValue.java   |3 +-
 .../shaded/com/google/protobuf/Option.java  |  135 +-
 .../com/google/protobuf/OptionOrBuilder.java|   35 +-
 .../com/google/protobuf/SmallSortedMap.java |   21 +-
 .../com/google/protobuf/SourceContext.java  |   16 +-
 .../google/protobuf/SourceContextOrBuilder.java |4 +-
 .../com/google/protobuf/SourceContextProto.java |8 +-
 .../shaded/com/google/protobuf/StringValue.java |   16 +-
 .../google/protobuf/StringValueOrBuilder.java   |4 +-
 .../shaded/com/google/protobuf/Struct.java  |   14 +-
 .../shaded/com/google/protobuf/Syntax.java  |3 +-
 .../shaded/com/google/protobuf/Timestamp.java   |   18 +-
 .../com/google/protobuf/TimestampOrBuilder.java |4 +-
 .../hbase/shaded/com/google/protobuf/Type.java  |   54 +-
 .../com/google/protobuf/TypeOrBuilder.java  |   14 +-
 .../shaded/com/google/protobuf/TypeProto.java   |7 +-
 .../shaded/com/google/protobuf/UInt32Value.java |   10 +-
 .../google/protobuf/UInt32ValueOrBuilder.java   |2 +-
 .../shaded/com/google/protobuf/UInt64Value.java |   10 +-
 .../google/protobuf/UInt64ValueOrBuilder.java   |2 +-
 .../com/google/protobuf/UnknownFieldSet.java|   35 +-
 

[48/50] [abbrv] hbase git commit: HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)

2017-04-17 Thread syuanjiang
HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c2c2178b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c2c2178b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c2c2178b

Branch: refs/heads/hbase-12439
Commit: c2c2178b2eebe4439eadec6b37fae2566944c16b
Parents: c8cd921
Author: Ramkrishna 
Authored: Mon Apr 17 09:10:59 2017 +0530
Committer: Ramkrishna 
Committed: Mon Apr 17 09:28:24 2017 +0530

--
 .../java/org/apache/hadoop/hbase/CellUtil.java  |  24 --
 .../org/apache/hadoop/hbase/ExtendedCell.java   |  10 +
 .../org/apache/hadoop/hbase/master/HMaster.java |   2 +
 .../hbase/regionserver/ByteBufferChunkCell.java |  48 +++
 .../apache/hadoop/hbase/regionserver/Chunk.java |  60 ++-
 .../hadoop/hbase/regionserver/ChunkCreator.java | 404 +++
 .../hbase/regionserver/HRegionServer.java   |  14 +-
 .../hbase/regionserver/MemStoreChunkPool.java   | 265 
 .../hadoop/hbase/regionserver/MemStoreLAB.java  |   4 +-
 .../hbase/regionserver/MemStoreLABImpl.java | 171 
 .../regionserver/NoTagByteBufferChunkCell.java  |  48 +++
 .../hadoop/hbase/regionserver/OffheapChunk.java |  31 +-
 .../hadoop/hbase/regionserver/OnheapChunk.java  |  32 +-
 .../hadoop/hbase/HBaseTestingUtility.java   |   3 +
 .../coprocessor/TestCoprocessorInterface.java   |   4 +
 .../TestRegionObserverScannerOpenHook.java  |   3 +
 .../coprocessor/TestRegionObserverStacking.java |   3 +
 .../io/hfile/TestScannerFromBucketCache.java|   3 +
 .../hadoop/hbase/master/TestCatalogJanitor.java |   7 +
 .../hadoop/hbase/regionserver/TestBulkLoad.java |   2 +-
 .../hbase/regionserver/TestCellFlatSet.java |   2 +-
 .../regionserver/TestCompactingMemStore.java|  37 +-
 .../TestCompactingToCellArrayMapMemStore.java   |  16 +-
 .../TestCompactionArchiveConcurrentClose.java   |   1 +
 .../TestCompactionArchiveIOException.java   |   1 +
 .../regionserver/TestCompactionPolicy.java  |   1 +
 .../hbase/regionserver/TestDefaultMemStore.java |  14 +-
 .../regionserver/TestFailedAppendAndSync.java   |   1 +
 .../hbase/regionserver/TestHMobStore.java   |   2 +-
 .../hadoop/hbase/regionserver/TestHRegion.java  |   2 +
 .../regionserver/TestHRegionReplayEvents.java   |   2 +-
 .../regionserver/TestMemStoreChunkPool.java |  48 +--
 .../hbase/regionserver/TestMemStoreLAB.java |  27 +-
 .../TestMemstoreLABWithoutPool.java | 168 
 .../hbase/regionserver/TestRecoveredEdits.java  |   1 +
 .../hbase/regionserver/TestRegionIncrement.java |   1 +
 .../hadoop/hbase/regionserver/TestStore.java|   1 +
 .../TestStoreFileRefresherChore.java|   1 +
 .../hbase/regionserver/TestWALLockup.java   |   1 +
 .../TestWALMonotonicallyIncreasingSeqId.java|   1 +
 .../hbase/regionserver/wal/TestDurability.java  |   3 +
 41 files changed, 990 insertions(+), 479 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
index e1bc969..56de21b 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
@@ -3135,28 +3135,4 @@ public final class CellUtil {
   return Type.DeleteFamily.getCode();
 }
   }
-
-  /**
-   * Clone the passed cell by copying its data into the passed buf.
-   */
-  public static Cell copyCellTo(Cell cell, ByteBuffer buf, int offset, int len) {
-    int tagsLen = cell.getTagsLength();
-    if (cell instanceof ExtendedCell) {
-      ((ExtendedCell) cell).write(buf, offset);
-    } else {
-      // Normally all Cell impls within Server will be of type ExtendedCell. Just considering the
-      // other case also. The data fragments within Cell is copied into buf as in KeyValue
-      // serialization format only.
-      KeyValueUtil.appendTo(cell, buf, offset, true);
-    }
-    if (tagsLen == 0) {
-      // When tagsLen is 0, make a NoTagsByteBufferKeyValue version. This is an optimized class
-      // which directly return tagsLen as 0. So we avoid parsing many length components in
-      // reading the tagLength stored in the backing buffer. The Memstore addition of every Cell
-      // call getTagsLength().
-      return new NoTagsByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
-    } else {
-      return new ByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
-    }
-  }
 }
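The `copyCellTo` method removed from `CellUtil` here picks a cheaper cell class when the cell carries no tags, so that later `getTagsLength()` calls return a constant instead of re-parsing length fields in the backing buffer. A stripped-down, hypothetical sketch of that specialization pattern — the class and method names below are illustrative stand-ins, not HBase's:

```java
public class NoTagsSketch {
    interface SimpleCell { int getTagsLength(); }

    // General case: tag length must be decoded from the serialized form.
    static class BufferCell implements SimpleCell {
        private final int tagsLen;
        BufferCell(int tagsLen) { this.tagsLen = tagsLen; }
        public int getTagsLength() { return tagsLen; } // stands in for buffer parsing
    }

    // Optimized case: no tags, so answer without touching the buffer at all.
    static class NoTagsBufferCell extends BufferCell {
        NoTagsBufferCell() { super(0); }
        @Override public int getTagsLength() { return 0; }
    }

    // Mirrors copyCellTo's choice: specialize when tagsLen == 0.
    static SimpleCell copyCell(int tagsLen) {
        return tagsLen == 0 ? new NoTagsBufferCell() : new BufferCell(tagsLen);
    }

    public static void main(String[] args) {
        if (!(copyCell(0) instanceof NoTagsBufferCell)) throw new AssertionError();
        if (copyCell(5).getTagsLength() != 5) throw new AssertionError();
        System.out.println("ok");
    }
}
```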


[42/50] [abbrv] hbase git commit: HBASE-17866: Implement async setQuota/getQuota methods

2017-04-17 Thread syuanjiang
HBASE-17866: Implement async setQuota/getQuota methods

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8db97603
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8db97603
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8db97603

Branch: refs/heads/hbase-12439
Commit: 8db9760363890d4d0bfaba25ae6797d45aaf7fec
Parents: 7678855
Author: huzheng 
Authored: Fri Apr 14 14:51:38 2017 +0800
Committer: Guanghao Zhang 
Committed: Mon Apr 17 09:49:30 2017 +0800

--
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  16 ++
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|  47 +
 .../hadoop/hbase/quotas/QuotaRetriever.java |  32 +--
 .../hadoop/hbase/quotas/QuotaTableUtil.java |  32 +++
 .../hbase/client/TestAsyncQuotaAdminApi.java| 207 +++
 5 files changed, 306 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8db97603/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index ab791c2..270f28f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hbase.client;
 
+import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.regex.Pattern;
 
@@ -27,6 +28,8 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.quotas.QuotaFilter;
+import org.apache.hadoop.hbase.quotas.QuotaSettings;
 import org.apache.hadoop.hbase.util.Pair;
 
 /**
@@ -465,4 +468,17 @@ public interface AsyncAdmin {
   *  startcode. Here is an example: host187.example.com,60020,1289493121758
   */
  CompletableFuture<Void> move(final byte[] regionName, final byte[] destServerName);
+
+  /**
+   * Apply the new quota settings.
+   * @param quota the quota settings
+   */
+  CompletableFuture<Void> setQuota(final QuotaSettings quota);
+
+  /**
+   * List the quotas based on the filter.
+   * @param filter the quota settings filter
+   * @return the QuotaSettings list, wrapped in a CompletableFuture.
+   */
+  CompletableFuture<List<QuotaSettings>> getQuota(QuotaFilter filter);
 }
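Both new methods return `CompletableFuture`, the style used throughout `AsyncAdmin`: the call returns immediately and the result (or error) is delivered when the underlying RPC completes. A stdlib-only sketch of that calling pattern, with a stubbed async call standing in for the real `setQuota`/`getQuota` RPCs — all names below are illustrative, not the HBase API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncQuotaSketch {
    // Stand-in for an AsyncAdmin-style method: returns at once, completes later.
    static CompletableFuture<List<String>> getQuotaStub(String filter) {
        return CompletableFuture.supplyAsync(
            () -> List.of("quota-for-" + filter)); // pretend RPC result
    }

    public static void main(String[] args) {
        // Compose on the future; nothing blocks until join() at the end.
        CompletableFuture<Void> done = getQuotaStub("ns1")
            .thenAccept(quotas -> System.out.println("got " + quotas.size() + " quota(s)"));
        done.join(); // in real code you would keep composing instead of blocking
    }
}
```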

http://git-wip-us.apache.org/repos/asf/hbase/blob/8db97603/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
index e42ee57..180cd19 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
@@ -56,6 +56,9 @@ import org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterReques
 import org.apache.hadoop.hbase.client.Scan.ReadType;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.quotas.QuotaFilter;
+import org.apache.hadoop.hbase.quotas.QuotaSettings;
+import org.apache.hadoop.hbase.quotas.QuotaTableUtil;
 import org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
@@ -112,6 +115,8 @@ import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineReg
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineRegionResponse;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetBalancerRunningRequest;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetBalancerRunningResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaResponse;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableRequest;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponse;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.UnassignRegionRequest;
@@ -1149,6 +1154,48 @@ public class 

[07/50] [abbrv] hbase git commit: HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit Update internal pb to 3.2 from 3.1.; AMENDMENT -- FORGOT TO REBUILD

2017-04-17 Thread syuanjiang
http://git-wip-us.apache.org/repos/asf/hbase/blob/e916b79d/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java
--
diff --git a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java
index 99dfec2..0468e6c 100644
--- a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java
+++ b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/com/google/protobuf/DescriptorProtos.java
@@ -223,7 +223,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (getFileCount() > 0) {
 hash = (37 * hash) + FILE_FIELD_NUMBER;
 hash = (53 * hash) + getFileList().hashCode();
@@ -2062,7 +2062,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasName()) {
 hash = (37 * hash) + NAME_FIELD_NUMBER;
 hash = (53 * hash) + getName().hashCode();
@@ -5283,7 +5283,7 @@ public final class DescriptorProtos {
   return memoizedHashCode;
 }
 int hash = 41;
-hash = (19 * hash) + getDescriptorForType().hashCode();
+hash = (19 * hash) + getDescriptor().hashCode();
 if (hasStart()) {
   hash = (37 * hash) + START_FIELD_NUMBER;
   hash = (53 * hash) + getStart();
@@ -5874,7 +5874,7 @@ public final class DescriptorProtos {
   return memoizedHashCode;
 }
 int hash = 41;
-hash = (19 * hash) + getDescriptorForType().hashCode();
+hash = (19 * hash) + getDescriptor().hashCode();
 if (hasStart()) {
   hash = (37 * hash) + START_FIELD_NUMBER;
   hash = (53 * hash) + getStart();
@@ -6803,7 +6803,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasName()) {
 hash = (37 * hash) + NAME_FIELD_NUMBER;
 hash = (53 * hash) + getName().hashCode();
@@ -9930,6 +9930,9 @@ public final class DescriptorProtos {
   /**
* 
* Tag-delimited aggregate.
+   * Group type is deprecated and not supported in proto3. However, Proto3
+   * implementations should still be able to parse the group wire format and
+   * treat group fields as unknown fields.
* 
*
* TYPE_GROUP = 10;
@@ -10039,6 +10042,9 @@ public final class DescriptorProtos {
   /**
* 
* Tag-delimited aggregate.
+   * Group type is deprecated and not supported in proto3. However, Proto3
+   * implementations should still be able to parse the group wire format and
+   * treat group fields as unknown fields.
* 
*
* TYPE_GROUP = 10;
@@ -10193,10 +10199,6 @@ public final class DescriptorProtos {
*/
   LABEL_REQUIRED(2),
   /**
-   * 
-   * TODO(sanjay): Should we add LABEL_MAP?
-   * 
-   *
* LABEL_REPEATED = 3;
*/
   LABEL_REPEATED(3),
@@ -10215,10 +10217,6 @@ public final class DescriptorProtos {
*/
   public static final int LABEL_REQUIRED_VALUE = 2;
   /**
-   * 
-   * TODO(sanjay): Should we add LABEL_MAP?
-   * 
-   *
* LABEL_REPEATED = 3;
*/
   public static final int LABEL_REPEATED_VALUE = 3;
@@ -10854,7 +10852,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasName()) {
 hash = (37 * hash) + NAME_FIELD_NUMBER;
 hash = (53 * hash) + getName().hashCode();
@@ -12376,7 +12374,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasName()) {
 hash = (37 * hash) + NAME_FIELD_NUMBER;
 hash = (53 * hash) + getName().hashCode();
@@ -13225,7 +13223,7 @@ public final class DescriptorProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasName()) {
 hash = (37 * hash) + 
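The hunks above all apply one mechanical change to the protobuf-generated `hashCode()`: seed from `getDescriptor()` instead of `getDescriptorForType()`, then fold each present field in with the generated code's 19/37/53 multiplier scheme. The accumulation itself looks like the sketch below — the field number and hash inputs are made-up stand-ins, not values from the real generated classes:

```java
public class ProtoHashSketch {
    static final int NAME_FIELD_NUMBER = 1;

    // Same shape as the generated code: 41 seed, descriptor, then per-field mixing.
    static int messageHash(int descriptorHash, boolean hasName, int nameHash) {
        int hash = 41;
        hash = (19 * hash) + descriptorHash;        // was getDescriptorForType() before the fix
        if (hasName) {
            hash = (37 * hash) + NAME_FIELD_NUMBER; // field tag
            hash = (53 * hash) + nameHash;          // field value
        }
        return hash;
    }

    public static void main(String[] args) {
        System.out.println(messageHash(7, true, 100));
    }
}
```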

[45/50] [abbrv] hbase git commit: HBASE-17447 Implement a MasterObserver for automatically deleting space quotas

2017-04-17 Thread elserj
HBASE-17447 Implement a MasterObserver for automatically deleting space quotas

When a table or namespace is deleted, it would be nice to automatically
delete the quota on said table/NS. It's possible that not all people
would want this functionality so we can leave it up to the user to
configure this Observer.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2524f8cd
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2524f8cd
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2524f8cd

Branch: refs/heads/HBASE-16961
Commit: 2524f8cd26422ff7a44cca2aa6c913f713c1d530
Parents: abe1c06
Author: Josh Elser 
Authored: Thu Mar 16 18:54:01 2017 -0400
Committer: Josh Elser 
Committed: Mon Apr 17 15:47:49 2017 -0400

--
 .../hbase/quotas/MasterSpaceQuotaObserver.java  |  85 ++
 .../quotas/TestMasterSpaceQuotaObserver.java| 169 +++
 src/main/asciidoc/_chapters/ops_mgt.adoc|  17 ++
 3 files changed, 271 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2524f8cd/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java
new file mode 100644
index 000..a3abf32
--- /dev/null
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterSpaceQuotaObserver.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.MasterObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
+
+/**
+ * An observer to automatically delete space quotas when a table/namespace
+ * are deleted.
+ */
+@InterfaceAudience.Private
+public class MasterSpaceQuotaObserver implements MasterObserver {
+  private CoprocessorEnvironment cpEnv;
+  private Configuration conf;
+  private boolean quotasEnabled = false;
+
+  @Override
+  public void start(CoprocessorEnvironment ctx) throws IOException {
+    this.cpEnv = ctx;
+    this.conf = cpEnv.getConfiguration();
+    this.quotasEnabled = QuotaUtil.isQuotaEnabled(conf);
+  }
+
+  @Override
+  public void postDeleteTable(
+      ObserverContext<MasterCoprocessorEnvironment> ctx, TableName tableName) throws IOException {
+    // Do nothing if quotas aren't enabled
+    if (!quotasEnabled) {
+      return;
+    }
+    final MasterServices master = ctx.getEnvironment().getMasterServices();
+    final Connection conn = master.getConnection();
+    Quotas quotas = QuotaUtil.getTableQuota(master.getConnection(), tableName);
+    if (null != quotas && quotas.hasSpace()) {
+      QuotaSettings settings = QuotaSettingsFactory.removeTableSpaceLimit(tableName);
+      try (Admin admin = conn.getAdmin()) {
+        admin.setQuota(settings);
+      }
+    }
+  }
+
+  @Override
+  public void postDeleteNamespace(
+      ObserverContext<MasterCoprocessorEnvironment> ctx, String namespace) throws IOException {
+    // Do nothing if quotas aren't enabled
+    if (!quotasEnabled) {
+      return;
+    }
+    final MasterServices master = ctx.getEnvironment().getMasterServices();
+    final Connection conn = master.getConnection();
+    Quotas quotas = QuotaUtil.getNamespaceQuota(master.getConnection(), namespace);
+    if (null != quotas && quotas.hasSpace()) {
+  
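As the commit message notes, the observer is opt-in. Master coprocessors are normally registered through the `hbase.coprocessor.master.classes` property in `hbase-site.xml`; a sketch of what enabling this observer would look like (the authoritative snippet is in the `ops_mgt.adoc` section this commit adds):

```xml
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.quotas.MasterSpaceQuotaObserver</value>
</property>
```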

[23/50] [abbrv] hbase git commit: HBASE-16995 Build client Java API and client protobuf messages (Josh Elser)

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/990062a9/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
--
diff --git a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
index 01ba8f6..e3c6bfd 100644
--- a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
+++ b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
@@ -239,12 +239,20 @@ public final class QuotaProtos {
  * THROTTLE = 1;
  */
 THROTTLE(1),
+/**
+ * SPACE = 2;
+ */
+SPACE(2),
 ;
 
 /**
  * THROTTLE = 1;
  */
 public static final int THROTTLE_VALUE = 1;
+/**
+ * SPACE = 2;
+ */
+public static final int SPACE_VALUE = 2;
 
 
 public final int getNumber() {
@@ -262,6 +270,7 @@ public final class QuotaProtos {
 public static QuotaType forNumber(int value) {
   switch (value) {
 case 1: return THROTTLE;
+case 2: return SPACE;
 default: return null;
   }
 }
@@ -311,6 +320,150 @@ public final class QuotaProtos {
 // @@protoc_insertion_point(enum_scope:hbase.pb.QuotaType)
   }
 
+  /**
+   * 
+   * Defines what action should be taken when the SpaceQuota is violated
+   * 
+   *
+   * Protobuf enum {@code hbase.pb.SpaceViolationPolicy}
+   */
+  public enum SpaceViolationPolicy
+      implements org.apache.hadoop.hbase.shaded.com.google.protobuf.ProtocolMessageEnum {
+/**
+ * 
+ * Disable the table(s)
+ * 
+ *
+ * DISABLE = 1;
+ */
+DISABLE(1),
+/**
+ * 
+ * No writes, bulk-loads, or compactions
+ * 
+ *
+ * NO_WRITES_COMPACTIONS = 2;
+ */
+NO_WRITES_COMPACTIONS(2),
+/**
+ * 
+ * No writes or bulk-loads
+ * 
+ *
+ * NO_WRITES = 3;
+ */
+NO_WRITES(3),
+/**
+ * 
+ * No puts or bulk-loads, but deletes are allowed
+ * 
+ *
+ * NO_INSERTS = 4;
+ */
+NO_INSERTS(4),
+;
+
+/**
+ * 
+ * Disable the table(s)
+ * 
+ *
+ * DISABLE = 1;
+ */
+public static final int DISABLE_VALUE = 1;
+/**
+ * 
+ * No writes, bulk-loads, or compactions
+ * 
+ *
+ * NO_WRITES_COMPACTIONS = 2;
+ */
+public static final int NO_WRITES_COMPACTIONS_VALUE = 2;
+/**
+ * 
+ * No writes or bulk-loads
+ * 
+ *
+ * NO_WRITES = 3;
+ */
+public static final int NO_WRITES_VALUE = 3;
+/**
+ * 
+ * No puts or bulk-loads, but deletes are allowed
+ * 
+ *
+ * NO_INSERTS = 4;
+ */
+public static final int NO_INSERTS_VALUE = 4;
+
+
+public final int getNumber() {
+  return value;
+}
+
+/**
+ * @deprecated Use {@link #forNumber(int)} instead.
+ */
+@java.lang.Deprecated
+public static SpaceViolationPolicy valueOf(int value) {
+  return forNumber(value);
+}
+
+public static SpaceViolationPolicy forNumber(int value) {
+  switch (value) {
+case 1: return DISABLE;
+case 2: return NO_WRITES_COMPACTIONS;
+case 3: return NO_WRITES;
+case 4: return NO_INSERTS;
+default: return null;
+  }
+}
+
+    public static org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.EnumLiteMap<SpaceViolationPolicy>
+        internalGetValueMap() {
+      return internalValueMap;
+    }
+    private static final org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.EnumLiteMap<
+        SpaceViolationPolicy> internalValueMap =
+          new org.apache.hadoop.hbase.shaded.com.google.protobuf.Internal.EnumLiteMap<SpaceViolationPolicy>() {
+            public SpaceViolationPolicy findValueByNumber(int number) {
+              return SpaceViolationPolicy.forNumber(number);
+            }
+          };
+
+    public final org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.EnumValueDescriptor
+        getValueDescriptor() {
+      return getDescriptor().getValues().get(ordinal());
+    }
+    public final org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.EnumDescriptor
+        getDescriptorForType() {
+      return getDescriptor();
+    }
+    public static final org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.EnumDescriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.getDescriptor().getEnumTypes().get(3);
+    }
+
+    private static final SpaceViolationPolicy[] VALUES = values();
+
+    public static SpaceViolationPolicy valueOf(
+        org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.EnumValueDescriptor desc) {
+      if (desc.getType() != getDescriptor()) {
+        throw new 
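The generated `SpaceViolationPolicy` enum above pairs each policy with a stable wire number and resolves unknown numbers to `null` via `forNumber`. The same pattern in plain Java, mirroring the values shown in the diff — the class below is a standalone sketch, not the generated code:

```java
public class PolicySketch {
    enum SpaceViolationPolicy {
        DISABLE(1), NO_WRITES_COMPACTIONS(2), NO_WRITES(3), NO_INSERTS(4);

        private final int value;
        SpaceViolationPolicy(int value) { this.value = value; }
        int getNumber() { return value; }

        // Unknown wire numbers map to null, as in the generated code.
        static SpaceViolationPolicy forNumber(int value) {
            switch (value) {
                case 1: return DISABLE;
                case 2: return NO_WRITES_COMPACTIONS;
                case 3: return NO_WRITES;
                case 4: return NO_INSERTS;
                default: return null;
            }
        }
    }

    public static void main(String[] args) {
        if (SpaceViolationPolicy.forNumber(3) != SpaceViolationPolicy.NO_WRITES)
            throw new AssertionError();
        if (SpaceViolationPolicy.forNumber(9) != null)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```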

[41/50] [abbrv] hbase git commit: HBASE-17428 Implement informational RPCs for space quotas

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/095fabf1/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
--
diff --git a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
index e90c934..c70b736 100644
--- a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
+++ b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
@@ -10429,7 +10429,7 @@ public final class RegionServerStatusProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasRegion()) {
 hash = (37 * hash) + REGION_FIELD_NUMBER;
 hash = (53 * hash) + getRegion().hashCode();
@@ -10824,7 +10824,7 @@ public final class RegionServerStatusProtos {
* optional .hbase.pb.RegionInfo region = 1;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo, 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfoOrBuilder>
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo, 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfoOrBuilder>
 
   getRegionFieldBuilder() {
 if (regionBuilder_ == null) {
   regionBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -10940,7 +10940,7 @@ public final class RegionServerStatusProtos {
 /**
  * repeated .hbase.pb.RegionSpaceUse space_use = 1;
  */
-
java.util.List
+
java.util.List
 
 getSpaceUseList();
 /**
  * repeated .hbase.pb.RegionSpaceUse space_use = 1;
@@ -10953,7 +10953,7 @@ public final class RegionServerStatusProtos {
 /**
  * repeated .hbase.pb.RegionSpaceUse space_use = 1;
  */
-java.util.List
+java.util.List
 
 getSpaceUseOrBuilderList();
 /**
  * repeated .hbase.pb.RegionSpaceUse space_use = 1;
@@ -11056,7 +11056,7 @@ public final class RegionServerStatusProtos {
 /**
  * repeated .hbase.pb.RegionSpaceUse space_use = 1;
  */
-public java.util.List
+public java.util.List
 
 getSpaceUseOrBuilderList() {
   return spaceUse_;
 }
@@ -11142,7 +11142,7 @@ public final class RegionServerStatusProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (getSpaceUseCount() > 0) {
 hash = (37 * hash) + SPACE_USE_FIELD_NUMBER;
 hash = (53 * hash) + getSpaceUseList().hashCode();
@@ -11368,7 +11368,7 @@ public final class RegionServerStatusProtos {
   spaceUseBuilder_ = null;
   spaceUse_ = other.spaceUse_;
   bitField0_ = (bitField0_ & ~0x0001);
-  spaceUseBuilder_ =
+  spaceUseBuilder_ = 
 
org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.alwaysUseFieldBuilders
 ?
getSpaceUseFieldBuilder() : null;
 } else {
@@ -11604,7 +11604,7 @@ public final class RegionServerStatusProtos {
   /**
* repeated .hbase.pb.RegionSpaceUse space_use = 1;
*/
-  public java.util.List
+  public java.util.List
 
getSpaceUseOrBuilderList() {
 if (spaceUseBuilder_ != null) {
   return spaceUseBuilder_.getMessageOrBuilderList();
@@ -11630,12 +11630,12 @@ public final class RegionServerStatusProtos {
   /**
* repeated .hbase.pb.RegionSpaceUse space_use = 1;
*/
-  public 
java.util.List
+  public 
java.util.List
 
getSpaceUseBuilderList() {
 return getSpaceUseFieldBuilder().getBuilderList();
   }
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.RepeatedFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUse,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUse.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUseOrBuilder>
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUse,
 

[48/50] [abbrv] hbase git commit: HBASE-17002 JMX metrics and some UI additions for space quotas

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-protocol-shaded/src/main/protobuf/Master.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/Master.proto b/hbase-protocol-shaded/src/main/protobuf/Master.proto
index 3318a39..beb8f02 100644
--- a/hbase-protocol-shaded/src/main/protobuf/Master.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/Master.proto
@@ -930,7 +930,11 @@ service MasterService {
   rpc removeDrainFromRegionServers(RemoveDrainFromRegionServersRequest)
 returns(RemoveDrainFromRegionServersResponse);
 
-  /** Fetches the Master's view of space quotas */
+  /** Fetches the Master's view of space utilization */
   rpc GetSpaceQuotaRegionSizes(GetSpaceQuotaRegionSizesRequest)
 returns(GetSpaceQuotaRegionSizesResponse);
+
+  /** Fetches the Master's view of quotas */
+  rpc GetQuotaStates(GetQuotaStatesRequest)
+returns(GetQuotaStatesResponse);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-protocol-shaded/src/main/protobuf/Quota.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/Quota.proto b/hbase-protocol-shaded/src/main/protobuf/Quota.proto
index 2d7e5f5..1a6d5ed 100644
--- a/hbase-protocol-shaded/src/main/protobuf/Quota.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/Quota.proto
@@ -119,6 +119,7 @@ message GetSpaceQuotaRegionSizesResponse {
   message RegionSizes {
 optional TableName table_name = 1;
 optional uint64 size = 2;
+
   }
   repeated RegionSizes sizes = 1;
 }
@@ -146,3 +147,19 @@ message GetSpaceQuotaEnforcementsResponse {
   }
   repeated TableViolationPolicy violation_policies = 1;
 }
+
+message GetQuotaStatesRequest {
+}
+
+message GetQuotaStatesResponse {
+  message TableQuotaSnapshot {
+optional TableName table_name = 1;
+optional SpaceQuotaSnapshot snapshot = 2;
+  }
+  message NamespaceQuotaSnapshot {
+optional string namespace = 1;
+optional SpaceQuotaSnapshot snapshot = 2;
+  }
+  repeated TableQuotaSnapshot table_snapshots = 1;
+  repeated NamespaceQuotaSnapshot ns_snapshots = 2;
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 0243934..92c1461 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -908,7 +908,7 @@ public class HMaster extends HRegionServer implements 
MasterServices {
   // Create the quota snapshot notifier
   spaceQuotaSnapshotNotifier = createQuotaSnapshotNotifier();
   spaceQuotaSnapshotNotifier.initialize(getClusterConnection());
-  this.quotaObserverChore = new QuotaObserverChore(this);
+  this.quotaObserverChore = new QuotaObserverChore(this, 
getMasterMetrics());
   // Start the chore to read the region FS space reports and act on them
   getChoreService().scheduleChore(quotaObserverChore);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 3d72a3e..8f796bb 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -62,7 +62,9 @@ import 
org.apache.hadoop.hbase.procedure.MasterProcedureManager;
 import org.apache.hadoop.hbase.procedure2.Procedure;
 import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
 import org.apache.hadoop.hbase.quotas.MasterQuotaManager;
+import org.apache.hadoop.hbase.quotas.QuotaObserverChore;
 import org.apache.hadoop.hbase.quotas.QuotaUtil;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot;
 import org.apache.hadoop.hbase.regionserver.RSRpcServices;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -214,8 +216,12 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTa
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.UnassignRegionRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.UnassignRegionResponse;
+import 

[30/50] [abbrv] hbase git commit: HBASE-17001 Enforce quota violation policies in the RegionServer

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceViolationPolicyEnforcementFactory.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceViolationPolicyEnforcementFactory.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceViolationPolicyEnforcementFactory.java
new file mode 100644
index 000..6b754b9
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceViolationPolicyEnforcementFactory.java
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.classification.InterfaceStability;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot.SpaceQuotaStatus;
+import 
org.apache.hadoop.hbase.quotas.policies.BulkLoadVerifyingViolationPolicyEnforcement;
+import 
org.apache.hadoop.hbase.quotas.policies.DisableTableViolationPolicyEnforcement;
+import 
org.apache.hadoop.hbase.quotas.policies.NoInsertsViolationPolicyEnforcement;
+import 
org.apache.hadoop.hbase.quotas.policies.NoWritesCompactionsViolationPolicyEnforcement;
+import 
org.apache.hadoop.hbase.quotas.policies.NoWritesViolationPolicyEnforcement;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+
+/**
+ * A factory class for instantiating {@link SpaceViolationPolicyEnforcement} 
instances.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class SpaceViolationPolicyEnforcementFactory {
+
+  private static final SpaceViolationPolicyEnforcementFactory INSTANCE =
+  new SpaceViolationPolicyEnforcementFactory();
+
+  private SpaceViolationPolicyEnforcementFactory() {}
+
+  /**
+   * Returns an instance of this factory.
+   */
+  public static SpaceViolationPolicyEnforcementFactory getInstance() {
+return INSTANCE;
+  }
+
+  /**
+   * Constructs the appropriate {@link SpaceViolationPolicyEnforcement} for 
tables that are
+   * in violation of their space quota.
+   */
+  public SpaceViolationPolicyEnforcement create(
+  RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot 
snapshot) {
+SpaceViolationPolicyEnforcement enforcement;
+SpaceQuotaStatus status = snapshot.getQuotaStatus();
+if (!status.isInViolation()) {
+  throw new IllegalArgumentException(tableName + " is not in violation. 
Snapshot=" + snapshot);
+}
+switch (status.getPolicy()) {
+  case DISABLE:
+enforcement = new DisableTableViolationPolicyEnforcement();
+break;
+  case NO_WRITES_COMPACTIONS:
+enforcement = new NoWritesCompactionsViolationPolicyEnforcement();
+break;
+  case NO_WRITES:
+enforcement = new NoWritesViolationPolicyEnforcement();
+break;
+  case NO_INSERTS:
+enforcement = new NoInsertsViolationPolicyEnforcement();
+break;
+  default:
+throw new IllegalArgumentException("Unhandled SpaceViolationPolicy: " 
+ status.getPolicy());
+}
+enforcement.initialize(rss, tableName, snapshot);
+return enforcement;
+  }
+
+  /**
+   * Creates the "default" {@link SpaceViolationPolicyEnforcement} for a table 
that isn't in
+   * violation. This is used to have uniform policy checking for tables in and not in violation.
+   */
+  public SpaceViolationPolicyEnforcement createWithoutViolation(
+  RegionServerServices rss, TableName tableName, SpaceQuotaSnapshot 
snapshot) {
+SpaceQuotaStatus status = snapshot.getQuotaStatus();
+if (status.isInViolation()) {
+  throw new IllegalArgumentException(
+  tableName + " is in violation. Logic error. Snapshot=" + snapshot);
+}
+BulkLoadVerifyingViolationPolicyEnforcement enforcement = new 
BulkLoadVerifyingViolationPolicyEnforcement();
+enforcement.initialize(rss, tableName, snapshot);
+return enforcement;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TableQuotaSnapshotStore.java

[07/50] [abbrv] hbase git commit: HBASE-17866: Implement async setQuota/getQuota methods

2017-04-17 Thread elserj
HBASE-17866: Implement async setQuota/getQuota methods

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8db97603
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8db97603
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8db97603

Branch: refs/heads/HBASE-16961
Commit: 8db9760363890d4d0bfaba25ae6797d45aaf7fec
Parents: 7678855
Author: huzheng 
Authored: Fri Apr 14 14:51:38 2017 +0800
Committer: Guanghao Zhang 
Committed: Mon Apr 17 09:49:30 2017 +0800

--
 .../apache/hadoop/hbase/client/AsyncAdmin.java  |  16 ++
 .../hadoop/hbase/client/AsyncHBaseAdmin.java|  47 +
 .../hadoop/hbase/quotas/QuotaRetriever.java |  32 +--
 .../hadoop/hbase/quotas/QuotaTableUtil.java |  32 +++
 .../hbase/client/TestAsyncQuotaAdminApi.java| 207 +++
 5 files changed, 306 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8db97603/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index ab791c2..270f28f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hbase.client;
 
+import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.regex.Pattern;
 
@@ -27,6 +28,8 @@ import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.quotas.QuotaFilter;
+import org.apache.hadoop.hbase.quotas.QuotaSettings;
 import org.apache.hadoop.hbase.util.Pair;
 
 /**
@@ -465,4 +468,17 @@ public interface AsyncAdmin {
*  startcode. Here is an example:  
host187.example.com,60020,1289493121758
*/
  CompletableFuture<Void> move(final byte[] regionName, final byte[] destServerName);
+
+  /**
+   * Apply the new quota settings.
+   * @param quota the quota settings
+   */
+  CompletableFuture<Void> setQuota(final QuotaSettings quota);
+
+  /**
+   * List the quotas based on the filter.
+   * @param filter the quota settings filter
+   * @return the QuotaSettings list, wrapped in a CompletableFuture.
+   */
+  CompletableFuture<List<QuotaSettings>> getQuota(QuotaFilter filter);
 }
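The new quota methods follow the rest of AsyncAdmin: the caller gets a CompletableFuture and composes on it instead of blocking. A self-contained sketch of that composition pattern, using a stand-in class (FakeQuotaAdmin is illustrative, not part of HBase):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncQuotaPattern {
  // Stand-in for AsyncAdmin: each call returns a future immediately.
  static class FakeQuotaAdmin {
    CompletableFuture<Void> setQuota(String settings) {
      return CompletableFuture.runAsync(() -> { /* apply settings */ });
    }
    CompletableFuture<List<String>> getQuota(String filter) {
      return CompletableFuture.supplyAsync(() -> Arrays.asList("t1: 10GB"));
    }
  }

  public static void main(String[] args) {
    FakeQuotaAdmin admin = new FakeQuotaAdmin();
    // Compose: apply a quota, then list quotas, without blocking in between.
    List<String> quotas = admin.setQuota("limit t1 10GB")
        .thenCompose(v -> admin.getQuota("t1"))
        .join();  // block only at the very end, e.g. in a test
    System.out.println(quotas);
  }
}
```

The same thenCompose chaining applies to the real setQuota/getQuota pair added by this commit.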

http://git-wip-us.apache.org/repos/asf/hbase/blob/8db97603/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
index e42ee57..180cd19 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
@@ -56,6 +56,9 @@ import 
org.apache.hadoop.hbase.client.AsyncRpcRetryingCallerFactory.MasterReques
 import org.apache.hadoop.hbase.client.Scan.ReadType;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
 import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.quotas.QuotaFilter;
+import org.apache.hadoop.hbase.quotas.QuotaSettings;
+import org.apache.hadoop.hbase.quotas.QuotaTableUtil;
 import org.apache.hadoop.hbase.shaded.com.google.protobuf.RpcCallback;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
@@ -112,6 +115,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineReg
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.OfflineRegionResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetBalancerRunningRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetBalancerRunningResponse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.TruncateTableResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.UnassignRegionRequest;
@@ -1149,6 +1154,48 @@ public class 

[35/50] [abbrv] hbase git commit: HBASE-17259 API to remove space quotas on a table/namespace

2017-04-17 Thread elserj
HBASE-17259 API to remove space quotas on a table/namespace


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0effca42
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0effca42
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0effca42

Branch: refs/heads/HBASE-16961
Commit: 0effca421562066d10d13575f5c12434e2741c63
Parents: 0d76d66
Author: Josh Elser 
Authored: Wed Jan 11 12:47:06 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:32 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |  22 +++
 .../hadoop/hbase/quotas/QuotaTableUtil.java |   6 +-
 .../hadoop/hbase/quotas/SpaceLimitSettings.java |  44 -
 .../hbase/quotas/TestQuotaSettingsFactory.java  |  20 +++
 .../shaded/protobuf/generated/QuotaProtos.java  | 157 +++---
 .../src/main/protobuf/Quota.proto   |   1 +
 .../hbase/protobuf/generated/QuotaProtos.java   | 159 ---
 hbase-protocol/src/main/protobuf/Quota.proto|   1 +
 .../hadoop/hbase/quotas/MasterQuotaManager.java |   9 +-
 .../hadoop/hbase/quotas/TestQuotaAdmin.java |  49 +-
 10 files changed, 423 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0effca42/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 7f1c180..184277d 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -316,6 +316,17 @@ public class QuotaSettingsFactory {
   }
 
   /**
+   * Creates a {@link QuotaSettings} object to remove the FileSystem space 
quota for the given
+   * table.
+   *
+   * @param tableName The name of the table to remove the quota for.
+   * @return A {@link QuotaSettings} object.
+   */
+  public static QuotaSettings removeTableSpaceLimit(TableName tableName) {
+return new SpaceLimitSettings(tableName, true);
+  }
+
+  /**
* Creates a {@link QuotaSettings} object to limit the FileSystem space 
usage for the given
* namespace to the given size in bytes. When the space usage is exceeded by 
all tables in the
* namespace, the provided {@link SpaceViolationPolicy} is enacted on all 
tables in the namespace.
@@ -329,4 +340,15 @@ public class QuotaSettingsFactory {
   final String namespace, long sizeLimit, final SpaceViolationPolicy 
violationPolicy) {
 return new SpaceLimitSettings(namespace, sizeLimit, violationPolicy);
   }
+
+  /**
+   * Creates a {@link QuotaSettings} object to remove the FileSystem space 
quota for the given
+* namespace.
+   *
+   * @param namespace The namespace to remove the quota on.
+   * @return A {@link QuotaSettings} object.
+   */
+  public static QuotaSettings removeNamespaceSpaceLimit(String namespace) {
+return new SpaceLimitSettings(namespace, true);
+  }
 }
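The factory additions above reuse one settings type for both directions: it either carries a limit or a removal marker. A minimal sketch of that pattern (illustrative names, not the HBase SpaceLimitSettings class):

```java
public class SpaceLimitSketch {
  static final class Settings {
    final String target;   // table or namespace name
    final Long sizeLimit;  // null when this is a removal
    final boolean remove;

    // Factory for "limit this target to sizeLimit bytes".
    static Settings limit(String target, long sizeLimit) {
      return new Settings(target, sizeLimit, false);
    }
    // Factory for "remove any space limit on this target".
    static Settings removal(String target) {
      return new Settings(target, null, true);
    }
    private Settings(String target, Long sizeLimit, boolean remove) {
      this.target = target;
      this.sizeLimit = sizeLimit;
      this.remove = remove;
    }
  }

  // One code path applies either kind of settings object.
  static String apply(Settings s) {
    return s.remove ? "removed quota on " + s.target
                    : "set quota " + s.sizeLimit + "B on " + s.target;
  }

  public static void main(String[] args) {
    System.out.println(apply(Settings.limit("t1", 1024L * 1024)));
    System.out.println(apply(Settings.removal("t1")));
  }
}
```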

http://git-wip-us.apache.org/repos/asf/hbase/blob/0effca42/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
index 66535b2..ce4cd04 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
@@ -422,7 +422,11 @@ public class QuotaTableUtil {
 boolean hasSettings = false;
 hasSettings |= quotas.hasThrottle();
 hasSettings |= quotas.hasBypassGlobals();
-hasSettings |= quotas.hasSpace();
+// Only when there is a space quota, make sure there's actually both 
fields provided
+// Otherwise, it's a noop.
+if (quotas.hasSpace()) {
+  hasSettings |= (quotas.getSpace().hasSoftLimit() && 
quotas.getSpace().hasViolationPolicy());
+}
 return !hasSettings;
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/0effca42/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
index e54882e..8ff7623 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
+++ 

[18/50] [abbrv] hbase git commit: HBASE-16998 Implement Master-side analysis of region space reports

2017-04-17 Thread elserj
HBASE-16998 Implement Master-side analysis of region space reports

Adds a new Chore to the Master that analyzes the reports that are
sent by RegionServers. The Master must then, for all tables with
quotas, determine the tables that are violating quotas and move
those tables into violation. Similarly, tables no longer violating
the quota can be moved out of violation.

The Chore is the "stateful" bit, managing which tables are and
are not in violation. Everything else is just performing
computation and informing the Chore on the updated state.

Added InterfaceAudience annotations and clean up the QuotaObserverChore
constructor. Cleaned up some javadoc and QuotaObserverChore. Reuse
the QuotaViolationStore impl objects.
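The "stateful bit" the message describes, remembering which tables are in violation so that only state transitions trigger action, can be sketched as follows (illustrative names, not the HBase QuotaObserverChore implementation):

```java
import java.util.HashSet;
import java.util.Set;

public class ViolationStateTracker {
  // The chore's remembered state: tables currently in violation.
  private final Set<String> tablesInViolation = new HashSet<>();

  /**
   * Record one observation for a table. Returns true only when the table's
   * state changed, i.e. it newly entered or newly left violation; repeated
   * observations of the same state are no-ops.
   */
  boolean observe(String table, boolean overQuota) {
    if (overQuota) {
      return tablesInViolation.add(table);   // true only on entry
    }
    return tablesInViolation.remove(table);  // true only on exit
  }

  public static void main(String[] args) {
    ViolationStateTracker t = new ViolationStateTracker();
    System.out.println(t.observe("t1", true));   // entered violation
    System.out.println(t.observe("t1", true));   // already tracked, no-op
    System.out.println(t.observe("t1", false));  // left violation
  }
}
```

Everything else in the chore is then stateless computation feeding this tracker, as the commit message notes.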


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bcf6da40
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bcf6da40
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bcf6da40

Branch: refs/heads/HBASE-16961
Commit: bcf6da40514f0858c8f63cb9ac379dbcd28d1058
Parents: d77ec49
Author: Josh Elser 
Authored: Tue Nov 8 18:55:12 2016 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../hadoop/hbase/quotas/QuotaRetriever.java |  27 +-
 .../org/apache/hadoop/hbase/master/HMaster.java |  20 +
 .../hadoop/hbase/quotas/MasterQuotaManager.java |   1 +
 .../quotas/NamespaceQuotaViolationStore.java| 127 
 .../hadoop/hbase/quotas/QuotaObserverChore.java | 618 +++
 .../hbase/quotas/QuotaViolationStore.java   |  89 +++
 .../quotas/SpaceQuotaViolationNotifier.java |  44 ++
 .../SpaceQuotaViolationNotifierForTest.java |  50 ++
 .../hbase/quotas/TableQuotaViolationStore.java  | 127 
 .../TestNamespaceQuotaViolationStore.java   | 156 +
 .../hbase/quotas/TestQuotaObserverChore.java| 106 
 .../TestQuotaObserverChoreWithMiniCluster.java  | 596 ++
 .../hadoop/hbase/quotas/TestQuotaTableUtil.java |   4 -
 .../quotas/TestTableQuotaViolationStore.java| 151 +
 .../hbase/quotas/TestTablesWithQuotas.java  | 198 ++
 15 files changed, 2305 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/bcf6da40/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java
index 0f7baa5..4482693 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java
@@ -22,6 +22,7 @@ import java.io.Closeable;
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.LinkedList;
+import java.util.Objects;
 import java.util.Queue;
 
 import org.apache.commons.logging.Log;
@@ -54,11 +55,23 @@ public class QuotaRetriever implements Closeable, Iterable<QuotaSettings> {
   private Connection connection;
   private Table table;
 
-  private QuotaRetriever() {
+  /**
+   * Should QuotaRetriever manage the state of the connection, or leave it be.
+   */
+  private boolean isManagedConnection = false;
+
+  QuotaRetriever() {
   }
 
   void init(final Configuration conf, final Scan scan) throws IOException {
-this.connection = ConnectionFactory.createConnection(conf);
+// Set this before creating the connection and passing it down to make sure
+// it's cleaned up if we fail to construct the Scanner.
+this.isManagedConnection = true;
+init(ConnectionFactory.createConnection(conf), scan);
+  }
+
+  void init(final Connection conn, final Scan scan) throws IOException {
+this.connection = Objects.requireNonNull(conn);
 this.table = this.connection.getTable(QuotaTableUtil.QUOTA_TABLE_NAME);
 try {
   scanner = table.getScanner(scan);
@@ -77,10 +90,14 @@ public class QuotaRetriever implements Closeable, Iterable<QuotaSettings> {
   this.table.close();
   this.table = null;
 }
-if (this.connection != null) {
-  this.connection.close();
-  this.connection = null;
+// Null out the connection on close() even if we didn't explicitly close it
+// to maintain typical semantics.
+if (isManagedConnection) {
+  if (this.connection != null) {
+this.connection.close();
+  }
 }
+this.connection = null;
   }
 
   public QuotaSettings next() throws IOException {

http://git-wip-us.apache.org/repos/asf/hbase/blob/bcf6da40/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 

[40/50] [abbrv] hbase git commit: HBASE-17568 Better handle stale/missing region size reports

2017-04-17 Thread elserj
HBASE-17568 Better handle stale/missing region size reports

* Expire region reports in the master after a timeout.
* Move regions in violation out of violation when insufficient
region size reports are observed.
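The expiry behavior in the first bullet amounts to timestamping each report and pruning entries older than a cutoff, so the Master never acts on stale sizes. A minimal sketch under assumed names (not the HBase MasterQuotaManager API):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RegionSizeReportStore {
  static final class SizeReport {
    final long sizeInBytes;
    final long reportTimeMs;
    SizeReport(long sizeInBytes, long reportTimeMs) {
      this.sizeInBytes = sizeInBytes;
      this.reportTimeMs = reportTimeMs;
    }
  }

  private final Map<String, SizeReport> reports = new ConcurrentHashMap<>();

  // Each incoming report overwrites the previous one with a fresh timestamp.
  void addRegionSize(String regionName, long size, long nowMs) {
    reports.put(regionName, new SizeReport(size, nowMs));
  }

  // Drop reports received before the cutoff; returns how many were pruned.
  int pruneEntriesOlderThan(long cutoffMs) {
    int pruned = 0;
    Iterator<Map.Entry<String, SizeReport>> it = reports.entrySet().iterator();
    while (it.hasNext()) {
      if (it.next().getValue().reportTimeMs < cutoffMs) {
        it.remove();
        pruned++;
      }
    }
    return pruned;
  }

  int size() { return reports.size(); }

  public static void main(String[] args) {
    RegionSizeReportStore store = new RegionSizeReportStore();
    store.addRegionSize("r1", 1024, 1_000L);
    store.addRegionSize("r2", 2048, 9_000L);
    // Expire everything reported before t=5000: only r2 survives.
    System.out.println(store.pruneEntriesOlderThan(5_000L) + " " + store.size());
  }
}
```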


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/71e14a3c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/71e14a3c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/71e14a3c

Branch: refs/heads/HBASE-16961
Commit: 71e14a3cccfbe7125284a32250e4aefc0d575b30
Parents: 4db44ad
Author: Josh Elser 
Authored: Fri Feb 3 16:33:47 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:44:00 2017 -0400

--
 .../hadoop/hbase/master/MasterRpcServices.java  |   4 +-
 .../hadoop/hbase/quotas/MasterQuotaManager.java |  86 ++-
 .../hadoop/hbase/quotas/QuotaObserverChore.java |  53 -
 .../hbase/quotas/TestMasterQuotaManager.java|  48 +++-
 .../TestQuotaObserverChoreRegionReports.java| 233 +++
 5 files changed, 412 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/71e14a3c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 42f99b2..3d72a3e 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -251,6 +251,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.Updat
 import org.apache.hadoop.hbase.snapshot.ClientSnapshotDescriptionUtils;
 import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.ForeignExceptionUtil;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.zookeeper.KeeperException;
@@ -2027,8 +2028,9 @@ public class MasterRpcServices extends RSRpcServices
 return RegionSpaceUseReportResponse.newBuilder().build();
   }
   MasterQuotaManager quotaManager = this.master.getMasterQuotaManager();
+  final long now = EnvironmentEdgeManager.currentTime();
   for (RegionSpaceUse report : request.getSpaceUseList()) {
-quotaManager.addRegionSize(HRegionInfo.convert(report.getRegion()), 
report.getSize());
+quotaManager.addRegionSize(HRegionInfo.convert(report.getRegion()), 
report.getSize(), now);
   }
   return RegionSpaceUseReportResponse.newBuilder().build();
 } catch (Exception e) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/71e14a3c/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
index cb614ea..0622dba 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
@@ -22,9 +22,12 @@ import java.io.IOException;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.Iterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.concurrent.ConcurrentHashMap;
 
+import org.apache.commons.lang.builder.HashCodeBuilder;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
@@ -47,6 +50,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.ThrottleRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * Master Quota Manager.
  * It is responsible for initialize the quota table on the first-run and
@@ -68,7 +73,7 @@ public class MasterQuotaManager implements 
RegionStateListener {
   private NamedLock userLocks;
   private boolean enabled = false;
   private NamespaceAuditor namespaceQuotaManager;
-  private ConcurrentHashMap regionSizes;
+  private ConcurrentHashMap 
regionSizes;
 
   public MasterQuotaManager(final MasterServices masterServices) {
 this.masterServices = masterServices;
@@ -531,21 +536,88 @@ public class 

[28/50] [abbrv] hbase git commit: HBASE-17001 Enforce quota violation policies in the RegionServer

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/BaseViolationPolicyEnforcement.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/BaseViolationPolicyEnforcement.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/BaseViolationPolicyEnforcement.java
new file mode 100644
index 000..ec8f1bf
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/BaseViolationPolicyEnforcement.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas.policies;
+
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class BaseViolationPolicyEnforcement {
+
+  static final Append APPEND = new Append(Bytes.toBytes("foo"));
+  static final Delete DELETE = new Delete(Bytes.toBytes("foo"));
+  static final Increment INCREMENT = new Increment(Bytes.toBytes("foo"));
+  static final Put PUT = new Put(Bytes.toBytes("foo"));
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/TestBulkLoadCheckingViolationPolicyEnforcement.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/TestBulkLoadCheckingViolationPolicyEnforcement.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/TestBulkLoadCheckingViolationPolicyEnforcement.java
new file mode 100644
index 000..abe1b9d
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/policies/TestBulkLoadCheckingViolationPolicyEnforcement.java
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas.policies;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.quotas.SpaceLimitingException;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot.SpaceQuotaStatus;
+import org.apache.hadoop.hbase.quotas.SpaceViolationPolicyEnforcement;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category(SmallTests.class)
+public class TestBulkLoadCheckingViolationPolicyEnforcement {
+
+  FileSystem fs;
+  RegionServerServices rss;
+  TableName tableName;
+  SpaceViolationPolicyEnforcement policy;
+
+  @Before
+  public void setup() {
+fs = mock(FileSystem.class);
+rss = mock(RegionServerServices.class);
+tableName = TableName.valueOf("foo");
+policy = new BulkLoadVerifyingViolationPolicyEnforcement();
+  }
+
+  @Test
+  public void testFilesUnderLimit() throws Exception {
+final List paths = new ArrayList<>();
+final List 

[22/50] [abbrv] hbase git commit: HBASE-16995 Build client Java API and client protobuf messages (Josh Elser)

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/990062a9/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
--
diff --git 
a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
 
b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
index 05894b9..1925828 100644
--- 
a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
+++ 
b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
@@ -217,12 +217,20 @@ public final class QuotaProtos {
  * THROTTLE = 1;
  */
 THROTTLE(0, 1),
+/**
+ * SPACE = 2;
+ */
+SPACE(1, 2),
 ;
 
 /**
  * THROTTLE = 1;
  */
 public static final int THROTTLE_VALUE = 1;
+/**
+ * SPACE = 2;
+ */
+public static final int SPACE_VALUE = 2;
 
 
 public final int getNumber() { return value; }
@@ -230,6 +238,7 @@ public final class QuotaProtos {
 public static QuotaType valueOf(int value) {
   switch (value) {
 case 1: return THROTTLE;
+case 2: return SPACE;
 default: return null;
   }
 }
@@ -281,6 +290,142 @@ public final class QuotaProtos {
 // @@protoc_insertion_point(enum_scope:hbase.pb.QuotaType)
   }
 
+  /**
+   * Protobuf enum {@code hbase.pb.SpaceViolationPolicy}
+   *
+   * 
+   * Defines what action should be taken when the SpaceQuota is violated
+   * 
+   */
+  public enum SpaceViolationPolicy
+  implements com.google.protobuf.ProtocolMessageEnum {
+/**
+ * DISABLE = 1;
+ *
+ * 
+ * Disable the table(s)
+ * 
+ */
+DISABLE(0, 1),
+/**
+ * NO_WRITES_COMPACTIONS = 2;
+ *
+ * 
+ * No writes, bulk-loads, or compactions
+ * 
+ */
+NO_WRITES_COMPACTIONS(1, 2),
+/**
+ * NO_WRITES = 3;
+ *
+ * 
+ * No writes or bulk-loads
+ * 
+ */
+NO_WRITES(2, 3),
+/**
+ * NO_INSERTS = 4;
+ *
+ * 
+ * No puts or bulk-loads, but deletes are allowed
+ * 
+ */
+NO_INSERTS(3, 4),
+;
+
+/**
+ * DISABLE = 1;
+ *
+ * 
+ * Disable the table(s)
+ * 
+ */
+public static final int DISABLE_VALUE = 1;
+/**
+ * NO_WRITES_COMPACTIONS = 2;
+ *
+ * 
+ * No writes, bulk-loads, or compactions
+ * 
+ */
+public static final int NO_WRITES_COMPACTIONS_VALUE = 2;
+/**
+ * NO_WRITES = 3;
+ *
+ * 
+ * No writes or bulk-loads
+ * 
+ */
+public static final int NO_WRITES_VALUE = 3;
+/**
+ * NO_INSERTS = 4;
+ *
+ * 
+ * No puts or bulk-loads, but deletes are allowed
+ * 
+ */
+public static final int NO_INSERTS_VALUE = 4;
+
+
+public final int getNumber() { return value; }
+
+public static SpaceViolationPolicy valueOf(int value) {
+  switch (value) {
+case 1: return DISABLE;
+case 2: return NO_WRITES_COMPACTIONS;
+case 3: return NO_WRITES;
+case 4: return NO_INSERTS;
+default: return null;
+  }
+}
+
+public static 
com.google.protobuf.Internal.EnumLiteMap
+internalGetValueMap() {
+  return internalValueMap;
+}
+private static 
com.google.protobuf.Internal.EnumLiteMap
+internalValueMap =
+  new com.google.protobuf.Internal.EnumLiteMap() 
{
+public SpaceViolationPolicy findValueByNumber(int number) {
+  return SpaceViolationPolicy.valueOf(number);
+}
+  };
+
+public final com.google.protobuf.Descriptors.EnumValueDescriptor
+getValueDescriptor() {
+  return getDescriptor().getValues().get(index);
+}
+public final com.google.protobuf.Descriptors.EnumDescriptor
+getDescriptorForType() {
+  return getDescriptor();
+}
+public static final com.google.protobuf.Descriptors.EnumDescriptor
+getDescriptor() {
+  return 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.getDescriptor().getEnumTypes().get(3);
+}
+
+private static final SpaceViolationPolicy[] VALUES = values();
+
+public static SpaceViolationPolicy valueOf(
+com.google.protobuf.Descriptors.EnumValueDescriptor desc) {
+  if (desc.getType() != getDescriptor()) {
+throw new java.lang.IllegalArgumentException(
+  "EnumValueDescriptor is not for this type.");
+  }
+  return VALUES[desc.getIndex()];
+}
+
+private final int index;
+private final int value;
+
+private SpaceViolationPolicy(int index, int value) {
+  this.index = index;
+  this.value = value;
+}
+
+// @@protoc_insertion_point(enum_scope:hbase.pb.SpaceViolationPolicy)
+  }
+
   public interface TimedQuotaOrBuilder
   extends com.google.protobuf.MessageOrBuilder {
 
@@ -3315,6 +3460,20 @@ public final class QuotaProtos {
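
The generated enum above pairs each constant with a descriptor index and a wire number, and maps wire numbers back to constants with a `valueOf(int)` switch. A standalone, illustrative sketch of that pattern (plain Java, not the generated protobuf class):

```java
// Standalone sketch of the (index, wire value) pattern used by the generated
// SpaceViolationPolicy enum above; illustrative only, not the generated code.
enum SpaceViolationPolicySketch {
    DISABLE(0, 1),
    NO_WRITES_COMPACTIONS(1, 2),
    NO_WRITES(2, 3),
    NO_INSERTS(3, 4);

    private final int index;  // position within the descriptor
    private final int value;  // number used on the wire

    SpaceViolationPolicySketch(int index, int value) {
        this.index = index;
        this.value = value;
    }

    int getNumber() { return value; }

    // Wire number -> constant; unknown numbers yield null, mirroring the
    // generated valueOf(int) switch, which returns null in its default case.
    static SpaceViolationPolicySketch forNumber(int value) {
        for (SpaceViolationPolicySketch p : values()) {
            if (p.value == value) {
                return p;
            }
        }
        return null;
    }
}

public class SpacePolicyDemo {
    public static void main(String[] args) {
        System.out.println(SpaceViolationPolicySketch.forNumber(3));           // NO_WRITES
        System.out.println(SpaceViolationPolicySketch.NO_INSERTS.getNumber()); // 4
    }
}
```

Returning null for unknown numbers (rather than throwing) is what lets older clients tolerate enum values added by newer servers.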
 

[26/50] [abbrv] hbase git commit: HBASE-16995 Build client Java API and client protobuf messages - addendum fixes line lengths (Josh Elser)

2017-04-17 Thread elserj
HBASE-16995 Build client Java API and client protobuf messages - addendum fixes 
line lengths (Josh Elser)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/988a23ef
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/988a23ef
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/988a23ef

Branch: refs/heads/HBASE-16961
Commit: 988a23eff38e460ec383544930d48147d4771d4c
Parents: eaeef44
Author: tedyu 
Authored: Mon Nov 21 13:00:27 2016 -0800
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  | 20 
 .../hadoop/hbase/quotas/SpaceLimitSettings.java |  8 
 .../hbase/shaded/protobuf/ProtobufUtil.java |  9 +
 3 files changed, 21 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/988a23ef/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 8512e39..7f1c180 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -128,7 +128,8 @@ public class QuotaSettingsFactory {
 
   static QuotaSettings fromSpace(TableName table, String namespace, SpaceQuota 
protoQuota) {
 if ((null == table && null == namespace) || (null != table && null != 
namespace)) {
-  throw new IllegalArgumentException("Can only construct 
SpaceLimitSettings for a table or namespace.");
+  throw new IllegalArgumentException(
+  "Can only construct SpaceLimitSettings for a table or namespace.");
 }
 if (null != table) {
   return SpaceLimitSettings.fromSpaceQuota(table, protoQuota);
@@ -300,29 +301,32 @@ public class QuotaSettingsFactory {
*/
 
   /**
-   * Creates a {@link QuotaSettings} object to limit the FileSystem space 
usage for the given table to the given size in bytes.
-   * When the space usage is exceeded by the table, the provided {@link 
SpaceViolationPolicy} is enacted on the table.
+   * Creates a {@link QuotaSettings} object to limit the FileSystem space 
usage for the given table
+   * to the given size in bytes. When the space usage is exceeded by the 
table, the provided
+   * {@link SpaceViolationPolicy} is enacted on the table.
*
* @param tableName The name of the table on which the quota should be 
applied.
* @param sizeLimit The limit of a table's size in bytes.
* @param violationPolicy The action to take when the quota is exceeded.
* @return An {@link QuotaSettings} object.
*/
-  public static QuotaSettings limitTableSpace(final TableName tableName, long 
sizeLimit, final SpaceViolationPolicy violationPolicy) {
+  public static QuotaSettings limitTableSpace(
+  final TableName tableName, long sizeLimit, final SpaceViolationPolicy 
violationPolicy) {
 return new SpaceLimitSettings(tableName, sizeLimit, violationPolicy);
   }
 
   /**
-   * Creates a {@link QuotaSettings} object to limit the FileSystem space 
usage for the given namespace to the given size in bytes.
-   * When the space usage is exceeded by all tables in the namespace, the 
provided {@link SpaceViolationPolicy} is enacted on
-   * all tables in the namespace.
+   * Creates a {@link QuotaSettings} object to limit the FileSystem space 
usage for the given
+   * namespace to the given size in bytes. When the space usage is exceeded by 
all tables in the
+   * namespace, the provided {@link SpaceViolationPolicy} is enacted on all 
tables in the namespace.
*
* @param namespace The namespace on which the quota should be applied.
* @param sizeLimit The limit of the namespace's size in bytes.
   * @param violationPolicy The action to take when the quota is exceeded.
* @return An {@link QuotaSettings} object.
*/
-  public static QuotaSettings limitNamespaceSpace(final String namespace, long 
sizeLimit, final SpaceViolationPolicy violationPolicy) {
+  public static QuotaSettings limitNamespaceSpace(
+  final String namespace, long sizeLimit, final SpaceViolationPolicy 
violationPolicy) {
 return new SpaceLimitSettings(namespace, sizeLimit, violationPolicy);
   }
 }
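
`fromSpace` above rejects both a missing target and a doubly-specified one: the quota must apply to exactly one of a table or a namespace. A minimal standalone sketch of that "exactly one of" check (using `String` as a stand-in for `TableName`):

```java
// Standalone sketch of the "exactly one of table or namespace" validation in
// QuotaSettingsFactory.fromSpace above; String stands in for TableName.
public class ExactlyOneDemo {
    static String describeTarget(String table, String namespace) {
        // Both-null and both-set collapse to the same equality test: reject
        // unless exactly one target is present.
        if ((table == null) == (namespace == null)) {
            throw new IllegalArgumentException(
                "Can only construct SpaceLimitSettings for a table or namespace.");
        }
        return table != null ? "table:" + table : "ns:" + namespace;
    }

    public static void main(String[] args) {
        System.out.println(describeTarget("t1", null));   // table:t1
        System.out.println(describeTarget(null, "ns1"));  // ns:ns1
        try {
            describeTarget(null, null);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

The single equality comparison is equivalent to the patch's two-clause condition `(null == table && null == namespace) || (null != table && null != namespace)`.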

http://git-wip-us.apache.org/repos/asf/hbase/blob/988a23ef/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
--
diff --git 

[50/50] [abbrv] hbase git commit: HBASE-17002 JMX metrics and some UI additions for space quotas

2017-04-17 Thread elserj
HBASE-17002 JMX metrics and some UI additions for space quotas


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/96c6b8fa
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/96c6b8fa
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/96c6b8fa

Branch: refs/heads/HBASE-16961
Commit: 96c6b8fa8c7c038ee9b0134fb92061b0e61a5fd4
Parents: 71e14a3
Author: Josh Elser 
Authored: Wed Feb 15 14:24:57 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:47:49 2017 -0400

--
 .../hbase/client/ConnectionImplementation.java  |8 +
 .../hadoop/hbase/client/QuotaStatusCalls.java   |   39 +-
 .../client/ShortCircuitMasterConnection.java|8 +
 .../hadoop/hbase/quotas/QuotaTableUtil.java |   41 +
 .../hbase/shaded/protobuf/RequestConverter.java |   11 +
 .../hbase/master/MetricsMasterQuotaSource.java  |   75 +
 .../master/MetricsMasterQuotaSourceFactory.java |   26 +
 .../hbase/master/MetricsMasterWrapper.java  |   13 +
 .../MetricsRegionServerQuotaSource.java |   54 +
 .../MetricsMasterQuotaSourceFactoryImpl.java|   36 +
 .../master/MetricsMasterQuotaSourceImpl.java|  129 +
 ...hadoop.hbase.master.MetricsMasterQuotaSource |   18 +
 ...hbase.master.MetricsMasterQuotaSourceFactory |   18 +
 .../shaded/protobuf/generated/MasterProtos.java |   93 +-
 .../shaded/protobuf/generated/QuotaProtos.java  | 3099 +-
 .../src/main/protobuf/Master.proto  |6 +-
 .../src/main/protobuf/Quota.proto   |   17 +
 .../org/apache/hadoop/hbase/master/HMaster.java |2 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |   38 +
 .../hadoop/hbase/master/MetricsMaster.java  |   42 +
 .../hbase/master/MetricsMasterWrapperImpl.java  |   42 +-
 .../hadoop/hbase/quotas/QuotaObserverChore.java |   92 +-
 .../resources/hbase-webapps/master/table.jsp|   59 +
 .../hbase/master/TestMasterMetricsWrapper.java  |   17 +
 .../hbase/quotas/TestQuotaStatusRPCs.java   |   83 +
 25 files changed, 4032 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
index 3f27e1c..d9219f3 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
@@ -94,6 +94,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SecurityCa
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SecurityCapabilitiesResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetNormalizerRunningRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetNormalizerRunningResponse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetQuotaStatesRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetQuotaStatesResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaRegionSizesRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaRegionSizesResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.AddReplicationPeerRequest;
@@ -1740,6 +1742,12 @@ class ConnectionImplementation implements 
ClusterConnection, Closeable {
   throws ServiceException {
 return stub.getSpaceQuotaRegionSizes(controller, request);
   }
+
+  @Override
+  public GetQuotaStatesResponse getQuotaStates(
+  RpcController controller, GetQuotaStatesRequest request) throws 
ServiceException {
+return stub.getQuotaStates(controller, request);
+  }
 };
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
index f0f385d..af36d1e 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import 

[34/50] [abbrv] hbase git commit: HBASE-17478 Avoid reporting FS use when quotas are disabled

2017-04-17 Thread elserj
HBASE-17478 Avoid reporting FS use when quotas are disabled

Also, gracefully produce responses when quotas are disabled.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/efd6edc4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/efd6edc4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/efd6edc4

Branch: refs/heads/HBASE-16961
Commit: efd6edc44193155fabbf8dfe8d7af360087023c1
Parents: 7a1f9d3
Author: Josh Elser 
Authored: Tue Jan 17 14:41:45 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:32 2017 -0400

--
 .../hadoop/hbase/master/MasterRpcServices.java  |  4 +++
 .../hadoop/hbase/quotas/MasterQuotaManager.java | 13 +--
 .../hbase/regionserver/HRegionServer.java   |  5 ++-
 .../hbase/quotas/TestMasterQuotaManager.java| 37 
 4 files changed, 56 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/efd6edc4/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index f454248..de437fd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -59,6 +59,7 @@ import 
org.apache.hadoop.hbase.procedure.MasterProcedureManager;
 import org.apache.hadoop.hbase.procedure2.Procedure;
 import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
 import org.apache.hadoop.hbase.quotas.MasterQuotaManager;
+import org.apache.hadoop.hbase.quotas.QuotaUtil;
 import org.apache.hadoop.hbase.regionserver.RSRpcServices;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -2016,6 +2017,9 @@ public class MasterRpcServices extends RSRpcServices
   RegionSpaceUseReportRequest request) throws ServiceException {
 try {
   master.checkInitialized();
+  if (!QuotaUtil.isQuotaEnabled(master.getConfiguration())) {
+return RegionSpaceUseReportResponse.newBuilder().build();
+  }
   MasterQuotaManager quotaManager = this.master.getMasterQuotaManager();
   for (RegionSpaceUse report : request.getSpaceUseList()) {
 quotaManager.addRegionSize(HRegionInfo.convert(report.getRegion()), 
report.getSize());

http://git-wip-us.apache.org/repos/asf/hbase/blob/efd6edc4/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
index a5832f9..cb614ea 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.hbase.quotas;
 
 import java.io.IOException;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
@@ -58,6 +59,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.TimedQuota;
 @InterfaceStability.Evolving
 public class MasterQuotaManager implements RegionStateListener {
   private static final Log LOG = LogFactory.getLog(MasterQuotaManager.class);
+  private static final Map<HRegionInfo, Long> EMPTY_MAP = Collections.unmodifiableMap(
+  new HashMap<>());
 
   private final MasterServices masterServices;
   private NamedLock namespaceLocks;
@@ -529,13 +532,19 @@ public class MasterQuotaManager implements 
RegionStateListener {
   }
 
   public void addRegionSize(HRegionInfo hri, long size) {
-// TODO Make proper API
+if (null == regionSizes) {
+  return;
+}
+// TODO Make proper API?
 // TODO Prevent from growing indefinitely
 regionSizes.put(hri, size);
   }
 
   public Map<HRegionInfo, Long> snapshotRegionSizes() {
-// TODO Make proper API
+if (null == regionSizes) {
+  return EMPTY_MAP;
+}
+// TODO Make proper API?
 return new HashMap<>(regionSizes);
   }
 }
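
The guards above let the quota manager answer callers gracefully when quotas are disabled and the backing map was never created: writes become no-ops and reads return an immutable empty map. A minimal standalone sketch of the same pattern (class and key type are illustrative stand-ins, not HBase's):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for MasterQuotaManager's guard pattern: when the
// feature is disabled the backing map stays null, so readers get an
// immutable empty map and writers become no-ops instead of throwing NPE.
class SizeTracker {
    private static final Map<String, Long> EMPTY_MAP =
        Collections.unmodifiableMap(new HashMap<>());

    private Map<String, Long> sizes;  // stays null while the feature is disabled

    void enable() { sizes = new HashMap<>(); }

    void addSize(String region, long size) {
        if (sizes == null) {
            return;  // feature disabled: silently drop the report
        }
        sizes.put(region, size);
    }

    Map<String, Long> snapshot() {
        if (sizes == null) {
            return EMPTY_MAP;  // stable, immutable answer while disabled
        }
        return new HashMap<>(sizes);  // defensive copy while enabled
    }
}
```

Sharing one unmodifiable `EMPTY_MAP` instance avoids allocating a new map on every disabled-state read, which matters for a method polled by a chore.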

http://git-wip-us.apache.org/repos/asf/hbase/blob/efd6edc4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 

[03/50] [abbrv] hbase git commit: HBASE-17090 Redundant exclusion of jruby-complete in pom of hbase-spark

2017-04-17 Thread elserj
HBASE-17090 Redundant exclusion of jruby-complete in pom of hbase-spark

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e2a74615
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e2a74615
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e2a74615

Branch: refs/heads/HBASE-16961
Commit: e2a746152ca8c02c18214f0b5180ed8dcc84e947
Parents: 9dd5cda
Author: Xiang Li 
Authored: Fri Apr 14 16:15:42 2017 +0800
Committer: Michael Stack 
Committed: Fri Apr 14 08:08:42 2017 -0700

--
 hbase-spark/pom.xml | 24 
 1 file changed, 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e2a74615/hbase-spark/pom.xml
--
diff --git a/hbase-spark/pom.xml b/hbase-spark/pom.xml
index a7997f1..1afae85 100644
--- a/hbase-spark/pom.xml
+++ b/hbase-spark/pom.xml
@@ -290,10 +290,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -338,10 +334,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 
@@ -382,10 +374,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -430,10 +418,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 
@@ -460,10 +444,6 @@
 thrift
 
 
-org.jruby
-jruby-complete
-
-
 org.slf4j
 slf4j-log4j12
 
@@ -508,10 +488,6 @@
 jasper-compiler
 
 
-org.jruby
-jruby-complete
-
-
 org.jboss.netty
 netty
 



[36/50] [abbrv] hbase git commit: HBASE-16999 Implement master and regionserver synchronization of quota state

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/dccfc846/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestTableSpaceQuotaViolationNotifier.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestTableSpaceQuotaViolationNotifier.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestTableSpaceQuotaViolationNotifier.java
new file mode 100644
index 000..4a7000c
--- /dev/null
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestTableSpaceQuotaViolationNotifier.java
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas;
+
+import static org.mockito.Matchers.argThat;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.NavigableMap;
+import java.util.Objects;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.ArgumentMatcher;
+
+/**
+ * Test case for {@link TableSpaceQuotaViolationNotifier}.
+ */
+@Category(SmallTests.class)
+public class TestTableSpaceQuotaViolationNotifier {
+
+  private TableSpaceQuotaViolationNotifier notifier;
+  private Connection conn;
+
+  @Before
+  public void setup() throws Exception {
+notifier = new TableSpaceQuotaViolationNotifier();
+conn = mock(Connection.class);
+notifier.initialize(conn);
+  }
+
+  @Test
+  public void testToViolation() throws Exception {
+final TableName tn = TableName.valueOf("inviolation");
+final SpaceViolationPolicy policy = SpaceViolationPolicy.NO_INSERTS;
+final Table quotaTable = mock(Table.class);
+
when(conn.getTable(QuotaTableUtil.QUOTA_TABLE_NAME)).thenReturn(quotaTable);
+
+final Put expectedPut = new Put(Bytes.toBytes("t." + 
tn.getNameAsString()));
+final SpaceQuota protoQuota = SpaceQuota.newBuilder()
+.setViolationPolicy(ProtobufUtil.toProtoViolationPolicy(policy))
+.build();
+expectedPut.addColumn(Bytes.toBytes("u"), Bytes.toBytes("v"), 
protoQuota.toByteArray());
+
+notifier.transitionTableToViolation(tn, policy);
+
+verify(quotaTable).put(argThat(new SingleCellPutMatcher(expectedPut)));
+  }
+
+  @Test
+  public void testToObservance() throws Exception {
+final TableName tn = TableName.valueOf("notinviolation");
+final Table quotaTable = mock(Table.class);
+
when(conn.getTable(QuotaTableUtil.QUOTA_TABLE_NAME)).thenReturn(quotaTable);
+
+final Delete expectedDelete = new Delete(Bytes.toBytes("t." + 
tn.getNameAsString()));
+expectedDelete.addColumn(Bytes.toBytes("u"), Bytes.toBytes("v"));
+
+notifier.transitionTableToObservance(tn);
+
+verify(quotaTable).delete(argThat(new 
SingleCellDeleteMatcher(expectedDelete)));
+  }
+
+  /**
+   * Parameterized for Puts.
+   */
+  private static class SingleCellPutMatcher extends SingleCellMutationMatcher<Put> {
+private SingleCellPutMatcher(Put expected) {
+  super(expected);
+}
+  }
+
+  /**
+   * Parameterized for Deletes.
+   */
+  private static class SingleCellDeleteMatcher extends SingleCellMutationMatcher<Delete> {
+private SingleCellDeleteMatcher(Delete expected) {
+  super(expected);
+}
+  }
+
+  /**
+   * Quick hack to verify a Mutation with one column.
+   */
+  private static class SingleCellMutationMatcher<T> extends ArgumentMatcher<T> {
+private final Mutation expected;
+
+private SingleCellMutationMatcher(Mutation expected) {
+  

[16/50] [abbrv] hbase git commit: HBASE-17557 HRegionServer#reportRegionSizesForQuotas() should respond to UnsupportedOperationException

2017-04-17 Thread elserj
HBASE-17557 HRegionServer#reportRegionSizesForQuotas() should respond to 
UnsupportedOperationException


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d77ec498
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d77ec498
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d77ec498

Branch: refs/heads/HBASE-16961
Commit: d77ec498c23de9c75390237ce028fe6e32f3c8b3
Parents: 2dea676
Author: tedyu 
Authored: Mon Jan 30 07:47:40 2017 -0800
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../quotas/FileSystemUtilizationChore.java  | 20 +---
 .../hbase/regionserver/HRegionServer.java   | 24 
 2 files changed, 36 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d77ec498/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
index 01540eb..efc17ff 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
@@ -53,6 +53,9 @@ public class FileSystemUtilizationChore extends 
ScheduledChore {
   static final String FS_UTILIZATION_MAX_ITERATION_DURATION_KEY = 
"hbase.regionserver.quotas.fs.utilization.chore.max.iteration.millis";
   static final long FS_UTILIZATION_MAX_ITERATION_DURATION_DEFAULT = 5000L;
 
+  private int numberOfCyclesToSkip = 0, prevNumberOfCyclesToSkip = 0;
+  private static final int CYCLE_UPPER_BOUND = 32;
+
   private final HRegionServer rs;
   private final long maxIterationMillis;
   private Iterator leftoverRegions;
@@ -67,6 +70,10 @@ public class FileSystemUtilizationChore extends 
ScheduledChore {
 
   @Override
   protected void chore() {
+if (numberOfCyclesToSkip > 0) {
+  numberOfCyclesToSkip--;
+  return;
+}
 final Map<HRegionInfo, Long> onlineRegionSizes = new HashMap<>();
 final Set onlineRegions = new HashSet<>(rs.getOnlineRegions());
 // Process the regions from the last run if we have any. If we are somehow 
having difficulty
@@ -126,7 +133,14 @@ public class FileSystemUtilizationChore extends 
ScheduledChore {
   + skippedSplitParents + " regions due to being the parent of a 
split, and"
   + skippedRegionReplicas + " regions due to being region replicas.");
 }
-reportRegionSizesToMaster(onlineRegionSizes);
+if (!reportRegionSizesToMaster(onlineRegionSizes)) {
+  // backoff reporting
+  numberOfCyclesToSkip = prevNumberOfCyclesToSkip > 0 ? 2 * 
prevNumberOfCyclesToSkip : 1;
+  if (numberOfCyclesToSkip > CYCLE_UPPER_BOUND) {
+numberOfCyclesToSkip = CYCLE_UPPER_BOUND;
+  }
+  prevNumberOfCyclesToSkip = numberOfCyclesToSkip;
+}
   }
 
   /**
@@ -166,8 +180,8 @@ public class FileSystemUtilizationChore extends 
ScheduledChore {
*
* @param onlineRegionSizes The computed region sizes to report.
*/
-  void reportRegionSizesToMaster(Map<HRegionInfo, Long> onlineRegionSizes) {
-    this.rs.reportRegionSizesForQuotas(onlineRegionSizes);
+  boolean reportRegionSizesToMaster(Map<HRegionInfo, Long> onlineRegionSizes) {
+    return this.rs.reportRegionSizesForQuotas(onlineRegionSizes);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/d77ec498/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 9be4131..053e4ac 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -66,6 +66,7 @@ import org.apache.hadoop.hbase.ChoreService;
 import org.apache.hadoop.hbase.ClockOutOfSyncException;
 import org.apache.hadoop.hbase.CoordinatedStateManager;
 import org.apache.hadoop.hbase.CoordinatedStateManagerFactory;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseInterfaceAudience;
 import org.apache.hadoop.hbase.HConstants;
@@ -1248,13 +1249,14 @@ public class HRegionServer extends HasThread implements
* Reports the given map of Regions and their size on the filesystem to the 
active Master.
*
* 

[44/50] [abbrv] hbase git commit: HBASE-17516 Correctly handle case where table and NS quotas both apply

2017-04-17 Thread elserj
HBASE-17516 Correctly handle case where table and NS quotas both apply

The logic surrounding when a table and namespace quota both apply
to a table was incorrect, leading to a case where a table quota
violation which should have fired did not because of the less-strict
namespace quota.
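
The corrected precedence rule in this patch — a table-level quota that is itself in violation shields its table from the namespace policy, while a table quota merely present but not violated does not — can be sketched as a small decision function. Types here are simplified stand-ins for the patch's snapshot store, not HBase APIs:

```java
// Simplified stand-in for the QuotaObserverChore decision fixed here: only a
// table quota that is actually in violation takes precedence; otherwise the
// namespace policy (which may be NONE, i.e. moving out of violation) applies.
public class QuotaPrecedenceDemo {
    enum Policy { NONE, NO_INSERTS, NO_WRITES, DISABLE }

    static Policy effectivePolicy(boolean hasTableQuota, boolean tableInViolation,
                                  Policy tablePolicy, Policy namespacePolicy) {
        if (hasTableQuota && tableInViolation) {
            return tablePolicy;  // table violation policy already in effect
        }
        return namespacePolicy;  // no table quota, or table quota not violated
    }

    public static void main(String[] args) {
        // Violated table quota wins over the namespace policy.
        System.out.println(effectivePolicy(true, true, Policy.NO_WRITES, Policy.NO_INSERTS));
        // Present-but-unviolated table quota does not block the namespace policy.
        System.out.println(effectivePolicy(true, false, Policy.NO_WRITES, Policy.NO_INSERTS));
        // No table quota at all: namespace policy applies.
        System.out.println(effectivePolicy(false, false, null, Policy.DISABLE));
    }
}
```

The pre-fix bug was equivalent to inverting the `tableInViolation` test, so an unviolated table quota wrongly shielded the table from its namespace's stricter policy.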


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/48332eeb
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/48332eeb
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/48332eeb

Branch: refs/heads/HBASE-16961
Commit: 48332eebfd7fdd0a2065db448e82e82e1548cfc4
Parents: 095fabf
Author: Josh Elser 
Authored: Wed Feb 22 18:32:55 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:44:00 2017 -0400

--
 .../hadoop/hbase/quotas/QuotaObserverChore.java | 10 ++-
 .../TestQuotaObserverChoreWithMiniCluster.java  | 66 
 .../hbase/quotas/TestQuotaStatusRPCs.java   | 21 ++-
 .../hadoop/hbase/quotas/TestSpaceQuotas.java| 32 +-
 4 files changed, 97 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/48332eeb/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
index 973ac8c..b9f4592 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
@@ -287,7 +287,8 @@ public class QuotaObserverChore extends ScheduledChore {
   // We want to have a policy of "NONE", moving out of violation
   if (!targetStatus.isInViolation()) {
 for (TableName tableInNS : tablesByNamespace.get(namespace)) {
-  if 
(!tableSnapshotStore.getCurrentState(tableInNS).getQuotaStatus().isInViolation())
 {
+  // If there is a quota on this table in violation
+  if 
(tableSnapshotStore.getCurrentState(tableInNS).getQuotaStatus().isInViolation())
 {
 // Table-level quota violation policy is being applied here.
 if (LOG.isTraceEnabled()) {
   LOG.trace("Not activating Namespace violation policy because a 
Table violation"
@@ -298,16 +299,21 @@ public class QuotaObserverChore extends ScheduledChore {
 this.snapshotNotifier.transitionTable(tableInNS, targetSnapshot);
   }
 }
+  // We want to move into violation at the NS level
   } else {
 // Moving tables in the namespace into violation or to a different 
violation policy
 for (TableName tableInNS : tablesByNamespace.get(namespace)) {
-  if (tableSnapshotStore.getCurrentState(tableInNS).getQuotaStatus().isInViolation()) {
+  final SpaceQuotaSnapshot tableQuotaSnapshot =
+tableSnapshotStore.getCurrentState(tableInNS);
+  final boolean hasTableQuota = QuotaSnapshotStore.NO_QUOTA != tableQuotaSnapshot;
+  if (hasTableQuota && tableQuotaSnapshot.getQuotaStatus().isInViolation()) {
 // Table-level quota violation policy is being applied here.
 if (LOG.isTraceEnabled()) {
   LOG.trace("Not activating Namespace violation policy because a Table violation"
   + " policy is already in effect for " + tableInNS);
 }
   } else {
+// No table quota present or a table quota present that is not in violation
 LOG.info(tableInNS + " moving into violation of namespace space quota with policy " + targetStatus.getPolicy());
 this.snapshotNotifier.transitionTable(tableInNS, targetSnapshot);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/48332eeb/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
index 943c898..63198a8 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
@@ -193,40 +193,42 @@ public class TestQuotaObserverChoreWithMiniCluster {
 
 helper.writeData(tn1, 2L * SpaceQuotaHelperForTests.ONE_MEGABYTE);
 admin.flush(tn1);
-Map violatedQuotas = 

[15/50] [abbrv] hbase git commit: Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

2017-04-17 Thread elserj
Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

This reverts commit c2c2178b2eebe4439eadec6b37fae2566944c16b.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ecdfb823
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ecdfb823
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ecdfb823

Branch: refs/heads/HBASE-16961
Commit: ecdfb82326035ad8221940919bbeb3fe16ec2658
Parents: c2c2178
Author: Ramkrishna 
Authored: Tue Apr 18 00:00:12 2017 +0530
Committer: Ramkrishna 
Committed: Tue Apr 18 00:00:12 2017 +0530

--
 .../java/org/apache/hadoop/hbase/CellUtil.java  |  24 ++
 .../org/apache/hadoop/hbase/ExtendedCell.java   |  10 -
 .../org/apache/hadoop/hbase/master/HMaster.java |   2 -
 .../hbase/regionserver/ByteBufferChunkCell.java |  48 ---
 .../apache/hadoop/hbase/regionserver/Chunk.java |  60 +--
 .../hadoop/hbase/regionserver/ChunkCreator.java | 404 ---
 .../hbase/regionserver/HRegionServer.java   |  14 +-
 .../hbase/regionserver/MemStoreChunkPool.java   | 265 
 .../hadoop/hbase/regionserver/MemStoreLAB.java  |   4 +-
 .../hbase/regionserver/MemStoreLABImpl.java | 171 
 .../regionserver/NoTagByteBufferChunkCell.java  |  48 ---
 .../hadoop/hbase/regionserver/OffheapChunk.java |  31 +-
 .../hadoop/hbase/regionserver/OnheapChunk.java  |  32 +-
 .../hadoop/hbase/HBaseTestingUtility.java   |   3 -
 .../coprocessor/TestCoprocessorInterface.java   |   4 -
 .../TestRegionObserverScannerOpenHook.java  |   3 -
 .../coprocessor/TestRegionObserverStacking.java |   3 -
 .../io/hfile/TestScannerFromBucketCache.java|   3 -
 .../hadoop/hbase/master/TestCatalogJanitor.java |   7 -
 .../hadoop/hbase/regionserver/TestBulkLoad.java |   2 +-
 .../hbase/regionserver/TestCellFlatSet.java |   2 +-
 .../regionserver/TestCompactingMemStore.java|  37 +-
 .../TestCompactingToCellArrayMapMemStore.java   |  16 +-
 .../TestCompactionArchiveConcurrentClose.java   |   1 -
 .../TestCompactionArchiveIOException.java   |   1 -
 .../regionserver/TestCompactionPolicy.java  |   1 -
 .../hbase/regionserver/TestDefaultMemStore.java |  14 +-
 .../regionserver/TestFailedAppendAndSync.java   |   1 -
 .../hbase/regionserver/TestHMobStore.java   |   2 +-
 .../hadoop/hbase/regionserver/TestHRegion.java  |   2 -
 .../regionserver/TestHRegionReplayEvents.java   |   2 +-
 .../regionserver/TestMemStoreChunkPool.java |  48 +--
 .../hbase/regionserver/TestMemStoreLAB.java |  27 +-
 .../TestMemstoreLABWithoutPool.java | 168 
 .../hbase/regionserver/TestRecoveredEdits.java  |   1 -
 .../hbase/regionserver/TestRegionIncrement.java |   1 -
 .../hadoop/hbase/regionserver/TestStore.java|   1 -
 .../TestStoreFileRefresherChore.java|   1 -
 .../hbase/regionserver/TestWALLockup.java   |   1 -
 .../TestWALMonotonicallyIncreasingSeqId.java|   1 -
 .../hbase/regionserver/wal/TestDurability.java  |   3 -
 41 files changed, 479 insertions(+), 990 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
index 56de21b..e1bc969 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
@@ -3135,4 +3135,28 @@ public final class CellUtil {
   return Type.DeleteFamily.getCode();
 }
   }
+
+  /**
+   * Clone the passed cell by copying its data into the passed buf.
+   */
+  public static Cell copyCellTo(Cell cell, ByteBuffer buf, int offset, int len) {
+    int tagsLen = cell.getTagsLength();
+    if (cell instanceof ExtendedCell) {
+      ((ExtendedCell) cell).write(buf, offset);
+    } else {
+      // Normally all Cell impls within the server will be of type ExtendedCell. Just considering
+      // the other case also. The data fragments within the Cell are copied into buf in the
+      // KeyValue serialization format only.
+      KeyValueUtil.appendTo(cell, buf, offset, true);
+    }
+    if (tagsLen == 0) {
+      // When tagsLen is 0, make a NoTagsByteBufferKeyValue version. This is an optimized class
+      // which directly returns tagsLen as 0, so we avoid parsing many length components when
+      // reading the tagLength stored in the backing buffer. The Memstore addition of every Cell
+      // calls getTagsLength().
+      return new NoTagsByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
+    } else {
+      return new 

[11/50] [abbrv] hbase git commit: HBASE-16875 Changed try-with-resources in the docs to recommended way

2017-04-17 Thread elserj
HBASE-16875 Changed try-with-resources in the docs to recommended way

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c8cd921b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c8cd921b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c8cd921b

Branch: refs/heads/HBASE-16961
Commit: c8cd921bededa67b2b0de823005830d750534d93
Parents: c1ac3f7
Author: Jan Hentschel 
Authored: Sat Mar 4 10:04:02 2017 +0100
Committer: Chia-Ping Tsai 
Committed: Mon Apr 17 10:59:46 2017 +0800

--
 src/main/asciidoc/_chapters/architecture.adoc |  7 +++---
 src/main/asciidoc/_chapters/security.adoc | 29 --
 2 files changed, 13 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c8cd921b/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 27aebd9..7f9ba07 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -219,10 +219,9 @@ For applications which require high-end multithreaded access (e.g., web-servers
 
 // Create a connection to the cluster.
 Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-// use table as needed, the table returned is lightweight
-  }
+try (Connection connection = ConnectionFactory.createConnection(conf);
+ Table table = connection.getTable(TableName.valueOf(tablename))) {
+  // use table as needed, the table returned is lightweight
 }
 
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c8cd921b/src/main/asciidoc/_chapters/security.adoc
--
diff --git a/src/main/asciidoc/_chapters/security.adoc 
b/src/main/asciidoc/_chapters/security.adoc
index 0ed9ba2..ccb5adb 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -202,10 +202,9 @@ Set it in the `Configuration` supplied to `Table`:
 Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 conf.set("hbase.rpc.protection", "privacy");
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf(tablename)) {
+try (Connection connection = ConnectionFactory.createConnection(conf);
+ Table table = connection.getTable(TableName.valueOf(tablename))) {
    do your stuff
-  }
 }
 
 
@@ -1014,24 +1013,16 @@ public static void grantOnTable(final HBaseTestingUtility util, final String use
   SecureTestUtil.updateACLs(util, new Callable() {
 @Override
 public Void call() throws Exception {
-  Configuration conf = HBaseConfiguration.create();
-  Connection connection = ConnectionFactory.createConnection(conf);
-  try (Connection connection = ConnectionFactory.createConnection(conf)) {
-try (Table table = connection.getTable(TableName.valueOf(tablename)) {
-  AccessControlLists.ACL_TABLE_NAME);
-  try {
-BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
-AccessControlService.BlockingInterface protocol =
-AccessControlService.newBlockingStub(service);
-ProtobufUtil.grant(protocol, user, table, family, qualifier, actions);
-  } finally {
-acl.close();
-  }
-  return null;
-}
+  try (Connection connection = ConnectionFactory.createConnection(util.getConfiguration());
+   Table acl = connection.getTable(AccessControlLists.ACL_TABLE_NAME)) {
+BlockingRpcChannel service = acl.coprocessorService(HConstants.EMPTY_START_ROW);
+AccessControlService.BlockingInterface protocol =
+  AccessControlService.newBlockingStub(service);
+AccessControlUtil.grant(null, protocol, user, table, family, qualifier, false, actions);
   }
+  return null;
 }
-  }
+  });
 }
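The doc change above replaces nested try blocks with a single try-with-resources declaring both resources. A tiny standalone sketch (generic `AutoCloseable`s, not HBase classes) shows the semantics this relies on: resources declared in one try are closed in reverse order of declaration, exactly as the nested form did.

```java
// Generic sketch (no HBase dependencies) of the multi-resource
// try-with-resources form the doc change recommends. Resources are
// closed in reverse declaration order, same as the old nested form.
public class TwrDemo {
    static final StringBuilder LOG = new StringBuilder();

    static AutoCloseable resource(String name) {
        LOG.append("open:").append(name).append(' ');
        // AutoCloseable.close() is void; a method-invocation lambda is legal here.
        return () -> LOG.append("close:").append(name).append(' ');
    }

    public static void main(String[] args) throws Exception {
        try (AutoCloseable conn = resource("connection");
             AutoCloseable table = resource("table")) {
            LOG.append("use ");
        }
        // Prints: open:connection open:table use close:table close:connection
        System.out.println(LOG.toString().trim());
    }
}
```

Because the close order is guaranteed, collapsing the nested blocks is purely a readability change, not a behavior change.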
 
 



[10/50] [abbrv] hbase git commit: HBASE-17366 Run TestHFile#testReaderWithoutBlockCache failes

2017-04-17 Thread elserj
HBASE-17366 Run TestHFile#testReaderWithoutBlockCache failes

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c1ac3f77
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c1ac3f77
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c1ac3f77

Branch: refs/heads/HBASE-16961
Commit: c1ac3f7739f8c9e20f6aed428558128339467d04
Parents: 363f627
Author: huaxiang sun 
Authored: Mon Apr 17 10:32:17 2017 +0800
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:34:17 2017 +0800

--
 .../apache/hadoop/hbase/regionserver/StoreFileWriter.java   | 9 +
 .../java/org/apache/hadoop/hbase/io/hfile/TestHFile.java| 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c1ac3f77/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
index ccfd735..88cba75 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
@@ -384,6 +384,15 @@ public class StoreFileWriter implements CellSink, ShipperListener {
 }
 
 /**
+ * Creates Builder with cache configuration disabled
+ */
+public Builder(Configuration conf, FileSystem fs) {
+  this.conf = conf;
+  this.cacheConf = CacheConfig.DISABLED;
+  this.fs = fs;
+}
+
+/**
   * @param trt A premade TimeRangeTracker to use rather than build one per append (building one
  * of these is expensive so good to pass one in if you have one).
  * @return this (for chained invocation)

http://git-wip-us.apache.org/repos/asf/hbase/blob/c1ac3f77/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
index 7074c9d..4db459a 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
@@ -115,7 +115,7 @@ public class TestHFile  {
 Path storeFileParentDir = new Path(TEST_UTIL.getDataTestDir(), "TestHFile");
 HFileContext meta = new HFileContextBuilder().withBlockSize(64 * 1024).build();
 StoreFileWriter sfw =
-new StoreFileWriter.Builder(conf, cacheConf, fs).withOutputDir(storeFileParentDir)
+new StoreFileWriter.Builder(conf, fs).withOutputDir(storeFileParentDir)
 .withComparator(CellComparator.COMPARATOR).withFileContext(meta).build();
 
 final int rowLen = 32;



[38/50] [abbrv] hbase git commit: HBASE-17025 Add shell commands for space quotas

2017-04-17 Thread elserj
HBASE-17025 Add shell commands for space quotas


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7a1f9d34
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7a1f9d34
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7a1f9d34

Branch: refs/heads/HBASE-16961
Commit: 7a1f9d34b89131f248f76d1fafde6b060bec2827
Parents: 0effca4
Author: Josh Elser 
Authored: Wed Jan 11 11:55:29 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:32 2017 -0400

--
 hbase-shell/src/main/ruby/hbase/quotas.rb   |  62 -
 hbase-shell/src/main/ruby/hbase_constants.rb|   1 +
 .../src/main/ruby/shell/commands/set_quota.rb   |  45 +-
 .../hadoop/hbase/client/AbstractTestShell.java  |   1 +
 hbase-shell/src/test/ruby/hbase/quotas_test.rb  | 137 +++
 hbase-shell/src/test/ruby/tests_runner.rb   |   1 +
 6 files changed, 242 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7a1f9d34/hbase-shell/src/main/ruby/hbase/quotas.rb
--
diff --git a/hbase-shell/src/main/ruby/hbase/quotas.rb 
b/hbase-shell/src/main/ruby/hbase/quotas.rb
index bf2dc63..d99fe72 100644
--- a/hbase-shell/src/main/ruby/hbase/quotas.rb
+++ b/hbase-shell/src/main/ruby/hbase/quotas.rb
@@ -24,14 +24,22 @@ java_import org.apache.hadoop.hbase.quotas.ThrottleType
 java_import org.apache.hadoop.hbase.quotas.QuotaFilter
 java_import org.apache.hadoop.hbase.quotas.QuotaRetriever
 java_import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory
+java_import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy
 
 module HBaseQuotasConstants
+  # RPC Quota constants
   GLOBAL_BYPASS = 'GLOBAL_BYPASS'
   THROTTLE_TYPE = 'THROTTLE_TYPE'
   THROTTLE = 'THROTTLE'
   REQUEST = 'REQUEST'
   WRITE = 'WRITE'
   READ = 'READ'
+  # Space quota constants
+  SPACE = 'SPACE'
+  NO_INSERTS = 'NO_INSERTS'
+  NO_WRITES = 'NO_WRITES'
+  NO_WRITES_COMPACTIONS = 'NO_WRITES_COMPACTIONS'
+  DISABLE = 'DISABLE'
 end
 
 module Hbase
@@ -107,6 +115,54 @@ module Hbase
   @admin.setQuota(settings)
 end
 
+def limit_space(args)
+  raise(ArgumentError, 'Argument should be a Hash') unless (not args.nil? and args.kind_of?(Hash))
+  # Let the user provide a raw number
+  if args[LIMIT].is_a?(Numeric)
+limit = args[LIMIT]
+  else
+# Parse a string like 1K, 2G, etc.
+limit = _parse_size(args[LIMIT])
+  end
+  # Extract the policy, failing if something bogus was provided
+  policy = SpaceViolationPolicy.valueOf(args[POLICY])
+  # Create a table or namespace quota
+  if args.key?(TABLE)
+if args.key?(NAMESPACE)
+  raise(ArgumentError, "Only one of TABLE or NAMESPACE can be specified.")
+end
+settings = QuotaSettingsFactory.limitTableSpace(TableName.valueOf(args.delete(TABLE)), limit, policy)
+  elsif args.key?(NAMESPACE)
+if args.key?(TABLE)
+  raise(ArgumentError, "Only one of TABLE or NAMESPACE can be specified.")
+end
+settings = QuotaSettingsFactory.limitNamespaceSpace(args.delete(NAMESPACE), limit, policy)
+  else
+raise(ArgumentError, 'One of TABLE or NAMESPACE must be specified.')
+  end
+  # Apply the quota
+  @admin.setQuota(settings)
+end
+
+def remove_space_limit(args)
+  raise(ArgumentError, 'Argument should be a Hash') unless (not args.nil? and args.kind_of?(Hash))
+  if args.key?(TABLE)
+if args.key?(NAMESPACE)
+  raise(ArgumentError, "Only one of TABLE or NAMESPACE can be specified.")
+end
+table = TableName.valueOf(args.delete(TABLE))
+settings = QuotaSettingsFactory.removeTableSpaceLimit(table)
+  elsif args.key?(NAMESPACE)
+if args.key?(TABLE)
+  raise(ArgumentError, "Only one of TABLE or NAMESPACE can be specified.")
+end
+settings = QuotaSettingsFactory.removeNamespaceSpaceLimit(args.delete(NAMESPACE))
+  else
+raise(ArgumentError, 'One of TABLE or NAMESPACE must be specified.')
+  end
+  @admin.setQuota(settings)
+end
+
 def set_global_bypass(bypass, args)
   raise(ArgumentError, "Arguments should be a Hash") unless args.kind_of?(Hash)
 
@@ -171,7 +227,7 @@ module Hbase
   return _size_from_str(match[1].to_i, match[2])
 end
   else
-raise "Invalid size limit syntax"
+raise(ArgumentError, "Invalid size limit syntax")
   end
 end
 
@@ -188,7 +244,7 @@ module Hbase
 end
 
 if limit <= 0
-  raise "Invalid throttle limit, must be greater then 0"
+  raise(ArgumentError, "Invalid throttle limit, must be greater than 0")
 
 

[24/50] [abbrv] hbase git commit: HBASE-16995 Build client Java API and client protobuf messages (Josh Elser)

2017-04-17 Thread elserj
HBASE-16995 Build client Java API and client protobuf messages (Josh Elser)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/990062a9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/990062a9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/990062a9

Branch: refs/heads/HBASE-16961
Commit: 990062a93a257a42adc84b7b8448d788517c9baa
Parents: ecdfb82
Author: tedyu 
Authored: Thu Nov 17 10:19:52 2016 -0800
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../hbase/quotas/QuotaSettingsFactory.java  |   47 +
 .../apache/hadoop/hbase/quotas/QuotaType.java   |1 +
 .../hadoop/hbase/quotas/SpaceLimitSettings.java |  166 ++
 .../hbase/quotas/SpaceViolationPolicy.java  |   44 +
 .../hbase/shaded/protobuf/ProtobufUtil.java |   51 +
 .../hbase/quotas/TestQuotaSettingsFactory.java  |  148 ++
 .../hbase/quotas/TestSpaceLimitSettings.java|  119 ++
 .../shaded/protobuf/generated/MasterProtos.java |  584 --
 .../shaded/protobuf/generated/QuotaProtos.java  | 1739 +-
 .../src/main/protobuf/Master.proto  |2 +
 .../src/main/protobuf/Quota.proto   |   21 +
 .../hbase/protobuf/generated/QuotaProtos.java   | 1682 -
 hbase-protocol/src/main/protobuf/Quota.proto|   21 +
 13 files changed, 4291 insertions(+), 334 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/990062a9/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
index 3622a32..8512e39 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java
@@ -27,6 +27,7 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRe
 import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
 
 @InterfaceAudience.Public
 public class QuotaSettingsFactory {
@@ -89,6 +90,9 @@ public class QuotaSettingsFactory {
 if (quotas.getBypassGlobals() == true) {
   settings.add(new QuotaGlobalsSettingsBypass(userName, tableName, namespace, true));
 }
+if (quotas.hasSpace()) {
+  settings.add(fromSpace(tableName, namespace, quotas.getSpace()));
+}
 return settings;
   }
 
@@ -122,6 +126,18 @@ public class QuotaSettingsFactory {
 return settings;
   }
 
+  static QuotaSettings fromSpace(TableName table, String namespace, SpaceQuota protoQuota) {
+if ((null == table && null == namespace) || (null != table && null != namespace)) {
+  throw new IllegalArgumentException("Can only construct SpaceLimitSettings for a table or namespace.");
+}
+}
+if (null != table) {
+  return SpaceLimitSettings.fromSpaceQuota(table, protoQuota);
+} else {
+  // namespace must be non-null
+  return SpaceLimitSettings.fromSpaceQuota(namespace, protoQuota);
+}
+  }
+
   /* ==
*  RPC Throttle
*/
@@ -278,4 +294,35 @@ public class QuotaSettingsFactory {
   public static QuotaSettings bypassGlobals(final String userName, final 
boolean bypassGlobals) {
 return new QuotaGlobalsSettingsBypass(userName, null, null, bypassGlobals);
   }
+
+  /* ==
+   *  FileSystem Space Settings
+   */
+
+  /**
+   * Creates a {@link QuotaSettings} object to limit the FileSystem space usage for the given table to the given size in bytes.
+   * When the space usage is exceeded by the table, the provided {@link SpaceViolationPolicy} is enacted on the table.
+   *
+   * @param tableName The name of the table on which the quota should be applied.
+   * @param sizeLimit The limit of a table's size in bytes.
+   * @param violationPolicy The action to take when the quota is exceeded.
+   * @return A {@link QuotaSettings} object.
+   */
+  public static QuotaSettings limitTableSpace(final TableName tableName, long sizeLimit, final SpaceViolationPolicy violationPolicy) {
+return new SpaceLimitSettings(tableName, sizeLimit, violationPolicy);
+  }
+
+  /**
+   * Creates a {@link QuotaSettings} object to limit the FileSystem space usage for the given namespace to the given size in bytes.
+  

[29/50] [abbrv] hbase git commit: HBASE-17001 Enforce quota violation policies in the RegionServer

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
index c493b25..943c898 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
@@ -22,16 +22,12 @@ import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
-import java.io.IOException;
-import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
-import java.util.Random;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
@@ -40,20 +36,15 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.quotas.QuotaObserverChore.TablesWithQuotas;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
 import org.apache.hadoop.hbase.testclassification.LargeTests;
-import org.apache.hadoop.hbase.util.Bytes;
 import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -62,7 +53,6 @@ import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.rules.TestName;
 
-import com.google.common.collect.HashMultimap;
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Multimap;
 
@@ -72,11 +62,8 @@ import com.google.common.collect.Multimap;
 @Category(LargeTests.class)
 public class TestQuotaObserverChoreWithMiniCluster {
   private static final Log LOG = LogFactory.getLog(TestQuotaObserverChoreWithMiniCluster.class);
-  private static final int SIZE_PER_VALUE = 256;
-  private static final String F1 = "f1";
   private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
   private static final AtomicLong COUNTER = new AtomicLong(0);
-  private static final long ONE_MEGABYTE = 1024L * 1024L;
   private static final long DEFAULT_WAIT_MILLIS = 500;
 
   @Rule
@@ -84,18 +71,19 @@ public class TestQuotaObserverChoreWithMiniCluster {
 
   private HMaster master;
   private QuotaObserverChore chore;
-  private SpaceQuotaViolationNotifierForTest violationNotifier;
+  private SpaceQuotaSnapshotNotifierForTest snapshotNotifier;
+  private SpaceQuotaHelperForTests helper;
 
   @BeforeClass
   public static void setUp() throws Exception {
 Configuration conf = TEST_UTIL.getConfiguration();
 conf.setInt(FileSystemUtilizationChore.FS_UTILIZATION_CHORE_DELAY_KEY, 1000);
 conf.setInt(FileSystemUtilizationChore.FS_UTILIZATION_CHORE_PERIOD_KEY, 1000);
-conf.setInt(QuotaObserverChore.VIOLATION_OBSERVER_CHORE_DELAY_KEY, 1000);
-conf.setInt(QuotaObserverChore.VIOLATION_OBSERVER_CHORE_PERIOD_KEY, 1000);
+conf.setInt(QuotaObserverChore.QUOTA_OBSERVER_CHORE_DELAY_KEY, 1000);
+conf.setInt(QuotaObserverChore.QUOTA_OBSERVER_CHORE_PERIOD_KEY, 1000);
 conf.setBoolean(QuotaUtil.QUOTA_CONF_KEY, true);
-conf.setClass(SpaceQuotaViolationNotifierFactory.VIOLATION_NOTIFIER_KEY,
-SpaceQuotaViolationNotifierForTest.class, SpaceQuotaViolationNotifier.class);
+conf.setClass(SpaceQuotaSnapshotNotifierFactory.SNAPSHOT_NOTIFIER_KEY,
+SpaceQuotaSnapshotNotifierForTest.class, SpaceQuotaSnapshotNotifier.class);
 TEST_UTIL.startMiniCluster(1);
   }
 
@@ -131,40 +119,55 @@ public class TestQuotaObserverChoreWithMiniCluster {
 }
 
 master = TEST_UTIL.getMiniHBaseCluster().getMaster();
-violationNotifier =
-(SpaceQuotaViolationNotifierForTest) master.getSpaceQuotaViolationNotifier();
-violationNotifier.clearTableViolations();
+snapshotNotifier =
+(SpaceQuotaSnapshotNotifierForTest) master.getSpaceQuotaSnapshotNotifier();
+snapshotNotifier.clearSnapshots();
 chore = master.getQuotaObserverChore();
+

[01/50] [abbrv] hbase git commit: HBASE-16775 Fix flaky TestExportSnapshot#testExportRetry. [Forced Update!]

2017-04-17 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/HBASE-16961 f46c16aa2 -> 2524f8cd2 (forced update)


HBASE-16775 Fix flaky TestExportSnapshot#testExportRetry.

Reason for flakiness: The current test uses probability-based fault injection and triggers a failure 3% of the time. Earlier, when the test used LocalJobRunner, which didn't honor "mapreduce.map.maxattempts", it would pass 97% of the time (when no fault was injected) and fail 3% of the time (when a fault was injected). In other words, even when the test was completely wrong, we couldn't catch it, because failures were probability based.

This change injects faults in a deterministic manner. On the design side, it encapsulates all testing hooks in ExportSnapshot.java into a single inner class.

Change-Id: Icba866e1d56a5281748df89f4dd374bc45bad249
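The deterministic scheme the message describes can be sketched in isolation: fail exactly N times, then succeed, so a retry test exercises the retry path on every run rather than 3% of them. The names below are illustrative only; the commit's actual hook is the `Testing` inner class driven by `test.snapshot.export.failure.count`.

```java
// Minimal sketch of deterministic fault injection: the first
// `failuresToInject` calls fail, every later call succeeds.
// (Illustrative; not the actual ExportSnapshot.Testing API.)
public class FaultInjector {
    private final int failuresToInject; // total faults to inject
    private int injectedFailures = 0;   // faults injected so far

    public FaultInjector(int failuresToInject) {
        this.failuresToInject = failuresToInject;
    }

    // Returns true when a fault is injected on this call.
    public boolean maybeFail() {
        if (injectedFailures < failuresToInject) {
            injectedFailures++;
            return true; // deterministic: no Random involved
        }
        return false;
    }

    public static void main(String[] args) {
        FaultInjector inj = new FaultInjector(2);
        // With map maxattempts >= 3, a retrying job gets past both faults.
        System.out.println(inj.maybeFail());
        System.out.println(inj.maybeFail());
        System.out.println(inj.maybeFail());
    }
}
```

Replacing the `Random`-gated failure with a counter is what makes a wrong retry configuration fail the test every time instead of occasionally.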


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/da5fb27e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/da5fb27e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/da5fb27e

Branch: refs/heads/HBASE-16961
Commit: da5fb27eabed4a4b4d251be973ee945fb52895bf
Parents: cf3215d
Author: Apekshit Sharma 
Authored: Thu Oct 6 14:20:58 2016 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 12 11:11:31 2017 -0700

--
 .../hadoop/hbase/snapshot/ExportSnapshot.java   | 58 +++---
 .../hbase/snapshot/TestExportSnapshot.java  | 84 +++-
 .../snapshot/TestExportSnapshotNoCluster.java   |  2 +-
 .../hbase/snapshot/TestMobExportSnapshot.java   |  7 +-
 .../snapshot/TestMobSecureExportSnapshot.java   |  7 +-
 .../snapshot/TestSecureExportSnapshot.java  |  7 +-
 6 files changed, 93 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/da5fb27e/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
index e2086e9..e3ad951 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -29,7 +29,6 @@ import java.util.Collections;
 import java.util.Comparator;
 import java.util.LinkedList;
 import java.util.List;
-import java.util.Random;
 
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.Option;
@@ -110,9 +109,12 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
   private static final String CONF_BANDWIDTH_MB = "snapshot.export.map.bandwidth.mb";
   protected static final String CONF_SKIP_TMP = "snapshot.export.skip.tmp";
 
-  static final String CONF_TEST_FAILURE = "test.snapshot.export.failure";
-  static final String CONF_TEST_RETRY = "test.snapshot.export.failure.retry";
-
+  static class Testing {
+static final String CONF_TEST_FAILURE = "test.snapshot.export.failure";
+static final String CONF_TEST_FAILURE_COUNT = "test.snapshot.export.failure.count";
+int failuresCountToInject = 0;
+int injectedFailureCount = 0;
+  }
 
   // Command line options and defaults.
   static final class Options {
@@ -149,12 +151,10 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
 
    private static class ExportMapper extends Mapper {
+private static final Log LOG = LogFactory.getLog(ExportMapper.class);
 final static int REPORT_SIZE = 1 * 1024 * 1024;
 final static int BUFFER_SIZE = 64 * 1024;
 
-private boolean testFailures;
-private Random random;
-
 private boolean verifyChecksum;
 private String filesGroup;
 private String filesUser;
@@ -169,9 +169,12 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
 private Path inputArchive;
 private Path inputRoot;
 
+private static Testing testing = new Testing();
+
 @Override
 public void setup(Context context) throws IOException {
   Configuration conf = context.getConfiguration();
+
   Configuration srcConf = HBaseConfiguration.createClusterConf(conf, null, CONF_SOURCE_PREFIX);
   Configuration destConf = HBaseConfiguration.createClusterConf(conf, null, CONF_DEST_PREFIX);
 
@@ -186,8 +189,6 @@ public class ExportSnapshot extends AbstractHBaseTool implements Tool {
   inputArchive = new Path(inputRoot, HConstants.HFILE_ARCHIVE_DIRECTORY);
   outputArchive = new Path(outputRoot, HConstants.HFILE_ARCHIVE_DIRECTORY);
 
-  testFailures = conf.getBoolean(CONF_TEST_FAILURE, false);
-
   try {
 srcConf.setBoolean("fs." + 

[37/50] [abbrv] hbase git commit: HBASE-16999 Implement master and regionserver synchronization of quota state

2017-04-17 Thread elserj
HBASE-16999 Implement master and regionserver synchronization of quota state

* Implement the RegionServer reading violations from the quota table
* Implement the Master reporting violations to the quota table
* RegionServers need to track their enforced policies

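The three bullets above describe a write-then-poll synchronization: the Master persists violations to the quota table, and each RegionServer periodically re-reads them into its local set of enforced policies. A self-contained sketch of that refresher pattern, with a plain map standing in for the quota table — the names are illustrative, not the HBase API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PolicyRefreshDemo {
    // Stands in for the quota system table: table name -> violation policy.
    static final Map<String, String> quotaTable = new ConcurrentHashMap<>();

    // Stands in for the RegionServer-side cache of enforced policies.
    static final Map<String, String> enforcedPolicies = new ConcurrentHashMap<>();

    // Master side: report a violation (a Put against the quota table in HBase).
    static void reportViolation(String table, String policy) {
        quotaTable.put(table, policy);
    }

    // RegionServer side: one run of the refresher chore — drop policies that
    // were cleared upstream, then pick up new or changed ones.
    static void refresh() {
        enforcedPolicies.keySet().retainAll(quotaTable.keySet());
        enforcedPolicies.putAll(quotaTable);
    }

    public static void main(String[] args) {
        reportViolation("t1", "NO_INSERTS");
        refresh();
        System.out.println(enforcedPolicies.get("t1"));
        quotaTable.remove("t1");   // violation cleared by the Master
        refresh();
        System.out.println(enforcedPolicies.containsKey("t1"));
    }
}
```

In the real implementation `refresh()` would run on a schedule (a `ScheduledChore`) and read the table via `Scan`, but the reconcile step — retain, then put-all — is the essence of keeping the RegionServer's enforced-policy view eventually consistent with the Master's writes.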

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dccfc846
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dccfc846
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dccfc846

Branch: refs/heads/HBASE-16961
Commit: dccfc8464e22c0a4c0efde50c7087db49a2ec1b0
Parents: bcf6da4
Author: Josh Elser 
Authored: Fri Nov 18 15:38:19 2016 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:32 2017 -0400

--
 .../hadoop/hbase/quotas/QuotaTableUtil.java |  92 -
 .../org/apache/hadoop/hbase/master/HMaster.java |  35 +++-
 .../hadoop/hbase/quotas/QuotaObserverChore.java |   5 +-
 .../hbase/quotas/RegionServerQuotaManager.java  | 200 ---
 .../quotas/RegionServerRpcQuotaManager.java | 200 +++
 .../quotas/RegionServerSpaceQuotaManager.java   | 169 
 .../quotas/SpaceQuotaViolationNotifier.java |  16 +-
 .../SpaceQuotaViolationNotifierFactory.java |  62 ++
 .../SpaceQuotaViolationNotifierForTest.java |   4 +
 ...SpaceQuotaViolationPolicyRefresherChore.java | 154 ++
 .../TableSpaceQuotaViolationNotifier.java   |  55 +
 .../hbase/regionserver/HRegionServer.java   |  21 +-
 .../hbase/regionserver/RSRpcServices.java   |   7 +-
 .../regionserver/RegionServerServices.java  |  12 +-
 .../hadoop/hbase/MockRegionServerServices.java  |  10 +-
 .../hadoop/hbase/master/MockRegionServer.java   |  10 +-
 .../TestQuotaObserverChoreWithMiniCluster.java  |   2 +
 .../hadoop/hbase/quotas/TestQuotaTableUtil.java |  47 +
 .../hadoop/hbase/quotas/TestQuotaThrottle.java  |   4 +-
 .../TestRegionServerSpaceQuotaManager.java  | 127 
 ...SpaceQuotaViolationPolicyRefresherChore.java | 131 
 .../TestTableSpaceQuotaViolationNotifier.java   | 144 +
 22 files changed, 1281 insertions(+), 226 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dccfc846/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
index 8ef4f08..b5eac48 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
@@ -24,16 +24,20 @@ import java.io.IOException;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
+import java.util.Objects;
 import java.util.regex.Pattern;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
@@ -44,7 +48,12 @@ import org.apache.hadoop.hbase.filter.QualifierFilter;
 import org.apache.hadoop.hbase.filter.RegexStringComparator;
 import org.apache.hadoop.hbase.filter.RowFilter;
 import org.apache.hadoop.hbase.protobuf.ProtobufMagic;
+import org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString;
+import org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException;
+import org.apache.hadoop.hbase.shaded.com.google.protobuf.UnsafeByteOperations;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.Strings;
 
@@ -53,9 +62,8 @@ import org.apache.hadoop.hbase.util.Strings;
  * 
 * ROW-KEY  FAM/QUAL  DATA
  *   n.namespace q:s global-quotas
- *   n.namespace u:du  size in bytes
  *   t.table q:s global-quotas
- *   t.table u:du  size in bytes
+ *   t.table u:v   space violation policy
  *   u.user  q:s global-quotas
  

[12/50] [abbrv] hbase git commit: HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index d56d6ec..095f4bd 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -116,6 +116,7 @@ import org.apache.hadoop.hbase.filter.BinaryComparator;
 import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterAllFilter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.NullComparator;
@@ -4931,6 +4932,7 @@ public class TestHRegion {
  String callingMethod, Configuration conf, boolean isReadOnly, byte[]... families)
   throws IOException {
 Path logDir = TEST_UTIL.getDataTestDirOnTestFS(callingMethod + ".log");
+ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
 HRegionInfo hri = new HRegionInfo(tableName, startKey, stopKey);
 final WAL wal = HBaseTestingUtility.createWal(conf, logDir, hri);
 return initHRegion(tableName, startKey, stopKey, isReadOnly,

http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
index 0054642..6eed7df 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
@@ -153,7 +153,7 @@ public class TestHRegionReplayEvents {
 }
 
 time = System.currentTimeMillis();
-
+ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
 primaryHri = new HRegionInfo(htd.getTableName(),
   HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW,
   false, time, 0);

http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
index 37a7664..1768801 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
@@ -48,30 +48,30 @@ import static org.junit.Assert.assertTrue;
 @Category({RegionServerTests.class, SmallTests.class})
 public class TestMemStoreChunkPool {
   private final static Configuration conf = new Configuration();
-  private static MemStoreChunkPool chunkPool;
+  private static ChunkCreator chunkCreator;
   private static boolean chunkPoolDisabledBeforeTest;
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 conf.setBoolean(MemStoreLAB.USEMSLAB_KEY, true);
 conf.setFloat(MemStoreLAB.CHUNK_POOL_MAXSIZE_KEY, 0.2f);
-chunkPoolDisabledBeforeTest = MemStoreChunkPool.chunkPoolDisabled;
-MemStoreChunkPool.chunkPoolDisabled = false;
+chunkPoolDisabledBeforeTest = ChunkCreator.chunkPoolDisabled;
+ChunkCreator.chunkPoolDisabled = false;
 long globalMemStoreLimit = (long) (ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()
 .getMax() * MemorySizeUtil.getGlobalMemStoreHeapPercent(conf, false));
-chunkPool = MemStoreChunkPool.initialize(globalMemStoreLimit, 0.2f,
-MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false);
-assertTrue(chunkPool != null);
+chunkCreator = ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false,
+  globalMemStoreLimit, 0.2f, MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, null);
+assertTrue(chunkCreator != null);
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
-MemStoreChunkPool.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
+ChunkCreator.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
   }
 
   @Before
   public void tearDown() throws Exception {
-chunkPool.clearChunks();
+chunkCreator.clearChunksInPool();
   }
 
   @Test

[49/50] [abbrv] hbase git commit: HBASE-17002 JMX metrics and some UI additions for space quotas

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/96c6b8fa/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
--
diff --git a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
index d56def5..4577bcf 100644
--- a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
+++ b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
@@ -13024,6 +13024,3031 @@ public final class QuotaProtos {
 
   }
 
+  public interface GetQuotaStatesRequestOrBuilder extends
+  // @@protoc_insertion_point(interface_extends:hbase.pb.GetQuotaStatesRequest)
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.MessageOrBuilder {
+  }
+  /**
+   * Protobuf type {@code hbase.pb.GetQuotaStatesRequest}
+   */
+  public  static final class GetQuotaStatesRequest extends
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3 implements
+  // @@protoc_insertion_point(message_implements:hbase.pb.GetQuotaStatesRequest)
+  GetQuotaStatesRequestOrBuilder {
+// Use GetQuotaStatesRequest.newBuilder() to construct.
+private GetQuotaStatesRequest(org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.Builder builder) {
+  super(builder);
+}
+private GetQuotaStatesRequest() {
+}
+
+@java.lang.Override
+public final org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private GetQuotaStatesRequest(
+org.apache.hadoop.hbase.shaded.com.google.protobuf.CodedInputStream input,
+org.apache.hadoop.hbase.shaded.com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+throws org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException {
+  this();
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+  }
+}
+  } catch (org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException e) {
+throw e.setUnfinishedMessage(this);
+  } catch (java.io.IOException e) {
+throw new org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException(
+e).setUnfinishedMessage(this);
+  } finally {
+this.unknownFields = unknownFields.build();
+makeExtensionsImmutable();
+  }
+}
+public static final org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.Descriptor
+getDescriptor() {
+  return org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.internal_static_hbase_pb_GetQuotaStatesRequest_descriptor;
+}
+
+protected org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
+internalGetFieldAccessorTable() {
+  return org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.internal_static_hbase_pb_GetQuotaStatesRequest_fieldAccessorTable
+  .ensureFieldAccessorsInitialized(
+  org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetQuotaStatesRequest.class, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetQuotaStatesRequest.Builder.class);
+}
+
+private byte memoizedIsInitialized = -1;
+public final boolean isInitialized() {
+  byte isInitialized = memoizedIsInitialized;
+  if (isInitialized == 1) return true;
+  if (isInitialized == 0) return false;
+
+  memoizedIsInitialized = 1;
+  return true;
+}
+
+public void writeTo(org.apache.hadoop.hbase.shaded.com.google.protobuf.CodedOutputStream output)
+throws java.io.IOException {
+  unknownFields.writeTo(output);
+}
+
+public int getSerializedSize() {
+  int size = memoizedSize;
+  if (size != -1) return size;
+
+  size = 0;
+  size += unknownFields.getSerializedSize();
+  memoizedSize = size;
+  return size;
+}
+
+private static final long serialVersionUID = 0L;
+@java.lang.Override
+public boolean equals(final java.lang.Object obj) {
+  if (obj == this) {
+   return true;
+ 

[31/50] [abbrv] hbase git commit: HBASE-17001 Enforce quota violation policies in the RegionServer

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
index 8b127d9..973ac8c 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
@@ -37,9 +37,8 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.master.HMaster;
-import org.apache.hadoop.hbase.quotas.QuotaViolationStore.ViolationState;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot;
+import org.apache.hadoop.hbase.quotas.SpaceQuotaSnapshot.SpaceQuotaStatus;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -54,51 +53,51 @@ import com.google.common.collect.Multimap;
 @InterfaceAudience.Private
 public class QuotaObserverChore extends ScheduledChore {
   private static final Log LOG = LogFactory.getLog(QuotaObserverChore.class);
-  static final String VIOLATION_OBSERVER_CHORE_PERIOD_KEY =
-  "hbase.master.quotas.violation.observer.chore.period";
-  static final int VIOLATION_OBSERVER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 5; // 5 minutes in millis
+  static final String QUOTA_OBSERVER_CHORE_PERIOD_KEY =
+  "hbase.master.quotas.observer.chore.period";
+  static final int QUOTA_OBSERVER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 5; // 5 minutes in millis
 
-  static final String VIOLATION_OBSERVER_CHORE_DELAY_KEY =
-  "hbase.master.quotas.violation.observer.chore.delay";
-  static final long VIOLATION_OBSERVER_CHORE_DELAY_DEFAULT = 1000L * 60L; // 1 minute
+  static final String QUOTA_OBSERVER_CHORE_DELAY_KEY =
+  "hbase.master.quotas.observer.chore.delay";
+  static final long QUOTA_OBSERVER_CHORE_DELAY_DEFAULT = 1000L * 60L; // 1 minute
 
-  static final String VIOLATION_OBSERVER_CHORE_TIMEUNIT_KEY =
-  "hbase.master.quotas.violation.observer.chore.timeunit";
-  static final String VIOLATION_OBSERVER_CHORE_TIMEUNIT_DEFAULT = TimeUnit.MILLISECONDS.name();
+  static final String QUOTA_OBSERVER_CHORE_TIMEUNIT_KEY =
+  "hbase.master.quotas.observer.chore.timeunit";
+  static final String QUOTA_OBSERVER_CHORE_TIMEUNIT_DEFAULT = TimeUnit.MILLISECONDS.name();
 
-  static final String VIOLATION_OBSERVER_CHORE_REPORT_PERCENT_KEY =
-  "hbase.master.quotas.violation.observer.report.percent";
-  static final double VIOLATION_OBSERVER_CHORE_REPORT_PERCENT_DEFAULT= 0.95;
+  static final String QUOTA_OBSERVER_CHORE_REPORT_PERCENT_KEY =
+  "hbase.master.quotas.observer.report.percent";
+  static final double QUOTA_OBSERVER_CHORE_REPORT_PERCENT_DEFAULT= 0.95;
 
   private final Connection conn;
   private final Configuration conf;
   private final MasterQuotaManager quotaManager;
   /*
-   * Callback that changes in quota violation are passed to.
+   * Callback that changes in quota snapshots are passed to.
*/
-  private final SpaceQuotaViolationNotifier violationNotifier;
+  private final SpaceQuotaSnapshotNotifier snapshotNotifier;
 
   /*
-   * Preserves the state of quota violations for tables and namespaces
+   * Preserves the state of quota snapshots for tables and namespaces
*/
-  private final Map tableQuotaViolationStates;
-  private final Map namespaceQuotaViolationStates;
+  private final Map tableQuotaSnapshots;
+  private final Map namespaceQuotaSnapshots;
 
   /*
-   * Encapsulates logic for moving tables/namespaces into or out of quota violation
+   * Encapsulates logic for tracking the state of a table/namespace WRT space quotas
*/
-  private QuotaViolationStore tableViolationStore;
-  private QuotaViolationStore namespaceViolationStore;
+  private QuotaSnapshotStore tableSnapshotStore;
+  private QuotaSnapshotStore namespaceSnapshotStore;
 
   public QuotaObserverChore(HMaster master) {
 this(
 master.getConnection(), master.getConfiguration(),
-master.getSpaceQuotaViolationNotifier(), master.getMasterQuotaManager(),
+master.getSpaceQuotaSnapshotNotifier(), master.getMasterQuotaManager(),
 master);
   }
 
   QuotaObserverChore(
-  Connection conn, Configuration conf, SpaceQuotaViolationNotifier violationNotifier,
+  Connection conn, Configuration conf, SpaceQuotaSnapshotNotifier snapshotNotifier,
   

[25/50] [abbrv] hbase git commit: HBASE-16995 Build client Java API and client protobuf messages - addendum fixes white spaces (Josh Elser)

2017-04-17 Thread elserj
HBASE-16995 Build client Java API and client protobuf messages - addendum fixes white spaces (Josh Elser)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/eaeef44e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/eaeef44e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/eaeef44e

Branch: refs/heads/HBASE-16961
Commit: eaeef44e2fbd4f13a7f2d8dc5934eda3c54c529f
Parents: 990062a
Author: tedyu 
Authored: Thu Nov 17 10:42:18 2016 -0800
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../hbase/quotas/TestQuotaSettingsFactory.java|  2 +-
 .../shaded/protobuf/generated/MasterProtos.java   |  2 +-
 .../shaded/protobuf/generated/QuotaProtos.java| 18 +-
 .../hbase/protobuf/generated/QuotaProtos.java |  4 ++--
 4 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/eaeef44e/hbase-client/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaSettingsFactory.java
--
diff --git a/hbase-client/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaSettingsFactory.java b/hbase-client/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaSettingsFactory.java
index 17015d6..e0012a7 100644
--- a/hbase-client/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaSettingsFactory.java
+++ b/hbase-client/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaSettingsFactory.java
@@ -44,7 +44,7 @@ import org.junit.experimental.categories.Category;
  */
 @Category(SmallTests.class)
 public class TestQuotaSettingsFactory {
-  
+
   @Test
   public void testAllQuotasAddedToList() {
 final SpaceQuota spaceQuota = SpaceQuota.newBuilder()

http://git-wip-us.apache.org/repos/asf/hbase/blob/eaeef44e/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
--
diff --git a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
index 0c3248c..bbc6d1d 100644
--- a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
+++ b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
@@ -63752,7 +63752,7 @@ public final class MasterProtos {
* optional .hbase.pb.SpaceLimitRequest space_limit = 8;
*/
   private org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest.Builder, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequestOrBuilder> 
+  org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest.Builder, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequestOrBuilder>
   getSpaceLimitFieldBuilder() {
 if (spaceLimitBuilder_ == null) {
   spaceLimitBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<

http://git-wip-us.apache.org/repos/asf/hbase/blob/eaeef44e/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
index e3c6bfd..0ab2576 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
@@ -4362,7 +4362,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuota space = 3;
*/
   private org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder> 
+  org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,

[08/50] [abbrv] hbase git commit: HBASE-17903 Corrected the alias for the link of HBASE-6580

2017-04-17 Thread elserj
HBASE-17903 Corrected the alias for the link of HBASE-6580

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/918aa465
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/918aa465
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/918aa465

Branch: refs/heads/HBASE-16961
Commit: 918aa4655c4109159f27b6d78460bd3681c11f06
Parents: 8db9760
Author: Jan Hentschel 
Authored: Sun Apr 16 17:02:47 2017 +0200
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:22:25 2017 +0800

--
 src/main/asciidoc/_chapters/architecture.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/918aa465/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 773d237..27aebd9 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -230,7 +230,7 @@ try (Connection connection = ConnectionFactory.createConnection(conf)) {
 .`HTablePool` is Deprecated
 [WARNING]
 
-Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6500], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
+Previous versions of this guide discussed `HTablePool`, which was deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by link:https://issues.apache.org/jira/browse/HBASE-6580[HBASE-6580], or `HConnection`, which is deprecated in HBase 1.0 by `Connection`.
 Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html[Connection] instead.
 
 



[17/50] [abbrv] hbase git commit: HBASE-16998 Implement Master-side analysis of region space reports

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/bcf6da40/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
new file mode 100644
index 000..98236c2
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreWithMiniCluster.java
@@ -0,0 +1,596 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.quotas;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.quotas.QuotaObserverChore.TablesWithQuotas;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+import com.google.common.collect.HashMultimap;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Multimap;
+
+/**
+ * Test class for {@link QuotaObserverChore} that uses a live HBase cluster.
+ */
+@Category(LargeTests.class)
+public class TestQuotaObserverChoreWithMiniCluster {
+  private static final Log LOG = LogFactory.getLog(TestQuotaObserverChoreWithMiniCluster.class);
+  private static final int SIZE_PER_VALUE = 256;
+  private static final String F1 = "f1";
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final AtomicLong COUNTER = new AtomicLong(0);
+  private static final long ONE_MEGABYTE = 1024L * 1024L;
+  private static final long DEFAULT_WAIT_MILLIS = 500;
+
+  @Rule
+  public TestName testName = new TestName();
+
+  private HMaster master;
+  private QuotaObserverChore chore;
+  private SpaceQuotaViolationNotifierForTest violationNotifier;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+Configuration conf = TEST_UTIL.getConfiguration();
+conf.setInt(FileSystemUtilizationChore.FS_UTILIZATION_CHORE_DELAY_KEY, 1000);
+conf.setInt(FileSystemUtilizationChore.FS_UTILIZATION_CHORE_PERIOD_KEY, 1000);
+conf.setInt(QuotaObserverChore.VIOLATION_OBSERVER_CHORE_DELAY_KEY, 1000);
+conf.setInt(QuotaObserverChore.VIOLATION_OBSERVER_CHORE_PERIOD_KEY, 1000);
+conf.setBoolean(QuotaUtil.QUOTA_CONF_KEY, true);
+TEST_UTIL.startMiniCluster(1);
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before
+  public void removeAllQuotas() throws Exception {
+final Connection conn = TEST_UTIL.getConnection();
+

[05/50] [abbrv] hbase git commit: Revert "HBASE-17906 When a huge amount of data writing to hbase through thrift2, there will be a deadlock error. (Albert Lee)" Mistaken commit.

2017-04-17 Thread elserj
Revert "HBASE-17906 When a huge amount of data writing to hbase through thrift2, there will be a deadlock error. (Albert Lee)"
Mistaken commit.

This reverts commit 9dd5cda01747ffb91ac084792fa4a8670859e810.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0cd4cec5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0cd4cec5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0cd4cec5

Branch: refs/heads/HBASE-16961
Commit: 0cd4cec5d24b5e7194a903e4d900f5558ed8b9a7
Parents: c846145
Author: Michael Stack 
Authored: Fri Apr 14 12:07:40 2017 -0700
Committer: Michael Stack 
Committed: Fri Apr 14 12:07:40 2017 -0700

--
 .../main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java   | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0cd4cec5/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
--
diff --git a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
index 8f56b10..560ae64 100644
--- a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
@@ -432,6 +432,9 @@ public class ThriftServer extends Configured implements Tool {
   throw new RuntimeException("Could not parse the value provided for the port option", e);
 }
 
+// Thrift's implementation uses '0' as a placeholder for 'use the default.'
+int backlog = conf.getInt(BACKLOG_CONF_KEY, 0);
+
 // Local hostname and user name,
 // used only if QOP is configured.
 String host = null;



[06/50] [abbrv] hbase git commit: HBASE-17904 Get runs into NoSuchElementException when using Read Replica, with hbase.ipc.client.specificThreadForWriting to be true and hbase.rpc.client.impl to be o

2017-04-17 Thread elserj
HBASE-17904 Get runs into NoSuchElementException when using Read Replica, with hbase.ipc.client.specificThreadForWriting set to true and hbase.rpc.client.impl set to org.apache.hadoop.hbase.ipc.RpcClientImpl (Huaxiang Sun)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7678855f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7678855f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7678855f

Branch: refs/heads/HBASE-16961
Commit: 7678855fac011a9c02e5d6a42470c0178482a4ce
Parents: 0cd4cec
Author: Michael Stack 
Authored: Sun Apr 16 11:00:57 2017 -0700
Committer: Michael Stack 
Committed: Sun Apr 16 11:01:06 2017 -0700

--
 .../hadoop/hbase/ipc/BlockingRpcConnection.java |  2 +-
 .../hbase/client/TestReplicaWithCluster.java| 50 
 2 files changed, 51 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7678855f/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
index 15eb10c..1012ad0 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
@@ -156,7 +156,7 @@ class BlockingRpcConnection extends RpcConnection implements Runnable {
 }
 
 public void remove(Call call) {
-  callsToWrite.remove();
+  callsToWrite.remove(call);
      // By removing the call from the expected call list, we make the list smaller, but
   // it means as well that we don't know how many calls we cancelled.
   calls.remove(call.id);
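The one-character fix above addresses a classic `java.util.Queue` pitfall: the no-argument `remove()` always pops the head of the queue, while `remove(Object)` removes the specific element. A minimal sketch of the difference, using strings as stand-ins for HBase's `Call` objects:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueRemoveDemo {
    // Pre-fix behavior: the no-arg remove() discards whatever is at the
    // head of the queue, regardless of which call we meant to cancel.
    static String removeHead(Queue<String> q) {
        return q.remove();
    }

    public static void main(String[] args) {
        Queue<String> callsToWrite = new ArrayDeque<>();
        callsToWrite.add("call-1");
        callsToWrite.add("call-2");
        callsToWrite.add("call-3");

        // Caller wants to cancel "call-3", but the old code drops "call-1".
        String removed = removeHead(callsToWrite);

        // Post-fix behavior: remove(Object) targets the intended element.
        boolean cancelled = callsToWrite.remove("call-3");

        System.out.println(removed + " " + cancelled + " " + callsToWrite.size());
    }
}
```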

http://git-wip-us.apache.org/repos/asf/hbase/blob/7678855f/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
index becb2eb..2c77541 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.Waiter;
+
 import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -515,7 +516,56 @@ public class TestReplicaWithCluster {
 
   Assert.assertTrue(r.isStale());
 } finally {
+  HTU.getAdmin().disableTable(hdt.getTableName());
+  HTU.deleteTable(hdt.getTableName());
+}
+  }
+
+  @Test
+  public void testReplicaGetWithRpcClientImpl() throws IOException {
+    HTU.getConfiguration().setBoolean("hbase.ipc.client.specificThreadForWriting", true);
+    HTU.getConfiguration().set("hbase.rpc.client.impl", "org.apache.hadoop.hbase.ipc.RpcClientImpl");
+    // Create table then get the single region for our new table.
+    HTableDescriptor hdt = HTU.createTableDescriptor("testReplicaGetWithRpcClientImpl");
+    hdt.setRegionReplication(NB_SERVERS);
+    hdt.addCoprocessor(SlowMeCopro.class.getName());
+
+try {
+  Table table = HTU.createTable(hdt, new byte[][] { f }, null);
+
+  Put p = new Put(row);
+  p.addColumn(f, row, row);
+  table.put(p);
 
+  // Flush so it can be picked by the replica refresher thread
+  HTU.flush(table.getName());
+
+  // Sleep for some time until data is picked up by replicas
+  try {
+Thread.sleep(2 * REFRESH_PERIOD);
+  } catch (InterruptedException e1) {
+LOG.error(e1);
+  }
+
+  try {
+// Create the new connection so new config can kick in
+        Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration());
+Table t = connection.getTable(hdt.getTableName());
+
+// But if we ask for stale we will get it
+SlowMeCopro.cdl.set(new CountDownLatch(1));
+Get g = new Get(row);
+g.setConsistency(Consistency.TIMELINE);
+Result r = t.get(g);
+Assert.assertTrue(r.isStale());
+SlowMeCopro.cdl.get().countDown();
+  } finally {
+SlowMeCopro.cdl.get().countDown();
+SlowMeCopro.sleepTime.set(0);
+  }
+} 

[39/50] [abbrv] hbase git commit: HBASE-17602 Reduce some quota chore periods/delays

2017-04-17 Thread elserj
HBASE-17602 Reduce some quota chore periods/delays


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4db44ad6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4db44ad6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4db44ad6

Branch: refs/heads/HBASE-16961
Commit: 4db44ad66497729118b695e22fd2e0f4c7787cc9
Parents: 48332ee
Author: Josh Elser 
Authored: Tue Feb 7 11:21:08 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:44:00 2017 -0400

--
 .../java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java  | 4 ++--
 .../org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4db44ad6/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
index b9f4592..7f894e4 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
@@ -55,11 +55,11 @@ public class QuotaObserverChore extends ScheduledChore {
   private static final Log LOG = LogFactory.getLog(QuotaObserverChore.class);
   static final String QUOTA_OBSERVER_CHORE_PERIOD_KEY =
   "hbase.master.quotas.observer.chore.period";
-  static final int QUOTA_OBSERVER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 5; // 5 minutes in millis
+  static final int QUOTA_OBSERVER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 1; // 1 minute in millis
 
   static final String QUOTA_OBSERVER_CHORE_DELAY_KEY =
   "hbase.master.quotas.observer.chore.delay";
-  static final long QUOTA_OBSERVER_CHORE_DELAY_DEFAULT = 1000L * 60L; // 1 minute
+  static final long QUOTA_OBSERVER_CHORE_DELAY_DEFAULT = 1000L * 15L; // 15 seconds in millis
 
   static final String QUOTA_OBSERVER_CHORE_TIMEUNIT_KEY =
   "hbase.master.quotas.observer.chore.timeunit";

http://git-wip-us.apache.org/repos/asf/hbase/blob/4db44ad6/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java
index e1a2693..8587e79 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SpaceQuotaRefresherChore.java
@@ -44,11 +44,11 @@ public class SpaceQuotaRefresherChore extends ScheduledChore {
 
   static final String POLICY_REFRESHER_CHORE_PERIOD_KEY =
   "hbase.regionserver.quotas.policy.refresher.chore.period";
-  static final int POLICY_REFRESHER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 5; // 5 minutes in millis
+  static final int POLICY_REFRESHER_CHORE_PERIOD_DEFAULT = 1000 * 60 * 1; // 1 minute in millis
 
   static final String POLICY_REFRESHER_CHORE_DELAY_KEY =
   "hbase.regionserver.quotas.policy.refresher.chore.delay";
-  static final long POLICY_REFRESHER_CHORE_DELAY_DEFAULT = 1000L * 60L; // 1 minute
+  static final long POLICY_REFRESHER_CHORE_DELAY_DEFAULT = 1000L * 15L; // 15 seconds in millis
 
   static final String POLICY_REFRESHER_CHORE_TIMEUNIT_KEY =
   "hbase.regionserver.quotas.policy.refresher.chore.timeunit";
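For context, a chore here is a periodically scheduled task: the delay controls how long after startup the first run fires, and the period controls the interval between subsequent runs. The effect of shrinking both defaults can be sketched with a plain `ScheduledExecutorService` (an illustrative stand-in, not HBase's `ScheduledChore`; the millisecond constants are scaled-down analogues of the values in the diff):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreScheduleDemo {
    // Scaled-down analogues of the new defaults:
    // first run after `delay`, then one run every `period`.
    static final long DELAY_MS = 15;   // stands in for the 15-second delay
    static final long PERIOD_MS = 60;  // stands in for the 1-minute period

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch threeRuns = new CountDownLatch(3);
        long start = System.nanoTime();

        pool.scheduleAtFixedRate(() -> {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Runs land at roughly delay + n * period: ~15ms, ~75ms, ~135ms.
            System.out.println("chore ran at ~" + elapsedMs + "ms");
            threeRuns.countDown();
        }, DELAY_MS, PERIOD_MS, TimeUnit.MILLISECONDS);

        threeRuns.await();
        pool.shutdown();
    }
}
```

Lowering the delay means a freshly started master or region server begins enforcing quotas sooner; lowering the period tightens how stale the quota state can get between runs.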



[27/50] [abbrv] hbase git commit: HBASE-16996 Implement storage/retrieval of filesystem-use quotas into quota table (Josh Elser)

2017-04-17 Thread elserj
HBASE-16996 Implement storage/retrieval of filesystem-use quotas into quota table (Josh Elser)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a29abe64
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a29abe64
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a29abe64

Branch: refs/heads/HBASE-16961
Commit: a29abe646e2b1cd75ee1d0f186cc7486fe78b79e
Parents: 988a23e
Author: tedyu 
Authored: Sat Dec 3 14:30:48 2016 -0800
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../hadoop/hbase/quotas/QuotaTableUtil.java |  13 +-
 .../hadoop/hbase/quotas/MasterQuotaManager.java |  30 +
 .../hadoop/hbase/quotas/TestQuotaAdmin.java | 125 ++-
 3 files changed, 165 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a29abe64/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
index c44090f..8ef4f08 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
@@ -53,7 +53,9 @@ import org.apache.hadoop.hbase.util.Strings;
  * 
  * ROW-KEY  FAM/QUALDATA
  *   n.namespace q:s global-quotas
+ *   n.namespace u:du  size in bytes
 *   t.table q:s global-quotas
+ *   t.table u:du  size in bytes
  *   u.user  q:s global-quotas
  *   u.user  q:s.table table-quotas
  *   u.user  q:s.ns:   namespace-quotas
@@ -72,6 +74,7 @@ public class QuotaTableUtil {
   protected static final byte[] QUOTA_FAMILY_USAGE = Bytes.toBytes("u");
   protected static final byte[] QUOTA_QUALIFIER_SETTINGS = Bytes.toBytes("s");
   protected static final byte[] QUOTA_QUALIFIER_SETTINGS_PREFIX = 
Bytes.toBytes("s.");
+  protected static final byte[] QUOTA_QUALIFIER_DISKUSAGE = Bytes.toBytes("du");
   protected static final byte[] QUOTA_USER_ROW_KEY_PREFIX = 
Bytes.toBytes("u.");
   protected static final byte[] QUOTA_TABLE_ROW_KEY_PREFIX = 
Bytes.toBytes("t.");
   protected static final byte[] QUOTA_NAMESPACE_ROW_KEY_PREFIX = 
Bytes.toBytes("n.");
@@ -330,11 +333,16 @@ public class QuotaTableUtil {
*  Quotas protobuf helpers
*/
  protected static Quotas quotasFromData(final byte[] data) throws IOException {
+return quotasFromData(data, 0, data.length);
+  }
+
+  protected static Quotas quotasFromData(
+  final byte[] data, int offset, int length) throws IOException {
 int magicLen = ProtobufMagic.lengthOfPBMagic();
-if (!ProtobufMagic.isPBMagicPrefix(data, 0, magicLen)) {
+if (!ProtobufMagic.isPBMagicPrefix(data, offset, magicLen)) {
   throw new IOException("Missing pb magic prefix");
 }
-    return Quotas.parseFrom(new ByteArrayInputStream(data, magicLen, data.length - magicLen));
+    return Quotas.parseFrom(new ByteArrayInputStream(data, offset + magicLen, length - magicLen));
   }
 
   protected static byte[] quotasToData(final Quotas data) throws IOException {
@@ -348,6 +356,7 @@ public class QuotaTableUtil {
 boolean hasSettings = false;
 hasSettings |= quotas.hasThrottle();
 hasSettings |= quotas.hasBypassGlobals();
+hasSettings |= quotas.hasSpace();
 return !hasSettings;
   }
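The offset-aware overload of quotasFromData follows HBase's "PB magic" framing convention: serialized protobufs in the quota table carry a short magic prefix so readers can distinguish them from other encodings before parsing. A minimal sketch of the pattern, assuming a 4-byte "PBUF" marker (the real bytes live in HBase's ProtobufMagic; the payload here is opaque bytes rather than an actual Quotas message):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PbMagicDemo {
    // Assumed 4-byte marker; HBase's real value is defined in ProtobufMagic.
    static final byte[] PB_MAGIC = "PBUF".getBytes(StandardCharsets.US_ASCII);

    // Prepend the magic bytes to a serialized payload before storing it.
    static byte[] frame(byte[] payload) {
        byte[] out = new byte[PB_MAGIC.length + payload.length];
        System.arraycopy(PB_MAGIC, 0, out, 0, PB_MAGIC.length);
        System.arraycopy(payload, 0, out, PB_MAGIC.length, payload.length);
        return out;
    }

    // Mirrors quotasFromData(data, offset, length): verify the magic at
    // `offset`, then hand the remainder to the parser as a bounded stream.
    static ByteArrayInputStream unframe(byte[] data, int offset, int length) throws IOException {
        int magicLen = PB_MAGIC.length;
        if (!Arrays.equals(Arrays.copyOfRange(data, offset, offset + magicLen), PB_MAGIC)) {
            throw new IOException("Missing pb magic prefix");
        }
        return new ByteArrayInputStream(data, offset + magicLen, length - magicLen);
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = frame("hello".getBytes(StandardCharsets.US_ASCII));
        ByteArrayInputStream in = unframe(framed, 0, framed.length);
        byte[] payload = new byte[in.available()];
        in.read(payload);
        System.out.println(new String(payload, StandardCharsets.US_ASCII)); // prints "hello"
    }
}
```

The offset/length parameters matter because a stored cell value may sit inside a larger backing array; the reader must check the magic at the value's own offset, not at index 0.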
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/a29abe64/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
index 5dab2e3..1469268 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
@@ -37,6 +37,8 @@ import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetQuotaResponse;
import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Quotas;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceLimitRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.Throttle;
 import 

[13/50] [abbrv] hbase git commit: HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)

2017-04-17 Thread elserj
HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c2c2178b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c2c2178b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c2c2178b

Branch: refs/heads/HBASE-16961
Commit: c2c2178b2eebe4439eadec6b37fae2566944c16b
Parents: c8cd921
Author: Ramkrishna 
Authored: Mon Apr 17 09:10:59 2017 +0530
Committer: Ramkrishna 
Committed: Mon Apr 17 09:28:24 2017 +0530

--
 .../java/org/apache/hadoop/hbase/CellUtil.java  |  24 --
 .../org/apache/hadoop/hbase/ExtendedCell.java   |  10 +
 .../org/apache/hadoop/hbase/master/HMaster.java |   2 +
 .../hbase/regionserver/ByteBufferChunkCell.java |  48 +++
 .../apache/hadoop/hbase/regionserver/Chunk.java |  60 ++-
 .../hadoop/hbase/regionserver/ChunkCreator.java | 404 +++
 .../hbase/regionserver/HRegionServer.java   |  14 +-
 .../hbase/regionserver/MemStoreChunkPool.java   | 265 
 .../hadoop/hbase/regionserver/MemStoreLAB.java  |   4 +-
 .../hbase/regionserver/MemStoreLABImpl.java | 171 
 .../regionserver/NoTagByteBufferChunkCell.java  |  48 +++
 .../hadoop/hbase/regionserver/OffheapChunk.java |  31 +-
 .../hadoop/hbase/regionserver/OnheapChunk.java  |  32 +-
 .../hadoop/hbase/HBaseTestingUtility.java   |   3 +
 .../coprocessor/TestCoprocessorInterface.java   |   4 +
 .../TestRegionObserverScannerOpenHook.java  |   3 +
 .../coprocessor/TestRegionObserverStacking.java |   3 +
 .../io/hfile/TestScannerFromBucketCache.java|   3 +
 .../hadoop/hbase/master/TestCatalogJanitor.java |   7 +
 .../hadoop/hbase/regionserver/TestBulkLoad.java |   2 +-
 .../hbase/regionserver/TestCellFlatSet.java |   2 +-
 .../regionserver/TestCompactingMemStore.java|  37 +-
 .../TestCompactingToCellArrayMapMemStore.java   |  16 +-
 .../TestCompactionArchiveConcurrentClose.java   |   1 +
 .../TestCompactionArchiveIOException.java   |   1 +
 .../regionserver/TestCompactionPolicy.java  |   1 +
 .../hbase/regionserver/TestDefaultMemStore.java |  14 +-
 .../regionserver/TestFailedAppendAndSync.java   |   1 +
 .../hbase/regionserver/TestHMobStore.java   |   2 +-
 .../hadoop/hbase/regionserver/TestHRegion.java  |   2 +
 .../regionserver/TestHRegionReplayEvents.java   |   2 +-
 .../regionserver/TestMemStoreChunkPool.java |  48 +--
 .../hbase/regionserver/TestMemStoreLAB.java |  27 +-
 .../TestMemstoreLABWithoutPool.java | 168 
 .../hbase/regionserver/TestRecoveredEdits.java  |   1 +
 .../hbase/regionserver/TestRegionIncrement.java |   1 +
 .../hadoop/hbase/regionserver/TestStore.java|   1 +
 .../TestStoreFileRefresherChore.java|   1 +
 .../hbase/regionserver/TestWALLockup.java   |   1 +
 .../TestWALMonotonicallyIncreasingSeqId.java|   1 +
 .../hbase/regionserver/wal/TestDurability.java  |   3 +
 41 files changed, 990 insertions(+), 479 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c2c2178b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
index e1bc969..56de21b 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
@@ -3135,28 +3135,4 @@ public final class CellUtil {
   return Type.DeleteFamily.getCode();
 }
   }
-
-  /**
-   * Clone the passed cell by copying its data into the passed buf.
-   */
-  public static Cell copyCellTo(Cell cell, ByteBuffer buf, int offset, int len) {
-int tagsLen = cell.getTagsLength();
-if (cell instanceof ExtendedCell) {
-  ((ExtendedCell) cell).write(buf, offset);
-} else {
-      // Normally all Cell impls within Server will be of type ExtendedCell. Just considering the
-      // other case also. The data fragments within Cell is copied into buf as in KeyValue
-      // serialization format only.
-      KeyValueUtil.appendTo(cell, buf, offset, true);
-    }
-    if (tagsLen == 0) {
-      // When tagsLen is 0, make a NoTagsByteBufferKeyValue version. This is an optimized class
-      // which directly return tagsLen as 0. So we avoid parsing many length components in
-      // reading the tagLength stored in the backing buffer. The Memstore addition of every Cell
-      // call getTagsLength().
-      return new NoTagsByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
-} else {
-  return new ByteBufferKeyValue(buf, offset, len, cell.getSequenceId());
-}
-  }
 }


[47/50] [abbrv] hbase git commit: HBASE-17794 Swap "violation" for "snapshot" where appropriate

2017-04-17 Thread elserj
HBASE-17794 Swap "violation" for "snapshot" where appropriate

A couple of variables and comments in which violation is incorrectly
used to describe what the code is doing. This was a hold over from early
implementation -- need to scrub these out for clarity.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/abe1c065
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/abe1c065
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/abe1c065

Branch: refs/heads/HBASE-16961
Commit: abe1c065ab803ba60fa28b17432746e756c31ed7
Parents: 5b3926b
Author: Josh Elser 
Authored: Thu Mar 16 19:26:14 2017 -0400
Committer: Josh Elser 
Committed: Mon Apr 17 15:47:49 2017 -0400

--
 .../java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java| 4 ++--
 hbase-protocol-shaded/src/main/protobuf/Quota.proto| 2 +-
 .../org/apache/hadoop/hbase/quotas/QuotaObserverChore.java | 6 +++---
 .../apache/hadoop/hbase/quotas/TableQuotaSnapshotStore.java| 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/abe1c065/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
index ad59517..c008702 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
@@ -228,7 +228,7 @@ public class QuotaTableUtil {
   }
 
   /**
-   * Creates a {@link Scan} which returns only quota violations from the quota table.
+   * Creates a {@link Scan} which returns only quota snapshots from the quota table.
*/
   public static Scan makeQuotaSnapshotScan() {
 Scan s = new Scan();
@@ -246,7 +246,7 @@ public class QuotaTableUtil {
* will throw an {@link IllegalArgumentException}.
*
* @param result A row from the quota table.
-   * @param snapshots A map of violations to add the result of this method into.
+   * @param snapshots A map of snapshots to add the result of this method into.
*/
   public static void extractQuotaSnapshot(
   Result result, Map snapshots) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/abe1c065/hbase-protocol-shaded/src/main/protobuf/Quota.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/Quota.proto 
b/hbase-protocol-shaded/src/main/protobuf/Quota.proto
index 1a6d5ed..364c58b 100644
--- a/hbase-protocol-shaded/src/main/protobuf/Quota.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/Quota.proto
@@ -98,7 +98,7 @@ message SpaceLimitRequest {
 }
 
// Represents the state of a quota on a table. Either the quota is not in violation
-// or it is in violatino there is a violation policy which should be in effect.
+// or it is in violation there is a violation policy which should be in effect.
 message SpaceQuotaStatus {
   optional SpaceViolationPolicy policy = 1;
   optional bool in_violation = 2;

http://git-wip-us.apache.org/repos/asf/hbase/blob/abe1c065/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
index 94c5c87..254f2a1 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaObserverChore.java
@@ -532,9 +532,9 @@ public class QuotaObserverChore extends ScheduledChore {
   }
 
   /**
-   * Stores the quota violation state for the given table.
+   * Stores the quota state for the given table.
*/
-  void setTableQuotaViolation(TableName table, SpaceQuotaSnapshot snapshot) {
+  void setTableQuotaSnapshot(TableName table, SpaceQuotaSnapshot snapshot) {
 this.tableQuotaSnapshots.put(table, snapshot);
   }
 
@@ -552,7 +552,7 @@ public class QuotaObserverChore extends ScheduledChore {
   }
 
   /**
-   * Stores the quota violation state for the given namespace.
+   * Stores the quota state for the given namespace.
*/
  void setNamespaceQuotaSnapshot(String namespace, SpaceQuotaSnapshot snapshot) {
 this.namespaceQuotaSnapshots.put(namespace, snapshot);


[14/50] [abbrv] hbase git commit: Revert "HBASE-16438 Create a cell type so that chunk id is embedded in it (Ram)"

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
index 095f4bd..d56d6ec 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -116,7 +116,6 @@ import org.apache.hadoop.hbase.filter.BinaryComparator;
 import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterAllFilter;
 import org.apache.hadoop.hbase.filter.FilterBase;
 import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.NullComparator;
@@ -4932,7 +4931,6 @@ public class TestHRegion {
      String callingMethod, Configuration conf, boolean isReadOnly, byte[]... families)
   throws IOException {
 Path logDir = TEST_UTIL.getDataTestDirOnTestFS(callingMethod + ".log");
-    ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
 HRegionInfo hri = new HRegionInfo(tableName, startKey, stopKey);
 final WAL wal = HBaseTestingUtility.createWal(conf, logDir, hri);
 return initHRegion(tableName, startKey, stopKey, isReadOnly,

http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
index 6eed7df..0054642 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionReplayEvents.java
@@ -153,7 +153,7 @@ public class TestHRegionReplayEvents {
 }
 
 time = System.currentTimeMillis();
-    ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
+
 primaryHri = new HRegionInfo(htd.getTableName(),
   HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW,
   false, time, 0);

http://git-wip-us.apache.org/repos/asf/hbase/blob/ecdfb823/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
index 1768801..37a7664 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
@@ -48,30 +48,30 @@ import static org.junit.Assert.assertTrue;
 @Category({RegionServerTests.class, SmallTests.class})
 public class TestMemStoreChunkPool {
   private final static Configuration conf = new Configuration();
-  private static ChunkCreator chunkCreator;
+  private static MemStoreChunkPool chunkPool;
   private static boolean chunkPoolDisabledBeforeTest;
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
 conf.setBoolean(MemStoreLAB.USEMSLAB_KEY, true);
 conf.setFloat(MemStoreLAB.CHUNK_POOL_MAXSIZE_KEY, 0.2f);
-chunkPoolDisabledBeforeTest = ChunkCreator.chunkPoolDisabled;
-ChunkCreator.chunkPoolDisabled = false;
+chunkPoolDisabledBeforeTest = MemStoreChunkPool.chunkPoolDisabled;
+MemStoreChunkPool.chunkPoolDisabled = false;
    long globalMemStoreLimit = (long) (ManagementFactory.getMemoryMXBean().getHeapMemoryUsage()
        .getMax() * MemorySizeUtil.getGlobalMemStoreHeapPercent(conf, false));
-    chunkCreator = ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false,
-      globalMemStoreLimit, 0.2f, MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, null);
-    assertTrue(chunkCreator != null);
+    chunkPool = MemStoreChunkPool.initialize(globalMemStoreLimit, 0.2f,
+        MemStoreLAB.POOL_INITIAL_SIZE_DEFAULT, MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false);
+    assertTrue(chunkPool != null);
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
-ChunkCreator.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
+MemStoreChunkPool.chunkPoolDisabled = chunkPoolDisabledBeforeTest;
   }
 
   @Before
   public void tearDown() throws Exception {
-chunkCreator.clearChunksInPool();
+chunkPool.clearChunks();
   }
 
   @Test

[20/50] [abbrv] hbase git commit: HBASE-17000 Implement computation of online region sizes and report to the Master

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/2dea6764/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
index d7d4db0..e90c934 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
@@ -10164,6 +10164,1912 @@ public final class RegionServerStatusProtos {
 
   }
 
+  public interface RegionSpaceUseOrBuilder extends
+  // @@protoc_insertion_point(interface_extends:hbase.pb.RegionSpaceUse)
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.MessageOrBuilder {
+
+/**
+ * 
+ * A region identifier
+ * 
+ *
+ * optional .hbase.pb.RegionInfo region = 1;
+ */
+boolean hasRegion();
+/**
+ * 
+ * A region identifier
+ * 
+ *
+ * optional .hbase.pb.RegionInfo region = 1;
+ */
+org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo 
getRegion();
+/**
+ * 
+ * A region identifier
+ * 
+ *
+ * optional .hbase.pb.RegionInfo region = 1;
+ */
+
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfoOrBuilder
 getRegionOrBuilder();
+
+/**
+ * 
+ * The size in bytes of the region
+ * 
+ *
+ * optional uint64 size = 2;
+ */
+boolean hasSize();
+/**
+ * 
+ * The size in bytes of the region
+ * 
+ *
+ * optional uint64 size = 2;
+ */
+long getSize();
+  }
+  /**
+   * Protobuf type {@code hbase.pb.RegionSpaceUse}
+   */
+  public  static final class RegionSpaceUse extends
+  org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3 
implements
+  // @@protoc_insertion_point(message_implements:hbase.pb.RegionSpaceUse)
+  RegionSpaceUseOrBuilder {
+// Use RegionSpaceUse.newBuilder() to construct.
+private 
RegionSpaceUse(org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.Builder
 builder) {
+  super(builder);
+}
+private RegionSpaceUse() {
+  size_ = 0L;
+}
+
+@java.lang.Override
+public final 
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private RegionSpaceUse(
+org.apache.hadoop.hbase.shaded.com.google.protobuf.CodedInputStream 
input,
+
org.apache.hadoop.hbase.shaded.com.google.protobuf.ExtensionRegistryLite 
extensionRegistry)
+throws 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException
 {
+  this();
+  int mutable_bitField0_ = 0;
+  
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.Builder 
unknownFields =
+  
org.apache.hadoop.hbase.shaded.com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+case 10: {
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.Builder
 subBuilder = null;
+  if (((bitField0_ & 0x0001) == 0x0001)) {
+subBuilder = region_.toBuilder();
+  }
+  region_ = 
input.readMessage(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionInfo.PARSER,
 extensionRegistry);
+  if (subBuilder != null) {
+subBuilder.mergeFrom(region_);
+region_ = subBuilder.buildPartial();
+  }
+  bitField0_ |= 0x0001;
+  break;
+}
+case 16: {
+  bitField0_ |= 0x0002;
+  size_ = input.readUInt64();
+  break;
+}
+  }
+}
+  } catch 
(org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException
 e) {
+throw e.setUnfinishedMessage(this);
+  } catch (java.io.IOException e) {
+throw new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferException(
+e).setUnfinishedMessage(this);
+  } finally {
+this.unknownFields = unknownFields.build();
+makeExtensionsImmutable();
+  }
+}
+

[19/50] [abbrv] hbase git commit: HBASE-17000 Implement computation of online region sizes and report to the Master

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/2dea6764/hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
--
diff --git a/hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto 
b/hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
index 1c373ee..23ddd43 100644
--- a/hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
@@ -141,6 +141,22 @@ message SplitTableRegionResponse {
   optional uint64 proc_id = 1;
 }
 
+message RegionSpaceUse {
+  optional RegionInfo region = 1; // A region identifier
+  optional uint64 size = 2; // The size in bytes of the region
+}
+
+/**
+ * Reports filesystem usage for regions.
+ */
+message RegionSpaceUseReportRequest {
+  repeated RegionSpaceUse space_use = 1;
+}
+
+message RegionSpaceUseReportResponse {
+
+}
+
 service RegionServerStatusService {
   /** Called when a region server first starts. */
   rpc RegionServerStartup(RegionServerStartupRequest)
@@ -182,4 +198,10 @@ service RegionServerStatusService {
*/
   rpc getProcedureResult(GetProcedureResultRequest)
 returns(GetProcedureResultResponse);
+
+  /**
+   * Reports Region filesystem space use
+   */
+  rpc ReportRegionSpaceUse(RegionSpaceUseReportRequest)
+returns(RegionSpaceUseReportResponse);
 }
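The two messages above are deliberately small: one `(region, size)` pair per online region, batched into a single `RegionSpaceUseReportRequest`. As a rough plain-Java model of the round trip (class and method names here are illustrative stand-ins, not the generated protobuf API; the real Master-side loop runs through `MasterQuotaManager.addRegionSize`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java model of the report round trip; names are illustrative,
// not the generated protobuf builder API.
public class RegionSpaceReportModel {

    // Stand-in for one RegionSpaceUse message: a region identifier plus its size in bytes.
    static final class RegionSpaceUse {
        final String region;
        final long size;
        RegionSpaceUse(String region, long size) { this.region = region; this.size = size; }
    }

    // RegionServer side: flatten the per-region size map into the repeated space_use field.
    public static List<RegionSpaceUse> buildReport(Map<String, Long> regionSizes) {
        List<RegionSpaceUse> report = new ArrayList<>();
        for (Map.Entry<String, Long> e : regionSizes.entrySet()) {
            report.add(new RegionSpaceUse(e.getKey(), e.getValue()));
        }
        return report;
    }

    // Master side: what the loop over request.getSpaceUseList() amounts to,
    // with the newest reported size replacing the previous one per region.
    public static Map<String, Long> applyReport(Map<String, Long> masterView, List<RegionSpaceUse> report) {
        for (RegionSpaceUse use : report) {
            masterView.put(use.region, use.size);
        }
        return masterView;
    }

    public static void main(String[] args) {
        Map<String, Long> sizes = new HashMap<>();
        sizes.put("t1,,1.abc.", 1024L);
        sizes.put("t1,row,2.def.", 2048L);
        Map<String, Long> view = applyReport(new HashMap<>(), buildReport(sizes));
        System.out.println("master view holds " + view.size() + " regions");  // master view holds 2 regions
    }
}
```

The response message carries no payload: the report is fire-and-forget state transfer, so the Master only needs to acknowledge receipt.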

http://git-wip-us.apache.org/repos/asf/hbase/blob/2dea6764/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
index 9af8f45..f454248 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
@@ -58,6 +58,7 @@ import org.apache.hadoop.hbase.mob.MobUtils;
 import org.apache.hadoop.hbase.procedure.MasterProcedureManager;
 import org.apache.hadoop.hbase.procedure2.Procedure;
 import org.apache.hadoop.hbase.procedure2.ProcedureUtil;
+import org.apache.hadoop.hbase.quotas.MasterQuotaManager;
 import org.apache.hadoop.hbase.regionserver.RSRpcServices;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
@@ -216,6 +217,9 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProto
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionServerStartupRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionServerStartupResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionServerStatusService;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUseReportRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionSpaceUseReportResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionStateTransition;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRSFatalErrorRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.ReportRSFatalErrorResponse;
@@ -2006,4 +2010,19 @@ public class MasterRpcServices extends RSRpcServices
   throw new ServiceException(e);
 }
   }
+
+  @Override
+  public RegionSpaceUseReportResponse reportRegionSpaceUse(RpcController 
controller,
+  RegionSpaceUseReportRequest request) throws ServiceException {
+try {
+  master.checkInitialized();
+  MasterQuotaManager quotaManager = this.master.getMasterQuotaManager();
+  for (RegionSpaceUse report : request.getSpaceUseList()) {
+quotaManager.addRegionSize(HRegionInfo.convert(report.getRegion()), 
report.getSize());
+  }
+  return RegionSpaceUseReportResponse.newBuilder().build();
+} catch (Exception e) {
+  throw new ServiceException(e);
+}
+  }
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/2dea6764/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
new file mode 100644
index 000..01540eb
--- /dev/null
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileSystemUtilizationChore.java
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or 

[32/50] [abbrv] hbase git commit: HBASE-17001 Enforce quota violation policies in the RegionServer

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/0d76d667/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
--
diff --git 
a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
 
b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
index cc40536..d466e59 100644
--- 
a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
+++ 
b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java
@@ -5778,6 +5778,1284 @@ public final class QuotaProtos {
 // @@protoc_insertion_point(class_scope:hbase.pb.SpaceLimitRequest)
   }
 
+  public interface SpaceQuotaStatusOrBuilder
+  extends com.google.protobuf.MessageOrBuilder {
+
+// optional .hbase.pb.SpaceViolationPolicy policy = 1;
+/**
+ * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ */
+boolean hasPolicy();
+/**
+ * optional .hbase.pb.SpaceViolationPolicy policy = 1;
+ */
+org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.SpaceViolationPolicy getPolicy();
+
+// optional bool in_violation = 2;
+/**
+ * optional bool in_violation = 2;
+ */
+boolean hasInViolation();
+/**
+ * optional bool in_violation = 2;
+ */
+boolean getInViolation();
+  }
+  /**
+   * Protobuf type {@code hbase.pb.SpaceQuotaStatus}
+   *
+   * 
+   * Represents the state of a quota on a table. Either the quota is not in violation,
+   * or it is in violation and there is a violation policy which should be in effect.
+   * 
+   */
+  public static final class SpaceQuotaStatus extends
+  com.google.protobuf.GeneratedMessage
+  implements SpaceQuotaStatusOrBuilder {
+// Use SpaceQuotaStatus.newBuilder() to construct.
+private SpaceQuotaStatus(com.google.protobuf.GeneratedMessage.Builder builder) {
+  super(builder);
+  this.unknownFields = builder.getUnknownFields();
+}
+private SpaceQuotaStatus(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+
+private static final SpaceQuotaStatus defaultInstance;
+public static SpaceQuotaStatus getDefaultInstance() {
+  return defaultInstance;
+}
+
+public SpaceQuotaStatus getDefaultInstanceForType() {
+  return defaultInstance;
+}
+
+private final com.google.protobuf.UnknownFieldSet unknownFields;
+@java.lang.Override
+public final com.google.protobuf.UnknownFieldSet
+getUnknownFields() {
+  return this.unknownFields;
+}
+private SpaceQuotaStatus(
+com.google.protobuf.CodedInputStream input,
+com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+throws com.google.protobuf.InvalidProtocolBufferException {
+  initFields();
+  int mutable_bitField0_ = 0;
+  com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+  com.google.protobuf.UnknownFieldSet.newBuilder();
+  try {
+boolean done = false;
+while (!done) {
+  int tag = input.readTag();
+  switch (tag) {
+case 0:
+  done = true;
+  break;
+default: {
+  if (!parseUnknownField(input, unknownFields,
+ extensionRegistry, tag)) {
+done = true;
+  }
+  break;
+}
+case 8: {
+  int rawValue = input.readEnum();
+  org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.SpaceViolationPolicy value = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.SpaceViolationPolicy.valueOf(rawValue);
+  if (value == null) {
+unknownFields.mergeVarintField(1, rawValue);
+  } else {
+bitField0_ |= 0x0001;
+policy_ = value;
+  }
+  break;
+}
+case 16: {
+  bitField0_ |= 0x0002;
+  inViolation_ = input.readBool();
+  break;
+}
+  }
+}
+  } catch (com.google.protobuf.InvalidProtocolBufferException e) {
+throw e.setUnfinishedMessage(this);
+  } catch (java.io.IOException e) {
+throw new com.google.protobuf.InvalidProtocolBufferException(
+e.getMessage()).setUnfinishedMessage(this);
+  } finally {
+this.unknownFields = unknownFields.build();
+makeExtensionsImmutable();
+  }
+}
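The case labels in the parsing loop above come straight from the protobuf wire format: a tag is the field number shifted left three bits, OR'd with the wire type. Wire type 0 is varint (enums, bools, uint64s); wire type 2 is length-delimited (embedded messages). A quick check in plain Java:

```java
// The tag constants in generated parsers come from the protobuf wire format:
// tag = (field_number << 3) | wire_type.
public class ProtoTagMath {

    public static int tag(int fieldNumber, int wireType) {
        return (fieldNumber << 3) | wireType;
    }

    public static void main(String[] args) {
        // SpaceQuotaStatus: policy = field 1 (enum, varint), in_violation = field 2 (bool, varint).
        System.out.println(tag(1, 0));  // 8  -> matches "case 8" above
        System.out.println(tag(2, 0));  // 16 -> matches "case 16" above
        // RegionSpaceUse: region = field 1 (embedded message), hence "case 10" earlier.
        System.out.println(tag(1, 2));  // 10
    }
}
```

Case 0 means end of stream, and any tag the parser does not recognize falls to the default branch, which stashes the bytes in `unknownFields` for round-tripping.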
+public static final com.google.protobuf.Descriptors.Descriptor
+getDescriptor() {
+  return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_hbase_pb_SpaceQuotaStatus_descriptor;
+}
+
+protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+internalGetFieldAccessorTable() {
+  return 

[42/50] [abbrv] hbase git commit: HBASE-17428 Implement informational RPCs for space quotas

2017-04-17 Thread elserj
http://git-wip-us.apache.org/repos/asf/hbase/blob/095fabf1/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
--
diff --git 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
index a4c6095..d56def5 100644
--- 
a/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
+++ 
b/hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
@@ -4362,7 +4362,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuota space = 3;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
 
   getSpaceFieldBuilder() {
 if (spaceBuilder_ == null) {
   spaceBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -6077,7 +6077,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuota quota = 1;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuota.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaOrBuilder>
 
   getQuotaFieldBuilder() {
 if (quotaBuilder_ == null) {
   quotaBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -6351,7 +6351,7 @@ public final class QuotaProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasPolicy()) {
 hash = (37 * hash) + POLICY_FIELD_NUMBER;
 hash = (53 * hash) + policy_;
@@ -6978,7 +6978,7 @@ public final class QuotaProtos {
 return memoizedHashCode;
   }
   int hash = 41;
-  hash = (19 * hash) + getDescriptorForType().hashCode();
+  hash = (19 * hash) + getDescriptor().hashCode();
   if (hasStatus()) {
 hash = (37 * hash) + STATUS_FIELD_NUMBER;
 hash = (53 * hash) + getStatus().hashCode();
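Both hunks above swap only the source of the hash seed, from `getDescriptorForType()` to `getDescriptor()`. The surrounding pattern is the stock generated `hashCode()`: seed from the descriptor, then mix each present field in with fixed multipliers. A plain-Java model of that accumulation (the descriptor hash, field number, and value hash below are made up):

```java
// Model of the generated hashCode() pattern seen in the diff: seed with 41 and
// the descriptor's hash, then fold each set field in via the 37/53 multipliers.
public class ProtoHashModel {

    public static int mixField(int hash, int fieldNumber, int valueHash) {
        hash = (37 * hash) + fieldNumber;
        hash = (53 * hash) + valueHash;
        return hash;
    }

    public static int messageHash(int descriptorHash, int fieldNumber, int valueHash) {
        int hash = 41;
        hash = (19 * hash) + descriptorHash;  // the term whose source the diff changes
        return mixField(hash, fieldNumber, valueHash);
    }

    public static void main(String[] args) {
        // Deterministic for equal inputs; changing the seed shifts every result.
        System.out.println(messageHash(7, 1, 3));  // 1541402
        System.out.println(messageHash(7, 1, 3) == messageHash(7, 1, 3));  // true
    }
}
```

The practical upshot is that the seed must be stable and cheap for equal messages; both call sites in a class have to agree on it, which is why the diff changes every occurrence in one commit.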
@@ -7351,7 +7351,7 @@ public final class QuotaProtos {
* optional .hbase.pb.SpaceQuotaStatus status = 1;
*/
   private 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
-  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatus, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatus.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatusOrBuilder>
+  
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatus, 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatus.Builder,
 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.SpaceQuotaStatusOrBuilder>
 
   getStatusFieldBuilder() {
 if (statusBuilder_ == null) {
   statusBuilder_ = new 
org.apache.hadoop.hbase.shaded.com.google.protobuf.SingleFieldBuilderV3<
@@ -7476,163 +7476,5829 @@ public final class QuotaProtos {
 
   }
 
-  private static final 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.Descriptor
-internal_static_hbase_pb_TimedQuota_descriptor;
-  private static final
-
org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
-  internal_static_hbase_pb_TimedQuota_fieldAccessorTable;
-  private static final 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.Descriptor
-internal_static_hbase_pb_Throttle_descriptor;
-  private static final
-
org.apache.hadoop.hbase.shaded.com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
-  internal_static_hbase_pb_Throttle_fieldAccessorTable;
-  private static final 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.Descriptor
-   

[04/50] [abbrv] hbase git commit: HBASE-17888: Added generic methods for updating metrics on submit and finish of a procedure execution

2017-04-17 Thread elserj
HBASE-17888: Added generic methods for updating metrics on submit and finish of 
a procedure execution

Signed-off-by: Michael Stack 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c8461456
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c8461456
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c8461456

Branch: refs/heads/HBASE-16961
Commit: c8461456d0ae81b90d67d36e1e077ae1d01102e5
Parents: e2a7461
Author: Umesh Agashe 
Authored: Mon Apr 10 15:32:43 2017 -0700
Committer: Michael Stack 
Committed: Fri Apr 14 11:51:08 2017 -0700

--
 .../apache/hadoop/hbase/client/HBaseAdmin.java  |   2 +-
 .../org/apache/hadoop/hbase/ProcedureInfo.java  |  20 +-
 .../master/MetricsAssignmentManagerSource.java  |   9 +-
 .../MetricsAssignmentManagerSourceImpl.java |   9 +-
 .../hadoop/hbase/procedure2/Procedure.java  |  41 +-
 .../hbase/procedure2/ProcedureExecutor.java |  11 +
 .../hadoop/hbase/procedure2/ProcedureUtil.java  |  10 +-
 .../hbase/procedure2/TestProcedureMetrics.java  | 254 ++
 .../procedure2/TestStateMachineProcedure.java   |   1 -
 .../shaded/protobuf/generated/MasterProtos.java | 490 +--
 .../protobuf/generated/ProcedureProtos.java | 146 +++---
 .../src/main/protobuf/Master.proto  |   2 +-
 .../src/main/protobuf/Procedure.proto   |   2 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |   4 +-
 .../master/procedure/ServerCrashProcedure.java  |   2 +-
 .../hbase-webapps/master/procedures.jsp |   2 +-
 .../main/ruby/shell/commands/list_procedures.rb |   6 +-
 17 files changed, 652 insertions(+), 359 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c8461456/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
index 155a272..cadd6cc 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -2114,7 +2114,7 @@ public class HBaseAdmin implements Admin {
 procedureState, procProto.hasParentId() ? procProto.getParentId() : 
-1, nonceKey,
 procProto.hasException()?
 ForeignExceptionUtil.toIOException(procProto.getException()): 
null,
-procProto.getLastUpdate(), procProto.getStartTime(),
+procProto.getLastUpdate(), procProto.getSubmittedTime(),
 procProto.hasResult()? procProto.getResult().toByteArray() : null);
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/c8461456/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
index bb8bb08..6104c22 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
@@ -39,7 +39,7 @@ public class ProcedureInfo implements Cloneable {
   private final NonceKey nonceKey;
   private final IOException exception;
   private final long lastUpdate;
-  private final long startTime;
+  private final long submittedTime;
   private final byte[] result;
 
   private long clientAckTime = -1;
@@ -54,7 +54,7 @@ public class ProcedureInfo implements Cloneable {
   final NonceKey nonceKey,
   final IOException exception,
   final long lastUpdate,
-  final long startTime,
+  final long submittedTime,
   final byte[] result) {
 this.procId = procId;
 this.procName = procName;
@@ -63,7 +63,7 @@ public class ProcedureInfo implements Cloneable {
 this.parentId = parentId;
 this.nonceKey = nonceKey;
 this.lastUpdate = lastUpdate;
-this.startTime = startTime;
+this.submittedTime = submittedTime;
 
 // If the procedure is completed, we should treat exception and result 
differently
 this.exception = exception;
@@ -74,7 +74,7 @@ public class ProcedureInfo implements Cloneable {
   justification="Intentional; calling super class clone doesn't make sense 
here.")
   public ProcedureInfo clone() {
 return new ProcedureInfo(procId, procName, procOwner, procState, parentId, 
nonceKey,
-  exception, lastUpdate, startTime, result);
+  exception, lastUpdate, submittedTime, result);
   }
 
   @Override
@@ -96,10 +96,10 @@ public class ProcedureInfo implements Cloneable {
 sb.append(procState);

[09/50] [abbrv] hbase git commit: HBASE-15535 Correct link to Trafodion

2017-04-17 Thread elserj
HBASE-15535 Correct link to Trafodion

Signed-off-by: CHIA-PING TSAI 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/363f6275
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/363f6275
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/363f6275

Branch: refs/heads/HBASE-16961
Commit: 363f62751c760cc8056a2b1be40a410281e634f7
Parents: 918aa46
Author: Gábor Lipták 
Authored: Sat Apr 15 11:43:38 2017 -0400
Committer: CHIA-PING TSAI 
Committed: Mon Apr 17 10:26:28 2017 +0800

--
 src/main/asciidoc/_chapters/sql.adoc  | 2 +-
 src/main/site/xdoc/supportingprojects.xml | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/363f6275/src/main/asciidoc/_chapters/sql.adoc
--
diff --git a/src/main/asciidoc/_chapters/sql.adoc 
b/src/main/asciidoc/_chapters/sql.adoc
index b47104c..b1ad063 100644
--- a/src/main/asciidoc/_chapters/sql.adoc
+++ b/src/main/asciidoc/_chapters/sql.adoc
@@ -37,6 +37,6 @@ link:http://phoenix.apache.org[Apache Phoenix]
 
 === Trafodion
 
-link:https://wiki.trafodion.org/[Trafodion: Transactional SQL-on-HBase]
+link:http://trafodion.incubator.apache.org/[Trafodion: Transactional 
SQL-on-HBase]
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/363f6275/src/main/site/xdoc/supportingprojects.xml
--
diff --git a/src/main/site/xdoc/supportingprojects.xml 
b/src/main/site/xdoc/supportingprojects.xml
index f349c7f..f949a57 100644
--- a/src/main/site/xdoc/supportingprojects.xml
+++ b/src/main/site/xdoc/supportingprojects.xml
@@ -46,9 +46,9 @@ under the License.
 for HBase.
https://github.com/juwi/HBase-TAggregator;>HBase 
TAggregator
An HBase coprocessor for timeseries-based aggregations.
-   http://www.trafodion.org;>Trafodion
-   Trafodion is an HP-sponsored Apache-licensed open source SQL on HBase
-DBMS with full-ACID distributed transaction support.
+   http://trafodion.incubator.apache.org/;>Apache 
Trafodion
+   Apache Trafodion is a webscale SQL-on-Hadoop solution enabling
+transactional or operational workloads on Hadoop.
http://phoenix.apache.org/;>Apache Phoenix
Apache Phoenix is a relational database layer over HBase delivered as a
 client-embedded JDBC driver targeting low latency queries over HBase 
data.



[43/50] [abbrv] hbase git commit: HBASE-17428 Implement informational RPCs for space quotas

2017-04-17 Thread elserj
HBASE-17428 Implement informational RPCs for space quotas

Create some RPCs that can expose the in-memory state that the
RegionServers and Master hold to drive the space quota "state machine".
Then, create some hbase shell commands to interact with those.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/095fabf1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/095fabf1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/095fabf1

Branch: refs/heads/HBASE-16961
Commit: 095fabf16d91ce0e5c042b657a9f07a548d49a49
Parents: efd6edc
Author: Josh Elser 
Authored: Tue Feb 21 15:36:39 2017 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:44:00 2017 -0400

--
 .../hbase/client/ConnectionImplementation.java  |9 +
 .../hadoop/hbase/client/QuotaStatusCalls.java   |  125 +
 .../client/ShortCircuitMasterConnection.java|7 +
 .../hadoop/hbase/quotas/QuotaTableUtil.java |   77 +
 .../hbase/shaded/protobuf/RequestConverter.java |   33 +
 .../shaded/protobuf/generated/AdminProtos.java  |  394 +-
 .../shaded/protobuf/generated/MasterProtos.java |   92 +-
 .../shaded/protobuf/generated/QuotaProtos.java  | 5986 +-
 .../generated/RegionServerStatusProtos.java |   28 +-
 .../src/main/protobuf/Admin.proto   |9 +
 .../src/main/protobuf/Master.proto  |4 +
 .../src/main/protobuf/Quota.proto   |   35 +
 .../hbase/protobuf/generated/QuotaProtos.java   |6 +-
 .../hadoop/hbase/master/MasterRpcServices.java  |   40 +
 .../hbase/quotas/ActivePolicyEnforcement.java   |8 +
 .../hbase/regionserver/RSRpcServices.java   |   57 +
 .../hadoop/hbase/master/MockRegionServer.java   |   18 +
 .../hbase/quotas/TestQuotaStatusRPCs.java   |  192 +
 hbase-shell/src/main/ruby/hbase/quotas.rb   |   16 +
 hbase-shell/src/main/ruby/shell.rb  |3 +
 .../ruby/shell/commands/list_quota_snapshots.rb |   59 +
 .../shell/commands/list_quota_table_sizes.rb|   47 +
 .../shell/commands/list_quota_violations.rb |   48 +
 hbase-shell/src/test/ruby/hbase/quotas_test.rb  |   24 -
 .../test/ruby/hbase/quotas_test_no_cluster.rb   |   69 +
 25 files changed, 7066 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/095fabf1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
index 99feb14..3f27e1c 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
@@ -94,6 +94,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SecurityCa
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SecurityCapabilitiesResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetNormalizerRunningRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.SetNormalizerRunningResponse;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaRegionSizesRequest;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.QuotaProtos.GetSpaceQuotaRegionSizesResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.AddReplicationPeerRequest;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.AddReplicationPeerResponse;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.ReplicationProtos.DisableReplicationPeerRequest;
@@ -1731,6 +1733,13 @@ class ConnectionImplementation implements 
ClusterConnection, Closeable {
   ListReplicationPeersRequest request) throws ServiceException {
 return stub.listReplicationPeers(controller, request);
   }
+
+  @Override
+  public GetSpaceQuotaRegionSizesResponse getSpaceQuotaRegionSizes(
+  RpcController controller, GetSpaceQuotaRegionSizesRequest request)
+  throws ServiceException {
+return stub.getSpaceQuotaRegionSizes(controller, request);
+  }
 };
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/095fabf1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/QuotaStatusCalls.java
new file mode 100644
index 000..f0f385d
--- 

[21/50] [abbrv] hbase git commit: HBASE-17000 Implement computation of online region sizes and report to the Master

2017-04-17 Thread elserj
HBASE-17000 Implement computation of online region sizes and report to the 
Master

Includes only a trivial implementation of the Master-side collection: just
enough to write a test verifying the RegionServer-side collection.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2dea6764
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2dea6764
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2dea6764

Branch: refs/heads/HBASE-16961
Commit: 2dea67644982aa18d174de32f454f526354eea5c
Parents: a29abe6
Author: Josh Elser 
Authored: Mon Nov 7 13:46:42 2016 -0500
Committer: Josh Elser 
Committed: Mon Apr 17 15:35:31 2017 -0400

--
 .../generated/RegionServerStatusProtos.java | 2071 +-
 .../src/main/protobuf/RegionServerStatus.proto  |   22 +
 .../hadoop/hbase/master/MasterRpcServices.java  |   19 +
 .../quotas/FileSystemUtilizationChore.java  |  205 ++
 .../hadoop/hbase/quotas/MasterQuotaManager.java |   15 +
 .../hbase/regionserver/HRegionServer.java   |   72 +
 .../quotas/TestFileSystemUtilizationChore.java  |  357 +++
 .../hadoop/hbase/quotas/TestRegionSizeUse.java  |  194 ++
 .../TestRegionServerRegionSpaceUseReport.java   |   99 +
 9 files changed, 3032 insertions(+), 22 deletions(-)
--
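The diffstat shows the new `FileSystemUtilizationChore`, whose body is truncated above. Conceptually, the chore walks each online region, sums the filesystem lengths of its store files, and hands the totals to the reporting path. A toy sketch of that summation (plain Java, not the chore's actual API; the real code reads lengths from the region's files on HDFS):

```java
import java.util.Arrays;
import java.util.List;

// Toy version of the size computation: a region's reported size is the sum of
// its store files' on-disk lengths, in bytes.
public class RegionSizeSketch {

    public static long regionSize(List<Long> storeFileLengths) {
        long total = 0;
        for (long len : storeFileLengths) {
            total += len;
        }
        return total;
    }

    public static void main(String[] args) {
        // A region with three store files of 128, 256 and 512 bytes.
        System.out.println(regionSize(Arrays.asList(128L, 256L, 512L)));  // 896
    }
}
```

Each total becomes the `size` field of one `RegionSpaceUse` entry in the report the RegionServer periodically sends to the Master.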



