svn commit: r31733 [1/2] - in /dev/hbase: hbase-2.1.2RC0/ hbase-2.1.2RC1/

2018-12-31 Thread stack
Author: stack
Date: Tue Jan  1 00:19:33 2019
New Revision: 31733

Log:
Add 2.1.2RC1 and remove 2.1.2RC0

Added:
dev/hbase/hbase-2.1.2RC1/
dev/hbase/hbase-2.1.2RC1/CHANGES.md   (with props)
dev/hbase/hbase-2.1.2RC1/RELEASENOTES.md   (with props)
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-bin.tar.gz   (with props)
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-bin.tar.gz.asc
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-bin.tar.gz.sha512
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-client-bin.tar.gz   (with props)
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-client-bin.tar.gz.asc
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-client-bin.tar.gz.sha512
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-src.tar.gz   (with props)
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-src.tar.gz.asc
dev/hbase/hbase-2.1.2RC1/hbase-2.1.2-src.tar.gz.sha512
Removed:
dev/hbase/hbase-2.1.2RC0/

Added: dev/hbase/hbase-2.1.2RC1/CHANGES.md
==
--- dev/hbase/hbase-2.1.2RC1/CHANGES.md (added)
+++ dev/hbase/hbase-2.1.2RC1/CHANGES.md Tue Jan  1 00:19:33 2019
@@ -0,0 +1,639 @@
+# HBASE Changelog
+
+
+
+## Release 2.1.2 - Unreleased (as of 2018-12-29)
+
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component |
+| :--- | :--- | :--- | :--- |
+| [HBASE-21640](https://issues.apache.org/jira/browse/HBASE-21640) | Remove the TODO when increment zero |  Major | . |
+| [HBASE-21631](https://issues.apache.org/jira/browse/HBASE-21631) | list\_quotas should print human readable values for LIMIT |  Minor | shell |
+| [HBASE-21635](https://issues.apache.org/jira/browse/HBASE-21635) | Use maven enforcer to ban imports from illegal packages |  Major | build |
+| [HBASE-21520](https://issues.apache.org/jira/browse/HBASE-21520) | TestMultiColumnScanner cost long time when using ROWCOL bloom type |  Major | test |
+| [HBASE-21590](https://issues.apache.org/jira/browse/HBASE-21590) | Optimize trySkipToNextColumn in StoreScanner a bit |  Critical | Performance, Scanners |
+| [HBASE-21554](https://issues.apache.org/jira/browse/HBASE-21554) | Show replication endpoint classname for replication peer on master web UI |  Minor | UI |
+| [HBASE-21549](https://issues.apache.org/jira/browse/HBASE-21549) | Add shell command for serial replication peer |  Major | . |
+| [HBASE-21413](https://issues.apache.org/jira/browse/HBASE-21413) | Empty meta log doesn't get split when restart whole cluster |  Major | . |
+| [HBASE-21524](https://issues.apache.org/jira/browse/HBASE-21524) | Unnecessary DEBUG log in ConnectionImplementation#isTableEnabled |  Major | Client |
+| [HBASE-21511](https://issues.apache.org/jira/browse/HBASE-21511) | Remove in progress snapshot check in SnapshotFileCache#getUnreferencedFiles |  Minor | . |
+| [HBASE-21480](https://issues.apache.org/jira/browse/HBASE-21480) | Taking snapshot when RS crashes prevent we bring the regions online |  Major | snapshots |
+| [HBASE-21485](https://issues.apache.org/jira/browse/HBASE-21485) | Add more debug logs for remote procedure execution |  Major | proc-v2 |
+| [HBASE-21388](https://issues.apache.org/jira/browse/HBASE-21388) | No need to instantiate MemStoreLAB for master which not carry table |  Major | . |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component |
+| :--- | :--- | :--- | :--- |
+| [HBASE-21646](https://issues.apache.org/jira/browse/HBASE-21646) | Flakey TestTableSnapshotInputFormat; DisableTable not completing... |  Major | test |
+| [HBASE-21545](https://issues.apache.org/jira/browse/HBASE-21545) | NEW\_VERSION\_BEHAVIOR breaks Get/Scan with specified columns |  Major | API |
+| [HBASE-21629](https://issues.apache.org/jira/browse/HBASE-21629) | draining\_servers.rb is broken |  Major | scripts |
+| [HBASE-21621](https://issues.apache.org/jira/browse/HBASE-21621) | Reversed scan does not return expected number of rows |  Critical | scan |
+| [HBASE-21620](https://issues.apache.org/jira/browse/HBASE-21620) | Problem in scan query when using more than one column prefix filter in some cases. |  Major | scan |
+| [HBASE-21618](https://issues.apache.org/jira/browse/HBASE-21618) | Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result |  Critical | Client |
+| [HBASE-21610](https://issues.apache.org/jira/browse/HBASE-21610) | numOpenConnections metric is set to -1 when zero server channel exist |  Minor | metrics |
+| [HBASE-21498](https://issues.apache.org/jira/browse/HBASE-21498) | Master OOM when SplitTableRegionProcedure new CacheConfig and instantiate a new BlockCache |  Major | . |
+| [HBASE-21592](https://issues.apache.org/jira/browse/HBASE-21592) | quota.addGetResult(r) throw NPE |  Major | . |
+| [HBASE-21589](https://issues.apache.org/jira/browse/HBASE-21589) | TestCleanupMetaWAL fails |  Blocker | test, wal |
+| [HBASE-21582](https://issues.apache.org/jira/browse/HBASE-21582) | If call HBaseAdmin#snapshotAsync but forget call 

svn commit: r31733 [2/2] - in /dev/hbase: hbase-2.1.2RC0/ hbase-2.1.2RC1/

2018-12-31 Thread stack
Added: dev/hbase/hbase-2.1.2RC1/RELEASENOTES.md
==
--- dev/hbase/hbase-2.1.2RC1/RELEASENOTES.md (added)
+++ dev/hbase/hbase-2.1.2RC1/RELEASENOTES.md Tue Jan  1 00:19:33 2019
@@ -0,0 +1,815 @@
+# HBASE  2.1.2 Release Notes
+
+
+
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-21635](https://issues.apache.org/jira/browse/HBASE-21635) | *Major* | 
**Use maven enforcer to ban imports from illegal packages**
+
+Use the de.skuzzle.enforcer restrict-imports-enforcer-rule extension for the maven enforcer plugin to ban illegal imports at compile time. Now an illegal import, for example import com.google.common.\*, produces a compile error instead of a checkstyle warning.
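As a rough illustration of what such a rule checks, here is a self-contained sketch of a banned-import scanner. The class name, the banned-prefix list, and the scanning logic are illustrative assumptions for this sketch only; the real enforcement is performed by the de.skuzzle rule inside the Maven build, not by code like this.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for an import-ban check: scan source lines and
// collect imports whose package matches a banned prefix.
public class BannedImportChecker {
    // Hypothetical banned prefix; HBase bans direct Guava imports in favor
    // of the relocated org.apache.hbase.thirdparty packages.
    static final String[] BANNED = {"com.google.common."};

    public static List<String> findBannedImports(List<String> sourceLines) {
        List<String> violations = new ArrayList<>();
        for (String line : sourceLines) {
            String trimmed = line.trim();
            if (!trimmed.startsWith("import ")) {
                continue; // only import statements are of interest
            }
            String imported = trimmed.substring("import ".length());
            for (String prefix : BANNED) {
                if (imported.startsWith(prefix)) {
                    violations.add(trimmed);
                }
            }
        }
        return violations;
    }
}
```

A build-time rule fails the build when the violation list is non-empty, which is how a checkstyle warning becomes a hard error.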
+
+
+---
+
+* [HBASE-21401](https://issues.apache.org/jira/browse/HBASE-21401) | 
*Critical* | **Sanity check when constructing the KeyValue**
+
+Adds a sanity check when constructing a KeyValue from a byte[]. We use this constructor when reading a KeyValue from a socket, an HFile, or the WAL (replication). The sanity check is not designed to discover bit corruption from network transfer or disk IO; it is designed to detect bugs inside HBase in advance. HBASE-21459 indicated that the performance loss is extremely small across different kinds of KeyValue.
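A minimal, self-contained sketch of this kind of structural check follows. The field layout assumed here (a 4-byte key length and a 4-byte value length followed by the key and value bytes) mirrors the classic KeyValue serialization in spirit only; the actual check in HBASE-21401 validates considerably more fields, and the class and method names below are invented for the sketch.

```java
import java.nio.ByteBuffer;

// Sketch: verify that the lengths declared inside a serialized cell
// actually fit within the backing byte[], rejecting impossible values
// early instead of failing later with an obscure error.
public class CellSanityCheck {
    public static boolean isStructurallyValid(byte[] buf) {
        if (buf == null || buf.length < 8) {
            return false; // too short to hold even the two length fields
        }
        ByteBuffer bb = ByteBuffer.wrap(buf);
        int keyLen = bb.getInt();
        int valLen = bb.getInt();
        if (keyLen < 0 || valLen < 0) {
            return false; // negative lengths indicate corruption or a bug
        }
        // the declared payload must fit within the buffer
        return 8L + keyLen + valLen <= buf.length;
    }
}
```

Such a check costs a few comparisons per cell, which is consistent with the "extremely small performance loss" measured in HBASE-21459.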
+
+
+---
+
+* [HBASE-21554](https://issues.apache.org/jira/browse/HBASE-21554) | *Minor* | 
**Show replication endpoint classname for replication peer on master web UI**
+
+The replication UI on master will show the replication endpoint classname.
+
+
+---
+
+* [HBASE-21549](https://issues.apache.org/jira/browse/HBASE-21549) | *Major* | 
**Add shell command for serial replication peer**
+
+Adds a SERIAL flag to the add\_peer command to identify whether or not the replication peer is a serial replication peer. The serial flag defaults to false.
+
+
+---
+
+* [HBASE-21551](https://issues.apache.org/jira/browse/HBASE-21551) | *Blocker* 
| **Memory leak when use scan with STREAM at server side**
+
+
+### Summary
+HBase clusters will experience Region Server failures due to out-of-memory errors caused by a leak, given any of the following:
+
+* User initiates Scan operations set to use the STREAM reading type
+* User initiates Scan operations set to use the default reading type that read more than 4 * the block size of the column families involved in the scan (by default, 4 * 64KiB)
+* Compactions run
+
+### Root cause
+
+When there are long-running scans, the Region Server process attempts to optimize access by using a different API geared towards sequential access. Due to an error in HBASE-20704 for HBase 2.0+, the Region Server fails to release the related resources when those scans finish. That same optimization path is always used for the HBase internal file compaction process.
+
+### Workaround
+
+Impact of this error can be minimized by setting the config value "hbase.storescanner.pread.max.bytes" to MAX_INT to avoid the optimization for default user scans; note that this will have a severe impact on performance for long scans. Clients should also be checked to ensure they do not pass the STREAM read type to the Scan API.
+
+Compactions always use this sequentially-optimized reading mechanism, so downstream users will need to periodically restart Region Server roles after compactions have happened.
+
+
+---
+
+* [HBASE-21387](https://issues.apache.org/jira/browse/HBASE-21387) | *Major* | 
**Race condition surrounding in progress snapshot handling in snapshot cache 
leads to loss of snapshot files**
+
+To prevent a race condition between an in-progress snapshot (performed by TakeSnapshotHandler) and the HFileCleaner, which could result in data loss, this JIRA introduces mutual exclusion between taking a snapshot and running the HFileCleaner. That is, at any given moment, either a snapshot can be taken or the HFileCleaner can check for unreferenced hfiles, but not both.
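The mutual exclusion described above can be sketched with a single shared lock; the class and method names here are illustrative, not HBase's actual implementation, which coordinates these activities through its own internal machinery.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: snapshot-taking and hfile cleaning contend on one lock, so the
// cleaner can never delete files while a snapshot is mid-flight, and a
// snapshot can never start while the cleaner is scanning.
public class SnapshotCleanerExclusion {
    private final ReentrantLock exclusionLock = new ReentrantLock();

    public void takeSnapshot(Runnable snapshotWork) {
        exclusionLock.lock(); // blocks while the cleaner is running
        try {
            snapshotWork.run();
        } finally {
            exclusionLock.unlock();
        }
    }

    public void runCleanerChore(Runnable cleanerWork) {
        exclusionLock.lock(); // blocks while a snapshot is in progress
        try {
            cleanerWork.run();
        } finally {
            exclusionLock.unlock();
        }
    }
}
```

Either activity may run at any moment, but never both at once, which is exactly the invariant the release note states.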
+
+
+---
+
+* [HBASE-21423](https://issues.apache.org/jira/browse/HBASE-21423) | *Major* | 
**Procedures for meta table/region should be able to execute in separate 
workers**
+
+Procedures for the meta table will be executed in a separate worker thread named 'Urgent Worker' to avoid getting stuck behind other procedures. A new config named 'hbase.master.urgent.procedure.threads' is added; its default value is 1. To disable the separate worker, set it to 0.
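The dedicated-worker idea can be sketched as a dispatcher with a small separate pool for urgent work, so meta procedures cannot be starved by a backlog of ordinary procedures. Everything below is an illustrative assumption: the class names, the pool sizes, and the "0 disables the pool" semantics mirror the config's description rather than HBase's actual procedure scheduler.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: route procedures that touch meta to a dedicated pool so they
// run even when the general pool is saturated.
public class ProcedureDispatcher {
    private final ExecutorService generalPool = Executors.newFixedThreadPool(4);
    private final ExecutorService urgentPool; // null when disabled

    public ProcedureDispatcher(int urgentThreads) {
        this.urgentPool =
            urgentThreads > 0 ? Executors.newFixedThreadPool(urgentThreads) : null;
    }

    public Future<?> submit(Runnable proc, boolean touchesMeta) {
        if (touchesMeta && urgentPool != null) {
            return urgentPool.submit(proc); // bypasses the general backlog
        }
        return generalPool.submit(proc);
    }

    public void submitAndWait(Runnable proc, boolean touchesMeta) {
        try {
            submit(proc, touchesMeta).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        generalPool.shutdown();
        if (urgentPool != null) {
            urgentPool.shutdown();
        }
    }
}
```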
+
+
+---
+
+* [HBASE-21417](https://issues.apache.org/jira/browse/HBASE-21417) | 
*Critical* | **Pre commit build is broken due to surefire plugin crashes**
+
+Adds -Djdk.net.URLClassPath.disableClassPathURLCheck=true when executing the surefire plugin.
+
+
+---
+
+* [HBASE-21237](https://issues.apache.org/jira/browse/HBASE-21237) | *Blocker* 
| **Use CompatRemoteProcedureResolver to dispatch open/close region requests to 
RS**
+
+Use CompatRemoteProcedureResolver instead of ExecuteProceduresRemoteCall to dispatch region 

[hbase] Git Push Summary

2018-12-31 Thread stack
Repository: hbase
Updated Tags:  refs/tags/2.1.2RC1 [created] 68d64f29b


[hbase] Git Push Summary

2018-12-31 Thread elserj
Repository: hbase
Updated Branches:
  refs/heads/masgter [deleted] e160b5ac8


svn commit: r31731 - /dev/hbase/hbase-1.3.3RC0/ /release/hbase/1.3.3/

2018-12-31 Thread apurtell
Author: apurtell
Date: Mon Dec 31 20:31:47 2018
New Revision: 31731

Log:
Release Apache HBase 1.3.3

Added:
release/hbase/1.3.3/
  - copied from r31730, dev/hbase/hbase-1.3.3RC0/
Removed:
dev/hbase/hbase-1.3.3RC0/



svn commit: r31732 - /release/hbase/1.3.2.1/

2018-12-31 Thread apurtell
Author: apurtell
Date: Mon Dec 31 20:32:02 2018
New Revision: 31732

Log:
Remove old artifacts for HBase release 1.3.2.1

Removed:
release/hbase/1.3.2.1/



[2/2] hbase git commit: HBASE-21492 CellCodec Written To WAL Before It's Verified

2018-12-31 Thread apurtell
HBASE-21492 CellCodec Written To WAL Before It's Verified

Conflicts:

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractProtobufLogWriter.java


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f7470a8b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f7470a8b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f7470a8b

Branch: refs/heads/branch-1.4
Commit: f7470a8b5734ab2ff05e4fc639f8b8fb8d9d8217
Parents: 6dbb1d4
Author: BELUGA BEHR 
Authored: Tue Nov 27 08:57:06 2018 -0800
Committer: Andrew Purtell 
Committed: Mon Dec 31 12:29:17 2018 -0800

--
 .../org/apache/hadoop/hbase/mapreduce/WALPlayer.java |  2 +-
 .../hadoop/hbase/regionserver/wal/ProtobufLogWriter.java |  2 +-
 .../hadoop/hbase/regionserver/wal/WALCellCodec.java  |  8 
 .../hbase/regionserver/wal/TestCustomWALCellCodec.java   | 11 +++
 4 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f7470a8b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
index 377b6ea..bff110c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
@@ -295,7 +295,7 @@ public class WALPlayer extends Configured implements Tool {
   // No reducers.
   job.setNumReduceTasks(0);
 }
-String codecCls = WALCellCodec.getWALCellCodecClass(conf);
+String codecCls = WALCellCodec.getWALCellCodecClass(conf).getName();
 try {
      TableMapReduceUtil.addDependencyJarsForClasses(job.getConfiguration(), Class.forName(codecCls));
 } catch (Exception e) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/f7470a8b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
index 42abeae..436df87 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
@@ -69,7 +69,7 @@ public class ProtobufLogWriter extends WriterBase {
   builder.setWriterClsName(ProtobufLogWriter.class.getSimpleName());
 }
 if (!builder.hasCellCodecClsName()) {
-  builder.setCellCodecClsName(WALCellCodec.getWALCellCodecClass(conf));
+      builder.setCellCodecClsName(WALCellCodec.getWALCellCodecClass(conf).getName());
 }
 return builder.build();
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/f7470a8b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
index 5c62ef2..11b6120 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
@@ -79,8 +79,8 @@ public class WALCellCodec implements Codec {
 this.compression = compression;
   }
 
-  public static String getWALCellCodecClass(Configuration conf) {
-return conf.get(WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
+  public static Class getWALCellCodecClass(Configuration conf) {
+return conf.getClass(WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class);
   }
 
   /**
@@ -98,7 +98,7 @@ public class WALCellCodec implements Codec {
   public static WALCellCodec create(Configuration conf, String cellCodecClsName,
   CompressionContext compression) throws UnsupportedOperationException {
 if (cellCodecClsName == null) {
-  cellCodecClsName = getWALCellCodecClass(conf);
+  cellCodecClsName = getWALCellCodecClass(conf).getName();
 }
    return ReflectionUtils.instantiateWithCustomCtor(cellCodecClsName, new Class[]
        { Configuration.class, CompressionContext.class }, new Object[] { conf, compression });
@@ -117,7 +117,7 @@ public class WALCellCodec implements Codec {
*/
   public static WALCellCodec create(Configuration conf,
   CompressionContext compression) throws UnsupportedOperationException {
-

[1/2] hbase git commit: HBASE-21492 CellCodec Written To WAL Before It's Verified

2018-12-31 Thread apurtell
Repository: hbase
Updated Branches:
  refs/heads/branch-1 55a775b8d -> beeb0796e
  refs/heads/branch-1.4 6dbb1d407 -> f7470a8b5


HBASE-21492 CellCodec Written To WAL Before It's Verified

Conflicts:

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractProtobufLogWriter.java


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/beeb0796
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/beeb0796
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/beeb0796

Branch: refs/heads/branch-1
Commit: beeb0796e506dfb758c22dece51527e6247414e6
Parents: 55a775b
Author: BELUGA BEHR 
Authored: Tue Nov 27 08:57:06 2018 -0800
Committer: Andrew Purtell 
Committed: Mon Dec 31 12:29:04 2018 -0800

--
 .../org/apache/hadoop/hbase/mapreduce/WALPlayer.java |  2 +-
 .../hadoop/hbase/regionserver/wal/ProtobufLogWriter.java |  2 +-
 .../hadoop/hbase/regionserver/wal/WALCellCodec.java  |  8 
 .../hbase/regionserver/wal/TestCustomWALCellCodec.java   | 11 +++
 4 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/beeb0796/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
index 377b6ea..bff110c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
@@ -295,7 +295,7 @@ public class WALPlayer extends Configured implements Tool {
   // No reducers.
   job.setNumReduceTasks(0);
 }
-String codecCls = WALCellCodec.getWALCellCodecClass(conf);
+String codecCls = WALCellCodec.getWALCellCodecClass(conf).getName();
 try {
      TableMapReduceUtil.addDependencyJarsForClasses(job.getConfiguration(), Class.forName(codecCls));
 } catch (Exception e) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/beeb0796/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
index cb9e5a5..2e4226f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
@@ -69,7 +69,7 @@ public class ProtobufLogWriter extends WriterBase {
   builder.setWriterClsName(ProtobufLogWriter.class.getSimpleName());
 }
 if (!builder.hasCellCodecClsName()) {
-  builder.setCellCodecClsName(WALCellCodec.getWALCellCodecClass(conf));
+      builder.setCellCodecClsName(WALCellCodec.getWALCellCodecClass(conf).getName());
 }
 return builder.build();
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/beeb0796/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
index 5c62ef2..11b6120 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
@@ -79,8 +79,8 @@ public class WALCellCodec implements Codec {
 this.compression = compression;
   }
 
-  public static String getWALCellCodecClass(Configuration conf) {
-return conf.get(WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
+  public static Class getWALCellCodecClass(Configuration conf) {
+return conf.getClass(WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class);
   }
 
   /**
@@ -98,7 +98,7 @@ public class WALCellCodec implements Codec {
   public static WALCellCodec create(Configuration conf, String cellCodecClsName,
   CompressionContext compression) throws UnsupportedOperationException {
 if (cellCodecClsName == null) {
-  cellCodecClsName = getWALCellCodecClass(conf);
+  cellCodecClsName = getWALCellCodecClass(conf).getName();
 }
    return ReflectionUtils.instantiateWithCustomCtor(cellCodecClsName, new Class[]
        { Configuration.class, CompressionContext.class }, new Object[] { conf, compression });
@@ -117,7 +117,7 @@ public class WALCellCodec implements Codec {
*/
   public 

[hbase] Git Push Summary

2018-12-31 Thread apurtell
Repository: hbase
Updated Tags:  refs/tags/rel/1.3.3 [created] 3327b6417


[06/47] hbase git commit: HBASE-21590 Optimize trySkipToNextColumn in StoreScanner a bit.

2018-12-31 Thread zhangduo
HBASE-21590 Optimize trySkipToNextColumn in StoreScanner a bit.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cb1966dc
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cb1966dc
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cb1966dc

Branch: refs/heads/HBASE-21512
Commit: cb1966dc2d94fba10d9b6af3c5719e03f621df92
Parents: f32d261
Author: Lars Hofhansl 
Authored: Thu Dec 13 11:57:16 2018 -0800
Committer: Lars Hofhansl 
Committed: Thu Dec 13 11:57:16 2018 -0800

--
 .../apache/hadoop/hbase/regionserver/StoreScanner.java  | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/cb1966dc/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index 736c08a..e7a4528 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -802,12 +802,16 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
   @VisibleForTesting
   protected boolean trySkipToNextRow(Cell cell) throws IOException {
     Cell nextCell = null;
+    // used to guard against a changed next indexed key by doing an identity comparison;
+    // when the identity changes we need to compare the bytes again
+    Cell previousIndexedKey = null;
     do {
       Cell nextIndexedKey = getNextIndexedKey();
       if (nextIndexedKey != null && nextIndexedKey != KeyValueScanner.NO_NEXT_INDEXED_KEY
-          && matcher.compareKeyForNextRow(nextIndexedKey, cell) >= 0) {
+          && (nextIndexedKey == previousIndexedKey || matcher.compareKeyForNextRow(nextIndexedKey, cell) >= 0)) {
         this.heap.next();
         ++kvsScanned;
+        previousIndexedKey = nextIndexedKey;
       } else {
         return false;
       }
@@ -823,12 +827,16 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
   @VisibleForTesting
   protected boolean trySkipToNextColumn(Cell cell) throws IOException {
     Cell nextCell = null;
+    // used to guard against a changed next indexed key by doing an identity comparison;
+    // when the identity changes we need to compare the bytes again
+    Cell previousIndexedKey = null;
     do {
       Cell nextIndexedKey = getNextIndexedKey();
       if (nextIndexedKey != null && nextIndexedKey != KeyValueScanner.NO_NEXT_INDEXED_KEY
-          && matcher.compareKeyForNextColumn(nextIndexedKey, cell) >= 0) {
+          && (nextIndexedKey == previousIndexedKey || matcher.compareKeyForNextColumn(nextIndexedKey, cell) >= 0)) {
         this.heap.next();
         ++kvsScanned;
+        previousIndexedKey = nextIndexedKey;
       } else {
         return false;
       }
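The guard added in the diff above skips an expensive byte-wise comparison whenever the next indexed key is the very same object as the one already compared, falling back to the full comparison only when the reference changes. A self-contained illustration of that identity-guard pattern (the names and the stand-in "expensive" comparison are invented for the sketch):

```java
// Sketch: an identity check short-circuits a costly comparison when the
// key object has not changed since the last iteration, mirroring the
// previousIndexedKey guard in StoreScanner.
public class IdentityGuardDemo {
    static int expensiveComparisons = 0;

    static boolean expensiveCompare(String key) {
        expensiveComparisons++; // stands in for a byte-wise key comparison
        return key.length() > 3;
    }

    public static int scan(String[] nextIndexedKeys) {
        int advanced = 0;
        String previousKey = null;
        for (String key : nextIndexedKeys) {
            // identity check first: only fall back to the expensive compare
            // when the indexed key object actually changed
            if (key == previousKey || expensiveCompare(key)) {
                advanced++;
                previousKey = key;
            } else {
                break;
            }
        }
        return advanced;
    }
}
```

When the same key object repeats across iterations, the expensive comparison runs once instead of once per iteration, which is the whole point of the optimization.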



[45/47] hbase git commit: HBASE-21515 Also initialize an AsyncClusterConnection in HRegionServer

2018-12-31 Thread zhangduo
HBASE-21515 Also initialize an AsyncClusterConnection in HRegionServer


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f3caa018
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f3caa018
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f3caa018

Branch: refs/heads/HBASE-21512
Commit: f3caa0188b5b5f032d4b90a7b75e518a8752e0f4
Parents: 7755d4b
Author: zhangduo 
Authored: Fri Nov 30 08:23:47 2018 +0800
Committer: zhangduo 
Committed: Mon Dec 31 20:34:24 2018 +0800

--
 .../hbase/client/AsyncClusterConnection.java| 38 
 .../hbase/client/AsyncConnectionImpl.java   | 39 ++--
 .../hbase/client/ClusterConnectionFactory.java  | 63 
 .../hadoop/hbase/client/ConnectionFactory.java  |  5 +-
 .../hadoop/hbase/util/ReflectionUtils.java  | 22 ---
 .../java/org/apache/hadoop/hbase/Server.java| 20 +++
 .../org/apache/hadoop/hbase/master/HMaster.java |  3 +
 .../hbase/regionserver/HRegionServer.java   | 56 -
 .../regionserver/ReplicationSyncUp.java |  6 ++
 .../hadoop/hbase/MockRegionServerServices.java  |  5 ++
 .../client/TestAsyncNonMetaRegionLocator.java   |  2 +-
 ...syncNonMetaRegionLocatorConcurrenyLimit.java |  2 +-
 .../client/TestAsyncRegionLocatorTimeout.java   |  2 +-
 ...TestAsyncSingleRequestRpcRetryingCaller.java |  4 +-
 .../hbase/client/TestAsyncTableNoncedRetry.java |  2 +-
 .../hbase/master/MockNoopMasterServices.java|  6 ++
 .../hadoop/hbase/master/MockRegionServer.java   |  5 ++
 .../hbase/master/TestActiveMasterManager.java   |  6 ++
 .../hbase/master/cleaner/TestHFileCleaner.java  |  6 ++
 .../master/cleaner/TestHFileLinkCleaner.java|  6 ++
 .../hbase/master/cleaner/TestLogsCleaner.java   |  6 ++
 .../cleaner/TestReplicationHFileCleaner.java|  6 ++
 .../regionserver/TestHeapMemoryManager.java |  6 ++
 .../hbase/regionserver/TestSplitLogWorker.java  |  6 ++
 .../hbase/regionserver/TestWALLockup.java   |  6 ++
 .../TestReplicationTrackerZKImpl.java   |  6 ++
 .../TestReplicationSourceManager.java   |  6 ++
 .../security/token/TestTokenAuthentication.java |  6 ++
 .../apache/hadoop/hbase/util/MockServer.java|  6 ++
 29 files changed, 302 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f3caa018/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
new file mode 100644
index 000..c7dea25
--- /dev/null
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.ipc.RpcClient;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * The asynchronous connection for internal usage.
+ */
+@InterfaceAudience.Private
+public interface AsyncClusterConnection extends AsyncConnection {
+
+  /**
+   * Get the nonce generator for this connection.
+   */
+  NonceGenerator getNonceGenerator();
+
+  /**
+   * Get the rpc client we used to communicate with other servers.
+   */
+  RpcClient getRpcClient();
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/f3caa018/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 078395b..79ec54b 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ 

[03/47] hbase git commit: HBASE-21568 Use CacheConfig.DISABLED where we don't expect to have blockcache running

2018-12-31 Thread zhangduo
HBASE-21568 Use CacheConfig.DISABLED where we don't expect to have blockcache 
running

This includes removing the "old way" of disabling blockcache in favor of the
new API.

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/67d6d508
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/67d6d508
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/67d6d508

Branch: refs/heads/HBASE-21512
Commit: 67d6d5084cf8fc094cda4bd3f091d8a0a9cb1d3e
Parents: f88224e
Author: Josh Elser 
Authored: Fri Dec 7 17:18:49 2018 -0500
Committer: Josh Elser 
Committed: Tue Dec 11 10:02:18 2018 -0500

--
 .../org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java  | 6 ++
 .../src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java  | 4 +---
 .../org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java   | 2 +-
 .../org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.java| 6 +++---
 .../java/org/apache/hadoop/hbase/util/CompressionTest.java | 2 +-
 .../src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java  | 5 ++---
 .../apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java  | 2 +-
 7 files changed, 11 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/67d6d508/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
index c911e8c..274a506 100644
--- 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
@@ -414,8 +414,6 @@ public class HFileOutputFormat2
     DataBlockEncoding encoding = overriddenEncoding;
     encoding = encoding == null ? datablockEncodingMap.get(tableAndFamily) : encoding;
     encoding = encoding == null ? DataBlockEncoding.NONE : encoding;
-    Configuration tempConf = new Configuration(conf);
-    tempConf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0.0f);
     HFileContextBuilder contextBuilder = new HFileContextBuilder()
         .withCompression(compression)
         .withChecksumType(HStore.getChecksumType(conf))
@@ -430,12 +428,12 @@ public class HFileOutputFormat2
     HFileContext hFileContext = contextBuilder.build();
     if (null == favoredNodes) {
       wl.writer =
-          new StoreFileWriter.Builder(conf, new CacheConfig(tempConf), fs)
+          new StoreFileWriter.Builder(conf, CacheConfig.DISABLED, fs)
              .withOutputDir(familydir).withBloomType(bloomType)
              .withComparator(CellComparator.getInstance()).withFileContext(hFileContext).build();
     } else {
       wl.writer =
-          new StoreFileWriter.Builder(conf, new CacheConfig(tempConf), new HFileSystem(fs))
+          new StoreFileWriter.Builder(conf, CacheConfig.DISABLED, new HFileSystem(fs))
              .withOutputDir(familydir).withBloomType(bloomType)
              .withComparator(CellComparator.getInstance()).withFileContext(hFileContext)
              .withFavoredNodes(favoredNodes).build();

http://git-wip-us.apache.org/repos/asf/hbase/blob/67d6d508/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
index 5bcaa17..78ebedc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
@@ -356,9 +356,7 @@ public class HFile {
*/
   public static final WriterFactory getWriterFactoryNoCache(Configuration conf) {
-Configuration tempConf = new Configuration(conf);
-tempConf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0.0f);
-return HFile.getWriterFactory(conf, new CacheConfig(tempConf));
+return HFile.getWriterFactory(conf, CacheConfig.DISABLED);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/67d6d508/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
index 
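Both hunks above replace a per-call `new Configuration(conf)` copy (with the block cache sized to zero) by the shared `CacheConfig.DISABLED` singleton. A minimal self-contained sketch of that pattern, using a hypothetical `CacheConfigSketch` stand-in rather than the real HBase class:

```java
// Hypothetical stand-in for the idea behind CacheConfig.DISABLED: instead of
// building a fresh configuration object per writer just to turn caching off,
// expose one shared, immutable "disabled" instance.
final class CacheConfigSketch {
  static final CacheConfigSketch DISABLED = new CacheConfigSketch(0.0f);

  private final float blockCacheSize;

  private CacheConfigSketch(float blockCacheSize) {
    this.blockCacheSize = blockCacheSize;
  }

  boolean shouldCacheBlocks() {
    return blockCacheSize > 0.0f;
  }
}
```

Every writer that needs no cache shares `DISABLED`, avoiding the throwaway `Configuration` copy the old code paid for on each call.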

[28/47] hbase git commit: HBASE-21629 draining_servers.rb is broken

2018-12-31 Thread zhangduo
HBASE-21629 draining_servers.rb is broken


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/59f77de7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/59f77de7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/59f77de7

Branch: refs/heads/HBASE-21512
Commit: 59f77de723849e4d330167f60e53e44b2763cafc
Parents: 97fd647
Author: Nihal Jain 
Authored: Sun Dec 23 14:29:53 2018 +0530
Committer: stack 
Committed: Sun Dec 23 20:48:43 2018 -0800

--
 bin/draining_servers.rb | 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/59f77de7/bin/draining_servers.rb
--
diff --git a/bin/draining_servers.rb b/bin/draining_servers.rb
index 0d29c19..a8e20f0 100644
--- a/bin/draining_servers.rb
+++ b/bin/draining_servers.rb
@@ -27,6 +27,7 @@ java_import org.apache.hadoop.hbase.HBaseConfiguration
 java_import org.apache.hadoop.hbase.client.ConnectionFactory
 java_import org.apache.hadoop.hbase.client.HBaseAdmin
 java_import org.apache.hadoop.hbase.zookeeper.ZKUtil
+java_import org.apache.hadoop.hbase.zookeeper.ZNodePaths
 java_import org.slf4j.LoggerFactory
 
 # Name of this script
@@ -86,11 +87,11 @@ def addServers(_options, hostOrServers)
   servers = getServerNames(hostOrServers, config)
 
  zkw = org.apache.hadoop.hbase.zookeeper.ZKWatcher.new(config, 'draining_servers', nil)
-  parentZnode = zkw.znodePaths.drainingZNode
 
   begin
+parentZnode = zkw.getZNodePaths.drainingZNode
 for server in servers
-  node = ZKUtil.joinZNode(parentZnode, server)
+  node = ZNodePaths.joinZNode(parentZnode, server)
   ZKUtil.createAndFailSilent(zkw, node)
 end
   ensure
@@ -103,11 +104,11 @@ def removeServers(_options, hostOrServers)
   servers = getServerNames(hostOrServers, config)
 
  zkw = org.apache.hadoop.hbase.zookeeper.ZKWatcher.new(config, 'draining_servers', nil)
-  parentZnode = zkw.znodePaths.drainingZNode
 
   begin
+parentZnode = zkw.getZNodePaths.drainingZNode
 for server in servers
-  node = ZKUtil.joinZNode(parentZnode, server)
+  node = ZNodePaths.joinZNode(parentZnode, server)
   ZKUtil.deleteNodeFailSilent(zkw, node)
 end
   ensure
@@ -120,10 +121,14 @@ def listServers(_options)
   config = HBaseConfiguration.create
 
  zkw = org.apache.hadoop.hbase.zookeeper.ZKWatcher.new(config, 'draining_servers', nil)
-  parentZnode = zkw.znodePaths.drainingZNode
 
-  servers = ZKUtil.listChildrenNoWatch(zkw, parentZnode)
-  servers.each { |server| puts server }
+  begin
+parentZnode = zkw.getZNodePaths.drainingZNode
+servers = ZKUtil.listChildrenNoWatch(zkw, parentZnode)
+servers.each { |server| puts server }
+  ensure
+zkw.close
+  end
 end
 
 hostOrServers = ARGV[1..ARGV.size]
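The fix swaps the removed `ZKUtil.joinZNode` for `ZNodePaths.joinZNode` and moves the draining-znode lookup inside `begin`/`ensure` so the watcher is always closed. A self-contained sketch of the assumed join semantics (a hypothetical helper mirroring the usual ZooKeeper path rules, not the real HBase class):

```java
final class ZNodePathsSketch {
  // Assumed semantics: parent and child joined with a single slash, with the
  // root "/" treated specially so we never emit "//child".
  static String joinZNode(String parent, String child) {
    return "/".equals(parent) ? "/" + child : parent + "/" + child;
  }
}
```

The Ruby `begin ... ensure zkw.close end` shape plays the same role as Java's try/finally: the ZooKeeper watcher is released even when listing or node creation raises.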



[10/47] hbase git commit: Update downloads.xml for release 1.4.9

2018-12-31 Thread zhangduo
Update downloads.xml for release 1.4.9


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2b003c5d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2b003c5d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2b003c5d

Branch: refs/heads/HBASE-21512
Commit: 2b003c5d685160eeaf90387e887b433dadb8695e
Parents: 1b08ba7
Author: Andrew Purtell 
Authored: Fri Dec 14 13:54:57 2018 -0800
Committer: Andrew Purtell 
Committed: Fri Dec 14 13:54:57 2018 -0800

--
 src/site/xdoc/downloads.xml | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2b003c5d/src/site/xdoc/downloads.xml
--
diff --git a/src/site/xdoc/downloads.xml b/src/site/xdoc/downloads.xml
index 5d3f2a6..4bb7f51 100644
--- a/src/site/xdoc/downloads.xml
+++ b/src/site/xdoc/downloads.xml
@@ -88,23 +88,23 @@ under the License.
 
 
   
-1.4.8
+1.4.9
   
   
-2018/10/08
+2018/12/14
   
   
-<a href="https://apache.org/dist/hbase/1.4.8/compat-check-report.html">1.4.7 vs 1.4.8</a>
+<a href="https://apache.org/dist/hbase/1.4.9/compat-check-report.html">1.4.8 vs 1.4.9</a>
  
  
-<a href="https://github.com/apache/hbase/blob/rel/1.4.8/CHANGES.txt">Changes</a>
+<a href="https://github.com/apache/hbase/blob/rel/1.4.9/CHANGES.txt">Changes</a>
  
  
-<a href="https://s.apache.org/hbase-1.4.8-jira-release-notes">Release Notes</a>
+<a href="https://s.apache.org/hbase-1.4.9-jira-release-notes">Release Notes</a>
  
  
-<a href="https://www.apache.org/dyn/closer.lua/hbase/1.4.8/hbase-1.4.8-src.tar.gz">src</a> (<a href="https://apache.org/dist/hbase/1.4.8/hbase-1.4.8-src.tar.gz.sha512">sha512</a> <a href="https://apache.org/dist/hbase/1.4.8/hbase-1.4.8-src.tar.gz.asc">asc</a>)
-<a href="https://www.apache.org/dyn/closer.lua/hbase/1.4.8/hbase-1.4.8-bin.tar.gz">bin</a> (<a href="https://apache.org/dist/hbase/1.4.8/hbase-1.4.8-bin.tar.gz.sha512">sha512</a> <a href="https://apache.org/dist/hbase/1.4.8/hbase-1.4.8-bin.tar.gz.asc">asc</a>)
+<a href="https://www.apache.org/dyn/closer.lua/hbase/1.4.9/hbase-1.4.9-src.tar.gz">src</a> (<a href="https://apache.org/dist/hbase/1.4.9/hbase-1.4.9-src.tar.gz.sha512">sha512</a> <a href="https://apache.org/dist/hbase/1.4.9/hbase-1.4.9-src.tar.gz.asc">asc</a>)
+<a href="https://www.apache.org/dyn/closer.lua/hbase/1.4.9/hbase-1.4.9-bin.tar.gz">bin</a> (<a href="https://apache.org/dist/hbase/1.4.9/hbase-1.4.9-bin.tar.gz.sha512">sha512</a> <a href="https://apache.org/dist/hbase/1.4.9/hbase-1.4.9-bin.tar.gz.asc">asc</a>)
   
 
 



[37/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
index e8f36a0..7388443 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
@@ -34,7 +34,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-07-04")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
 public class THBaseService {
 
   public interface Iface {
@@ -282,6 +282,56 @@ public class THBaseService {
  */
public boolean checkAndMutate(ByteBuffer table, ByteBuffer row, ByteBuffer family, ByteBuffer qualifier, TCompareOp compareOp, ByteBuffer value, TRowMutations rowMutations) throws TIOError, org.apache.thrift.TException;

+public TTableDescriptor getTableDescriptor(TTableName table) throws TIOError, org.apache.thrift.TException;
+
+public List<TTableDescriptor> getTableDescriptors(List<TTableName> tables) throws TIOError, org.apache.thrift.TException;
+
+public boolean tableExists(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public List<TTableDescriptor> getTableDescriptorsByPattern(String regex, boolean includeSysTables) throws TIOError, org.apache.thrift.TException;
+
+public List<TTableDescriptor> getTableDescriptorsByNamespace(String name) throws TIOError, org.apache.thrift.TException;
+
+public List<TTableName> getTableNamesByPattern(String regex, boolean includeSysTables) throws TIOError, org.apache.thrift.TException;
+
+public List<TTableName> getTableNamesByNamespace(String name) throws TIOError, org.apache.thrift.TException;
+
+public void createTable(TTableDescriptor desc, List<ByteBuffer> splitKeys) throws TIOError, org.apache.thrift.TException;
+
+public void deleteTable(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public void truncateTable(TTableName tableName, boolean preserveSplits) throws TIOError, org.apache.thrift.TException;
+
+public void enableTable(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public void disableTable(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public boolean isTableEnabled(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public boolean isTableDisabled(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public boolean isTableAvailable(TTableName tableName) throws TIOError, org.apache.thrift.TException;
+
+public boolean isTableAvailableWithSplit(TTableName tableName, List<ByteBuffer> splitKeys) throws TIOError, org.apache.thrift.TException;
+
+public void addColumnFamily(TTableName tableName, TColumnFamilyDescriptor column) throws TIOError, org.apache.thrift.TException;
+
+public void deleteColumnFamily(TTableName tableName, ByteBuffer column) throws TIOError, org.apache.thrift.TException;
+
+public void modifyColumnFamily(TTableName tableName, TColumnFamilyDescriptor column) throws TIOError, org.apache.thrift.TException;
+
+public void modifyTable(TTableDescriptor desc) throws TIOError, org.apache.thrift.TException;
+
+public void createNamespace(TNamespaceDescriptor namespaceDesc) throws TIOError, org.apache.thrift.TException;
+
+public void modifyNamespace(TNamespaceDescriptor namespaceDesc) throws TIOError, org.apache.thrift.TException;
+
+public void deleteNamespace(String name) throws TIOError, org.apache.thrift.TException;
+
+public TNamespaceDescriptor getNamespaceDescriptor(String name) throws TIOError, org.apache.thrift.TException;
+
+public List<TNamespaceDescriptor> listNamespaceDescriptors() throws TIOError, org.apache.thrift.TException;
+
   }
 
   public interface AsyncIface {
@@ -326,6 +376,56 @@ public class THBaseService {
 
public void checkAndMutate(ByteBuffer table, ByteBuffer row, ByteBuffer family, ByteBuffer qualifier, TCompareOp compareOp, ByteBuffer value, TRowMutations rowMutations, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;

+public void getTableDescriptor(TTableName table, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+public void getTableDescriptors(List<TTableName> tables, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+public void tableExists(TTableName tableName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+public void 
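The generated `Iface`/`AsyncIface` split above pairs every blocking DDL call with a callback-based variant. A minimal self-contained sketch of bridging the two styles (hypothetical `AsyncTableAdmin`/`BlockingTableAdmin` names, not the Thrift-generated API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical async API: reports its result through a callback, like AsyncIface.
interface AsyncTableAdmin {
  void tableExists(String tableName, Consumer<Boolean> resultHandler);
}

// Blocking facade: adapts the callback style to a synchronous call, mirroring
// how every Iface method has an AsyncIface counterpart.
final class BlockingTableAdmin {
  private final AsyncTableAdmin async;

  BlockingTableAdmin(AsyncTableAdmin async) {
    this.async = async;
  }

  boolean tableExists(String tableName) {
    CompletableFuture<Boolean> result = new CompletableFuture<>();
    async.tableExists(tableName, result::complete);
    return result.join(); // blocks until the callback fires
  }
}
```

In real Thrift clients the blocking and async stubs are generated separately; this only illustrates the relationship between the two method shapes.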

[43/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2 (ADDENDUM add some comments)

2018-12-31 Thread zhangduo
HBASE-21650 Add DDL operation and some other miscellaneous to thrift2 (ADDENDUM 
add some comments)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b620334c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b620334c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b620334c

Branch: refs/heads/HBASE-21512
Commit: b620334c20e84a4876226b508213ce11b8b187a0
Parents: 7820ba1
Author: Allan Yang 
Authored: Fri Dec 28 15:32:50 2018 +0800
Committer: Allan Yang 
Committed: Fri Dec 28 15:32:50 2018 +0800

--
 .../hbase/thrift/generated/AlreadyExists.java   |   2 +-
 .../hbase/thrift/generated/BatchMutation.java   |   2 +-
 .../thrift/generated/ColumnDescriptor.java  |   2 +-
 .../hadoop/hbase/thrift/generated/Hbase.java|   2 +-
 .../hadoop/hbase/thrift/generated/IOError.java  |   2 +-
 .../hbase/thrift/generated/IllegalArgument.java |   2 +-
 .../hadoop/hbase/thrift/generated/Mutation.java |   2 +-
 .../hadoop/hbase/thrift/generated/TAppend.java  |   2 +-
 .../hadoop/hbase/thrift/generated/TCell.java|   2 +-
 .../hadoop/hbase/thrift/generated/TColumn.java  |   2 +-
 .../hbase/thrift/generated/TIncrement.java  |   2 +-
 .../hbase/thrift/generated/TRegionInfo.java |   2 +-
 .../hbase/thrift/generated/TRowResult.java  |   2 +-
 .../hadoop/hbase/thrift/generated/TScan.java|   2 +-
 .../hadoop/hbase/thrift2/generated/TAppend.java |   2 +-
 .../hbase/thrift2/generated/TAuthorization.java |   2 +-
 .../thrift2/generated/TBloomFilterType.java |   4 +
 .../thrift2/generated/TCellVisibility.java  |   2 +-
 .../hadoop/hbase/thrift2/generated/TColumn.java |   2 +-
 .../generated/TColumnFamilyDescriptor.java  |   6 +-
 .../thrift2/generated/TColumnIncrement.java |   2 +-
 .../hbase/thrift2/generated/TColumnValue.java   |   2 +-
 .../generated/TCompressionAlgorithm.java|   4 +
 .../thrift2/generated/TDataBlockEncoding.java   |   4 +
 .../hadoop/hbase/thrift2/generated/TDelete.java |   2 +-
 .../hadoop/hbase/thrift2/generated/TGet.java|   2 +-
 .../hbase/thrift2/generated/THBaseService.java  | 571 ++-
 .../hbase/thrift2/generated/THRegionInfo.java   |   2 +-
 .../thrift2/generated/THRegionLocation.java |   2 +-
 .../hbase/thrift2/generated/TIOError.java   |   2 +-
 .../thrift2/generated/TIllegalArgument.java |   2 +-
 .../hbase/thrift2/generated/TIncrement.java |   2 +-
 .../thrift2/generated/TKeepDeletedCells.java|   4 +
 .../thrift2/generated/TNamespaceDescriptor.java |   6 +-
 .../hadoop/hbase/thrift2/generated/TPut.java|   2 +-
 .../hadoop/hbase/thrift2/generated/TResult.java |   2 +-
 .../hbase/thrift2/generated/TRowMutations.java  |   2 +-
 .../hadoop/hbase/thrift2/generated/TScan.java   |   2 +-
 .../hbase/thrift2/generated/TServerName.java|   2 +-
 .../thrift2/generated/TTableDescriptor.java |   6 +-
 .../hbase/thrift2/generated/TTableName.java |  30 +-
 .../hbase/thrift2/generated/TTimeRange.java |   2 +-
 .../apache/hadoop/hbase/thrift2/hbase.thrift| 168 +-
 43 files changed, 828 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
index 8ec3e32..4457b9f 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
@@ -38,7 +38,7 @@ import org.slf4j.LoggerFactory;
  * An AlreadyExists exceptions signals that a table with the specified
  * name already exists
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-28")
 public class AlreadyExists extends TException implements org.apache.thrift.TBase<AlreadyExists, AlreadyExists._Fields>, java.io.Serializable, Cloneable, Comparable<AlreadyExists> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("AlreadyExists");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
index 0872223..f605286 100644
--- 

[44/47] hbase git commit: HBASE-21646 Flakey TestTableSnapshotInputFormat; DisableTable not completing... Amendment to fix checkstyle complaint

2018-12-31 Thread zhangduo
HBASE-21646 Flakey TestTableSnapshotInputFormat; DisableTable not completing... Amendment to fix checkstyle complaint

Includes fix for checkstyle complaint.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7755d4be
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7755d4be
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7755d4be

Branch: refs/heads/HBASE-21512
Commit: 7755d4beeddfee9b72446ccd18d7918278eecc83
Parents: b620334
Author: stack 
Authored: Fri Dec 28 14:42:22 2018 -0800
Committer: stack 
Committed: Fri Dec 28 14:48:23 2018 -0800

--
 .../hadoop/hbase/mapred/TestTableSnapshotInputFormat.java | 7 +++
 .../hbase/mapreduce/TableSnapshotInputFormatTestBase.java | 5 -
 .../hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java  | 5 +
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7755d4be/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
index b61ed07..c591af6 100644
--- 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.mapred.RunningJob;
 import org.apache.hadoop.mapred.lib.NullOutputFormat;
 import org.junit.Assert;
 import org.junit.ClassRule;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -310,4 +311,10 @@ public class TestTableSnapshotInputFormat extends TableSnapshotInputFormatTestBase
   }
 }
   }
+
+  @Ignore // Ignored in mapred package because it keeps failing but allowed in mapreduce package.
+  @Test
+  public void testWithMapReduceMultipleMappersPerRegion() throws Exception {
+testWithMapReduce(UTIL, "testWithMapReduceMultiRegion", 10, 5, 50, false);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/7755d4be/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
index 744c356..5e7ea7a 100644
--- 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
@@ -108,11 +108,6 @@ public abstract class TableSnapshotInputFormatTestBase {
   }
 
   @Test
-  public void testWithMapReduceMultipleMappersPerRegion() throws Exception {
-testWithMapReduce(UTIL, "testWithMapReduceMultiRegion", 10, 5, 50, false);
-  }
-
-  @Test
   // run the MR job while HBase is offline
   public void testWithMapReduceAndOfflineHBaseMultiRegion() throws Exception {
  testWithMapReduce(UTIL, "testWithMapReduceAndOfflineHBaseMultiRegion", 10, 1, 8, true);

http://git-wip-us.apache.org/repos/asf/hbase/blob/7755d4be/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
index f61c222..358af24 100644
--- 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
@@ -473,4 +473,9 @@ public class TestTableSnapshotInputFormat extends TableSnapshotInputFormatTestBase
   }
 }
   }
+
+  @Test
+  public void testWithMapReduceMultipleMappersPerRegion() throws Exception {
+testWithMapReduce(UTIL, "testWithMapReduceMultiRegion", 10, 5, 50, false);
+  }
 }



[12/47] hbase git commit: HBASE-21520 TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-31 Thread zhangduo
HBASE-21520 TestMultiColumnScanner cost long time when using ROWCOL bloom type


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ac0b3bb5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ac0b3bb5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ac0b3bb5

Branch: refs/heads/HBASE-21512
Commit: ac0b3bb5477612cb8844c4ef10fa2be0f1d1a025
Parents: 4911534
Author: huzheng 
Authored: Thu Dec 13 15:04:12 2018 +0800
Committer: huzheng 
Committed: Sat Dec 15 21:08:52 2018 +0800

--
 .../regionserver/TestMultiColumnScanner.java| 94 ++--
 ...olumnScannerWithAlgoGZAndNoDataEncoding.java | 48 ++
 ...lumnScannerWithAlgoGZAndUseDataEncoding.java | 48 ++
 ...iColumnScannerWithNoneAndNoDataEncoding.java | 48 ++
 ...ColumnScannerWithNoneAndUseDataEncoding.java | 48 ++
 5 files changed, 219 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ac0b3bb5/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
index 2ff0d8c..bb97c9c 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
@@ -32,11 +32,9 @@ import java.util.Map;
 import java.util.Random;
 import java.util.Set;
 import java.util.TreeSet;
-import org.apache.commons.lang3.ArrayUtils;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparatorImpl;
 import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseClassTestRule;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.KeyValue;
@@ -47,29 +45,27 @@ import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.compress.Compression;
 import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.apache.hadoop.hbase.testclassification.RegionServerTests;
 import org.apache.hadoop.hbase.util.BloomFilterUtil;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.junit.ClassRule;
 import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import org.junit.runners.Parameterized.Parameters;
+import org.junit.runners.Parameterized.Parameter;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
- * Tests optimized scanning of multiple columns.
+ * Tests optimized scanning of multiple columns.
+ * We separated the big test into several sub-class UTs, because with the ROWCOL bloom type we
+ * test the row-col bloom filter frequently to save an HDFS seek each time we switch from one
+ * column to another in our UT. That is CPU-time consuming (~45s per case), so the ROWCOL cases
+ * were moved into separate LargeTests to avoid timeout failures.
+ *
+ * To be clear: TestMultiColumnScanner flushes 10 (NUM_FLUSHES=10) HFiles, and the table puts
+ * ~1000 cells (rows=20, ts=6, qualifiers=8, total=20*6*8 ~ 1000). Each full table scan checks
+ * the ROWCOL bloom filter 20 (rows) * 8 (columns) * 10 (hfiles) = 1600 times; besides, it scans
+ * the full table 6*2^8=1536 times, so in the end there are 1600*1536=2457600 bloom filter checks.
+ * (See HBASE-21520)
  */
-@RunWith(Parameterized.class)
-@Category({RegionServerTests.class, MediumTests.class})
-public class TestMultiColumnScanner {
-
-  @ClassRule
-  public static final HBaseClassTestRule CLASS_RULE =
-  HBaseClassTestRule.forClass(TestMultiColumnScanner.class);
+public abstract class TestMultiColumnScanner {
 
  private static final Logger LOG = LoggerFactory.getLogger(TestMultiColumnScanner.class);
 
@@ -104,20 +100,19 @@ public class TestMultiColumnScanner {
   /** The probability that a column is skipped in a store file. */
   private static final double COLUMN_SKIP_IN_STORE_FILE_PROB = 0.7;
 
-  /** The probability of skipping a column in a single row */
-  private static final double COLUMN_SKIP_IN_ROW_PROB = 0.1;
-
-  /** The probability of skipping a column everywhere */
-  private static final double COLUMN_SKIP_EVERYWHERE_PROB = 0.1;
-
   /** The probability to delete a row/column pair */
   private static final double DELETE_PROBABILITY = 0.02;
 
   private final static HBaseTestingUtility TEST_UTIL = 
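The cost figures in the new class comment can be checked directly; a small self-contained sketch of the arithmetic (constants taken from that Javadoc, NUM_FLUSHES=10, rows=20, ts=6, qualifiers=8):

```java
final class BloomCostSketch {
  // Reproduces the Javadoc math: ~1000 cells, 1600 ROWCOL bloom probes per
  // full scan, 1536 full-table scans, ~2.46M probes in total.
  static long totalBloomChecks() {
    int rows = 20, timestamps = 6, qualifiers = 8, hfiles = 10;
    int cells = rows * timestamps * qualifiers;     // 960, i.e. ~1000 cells
    int checksPerScan = rows * qualifiers * hfiles; // 20 * 8 * 10 = 1600
    int scans = timestamps * (1 << qualifiers);     // 6 * 2^8 = 1536
    return (long) checksPerScan * scans;            // 1600 * 1536 = 2,457,600
  }
}
```

That ~2.46M-probe figure is why the ROWCOL variants were split into LargeTests.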

[24/47] hbase git commit: HBASE-21620 Problem in scan query when using more than one column prefix filter in some cases

2018-12-31 Thread zhangduo
HBASE-21620 Problem in scan query when using more than one column prefix filter 
in some cases

Signed-off-by: Guanghao Zhang 
Signed-off-by: Michael Stack 
Signed-off-by: Allan Yang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e160b5ac
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e160b5ac
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e160b5ac

Branch: refs/heads/HBASE-21512
Commit: e160b5ac8d82330911ea746e456ea53bf317ace8
Parents: 12786f8
Author: openinx 
Authored: Thu Dec 20 21:04:10 2018 +0800
Committer: stack 
Committed: Fri Dec 21 15:21:53 2018 -0800

--
 .../hadoop/hbase/filter/FilterListWithOR.java   | 65 ++--
 .../hadoop/hbase/regionserver/StoreScanner.java |  2 +-
 .../hadoop/hbase/filter/TestFilterList.java | 62 +--
 .../hbase/filter/TestFilterListOnMini.java  | 50 ++-
 4 files changed, 140 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e160b5ac/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
index 842fdc5..ba4cd88 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterListWithOR.java
@@ -83,30 +83,40 @@ public class FilterListWithOR extends FilterListBase {
* next family for RegionScanner, INCLUDE_AND_NEXT_ROW is the same. So we should pass the current cell
* to the filter if the row mismatches, or the row matches but the column family mismatches. (HBASE-18368)
* @see org.apache.hadoop.hbase.filter.Filter.ReturnCode
+   * @param subFilter which sub-filter to calculate the return code for, by using the previous cell
+   *  and previous return code.
+   * @param prevCell the previous cell passed to the given sub-filter.
+   * @param currentCell the current cell which will be passed to the given sub-filter.
+   * @param prevCode the previous return code for the given sub-filter.
+   * @return return code calculated from the previous cell and previous return code. null means we
+   * cannot decide which return code to return, so we will pass the currentCell to the
+   * subFilter to get currentCell's return code, without impacting the sub-filter's
+   * internal state.
*/
-  private boolean shouldPassCurrentCellToFilter(Cell prevCell, Cell currentCell,
-  ReturnCode prevCode) throws IOException {
+  private ReturnCode calculateReturnCodeByPrevCellAndRC(Filter subFilter, Cell currentCell,
+  Cell prevCell, ReturnCode prevCode) throws IOException {
 if (prevCell == null || prevCode == null) {
-  return true;
+  return null;
 }
 switch (prevCode) {
 case INCLUDE:
 case SKIP:
-  return true;
+return null;
 case SEEK_NEXT_USING_HINT:
-  Cell nextHintCell = getNextCellHint(prevCell);
-  return nextHintCell == null || this.compareCell(currentCell, nextHintCell) >= 0;
+Cell nextHintCell = subFilter.getNextCellHint(prevCell);
+return nextHintCell != null && compareCell(currentCell, nextHintCell) < 0
+  ? ReturnCode.SEEK_NEXT_USING_HINT : null;
 case NEXT_COL:
 case INCLUDE_AND_NEXT_COL:
-  // Once row changed, reset() will clear prevCells, so we need not to compare their rows
-  // because rows are the same here.
-  return !CellUtil.matchingColumn(prevCell, currentCell);
+// Once row changed, reset() will clear prevCells, so we need not to compare their rows
+// because rows are the same here.
+return CellUtil.matchingColumn(prevCell, currentCell) ? ReturnCode.NEXT_COL : null;
 case NEXT_ROW:
 case INCLUDE_AND_SEEK_NEXT_ROW:
-  // As described above, rows are definitely the same, so we only compare the family.
-  return !CellUtil.matchingFamily(prevCell, currentCell);
+// As described above, rows are definitely the same, so we only compare the family.
+return CellUtil.matchingFamily(prevCell, currentCell) ? ReturnCode.NEXT_ROW : null;
 default:
-  throw new IllegalStateException("Received code is not valid.");
+throw new IllegalStateException("Received code is not valid.");
 }
   }
 
@@ -240,7 +250,7 @@ public class FilterListWithOR extends FilterListBase {
  private void updatePrevCellList(int index, Cell currentCell, ReturnCode currentRC) {
if (currentCell == null || currentRC == ReturnCode.INCLUDE || currentRC == ReturnCode.SKIP) {
   // If previous return code is INCLUDE or SKIP, we should 
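The rewritten method returns the sub-filter's effective `ReturnCode` when the previous code still applies, or `null` when the current cell must be re-evaluated. A simplified, self-contained sketch of that decision table (cells reduced to column/family equality flags; the `SEEK_NEXT_USING_HINT` comparison is omitted):

```java
final class FilterShortcutSketch {
  enum ReturnCode { INCLUDE, SKIP, NEXT_COL, INCLUDE_AND_NEXT_COL, NEXT_ROW, INCLUDE_AND_SEEK_NEXT_ROW }

  // null means "no shortcut: pass the current cell to the sub-filter".
  static ReturnCode shortcut(ReturnCode prevCode, boolean sameColumn, boolean sameFamily) {
    if (prevCode == null) {
      return null;
    }
    switch (prevCode) {
      case INCLUDE:
      case SKIP:
        return null; // these codes say nothing about later cells
      case NEXT_COL:
      case INCLUDE_AND_NEXT_COL:
        // Still on the same column: the sub-filter already asked to skip past it.
        return sameColumn ? ReturnCode.NEXT_COL : null;
      case NEXT_ROW:
      case INCLUDE_AND_SEEK_NEXT_ROW:
        // Rows are equal by construction; same family means keep skipping.
        return sameFamily ? ReturnCode.NEXT_ROW : null;
      default:
        throw new IllegalStateException("Received code is not valid.");
    }
  }
}
```

Returning the concrete code (rather than the old boolean) lets the OR list reuse it for this sub-filter without mutating the sub-filter's internal state, which is what the HBASE-21620 fix is about.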

[29/47] hbase git commit: HBASE-21545 NEW_VERSION_BEHAVIOR breaks Get/Scan with specified columns

2018-12-31 Thread zhangduo
HBASE-21545 NEW_VERSION_BEHAVIOR breaks Get/Scan with specified columns

Signed-off-by: Duo Zhang 
Signed-off-by: stack 
Signed-off-by: Sakthi


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dbafa1be
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dbafa1be
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dbafa1be

Branch: refs/heads/HBASE-21512
Commit: dbafa1be83d6f5894f1cc3eadb07ae0c3096de3a
Parents: 59f77de
Author: Andrey Elenskiy 
Authored: Tue Dec 4 12:10:38 2018 -0800
Committer: stack 
Committed: Sun Dec 23 22:01:11 2018 -0800

--
 .../querymatcher/NewVersionBehaviorTracker.java |  39 +++
 .../hadoop/hbase/HBaseTestingUtility.java   |  58 --
 ...estGetScanColumnsWithNewVersionBehavior.java | 109 +++
 .../TestNewVersionBehaviorTracker.java  |  36 ++
 4 files changed, 214 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dbafa1be/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/NewVersionBehaviorTracker.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/NewVersionBehaviorTracker.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/NewVersionBehaviorTracker.java
index 4027766..16ac84c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/NewVersionBehaviorTracker.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/NewVersionBehaviorTracker.java
@@ -277,26 +277,26 @@ public class NewVersionBehaviorTracker implements ColumnTracker, DeleteTracker {
 
   @Override
   public MatchCode checkColumn(Cell cell, byte type) throws IOException {
-if (done()) {
-  // No more columns left, we are done with this query
-  return ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW; // done_row
+if (columns == null) {
+return MatchCode.INCLUDE;
 }
-if (columns != null) {
-  while (columnIndex < columns.length) {
-int c = Bytes.compareTo(columns[columnIndex], 0, columns[columnIndex].length,
-cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength());
-if (c < 0) {
-  columnIndex++;
-} else if (c == 0) {
-  // We drop old version in #isDeleted, so here we must return INCLUDE.
-  return MatchCode.INCLUDE;
-} else {
-  return MatchCode.SEEK_NEXT_COL;
-}
+
+while (!done()) {
+  int c = CellUtil.compareQualifiers(cell, columns[columnIndex], 0, columns[columnIndex].length);
+  if (c < 0) {
+return MatchCode.SEEK_NEXT_COL;
   }
-  return MatchCode.SEEK_NEXT_ROW;
+
+  if (c == 0) {
+// We drop old version in #isDeleted, so here we must return INCLUDE.
+return MatchCode.INCLUDE;
+  }
+
+  columnIndex++;
 }
-return MatchCode.INCLUDE;
+// No more columns left, we are done with this query
+return MatchCode.SEEK_NEXT_ROW;
   }
 
   @Override
@@ -351,10 +351,7 @@ public class NewVersionBehaviorTracker implements 
ColumnTracker, DeleteTracker {
 
   @Override
   public boolean done() {
-    // lastCq* have been updated to this cell.
-    return !(columns == null || lastCqArray == null) && Bytes
-        .compareTo(lastCqArray, lastCqOffset, lastCqLength, columns[columnIndex], 0,
-            columns[columnIndex].length) > 0;
+    return columns != null && columnIndex >= columns.length;
   }
 
   @Override

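The refactored `checkColumn` above walks a sorted list of requested qualifiers and answers with a seek hint: include the cell, skip to the next column, or skip to the next row once the requested columns are exhausted. The same loop can be sketched standalone; this is an illustrative reduction, not HBase code — `MatchCode` and a plain `byte[]` comparison stand in for `ScanQueryMatcher.MatchCode` and `CellUtil.compareQualifiers`:

```java
import java.util.Arrays;

// Minimal sketch of the seek-hint loop from the checkColumn patch above.
public class ColumnSeekSketch {
  enum MatchCode { INCLUDE, SEEK_NEXT_COL, SEEK_NEXT_ROW }

  private final byte[][] columns; // sorted qualifiers requested by the scan, or null for "all"
  private int columnIndex = 0;

  ColumnSeekSketch(byte[][] columns) {
    this.columns = columns;
  }

  boolean done() {
    return columns != null && columnIndex >= columns.length;
  }

  MatchCode checkColumn(byte[] qualifier) {
    if (columns == null) {
      return MatchCode.INCLUDE; // wildcard scan includes every column
    }
    while (!done()) {
      int c = Arrays.compare(qualifier, columns[columnIndex]);
      if (c < 0) {
        return MatchCode.SEEK_NEXT_COL; // cell sorts before the next wanted column
      }
      if (c == 0) {
        return MatchCode.INCLUDE; // exact match on a requested qualifier
      }
      columnIndex++; // requested column already passed, try the next one
    }
    // No more requested columns left for this row
    return MatchCode.SEEK_NEXT_ROW;
  }

  public static void main(String[] args) {
    ColumnSeekSketch t = new ColumnSeekSketch(new byte[][] { {'b'}, {'d'} });
    System.out.println(t.checkColumn(new byte[] {'a'})); // SEEK_NEXT_COL
    System.out.println(t.checkColumn(new byte[] {'b'})); // INCLUDE
    System.out.println(t.checkColumn(new byte[] {'e'})); // SEEK_NEXT_ROW
  }
}
```

Note how the simplified `done()` (column index past the end of the requested list) makes the loop's exit condition trivial, which is the point of the patch.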
http://git-wip-us.apache.org/repos/asf/hbase/blob/dbafa1be/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
index 7bfbfe1..796dbc3 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
@@ -713,6 +713,18 @@ public class HBaseTestingUtility extends 
HBaseZKTestingUtility {
   new Path(root, "mapreduce-am-staging-root-dir").toString());
   }
 
+  /**
+   *  Check whether the tests should assume NEW_VERSION_BEHAVIOR when creating
+   *  new column families. Default to false.
+   */
+  public boolean isNewVersionBehaviorEnabled(){
+final String propName = "hbase.tests.new.version.behavior";
+String v = System.getProperty(propName);
+if (v != null){
+  return Boolean.parseBoolean(v);
+}
+return false;
+  }
 
   /**
*  Get the HBase setting for dfs.client.read.shortcircuit 

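The `isNewVersionBehaviorEnabled` helper added to `HBaseTestingUtility` above follows a common pattern: gate optional test behavior behind a JVM system property with a fixed default. A generic sketch of that pattern — the property name is taken from the patch, the helper name is mine:

```java
public class TestFlagSketch {
  /**
   * Read a boolean test flag from a JVM system property, falling back to a
   * default when the property is unset. Mirrors the shape of
   * isNewVersionBehaviorEnabled() in the patch above.
   */
  static boolean booleanProperty(String propName, boolean defaultValue) {
    String v = System.getProperty(propName);
    if (v != null) {
      return Boolean.parseBoolean(v);
    }
    return defaultValue;
  }

  public static void main(String[] args) {
    System.setProperty("hbase.tests.new.version.behavior", "true");
    System.out.println(booleanProperty("hbase.tests.new.version.behavior", false)); // true
    System.out.println(booleanProperty("hbase.tests.some.unset.flag", false));      // false
  }
}
```

Tests then run against both code paths by toggling `-Dhbase.tests.new.version.behavior=true` on the command line, without any source change.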
[07/47] hbase git commit: Revert "HIVE-21575 : memstore above high watermark message is logged too much"

2018-12-31 Thread zhangduo
Revert "HIVE-21575 : memstore above high watermark message is logged too much"

This reverts commit 4640ff5959af4865966126a503a7cd15e26a7408.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9a25d0c2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9a25d0c2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9a25d0c2

Branch: refs/heads/HBASE-21512
Commit: 9a25d0c249e595a1f8aef41cd677b44ff1c72d73
Parents: cb1966d
Author: Sergey Shelukhin 
Authored: Thu Dec 13 12:46:39 2018 -0800
Committer: Sergey Shelukhin 
Committed: Thu Dec 13 12:46:39 2018 -0800

--
 .../apache/hadoop/hbase/regionserver/MemStoreFlusher.java| 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9a25d0c2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
index 804a2f8..699c9b6 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
@@ -703,7 +703,6 @@ class MemStoreFlusher implements FlushRequester {
     if (flushType != FlushType.NORMAL) {
       TraceUtil.addTimelineAnnotation("Force Flush. We're above high water mark.");
       long start = EnvironmentEdgeManager.currentTime();
-      long nextLogTimeMs = start;
       synchronized (this.blockSignal) {
         boolean blocked = false;
         long startTime = 0;
@@ -745,11 +744,8 @@ class MemStoreFlusher implements FlushRequester {
           LOG.warn("Interrupted while waiting");
           interrupted = true;
         }
-        long nowMs = EnvironmentEdgeManager.currentTime();
-        if (nowMs >= nextLogTimeMs) {
-          LOG.warn("Memstore is above high water mark and block {} ms", nowMs - start);
-          nextLogTimeMs = nowMs + 1000;
-        }
+        long took = EnvironmentEdgeManager.currentTime() - start;
+        LOG.warn("Memstore is above high water mark and block " + took + "ms");
         flushType = isAboveHighWaterMark();
       }
     } finally {



[08/47] hbase git commit: HBASE-21575 : memstore above high watermark message is logged too much

2018-12-31 Thread zhangduo
HBASE-21575 : memstore above high watermark message is logged too much


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3ff274e2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3ff274e2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3ff274e2

Branch: refs/heads/HBASE-21512
Commit: 3ff274e22eb5710f4301fb0fce364e22a11288d7
Parents: 9a25d0c
Author: Sergey Shelukhin 
Authored: Wed Dec 12 11:02:25 2018 -0800
Committer: Sergey Shelukhin 
Committed: Thu Dec 13 12:47:11 2018 -0800

--
 .../apache/hadoop/hbase/regionserver/MemStoreFlusher.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3ff274e2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
index 699c9b6..804a2f8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
@@ -703,6 +703,7 @@ class MemStoreFlusher implements FlushRequester {
     if (flushType != FlushType.NORMAL) {
       TraceUtil.addTimelineAnnotation("Force Flush. We're above high water mark.");
       long start = EnvironmentEdgeManager.currentTime();
+      long nextLogTimeMs = start;
       synchronized (this.blockSignal) {
         boolean blocked = false;
         long startTime = 0;
@@ -744,8 +745,11 @@ class MemStoreFlusher implements FlushRequester {
           LOG.warn("Interrupted while waiting");
           interrupted = true;
         }
-        long took = EnvironmentEdgeManager.currentTime() - start;
-        LOG.warn("Memstore is above high water mark and block " + took + "ms");
+        long nowMs = EnvironmentEdgeManager.currentTime();
+        if (nowMs >= nextLogTimeMs) {
+          LOG.warn("Memstore is above high water mark and block {} ms", nowMs - start);
+          nextLogTimeMs = nowMs + 1000;
+        }
         flushType = isAboveHighWaterMark();
       }
     } finally {

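The HBASE-21575 patch above replaces a warning emitted on every loop iteration with one that fires at most once per second (`nextLogTimeMs = nowMs + 1000`). The throttle can be isolated into a tiny helper; this is a sketch of the same idea, with the clock injected so it can be exercised deterministically (production code would pass `System::currentTimeMillis`):

```java
import java.util.function.LongSupplier;

// Sketch of the log throttle from the MemStoreFlusher patch above:
// suppress repeat messages until at least intervalMs has elapsed.
public class LogThrottleSketch {
  private final long intervalMs;
  private final LongSupplier clock; // injectable for tests
  private long nextLogTimeMs;

  LogThrottleSketch(long intervalMs, LongSupplier clock) {
    this.intervalMs = intervalMs;
    this.clock = clock;
    this.nextLogTimeMs = clock.getAsLong(); // first call is allowed to log
  }

  /** Returns true when the caller should emit the log line now. */
  boolean shouldLog() {
    long nowMs = clock.getAsLong();
    if (nowMs >= nextLogTimeMs) {
      nextLogTimeMs = nowMs + intervalMs;
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    long[] now = {0};
    LogThrottleSketch t = new LogThrottleSketch(1000, () -> now[0]);
    System.out.println(t.shouldLog()); // true: first call logs
    now[0] = 500;
    System.out.println(t.shouldLog()); // false: within the interval
    now[0] = 1000;
    System.out.println(t.shouldLog()); // true: interval elapsed
  }
}
```

The design choice in the patch is the same: the warning still reports the full elapsed time since `start`, only its emission rate is capped.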


[34/47] hbase git commit: HBASE-21643 Introduce two new region coprocessor method and deprecated postMutationBeforeWAL

2018-12-31 Thread zhangduo
HBASE-21643 Introduce two new region coprocessor method and deprecated 
postMutationBeforeWAL


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f5ea00f7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f5ea00f7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f5ea00f7

Branch: refs/heads/HBASE-21512
Commit: f5ea00f72442e5c80f2a5fc6e99506127fa8d16b
Parents: c2d5991
Author: Guanghao Zhang 
Authored: Wed Dec 26 17:42:02 2018 +0800
Committer: Guanghao Zhang 
Committed: Thu Dec 27 18:27:06 2018 +0800

--
 .../hbase/coprocessor/RegionObserver.java   | 47 
 .../hadoop/hbase/regionserver/HRegion.java  | 26 ++-
 .../regionserver/RegionCoprocessorHost.java | 29 +---
 .../hbase/security/access/AccessController.java | 30 ++---
 .../visibility/VisibilityController.java| 30 +++--
 5 files changed, 134 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f5ea00f7/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index c14cbd1..95b2150 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -20,6 +20,7 @@
 package org.apache.hadoop.hbase.coprocessor;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 
@@ -1029,13 +1030,59 @@ public interface RegionObserver {
    * @param oldCell old cell containing previous value
    * @param newCell the new cell containing the computed value
    * @return the new cell, possibly changed
+   * @deprecated Use {@link #postIncrementBeforeWAL} or {@link #postAppendBeforeWAL} instead.
    */
+  @Deprecated
   default Cell postMutationBeforeWAL(ObserverContext<RegionCoprocessorEnvironment> ctx,
       MutationType opType, Mutation mutation, Cell oldCell, Cell newCell) throws IOException {
     return newCell;
   }
 
+  /**
+   * Called after a list of new cells has been created during an increment operation, but before
+   * they are committed to the WAL or memstore.
+   *
+   * @param ctx       the environment provided by the region server
+   * @param mutation  the current mutation
+   * @param cellPairs a list of cell pair. The first cell is old cell which may be null.
+   *                  And the second cell is the new cell.
+   * @return a list of cell pair, possibly changed.
+   */
+  default List<Pair<Cell, Cell>> postIncrementBeforeWAL(
+      ObserverContext<RegionCoprocessorEnvironment> ctx, Mutation mutation,
+      List<Pair<Cell, Cell>> cellPairs) throws IOException {
+    List<Pair<Cell, Cell>> resultPairs = new ArrayList<>(cellPairs.size());
+    for (Pair<Cell, Cell> pair : cellPairs) {
+      resultPairs.add(new Pair<>(pair.getFirst(),
+          postMutationBeforeWAL(ctx, MutationType.INCREMENT, mutation, pair.getFirst(),
+              pair.getSecond())));
+    }
+    return resultPairs;
+  }
+
+  /**
+   * Called after a list of new cells has been created during an append operation, but before
+   * they are committed to the WAL or memstore.
+   *
+   * @param ctx       the environment provided by the region server
+   * @param mutation  the current mutation
+   * @param cellPairs a list of cell pair. The first cell is old cell which may be null.
+   *                  And the second cell is the new cell.
+   * @return a list of cell pair, possibly changed.
+   */
+  default List<Pair<Cell, Cell>> postAppendBeforeWAL(
+      ObserverContext<RegionCoprocessorEnvironment> ctx, Mutation mutation,
+      List<Pair<Cell, Cell>> cellPairs) throws IOException {
+    List<Pair<Cell, Cell>> resultPairs = new ArrayList<>(cellPairs.size());
+    for (Pair<Cell, Cell> pair : cellPairs) {
+      resultPairs.add(new Pair<>(pair.getFirst(),
+          postMutationBeforeWAL(ctx, MutationType.INCREMENT, mutation, pair.getFirst(),
+              pair.getSecond())));
+    }
+    return resultPairs;
+  }
+
+  /**
* Called after the ScanQueryMatcher creates ScanDeleteTracker. Implementing
* this hook would help in creating customised DeleteTracker and returning
* the newly created DeleteTracker

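HBASE-21643 above evolves the `RegionObserver` interface without breaking existing coprocessors: the new batch hooks get default implementations that delegate to the deprecated per-cell method, so an observer that only overrides the old hook still takes effect on the new code path. That interface-evolution pattern, reduced to its essentials (the `Observer` interface and `String` "cells" here are hypothetical stand-ins; only the delegation shape comes from the patch):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the interface-evolution pattern in HBASE-21643: a new
// list-based default method bridges to the old per-element hook, so
// existing implementors of the deprecated method keep working unchanged.
public class HookEvolutionSketch {
  interface Observer {
    /** Old per-cell hook, kept for compatibility. */
    @Deprecated
    default String postMutation(String oldCell, String newCell) {
      return newCell;
    }

    /** New batch hook; the default delegates to the deprecated method. */
    default List<String> postMutationBatch(List<String> oldCells, List<String> newCells) {
      List<String> result = new ArrayList<>(newCells.size());
      for (int i = 0; i < newCells.size(); i++) {
        result.add(postMutation(oldCells.get(i), newCells.get(i)));
      }
      return result;
    }
  }

  public static void main(String[] args) {
    // A legacy observer that only knows the old hook still affects the batch path.
    Observer legacy = new Observer() {
      @Override
      public String postMutation(String oldCell, String newCell) {
        return newCell.toUpperCase();
      }
    };
    System.out.println(legacy.postMutationBatch(List.of("a", "b"), List.of("x", "y"))); // [X, Y]
  }
}
```

New implementors override only the batch hook; legacy ones are picked up transparently through the default bridge.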
http://git-wip-us.apache.org/repos/asf/hbase/blob/f5ea00f7/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 9bf9309..ec222c7 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 

[39/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnFamilyDescriptor.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnFamilyDescriptor.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnFamilyDescriptor.java
new file mode 100644
index 000..03cb2f6
--- /dev/null
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnFamilyDescriptor.java
@@ -0,0 +1,2519 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hbase.thrift2.generated;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
+public class TColumnFamilyDescriptor implements org.apache.thrift.TBase<TColumnFamilyDescriptor, TColumnFamilyDescriptor._Fields>, java.io.Serializable, Cloneable, Comparable<TColumnFamilyDescriptor> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TColumnFamilyDescriptor");
+
+  private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new 
org.apache.thrift.protocol.TField("name", 
org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField ATTRIBUTES_FIELD_DESC 
= new org.apache.thrift.protocol.TField("attributes", 
org.apache.thrift.protocol.TType.MAP, (short)2);
+  private static final org.apache.thrift.protocol.TField 
CONFIGURATION_FIELD_DESC = new 
org.apache.thrift.protocol.TField("configuration", 
org.apache.thrift.protocol.TType.MAP, (short)3);
+  private static final org.apache.thrift.protocol.TField BLOCK_SIZE_FIELD_DESC 
= new org.apache.thrift.protocol.TField("blockSize", 
org.apache.thrift.protocol.TType.I32, (short)4);
+  private static final org.apache.thrift.protocol.TField 
BLOOMN_FILTER_TYPE_FIELD_DESC = new 
org.apache.thrift.protocol.TField("bloomnFilterType", 
org.apache.thrift.protocol.TType.I32, (short)5);
+  private static final org.apache.thrift.protocol.TField 
COMPRESSION_TYPE_FIELD_DESC = new 
org.apache.thrift.protocol.TField("compressionType", 
org.apache.thrift.protocol.TType.I32, (short)6);
+  private static final org.apache.thrift.protocol.TField 
DFS_REPLICATION_FIELD_DESC = new 
org.apache.thrift.protocol.TField("dfsReplication", 
org.apache.thrift.protocol.TType.I16, (short)7);
+  private static final org.apache.thrift.protocol.TField 
DATA_BLOCK_ENCODING_FIELD_DESC = new 
org.apache.thrift.protocol.TField("dataBlockEncoding", 
org.apache.thrift.protocol.TType.I32, (short)8);
+  private static final org.apache.thrift.protocol.TField 
KEEP_DELETED_CELLS_FIELD_DESC = new 
org.apache.thrift.protocol.TField("keepDeletedCells", 
org.apache.thrift.protocol.TType.I32, (short)9);
+  private static final org.apache.thrift.protocol.TField 
MAX_VERSIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("maxVersions", 
org.apache.thrift.protocol.TType.I32, (short)10);
+  private static final org.apache.thrift.protocol.TField 
MIN_VERSIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("minVersions", 
org.apache.thrift.protocol.TType.I32, (short)11);
+  private static final org.apache.thrift.protocol.TField SCOPE_FIELD_DESC = 
new org.apache.thrift.protocol.TField("scope", 
org.apache.thrift.protocol.TType.I32, (short)12);
+  private static final org.apache.thrift.protocol.TField 
TIME_TO_LIVE_FIELD_DESC = new org.apache.thrift.protocol.TField("timeToLive", 
org.apache.thrift.protocol.TType.I32, (short)13);
+  private static final org.apache.thrift.protocol.TField 
BLOCK_CACHE_ENABLED_FIELD_DESC = new 
org.apache.thrift.protocol.TField("blockCacheEnabled", 
org.apache.thrift.protocol.TType.BOOL, (short)14);
+  private static final org.apache.thrift.protocol.TField 
CACHE_BLOOMS_ON_WRITE_FIELD_DESC = new 
org.apache.thrift.protocol.TField("cacheBloomsOnWrite", 

[47/47] hbase git commit: HBASE-21526 Use AsyncClusterConnection in ServerManager for getRsAdmin

2018-12-31 Thread zhangduo
HBASE-21526 Use AsyncClusterConnection in ServerManager for getRsAdmin


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b33b072d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b33b072d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b33b072d

Branch: refs/heads/HBASE-21512
Commit: b33b072de945d5272a3d46c06fd278cd64d11142
Parents: a13292d
Author: zhangduo 
Authored: Thu Dec 6 21:25:34 2018 +0800
Committer: zhangduo 
Committed: Mon Dec 31 20:34:24 2018 +0800

--
 .../hbase/client/AsyncClusterConnection.java|   6 +
 .../hbase/client/AsyncConnectionImpl.java   |   5 +
 .../hbase/client/AsyncRegionServerAdmin.java| 210 +++
 .../apache/hadoop/hbase/util/FutureUtils.java   |  60 ++
 .../org/apache/hadoop/hbase/master/HMaster.java |  13 +-
 .../hadoop/hbase/master/ServerManager.java  |  67 --
 .../master/procedure/RSProcedureDispatcher.java |  44 ++--
 7 files changed, 320 insertions(+), 85 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b33b072d/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
index c7dea25..1327fd7 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncClusterConnection.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hbase.client;
 
+import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.ipc.RpcClient;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -27,6 +28,11 @@ import org.apache.yetus.audience.InterfaceAudience;
 public interface AsyncClusterConnection extends AsyncConnection {
 
   /**
+   * Get the admin service for the given region server.
+   */
+  AsyncRegionServerAdmin getRegionServerAdmin(ServerName serverName);
+
+  /**
* Get the nonce generator for this connection.
*/
   NonceGenerator getNonceGenerator();

http://git-wip-us.apache.org/repos/asf/hbase/blob/b33b072d/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
index 79ec54b..b01c03e 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionImpl.java
@@ -331,4 +331,9 @@ class AsyncConnectionImpl implements AsyncClusterConnection {
     return new AsyncBufferedMutatorBuilderImpl(connConf, getTableBuilder(tableName, pool),
       RETRY_TIMER);
   }
+
+  @Override
+  public AsyncRegionServerAdmin getRegionServerAdmin(ServerName serverName) {
+    return new AsyncRegionServerAdmin(serverName, this);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/b33b072d/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
new file mode 100644
index 000..9accd89
--- /dev/null
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRegionServerAdmin.java
@@ -0,0 +1,210 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import 

[19/47] hbase git commit: HBASE-21535, Zombie Master detector is not working

2018-12-31 Thread zhangduo
HBASE-21535, Zombie Master detector is not working


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/fb58a23e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/fb58a23e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/fb58a23e

Branch: refs/heads/HBASE-21512
Commit: fb58a23e56c8fe85820c97337da887eddf4bb9bb
Parents: c448604
Author: Pankaj 
Authored: Tue Dec 18 00:49:22 2018 +0530
Committer: stack 
Committed: Tue Dec 18 20:51:01 2018 -0800

--
 .../java/org/apache/hadoop/hbase/master/HMaster.java | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/fb58a23e/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index a16e09d..0bcef59 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -900,11 +900,6 @@ public class HMaster extends HRegionServer implements MasterServices {
    */
   private void finishActiveMasterInitialization(MonitoredTask status) throws IOException,
       InterruptedException, KeeperException, ReplicationException {
-    Thread zombieDetector = new Thread(new InitializationMonitor(this),
-        "ActiveMasterInitializationMonitor-" + System.currentTimeMillis());
-    zombieDetector.setDaemon(true);
-    zombieDetector.start();
-
     /*
      * We are active master now... go initialize components we need to run.
      */
@@ -1001,6 +996,12 @@ public class HMaster extends HRegionServer implements MasterServices {
     // Set ourselves as active Master now our claim has succeeded up in zk.
     this.activeMaster = true;
 
+    // Start the Zombie master detector after setting master as active, see HBASE-21535
+    Thread zombieDetector = new Thread(new InitializationMonitor(this),
+        "ActiveMasterInitializationMonitor-" + System.currentTimeMillis());
+    zombieDetector.setDaemon(true);
+    zombieDetector.start();
+
     // This is for backwards compatibility
     // See HBASE-11393
     status.setStatus("Update TableCFs node in ZNode");

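HBASE-21535 above fixes the zombie detector by starting it only after the master becomes active, so the watchdog observes real initialization rather than idling through the pre-active wait. The underlying mechanism is a named daemon thread: because it is a daemon, it can never keep the JVM alive on its own. A generic sketch of that pattern — `InitializationMonitor` is HBase's class; the `Runnable` below is a stand-in:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the watchdog pattern from the HMaster patch above: a daemon
// monitor thread, started only once the process has become "active".
public class WatchdogSketch {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch started = new CountDownLatch(1);
    Runnable monitor = () -> {
      started.countDown();
      // a real monitor would periodically check initialization progress here
    };
    Thread zombieDetector =
        new Thread(monitor, "ActiveMasterInitializationMonitor-" + System.currentTimeMillis());
    zombieDetector.setDaemon(true); // must be set before start()
    zombieDetector.start();
    started.await();
    System.out.println(zombieDetector.isDaemon()); // true
  }
}
```

The ordering matters twice over: `setDaemon(true)` must precede `start()`, and (per the fix) `start()` must come after the activation point being monitored.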


[21/47] hbase git commit: HBASE-21610, numOpenConnections metric is set to -1 when zero server channel exist

2018-12-31 Thread zhangduo
HBASE-21610, numOpenConnections metric is set to -1 when zero server channel 
exist


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/78756733
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/78756733
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/78756733

Branch: refs/heads/HBASE-21512
Commit: 787567336afb9c5c1e00aaa0326566a5522a5e31
Parents: 8991877
Author: Pankaj 
Authored: Tue Dec 18 01:31:55 2018 +0530
Committer: stack 
Committed: Thu Dec 20 16:36:42 2018 -0800

--
 .../src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/78756733/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
index 8ea2057..742a728 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java
@@ -169,8 +169,9 @@ public class NettyRpcServer extends RpcServer {
 
   @Override
   public int getNumOpenConnections() {
+    int channelsCount = allChannels.size();
     // allChannels also contains the server channel, so exclude that from the count.
-    return allChannels.size() - 1;
+    return channelsCount > 0 ? channelsCount - 1 : channelsCount;
   }
 
   @Override

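The HBASE-21610 fix above subtracts the server's own listen channel from the channel count, but clamps at zero so the metric never reads -1 before any channel is registered. The guard in isolation, as a sketch over a plain count (the comment restates the patch's rationale):

```java
public class OpenConnectionsSketch {
  /**
   * The channel group in NettyRpcServer holds client channels plus the
   * server's own listen channel; exclude the latter, but never report a
   * negative count (before the server channel registers, size() can be 0).
   */
  static int openConnections(int channelsCount) {
    return channelsCount > 0 ? channelsCount - 1 : channelsCount;
  }

  public static void main(String[] args) {
    System.out.println(openConnections(0)); // 0, not -1
    System.out.println(openConnections(1)); // server channel only: 0 clients
    System.out.println(openConnections(5)); // 4 client connections
  }
}
```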


[40/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TBloomFilterType.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TBloomFilterType.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TBloomFilterType.java
new file mode 100644
index 000..601d6b4
--- /dev/null
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TBloomFilterType.java
@@ -0,0 +1,69 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hbase.thrift2.generated;
+
+
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.thrift.TEnum;
+
+public enum TBloomFilterType implements org.apache.thrift.TEnum {
+  /**
+   * Bloomfilters disabled
+   */
+  NONE(0),
+  /**
+   * Bloom enabled with Table row as Key
+   */
+  ROW(1),
+  /**
+   * Bloom enabled with Table row & column (family+qualifier) as Key
+   */
+  ROWCOL(2),
+  /**
+   * Bloom enabled with Table row prefix as Key, specify the length of the 
prefix
+   */
+  ROWPREFIX_FIXED_LENGTH(3),
+  /**
+   * Bloom enabled with Table row prefix as Key, specify the delimiter of the 
prefix
+   */
+  ROWPREFIX_DELIMITED(4);
+
+  private final int value;
+
+  private TBloomFilterType(int value) {
+this.value = value;
+  }
+
+  /**
+   * Get the integer value of this enum value, as defined in the Thrift IDL.
+   */
+  public int getValue() {
+return value;
+  }
+
+  /**
+   * Find a the enum type by its integer value, as defined in the Thrift IDL.
+   * @return null if the value is not found.
+   */
+  public static TBloomFilterType findByValue(int value) { 
+switch (value) {
+  case 0:
+return NONE;
+  case 1:
+return ROW;
+  case 2:
+return ROWCOL;
+  case 3:
+return ROWPREFIX_FIXED_LENGTH;
+  case 4:
+return ROWPREFIX_DELIMITED;
+  default:
+return null;
+}
+  }
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
index 7da4dda..464ac12 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
@@ -34,7 +34,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
 public class TCellVisibility implements org.apache.thrift.TBase<TCellVisibility, TCellVisibility._Fields>, java.io.Serializable, Cloneable, Comparable<TCellVisibility> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TCellVisibility");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
index d0d336c..24a7846 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
@@ -39,7 +39,7 @@ import org.slf4j.LoggerFactory;
  * in a HBase table by column family and optionally
  * a column qualifier and timestamp
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
 public class TColumn implements org.apache.thrift.TBase<TColumn, TColumn._Fields>, java.io.Serializable, Cloneable, Comparable<TColumn> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TColumn");
 



[01/47] hbase git commit: HBASE-21570 Add write buffer periodic flush support for AsyncBufferedMutator [Forced Update!]

2018-12-31 Thread zhangduo
Repository: hbase
Updated Branches:
  refs/heads/HBASE-21512 e7a122780 -> b33b072de (forced update)


HBASE-21570 Add write buffer periodic flush support for AsyncBufferedMutator


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b09b87d1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b09b87d1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b09b87d1

Branch: refs/heads/HBASE-21512
Commit: b09b87d143730db00ec56114a752d3a74f8982c4
Parents: da9508d
Author: zhangduo 
Authored: Tue Dec 11 08:39:43 2018 +0800
Committer: Duo Zhang 
Committed: Tue Dec 11 14:51:26 2018 +0800

--
 .../hbase/client/AsyncBufferedMutator.java  |  16 +-
 .../client/AsyncBufferedMutatorBuilder.java |  19 +++
 .../client/AsyncBufferedMutatorBuilderImpl.java |  19 ++-
 .../hbase/client/AsyncBufferedMutatorImpl.java  |  67 +---
 .../client/AsyncConnectionConfiguration.java|  37 +++--
 .../hbase/client/AsyncConnectionImpl.java   |  11 +-
 .../hbase/client/TestAsyncBufferMutator.java| 161 ++-
 7 files changed, 277 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b09b87d1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutator.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutator.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutator.java
index 6fe4b9a..7b21eb5 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutator.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutator.java
@@ -18,13 +18,16 @@
 package org.apache.hadoop.hbase.client;
 
 import java.io.Closeable;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.CompletableFuture;
-
+import java.util.concurrent.TimeUnit;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.yetus.audience.InterfaceAudience;
 
+import org.apache.hbase.thirdparty.com.google.common.collect.Iterables;
+
 /**
  * Used to communicate with a single HBase table in batches. Obtain an 
instance from a
  * {@link AsyncConnection} and call {@link #close()} afterwards.
@@ -52,7 +55,9 @@ public interface AsyncBufferedMutator extends Closeable {
* part of a batch. Currently only supports {@link Put} and {@link Delete} 
mutations.
* @param mutation The data to send.
*/
-  CompletableFuture mutate(Mutation mutation);
+  default CompletableFuture mutate(Mutation mutation) {
+return 
Iterables.getOnlyElement(mutate(Collections.singletonList(mutation)));
+  }
 
   /**
* Send some {@link Mutation}s to the table. The mutations will be buffered 
and sent over the wire
@@ -81,4 +86,11 @@ public interface AsyncBufferedMutator extends Closeable {
* @return The size of the write buffer in bytes.
*/
   long getWriteBufferSize();
+
+  /**
+   * Returns the periodical flush interval, 0 means disabled.
+   */
+  default long getPeriodicalFlushTimeout(TimeUnit unit) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
 }
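The hunk above turns the single-mutation `mutate` into a default method that delegates to the batch overload, so implementations only have to provide the batch path. A standalone sketch of that delegation pattern, with a hypothetical `BatchWriter` interface standing in for the HBase types:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical stand-in for AsyncBufferedMutator: one abstract batch method,
// and a default single-record method that routes through it.
interface BatchWriter {
  List<CompletableFuture<Void>> write(List<String> records);

  // Mirrors the diff above: the single variant delegates to the batch variant
  // and expects exactly one future back (like Iterables.getOnlyElement).
  default CompletableFuture<Void> write(String record) {
    List<CompletableFuture<Void>> futures = write(Collections.singletonList(record));
    if (futures.size() != 1) {
      throw new IllegalStateException("expected exactly one future, got " + futures.size());
    }
    return futures.get(0);
  }
}

public class DelegationDemo {
  public static void main(String[] args) {
    List<String> buffer = new ArrayList<>();
    // Lambda implements only the batch method; the single path comes for free.
    BatchWriter writer = records -> {
      buffer.addAll(records);
      List<CompletableFuture<Void>> futures = new ArrayList<>();
      for (int i = 0; i < records.size(); i++) {
        futures.add(CompletableFuture.completedFuture(null));
      }
      return futures;
    };
    writer.write("row1").join();  // single write goes through the batch path
    if (!buffer.equals(Collections.singletonList("row1"))) {
      throw new AssertionError("delegation did not reach the batch path");
    }
    System.out.println("buffered=" + buffer);
  }
}
```

The benefit is the one shown in the diff: concrete mutators no longer need their own single-mutation logic.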

http://git-wip-us.apache.org/repos/asf/hbase/blob/b09b87d1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutatorBuilder.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutatorBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutatorBuilder.java
index 45959bb..c617c8e 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutatorBuilder.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncBufferedMutatorBuilder.java
@@ -46,6 +46,25 @@ public interface AsyncBufferedMutatorBuilder {
   AsyncBufferedMutatorBuilder setRetryPause(long pause, TimeUnit unit);
 
   /**
+   * Set the periodical flush interval. If the data in the buffer has not been flushed for a long
+   * time, i.e., reaches this timeout limit, we will flush it automatically.
+   * <p/>
+   * Notice that setting the timeout to 0 or a negative value means disabling periodical flush,
+   * not 'flush immediately'. If you want to flush immediately then you should not use this
+   * class, as it is designed to be 'buffered'.
+   */
+  default AsyncBufferedMutatorBuilder setWriteBufferPeriodicFlush(long timeout, TimeUnit unit) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  /**
+   * Disable the periodical flush, i.e, set the timeout to 0.
+   */
+  default AsyncBufferedMutatorBuilder disableWriteBufferPeriodicFlush() {
+return setWriteBufferPeriodicFlush(0, 

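The builder above only records a timeout; the flush itself is driven by a timer inside the mutator implementation. A minimal, deterministic sketch of that bookkeeping (hypothetical `PeriodicBuffer` class, not the HBase implementation, with the timer call modeled as an explicit `maybeFlush(now)`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical periodic-flush bookkeeping: a real implementation would call
// maybeFlush() from a scheduled timer instead of passing timestamps in.
class PeriodicBuffer {
  private final long timeoutNs;
  private final List<String> buffer = new ArrayList<>();
  private final List<List<String>> flushed = new ArrayList<>();
  private long lastWriteNs;

  PeriodicBuffer(long timeout, TimeUnit unit) {
    // 0 or a negative value disables periodic flush, matching the javadoc above.
    this.timeoutNs = timeout <= 0 ? Long.MAX_VALUE : unit.toNanos(timeout);
  }

  void write(String record, long nowNs) {
    buffer.add(record);
    lastWriteNs = nowNs;
  }

  // Flush only when the buffer is non-empty and has been idle past the timeout.
  boolean maybeFlush(long nowNs) {
    if (buffer.isEmpty() || nowNs - lastWriteNs < timeoutNs) {
      return false;
    }
    flushed.add(new ArrayList<>(buffer));
    buffer.clear();
    return true;
  }

  List<List<String>> flushed() {
    return flushed;
  }
}

public class PeriodicFlushDemo {
  public static void main(String[] args) {
    PeriodicBuffer buf = new PeriodicBuffer(1, TimeUnit.SECONDS);
    buf.write("a", 0L);
    if (buf.maybeFlush(TimeUnit.MILLISECONDS.toNanos(500))) {
      throw new AssertionError("should not flush before the timeout");
    }
    if (!buf.maybeFlush(TimeUnit.SECONDS.toNanos(2))) {
      throw new AssertionError("should flush once the buffer is stale");
    }
    System.out.println("flushes=" + buf.flushed().size());
  }
}
```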
[33/47] hbase git commit: HBASE-21642 CopyTable by reading snapshot and bulkloading will save a lot of time

2018-12-31 Thread zhangduo
HBASE-21642 CopyTable by reading snapshot and bulkloading will save a lot of 
time


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c2d5991b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c2d5991b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c2d5991b

Branch: refs/heads/HBASE-21512
Commit: c2d5991b82e3b807cb11f5735ef5068b73720725
Parents: c552088
Author: huzheng 
Authored: Wed Dec 26 16:17:55 2018 +0800
Committer: huzheng 
Committed: Thu Dec 27 18:22:54 2018 +0800

--
 .../hadoop/hbase/mapreduce/CopyTable.java   | 109 --
 .../hadoop/hbase/mapreduce/TestCopyTable.java   | 110 ---
 .../hbase/client/ClientSideRegionScanner.java   |  14 ++-
 .../hadoop/hbase/regionserver/HRegion.java  |   2 +-
 4 files changed, 187 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c2d5991b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
--
diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
index 4e57f54..b59c9e6 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
@@ -21,7 +21,7 @@ package org.apache.hadoop.hbase.mapreduce;
 import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
-import java.util.Random;
+import java.util.UUID;
 
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.fs.FileSystem;
@@ -29,6 +29,8 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.mapreduce.Import.CellImporter;
+import org.apache.hadoop.hbase.mapreduce.Import.Importer;
 import org.apache.hadoop.hbase.util.FSUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
@@ -70,8 +72,34 @@ public class CopyTable extends Configured implements Tool {
   boolean bulkload = false;
   Path bulkloadDir = null;
 
+  boolean readingSnapshot = false;
+  String snapshot = null;
+
   private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name";
 
+  private Path generateUniqTempDir(boolean withDirCreated) throws IOException {
+    FileSystem fs = FSUtils.getCurrentFileSystem(getConf());
+    Path dir = new Path(fs.getWorkingDirectory(), NAME);
+    if (!fs.exists(dir)) {
+      fs.mkdirs(dir);
+    }
+    Path newDir = new Path(dir, UUID.randomUUID().toString());
+    if (withDirCreated) {
+      fs.mkdirs(newDir);
+    }
+    return newDir;
+  }
+
+  private void initCopyTableMapperReducerJob(Job job, Scan scan) throws IOException {
+    Class<? extends TableMapper> mapper = bulkload ? CellImporter.class : Importer.class;
+    if (readingSnapshot) {
+      TableMapReduceUtil.initTableSnapshotMapperJob(snapshot, scan, mapper, null, null, job, true,
+        generateUniqTempDir(true));
+    } else {
+      TableMapReduceUtil.initTableMapperJob(tableName, scan, mapper, null, null, job);
+    }
+  }
+
   /**
* Sets up the actual job.
*
@@ -79,13 +107,13 @@ public class CopyTable extends Configured implements Tool {
* @return The newly created job.
* @throws IOException When setting up the job fails.
*/
-  public Job createSubmittableJob(String[] args)
-  throws IOException {
+  public Job createSubmittableJob(String[] args) throws IOException {
 if (!doCommandLine(args)) {
   return null;
 }
 
-    Job job = Job.getInstance(getConf(), getConf().get(JOB_NAME_CONF_KEY, NAME + "_" + tableName));
+    String jobName = NAME + "_" + (tableName == null ? snapshot : tableName);
+    Job job = Job.getInstance(getConf(), getConf().get(JOB_NAME_CONF_KEY, jobName));
 job.setJarByClass(CopyTable.class);
 Scan scan = new Scan();
 
@@ -107,15 +135,15 @@ public class CopyTable extends Configured implements Tool 
{
   job.getConfiguration().set(TableInputFormat.SHUFFLE_MAPS, "true");
 }
 if (versions >= 0) {
-  scan.setMaxVersions(versions);
+  scan.readVersions(versions);
 }
 
 if (startRow != null) {
-  scan.setStartRow(Bytes.toBytesBinary(startRow));
+  scan.withStartRow(Bytes.toBytesBinary(startRow));
 }
 
 if (stopRow != null) {
-  scan.setStopRow(Bytes.toBytesBinary(stopRow));
+  scan.withStopRow(Bytes.toBytesBinary(stopRow));
 }
 
 if(families != null) {
@@ -140,24 +168,13 @@ public class CopyTable extends Configured implements Tool 
{
 job.setNumReduceTasks(0);
 
 if (bulkload) {
-  

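`generateUniqTempDir` above names each job's scratch directory with a random UUID so concurrent CopyTable runs cannot collide. The same idea on the local file system via `java.nio` (a hypothetical helper, not the HDFS-backed original):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class UniqTempDirDemo {
  // Create <base>/<random-uuid>, creating <base> first if needed,
  // mirroring CopyTable#generateUniqTempDir on the local FS.
  static Path generateUniqTempDir(Path base, boolean withDirCreated) throws IOException {
    if (!Files.exists(base)) {
      Files.createDirectories(base);
    }
    Path newDir = base.resolve(UUID.randomUUID().toString());
    if (withDirCreated) {
      Files.createDirectories(newDir);
    }
    return newDir;
  }

  public static void main(String[] args) throws IOException {
    Path base = Files.createTempDirectory("copytable");
    Path d1 = generateUniqTempDir(base, true);
    Path d2 = generateUniqTempDir(base, true);
    // Two calls must yield two distinct, existing directories.
    if (d1.equals(d2) || !Files.isDirectory(d1) || !Files.isDirectory(d2)) {
      throw new AssertionError("temp dirs must be distinct directories");
    }
    System.out.println("distinct=" + !d1.equals(d2));
  }
}
```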
[26/47] hbase git commit: HBASE-21621 Reversed scan does not return expected number of rows

2018-12-31 Thread zhangduo
HBASE-21621 Reversed scan does not return expected number of rows

The unit test is contributed by Nihal Jain


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7c0a3cc2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7c0a3cc2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7c0a3cc2

Branch: refs/heads/HBASE-21512
Commit: 7c0a3cc265f1351363dc88f2f70855b3273dd8c6
Parents: b2bf22e
Author: Guanghao Zhang 
Authored: Thu Dec 20 12:34:34 2018 +0800
Committer: Guanghao Zhang 
Committed: Sun Dec 23 16:19:05 2018 +0800

--
 .../regionserver/ReversedStoreScanner.java  |  5 +--
 .../hadoop/hbase/regionserver/StoreScanner.java | 10 -
 .../hbase/client/TestFromClientSide3.java   |  2 +-
 .../client/TestScannersFromClientSide.java  | 43 
 .../hbase/regionserver/TestStoreScanner.java|  8 +++-
 5 files changed, 61 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7c0a3cc2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
index 90e1129..491e6ef 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java
@@ -59,10 +59,9 @@ public class ReversedStoreScanner extends StoreScanner 
implements KeyValueScanne
   }
 
   @Override
-  protected void resetKVHeap(List<? extends KeyValueScanner> scanners,
+  protected KeyValueHeap newKVHeap(List<? extends KeyValueScanner> scanners,
       CellComparator comparator) throws IOException {
-    // Combine all seeked scanners with a heap
-    heap = new ReversedKeyValueHeap(scanners, comparator);
+    return new ReversedKeyValueHeap(scanners, comparator);
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hbase/blob/7c0a3cc2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index b318950..d06 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -403,10 +403,16 @@ public class StoreScanner extends 
NonReversedNonLazyKeyValueScanner
 }
   }
 
+  @VisibleForTesting
   protected void resetKVHeap(List<? extends KeyValueScanner> scanners,
       CellComparator comparator) throws IOException {
     // Combine all seeked scanners with a heap
-    heap = new KeyValueHeap(scanners, comparator);
+    heap = newKVHeap(scanners, comparator);
+  }
+
+  protected KeyValueHeap newKVHeap(List<? extends KeyValueScanner> scanners,
+      CellComparator comparator) throws IOException {
+    return new KeyValueHeap(scanners, comparator);
   }
 
   /**
@@ -1037,7 +1043,7 @@ public class StoreScanner extends 
NonReversedNonLazyKeyValueScanner
   newCurrentScanners = new ArrayList<>(fileScanners.size() + 
memstoreScanners.size());
   newCurrentScanners.addAll(fileScanners);
   newCurrentScanners.addAll(memstoreScanners);
-  newHeap = new KeyValueHeap(newCurrentScanners, comparator);
+  newHeap = newKVHeap(newCurrentScanners, comparator);
 } catch (Exception e) {
   LOG.warn("failed to switch to stream read", e);
   if (fileScanners != null) {

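The fix extracts heap construction into an overridable `newKVHeap` factory method, so `ReversedStoreScanner` supplies a reversed heap everywhere the base class builds one, including the stream-read switch that previously hard-coded `KeyValueHeap` and caused the reversed-scan bug. A generic sketch of that factory-method pattern (hypothetical scanner classes, integers standing in for cells):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: the base class always builds heaps through newHeap(),
// so a subclass can change the ordering in every code path at once.
class ForwardScanner {
  protected PriorityQueue<Integer> heap;

  protected PriorityQueue<Integer> newHeap(List<Integer> items) {
    PriorityQueue<Integer> h = new PriorityQueue<>();  // natural (ascending) order
    h.addAll(items);
    return h;
  }

  // Every reset path goes through the factory method, like resetKVHeap above.
  void reset(List<Integer> items) {
    heap = newHeap(items);
  }

  Integer next() {
    return heap.poll();
  }
}

class ReversedScanner extends ForwardScanner {
  @Override
  protected PriorityQueue<Integer> newHeap(List<Integer> items) {
    PriorityQueue<Integer> h = new PriorityQueue<>(Comparator.reverseOrder());
    h.addAll(items);
    return h;
  }
}

public class FactoryMethodDemo {
  public static void main(String[] args) {
    ForwardScanner fwd = new ForwardScanner();
    fwd.reset(Arrays.asList(3, 1, 2));
    ForwardScanner rev = new ReversedScanner();
    rev.reset(Arrays.asList(3, 1, 2));
    System.out.println("forward=" + fwd.next() + " reversed=" + rev.next());
  }
}
```

With the bug, the reversed subclass would have received a forward-ordered heap on some code paths; routing every construction through the factory removes that asymmetry.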
http://git-wip-us.apache.org/repos/asf/hbase/blob/7c0a3cc2/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
index cbfa1bf..1315d4a 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
@@ -1059,7 +1059,7 @@ public class TestFromClientSide3 {
 }
   }
 
-  private static byte[] generateHugeValue(int size) {
+  static byte[] generateHugeValue(int size) {
 Random rand = ThreadLocalRandom.current();
 byte[] value = new byte[size];
 for (int i = 0; i < value.length; i++) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/7c0a3cc2/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java

[31/47] hbase git commit: HBASE-21640 Remove the TODO when increment zero

2018-12-31 Thread zhangduo
HBASE-21640 Remove the TODO when increment zero


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4281cb3b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4281cb3b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4281cb3b

Branch: refs/heads/HBASE-21512
Commit: 4281cb3b9574333fab0e7c028c9c0d7e5b320c73
Parents: 44dec60
Author: Guanghao Zhang 
Authored: Tue Dec 25 17:42:38 2018 +0800
Committer: Guanghao Zhang 
Committed: Wed Dec 26 21:47:44 2018 +0800

--
 .../hadoop/hbase/regionserver/HRegion.java  | 21 
 .../hbase/regionserver/wal/TestDurability.java  |  9 -
 2 files changed, 8 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4281cb3b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 21458c4..dc0fa22 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7963,8 +7963,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 
   /**
* Reckon the Cells to apply to WAL, memstore, and to return to the Client; 
these Sets are not
-   * always the same dependent on whether to write WAL or if the amount to 
increment is zero (in
-   * this case we write back nothing, just return latest Cell value to the 
client).
+   * always the same dependent on whether to write WAL.
*
* @param results Fill in here what goes back to the Client if it is 
non-null (if null, client
*  doesn't want results).
@@ -8006,9 +8005,8 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
* @param op Whether Increment or Append
* @param mutation The encompassing Mutation object
* @param deltas Changes to apply to this Store; either increment amount or 
data to append
-   * @param results In here we accumulate all the Cells we are to return to 
the client; this List
-   *  can be larger than what we return in case where delta is zero; i.e. 
don't write
-   *  out new values, just return current value. If null, client doesn't want 
results returned.
+   * @param results In here we accumulate all the Cells we are to return to 
the client. If null,
+   *client doesn't want results returned.
* @return Resulting Cells after deltas have been applied to 
current
*  values. Side effect is our filling out of the results List.
*/
@@ -8036,33 +8034,25 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 for (int i = 0; i < deltas.size(); i++) {
   Cell delta = deltas.get(i);
   Cell currentValue = null;
-  boolean firstWrite = false;
   if (currentValuesIndex < currentValues.size() &&
   CellUtil.matchingQualifier(currentValues.get(currentValuesIndex), 
delta)) {
 currentValue = currentValues.get(currentValuesIndex);
 if (i < (deltas.size() - 1) && !CellUtil.matchingQualifier(delta, 
deltas.get(i + 1))) {
   currentValuesIndex++;
 }
-  } else {
-firstWrite = true;
   }
   // Switch on whether this an increment or an append building the new 
Cell to apply.
   Cell newCell = null;
   MutationType mutationType = null;
-  boolean apply = true;
   switch (op) {
 case INCREMENT:
   mutationType = MutationType.INCREMENT;
-          // If delta amount to apply is 0, don't write WAL or MemStore.
           long deltaAmount = getLongValue(delta);
-          // TODO: Does zero value mean reset Cell? For example, the ttl.
-          apply = deltaAmount != 0;
           final long newValue = currentValue == null ? deltaAmount : getLongValue(currentValue) + deltaAmount;
           newCell = reckonDelta(delta, currentValue, columnFamily, now, mutation, (oldCell) -> Bytes.toBytes(newValue));
           break;
         case APPEND:
           mutationType = MutationType.APPEND;
-          // Always apply Append. TODO: Does empty delta value mean reset Cell? It seems to.
           newCell = reckonDelta(delta, currentValue, columnFamily, now, mutation, (oldCell) ->
             ByteBuffer.wrap(new byte[delta.getValueLength() + oldCell.getValueLength()])
               .put(oldCell.getValueArray(), oldCell.getValueOffset(), oldCell.getValueLength())
@@ -8078,10 +8068,7 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 newCell =
 
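After this change a zero increment goes through the normal write path (new value = current + delta, written to WAL and memstore) instead of being special-cased into a read. The reckoning arithmetic reduces to:

```java
public class IncrementDemo {
  // Mirrors the reckoning above: with no current cell the delta itself becomes
  // the value; otherwise the two are summed. A delta of zero is no longer
  // skipped - it simply produces a new cell with an unchanged value.
  static long reckonIncrement(Long currentValue, long delta) {
    return currentValue == null ? delta : currentValue + delta;
  }

  public static void main(String[] args) {
    if (reckonIncrement(null, 5L) != 5L) throw new AssertionError("first write");
    if (reckonIncrement(7L, 0L) != 7L) throw new AssertionError("zero delta still applied");
    if (reckonIncrement(7L, 3L) != 10L) throw new AssertionError("sum");
    System.out.println("ok");
  }
}
```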

[25/47] hbase git commit: HBASE-21631: list_quotas should print human readable values for LIMIT

2018-12-31 Thread zhangduo
HBASE-21631: list_quotas should print human readable values for LIMIT

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b2bf22e2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b2bf22e2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b2bf22e2

Branch: refs/heads/HBASE-21512
Commit: b2bf22e209d2e87121986b35c5749b2b8ae45fa2
Parents: e160b5a
Author: Sakthi 
Authored: Fri Dec 21 16:23:08 2018 -0800
Committer: Guanghao Zhang 
Committed: Sat Dec 22 22:00:58 2018 +0800

--
 .../java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java| 2 +-
 .../org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b2bf22e2/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
index 02bd6e4..8b31e94 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/SpaceLimitSettings.java
@@ -205,7 +205,7 @@ class SpaceLimitSettings extends QuotaSettings {
 if (proto.getQuota().getRemove()) {
   sb.append(", REMOVE => ").append(proto.getQuota().getRemove());
 } else {
-      sb.append(", LIMIT => ").append(proto.getQuota().getSoftLimit());
+      sb.append(", LIMIT => ").append(sizeToString(proto.getQuota().getSoftLimit()));
       sb.append(", VIOLATION_POLICY => ").append(proto.getQuota().getViolationPolicy());
     }
     return sb.toString();

http://git-wip-us.apache.org/repos/asf/hbase/blob/b2bf22e2/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java
index 0c6cb81..e47e4ff 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/GlobalQuotaSettingsImpl.java
@@ -276,7 +276,7 @@ public class GlobalQuotaSettingsImpl extends 
GlobalQuotaSettings {
       if (spaceProto.getRemove()) {
         builder.append(", REMOVE => ").append(spaceProto.getRemove());
       } else {
-        builder.append(", LIMIT => ").append(spaceProto.getSoftLimit());
+        builder.append(", LIMIT => ").append(sizeToString(spaceProto.getSoftLimit()));
         builder.append(", VIOLATION_POLICY => ").append(spaceProto.getViolationPolicy());
       }
       builder.append(" } ");

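The `sizeToString` helper swapped in above renders the raw soft-limit bytes in human-readable form for the shell's LIMIT column. A hypothetical reimplementation of such a formatter (the real helper lives in the quota settings classes; exact rounding and unit suffixes may differ):

```java
import java.util.Locale;

public class SizeToStringDemo {
  // Hypothetical byte formatter: walk down from the largest binary unit whose
  // threshold the value meets, similar to the shell's LIMIT display.
  static String sizeToString(long size) {
    String[] units = {"P", "T", "G", "M", "K"};
    long[] thresholds = {1L << 50, 1L << 40, 1L << 30, 1L << 20, 1L << 10};
    for (int i = 0; i < units.length; i++) {
      if (size >= thresholds[i]) {
        return String.format(Locale.ROOT, "%.2f%s", (double) size / thresholds[i], units[i]);
      }
    }
    return size + "B";  // below 1K, print raw bytes
  }

  public static void main(String[] args) {
    System.out.println(sizeToString(2L * (1L << 30)));  // 2.00G
    System.out.println(sizeToString(512));              // 512B
  }
}
```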


[14/47] hbase git commit: HBASE-21514 Refactor CacheConfig

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/1971d02e/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
index 444102d..2065c0c 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
@@ -28,12 +28,14 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HRegionInfo;
-import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
 import org.apache.hadoop.hbase.regionserver.HRegion;
 import org.apache.hadoop.hbase.regionserver.HStore;
 import org.apache.hadoop.hbase.regionserver.InternalScanner;
@@ -104,17 +106,15 @@ public class TestScannerSelectionUsingTTL {
   @Test
   public void testScannerSelection() throws IOException {
 Configuration conf = TEST_UTIL.getConfiguration();
-CacheConfig.instantiateBlockCache(conf);
 conf.setBoolean("hbase.store.delete.expired.storefile", false);
-HColumnDescriptor hcd =
-  new HColumnDescriptor(FAMILY_BYTES)
-  .setMaxVersions(Integer.MAX_VALUE)
-  .setTimeToLive(TTL_SECONDS);
-HTableDescriptor htd = new HTableDescriptor(TABLE);
-htd.addFamily(hcd);
-HRegionInfo info = new HRegionInfo(TABLE);
-HRegion region = HBaseTestingUtility.createRegionAndWAL(info,
-  TEST_UTIL.getDataTestDir(info.getEncodedName()), conf, htd);
+    LruBlockCache cache = (LruBlockCache) BlockCacheFactory.createBlockCache(conf);
+
+    TableDescriptor td = TableDescriptorBuilder.newBuilder(TABLE).setColumnFamily(
+      ColumnFamilyDescriptorBuilder.newBuilder(FAMILY_BYTES).setMaxVersions(Integer.MAX_VALUE)
+        .setTimeToLive(TTL_SECONDS).build()).build();
+    RegionInfo info = RegionInfoBuilder.newBuilder(TABLE).build();
+    HRegion region = HBaseTestingUtility
+      .createRegionAndWAL(info, TEST_UTIL.getDataTestDir(info.getEncodedName()), conf, td, cache);
 
 long ts = EnvironmentEdgeManager.currentTime();
 long version = 0; //make sure each new set of Put's have a new ts
@@ -136,10 +136,7 @@ public class TestScannerSelectionUsingTTL {
   version++;
 }
 
-Scan scan = new Scan();
-scan.setMaxVersions(Integer.MAX_VALUE);
-CacheConfig cacheConf = new CacheConfig(conf);
-LruBlockCache cache = (LruBlockCache) cacheConf.getBlockCache();
+Scan scan = new Scan().readVersions(Integer.MAX_VALUE);
 cache.clearCache();
 InternalScanner scanner = region.getScanner(scan);
 List results = new ArrayList<>();

http://git-wip-us.apache.org/repos/asf/hbase/blob/1971d02e/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
index 844b705..a930d7f 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
@@ -26,6 +26,7 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Random;
 import java.util.TreeMap;
 import java.util.concurrent.ConcurrentSkipListMap;
@@ -48,8 +49,10 @@ import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.locking.EntityLock;
 import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
 import org.apache.hadoop.hbase.ipc.HBaseRpcController;
 import org.apache.hadoop.hbase.ipc.RpcServerInterface;
+import org.apache.hadoop.hbase.mob.MobFileCache;
 import org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager;
 import org.apache.hadoop.hbase.quotas.RegionServerSpaceQuotaManager;
 import org.apache.hadoop.hbase.quotas.RegionSizeStore;
@@ -708,4 +711,14 @@ class 

[02/47] hbase git commit: HBASE-21453 Convert ReadOnlyZKClient to DEBUG instead of INFO

2018-12-31 Thread zhangduo
HBASE-21453 Convert ReadOnlyZKClient to DEBUG instead of INFO


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f88224ee
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f88224ee
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f88224ee

Branch: refs/heads/HBASE-21512
Commit: f88224ee34ba2c23f794ec1219ffd93783b20e51
Parents: b09b87d
Author: Sakthi 
Authored: Thu Nov 29 18:52:50 2018 -0800
Committer: Peter Somogyi 
Committed: Tue Dec 11 08:18:02 2018 +0100

--
 .../java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f88224ee/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java
index fc2d5f0..09f8984 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java
@@ -136,7 +136,7 @@ public final class ReadOnlyZKClient implements Closeable {
     this.retryIntervalMs =
         conf.getInt(RECOVERY_RETRY_INTERVAL_MILLIS, DEFAULT_RECOVERY_RETRY_INTERVAL_MILLIS);
     this.keepAliveTimeMs = conf.getInt(KEEPALIVE_MILLIS, DEFAULT_KEEPALIVE_MILLIS);
-    LOG.info(
+    LOG.debug(
       "Connect {} to {} with session timeout={}ms, retries {}, " +
         "retry interval {}ms, keepAlive={}ms",
       getId(), connectString, sessionTimeoutMs, maxRetries, retryIntervalMs, keepAliveTimeMs);
@@ -347,7 +347,7 @@ public final class ReadOnlyZKClient implements Closeable {
   @Override
   public void close() {
 if (closed.compareAndSet(false, true)) {
-  LOG.info("Close zookeeper connection {} to {}", getId(), connectString);
+  LOG.debug("Close zookeeper connection {} to {}", getId(), connectString);
   tasks.add(CLOSE);
 }
   }



[32/47] hbase git commit: HBASE-14939 Document bulk loaded hfile replication

2018-12-31 Thread zhangduo
HBASE-14939 Document bulk loaded hfile replication

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c5520888
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c5520888
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c5520888

Branch: refs/heads/HBASE-21512
Commit: c5520888779235a334583f7c369dcee49518e165
Parents: 4281cb3
Author: Wei-Chiu Chuang 
Authored: Wed Dec 26 20:14:18 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Dec 26 20:14:18 2018 +0530

--
 src/main/asciidoc/_chapters/architecture.adoc | 32 ++
 1 file changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c5520888/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 17e9e13..27db26a 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2543,12 +2543,6 @@ The most straightforward method is to either use the 
`TableOutputFormat` class f
 The bulk load feature uses a MapReduce job to output table data in HBase's 
internal data format, and then directly loads the generated StoreFiles into a 
running cluster.
 Using bulk load will use less CPU and network resources than simply using the 
HBase API.
 
-[[arch.bulk.load.limitations]]
-=== Bulk Load Limitations
-
-As bulk loading bypasses the write path, the WAL doesn't get written to as 
part of the process.
-Replication works by reading the WAL files so it won't see the bulk loaded 
data – and the same goes for the edits that use 
`Put.setDurability(SKIP_WAL)`. One way to handle that is to ship the raw files 
or the HFiles to the other cluster and do the other processing there.
-
 [[arch.bulk.load.arch]]
 === Bulk Load Architecture
 
@@ -2601,6 +2595,32 @@ To get started doing so, dig into `ImportTsv.java` and 
check the JavaDoc for HFi
 The import step of the bulk load can also be done programmatically.
 See the `LoadIncrementalHFiles` class for more information.
 
+[[arch.bulk.load.replication]]
+=== Bulk Loading Replication
+HBASE-13153 adds replication support for bulk loaded HFiles, available since 
HBase 1.3/2.0. This feature is enabled by setting 
`hbase.replication.bulkload.enabled` to `true` (default is `false`).
+You also need to copy the source cluster configuration files to the 
destination cluster.
+
+Additional configurations are required too:
+
+. `hbase.replication.source.fs.conf.provider`
++
+This defines the class which loads the source cluster file system client 
configuration in the destination cluster. This should be configured for all the 
RS in the destination cluster. Default is 
`org.apache.hadoop.hbase.replication.regionserver.DefaultSourceFSConfigurationProvider`.
++
+. `hbase.replication.conf.dir`
++
+This represents the base directory where the file system client configurations 
of the source cluster are copied to the destination cluster. This should be 
configured for all the RS in the destination cluster. Default is 
`$HBASE_CONF_DIR`.
++
+. `hbase.replication.cluster.id`
++
+This configuration is required in the cluster where replication for bulk loaded data is enabled. A source cluster is uniquely identified by the destination cluster using this id. This should be configured in the source cluster configuration file for all the RS.
++
+
+
+
+For example: If source cluster FS client configurations are copied to the 
destination cluster under directory `/home/user/dc1/`, then 
`hbase.replication.cluster.id` should be configured as `dc1` and 
`hbase.replication.conf.dir` as `/home/user`.
+
+NOTE: `DefaultSourceFSConfigurationProvider` supports only `xml` type files. 
It loads source cluster FS client configuration only once, so if source cluster 
FS client configuration files are updated, every peer(s) cluster RS must be 
restarted to reload the configuration.
+
 [[arch.hdfs]]
 == HDFS
 


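Pulled together, the settings the new doc section describes might look like this in `hbase-site.xml` (an illustrative fragment for the `dc1` example; values are assumptions, not from the patch):

```xml
<!-- Destination cluster: enable replication of bulk loaded HFiles -->
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.replication.conf.dir</name>
  <value>/home/user</value>
</property>

<!-- Source cluster: identify this cluster to its peers; its FS client
     configs would then live under /home/user/dc1/ on the destination -->
<property>
  <name>hbase.replication.cluster.id</name>
  <value>dc1</value>
</property>
```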

[15/47] hbase git commit: HBASE-21514 Refactor CacheConfig

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/1971d02e/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 6242d36..13f277b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -36,6 +36,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Objects;
+import java.util.Optional;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.Timer;
@@ -98,7 +99,7 @@ import org.apache.hadoop.hbase.executor.ExecutorType;
 import org.apache.hadoop.hbase.fs.HFileSystem;
 import org.apache.hadoop.hbase.http.InfoServer;
 import org.apache.hadoop.hbase.io.hfile.BlockCache;
-import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.BlockCacheFactory;
 import org.apache.hadoop.hbase.io.hfile.HFile;
 import org.apache.hadoop.hbase.io.util.MemorySizeUtil;
 import org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
@@ -114,7 +115,7 @@ import org.apache.hadoop.hbase.log.HBaseMarkers;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.master.LoadBalancer;
 import org.apache.hadoop.hbase.master.RegionState.State;
-import org.apache.hadoop.hbase.mob.MobCacheConfig;
+import org.apache.hadoop.hbase.mob.MobFileCache;
 import org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
 import org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
 import org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
@@ -410,10 +411,10 @@ public class HRegionServer extends HasThread implements
 
   private final RegionServerAccounting regionServerAccounting;
 
-  // Cache configuration and block cache reference
-  protected CacheConfig cacheConfig;
-  // Cache configuration for mob
-  final MobCacheConfig mobCacheConfig;
+  // Block cache
+  private BlockCache blockCache;
+  // The cache for mob files
+  private MobFileCache mobFileCache;
 
   /** The health check chore. */
   private HealthCheckChore healthCheckChore;
@@ -591,12 +592,12 @@ public class HRegionServer extends HasThread implements
 
   boolean isMasterNotCarryTable =
   this instanceof HMaster && !LoadBalancer.isTablesOnMaster(conf);
-  // no need to instantiate global block cache when master not carry table
+
+  // no need to instantiate block cache and mob file cache when master does not carry tables
   if (!isMasterNotCarryTable) {
-CacheConfig.instantiateBlockCache(conf);
+blockCache = BlockCacheFactory.createBlockCache(conf);
+mobFileCache = new MobFileCache(conf);
   }
-  cacheConfig = new CacheConfig(conf);
-  mobCacheConfig = new MobCacheConfig(conf);
 
   uncaughtExceptionHandler = new UncaughtExceptionHandler() {
 @Override
@@ -1062,10 +1063,12 @@ public class HRegionServer extends HasThread implements
   }
 }
 // Send cache a shutdown.
-if (cacheConfig != null && cacheConfig.isBlockCacheEnabled()) {
-  cacheConfig.getBlockCache().shutdown();
+if (blockCache != null) {
+  blockCache.shutdown();
+}
+if (mobFileCache != null) {
+  mobFileCache.shutdown();
 }
-mobCacheConfig.getMobFileCache().shutdown();
 
 if (movedRegionsCleaner != null) {
   movedRegionsCleaner.stop("Region Server stopping");
@@ -1607,9 +1610,9 @@ public class HRegionServer extends HasThread implements
   }
 
   private void startHeapMemoryManager() {
-this.hMemManager = HeapMemoryManager.create(this.conf, this.cacheFlusher, this,
-this.regionServerAccounting);
-if (this.hMemManager != null) {
+if (this.blockCache != null) {
+  this.hMemManager =
+  new HeapMemoryManager(this.blockCache, this.cacheFlusher, this, regionServerAccounting);
   this.hMemManager.start(getChoreService());
 }
   }
@@ -3614,10 +3617,23 @@ public class HRegionServer extends HasThread implements
   }
 
   /**
-   * @return The cache config instance used by the regionserver.
+   * May be null if this is a master which does not carry tables.
+   *
+   * @return The block cache instance used by the regionserver.
+   */
+  @Override
+  public Optional<BlockCache> getBlockCache() {
+return Optional.ofNullable(this.blockCache);
+  }
+
+  /**
+   * May be null if this is a master which does not carry tables.
+   *
+   * @return The cache for mob files used by the regionserver.
*/
-  public CacheConfig getCacheConfig() {
-return this.cacheConfig;
+  @Override
+  public Optional<MobFileCache> getMobFileCache() {
+return Optional.ofNullable(this.mobFileCache);
   }
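
[Editor's note: the hunks above replace nullable `cacheConfig`/`mobCacheConfig` fields with `Optional`-returning accessors. A minimal, self-contained sketch of the pattern — the class and field names here are simplified stand-ins, not HBase's actual types:]

```java
import java.util.Optional;

// Hypothetical stand-in for the real BlockCache; illustration only.
class BlockCache {
  void shutdown() { /* release cached blocks */ }
}

public class RegionServerSketch {
  // Stays null on a master that does not carry tables.
  private final BlockCache blockCache;

  public RegionServerSketch(boolean carriesTables) {
    this.blockCache = carriesTables ? new BlockCache() : null;
  }

  // Absence is explicit, so callers cannot NPE on a table-less master.
  public Optional<BlockCache> getBlockCache() {
    return Optional.ofNullable(blockCache);
  }

  public void stop() {
    // Null-checked shutdown, mirroring the hunk above.
    if (blockCache != null) {
      blockCache.shutdown();
    }
  }

  public static void main(String[] args) {
    System.out.println(new RegionServerSketch(true).getBlockCache().isPresent());   // true
    System.out.println(new RegionServerSketch(false).getBlockCache().isPresent());  // false
  }
}
```

A caller can then write `server.getBlockCache().map(c -> ...).orElse(...)` instead of repeating the null guard itself.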
 
   /**
@@ -3646,7 +3662,6 @@ public class HRegionServer extends HasThread 

[42/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2 (ADDENDUM add some comments)

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableDescriptor.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableDescriptor.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableDescriptor.java
index 89a8a5e..8e53bdf 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableDescriptor.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableDescriptor.java
@@ -34,7 +34,11 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
+/**
+ * Thrift wrapper around
+ * org.apache.hadoop.hbase.client.TableDescriptor
+ */
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-28")
public class TTableDescriptor implements org.apache.thrift.TBase<TTableDescriptor, TTableDescriptor._Fields>, java.io.Serializable, Cloneable, Comparable<TTableDescriptor> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("TTableDescriptor");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
index f2c0743..cec268a 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
@@ -34,7 +34,11 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
+/**
+ * Thrift wrapper around
+ * org.apache.hadoop.hbase.TableName
+ */
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-28")
public class TTableName implements org.apache.thrift.TBase<TTableName, TTableName._Fields>, java.io.Serializable, Cloneable, Comparable<TTableName> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("TTableName");
 
@@ -47,12 +51,24 @@ public class TTableName implements org.apache.thrift.TBase
private static final Map byName = new HashMap();
@@ -157,6 +173,9 @@ public class TTableName implements org.apache.thrift.TBase

http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
index 1e1898c..8ab746c 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
@@ -34,7 +34,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-28")
public class TTimeRange implements org.apache.thrift.TBase<TTimeRange, TTimeRange._Fields>, java.io.Serializable, Cloneable, Comparable<TTimeRange> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("TTimeRange");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/b620334c/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
--
diff --git 
a/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift 
b/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
index 6383329..c1b94ef 100644
--- 
a/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
+++ 
b/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
@@ -315,6 +315,10 @@ enum TCompareOp {
   NO_OP = 6
 }
 
+/**
+ * Thrift wrapper around
+ * org.apache.hadoop.hbase.regionserver.BloomType
+ */
 enum TBloomFilterType {
 /**
* Bloomfilters disabled
@@ -338,6 +342,10 @@ enum TBloomFilterType {
   ROWPREFIX_DELIMITED = 4
 }
 
+/**
+ * Thrift wrapper around
+ * org.apache.hadoop.hbase.io.compress.Algorithm
+ */
 enum TCompressionAlgorithm {
   LZO = 0,
   GZ = 1,
@@ -348,6 +356,10 @@ enum TCompressionAlgorithm {
   ZSTD = 6
 }
 
+/**
+ * Thrift wrapper around
+ * 

[41/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
HBASE-21650 Add DDL operation and some other miscellaneous to thrift2


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7820ba1d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7820ba1d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7820ba1d

Branch: refs/heads/HBASE-21512
Commit: 7820ba1dbdba58b1002cdfde08eb21aa7a0bb6da
Parents: f5ea00f
Author: Allan Yang 
Authored: Thu Dec 27 22:25:33 2018 +0800
Committer: Allan Yang 
Committed: Thu Dec 27 22:25:33 2018 +0800

--
 .../hbase/thrift/generated/AlreadyExists.java   | 2 +-
 .../hbase/thrift/generated/BatchMutation.java   | 2 +-
 .../thrift/generated/ColumnDescriptor.java  | 2 +-
 .../hadoop/hbase/thrift/generated/Hbase.java| 2 +-
 .../hadoop/hbase/thrift/generated/IOError.java  | 2 +-
 .../hbase/thrift/generated/IllegalArgument.java | 2 +-
 .../hadoop/hbase/thrift/generated/Mutation.java | 2 +-
 .../hadoop/hbase/thrift/generated/TAppend.java  | 2 +-
 .../hadoop/hbase/thrift/generated/TCell.java| 2 +-
 .../hadoop/hbase/thrift/generated/TColumn.java  | 2 +-
 .../hbase/thrift/generated/TIncrement.java  | 2 +-
 .../hbase/thrift/generated/TRegionInfo.java | 2 +-
 .../hbase/thrift/generated/TRowResult.java  | 2 +-
 .../hadoop/hbase/thrift/generated/TScan.java| 2 +-
 .../thrift2/ThriftHBaseServiceHandler.java  |   290 +
 .../hadoop/hbase/thrift2/ThriftUtilities.java   |   411 +-
 .../thrift2/generated/NamespaceDescriptor.java  |   554 +
 .../hadoop/hbase/thrift2/generated/TAppend.java |   114 +-
 .../hbase/thrift2/generated/TAuthorization.java | 2 +-
 .../thrift2/generated/TBloomFilterType.java |69 +
 .../thrift2/generated/TCellVisibility.java  | 2 +-
 .../hadoop/hbase/thrift2/generated/TColumn.java | 2 +-
 .../generated/TColumnFamilyDescriptor.java  |  2519 +
 .../thrift2/generated/TColumnIncrement.java | 2 +-
 .../hbase/thrift2/generated/TColumnValue.java   |   110 +-
 .../generated/TCompressionAlgorithm.java|60 +
 .../thrift2/generated/TDataBlockEncoding.java   |57 +
 .../hadoop/hbase/thrift2/generated/TDelete.java | 2 +-
 .../hbase/thrift2/generated/TDurability.java| 3 +
 .../hadoop/hbase/thrift2/generated/TGet.java|   410 +-
 .../hbase/thrift2/generated/THBaseService.java  | 44644 +
 .../hbase/thrift2/generated/THRegionInfo.java   | 2 +-
 .../thrift2/generated/THRegionLocation.java | 2 +-
 .../hbase/thrift2/generated/TIOError.java   | 2 +-
 .../thrift2/generated/TIllegalArgument.java | 2 +-
 .../hbase/thrift2/generated/TIncrement.java |   114 +-
 .../thrift2/generated/TKeepDeletedCells.java|63 +
 .../thrift2/generated/TNamespaceDescriptor.java |   554 +
 .../hadoop/hbase/thrift2/generated/TPut.java| 2 +-
 .../hadoop/hbase/thrift2/generated/TResult.java |   112 +-
 .../hbase/thrift2/generated/TRowMutations.java  |38 +-
 .../hadoop/hbase/thrift2/generated/TScan.java   | 2 +-
 .../hbase/thrift2/generated/TServerName.java| 2 +-
 .../thrift2/generated/TTableDescriptor.java |   843 +
 .../hbase/thrift2/generated/TTableName.java |   512 +
 .../hbase/thrift2/generated/TTimeRange.java | 2 +-
 .../apache/hadoop/hbase/thrift2/hbase.thrift|   229 +-
 .../thrift2/TestThriftHBaseServiceHandler.java  |96 +
 48 files changed, 41553 insertions(+), 10303 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
index 68361c1..8ec3e32 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
@@ -38,7 +38,7 @@ import org.slf4j.LoggerFactory;
  * An AlreadyExists exceptions signals that a table with the specified
  * name already exists
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
public class AlreadyExists extends TException implements org.apache.thrift.TBase<AlreadyExists, AlreadyExists._Fields>, java.io.Serializable, Cloneable, Comparable<AlreadyExists> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("AlreadyExists");
 


[27/47] hbase git commit: HBASE-21635 Use maven enforcer to ban imports from illegal packages

2018-12-31 Thread zhangduo
HBASE-21635 Use maven enforcer to ban imports from illegal packages


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/97fd647d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/97fd647d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/97fd647d

Branch: refs/heads/HBASE-21512
Commit: 97fd647de20e6c8df0cd6df248ec1365abc37378
Parents: 7c0a3cc
Author: zhangduo 
Authored: Sun Dec 23 18:25:42 2018 +0800
Committer: Duo Zhang 
Committed: Mon Dec 24 11:12:25 2018 +0800

--
 .../apache/hadoop/hbase/master/DeadServer.java  | 17 ++--
 .../TestBalancerStatusTagInJMXMetrics.java  | 32 
 pom.xml | 83 +++-
 3 files changed, 107 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/97fd647d/hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
index 4183201..0584792 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
@@ -18,15 +18,6 @@
  */
 package org.apache.hadoop.hbase.master;
 
-import org.apache.yetus.audience.InterfaceAudience;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.apache.hadoop.hbase.ServerName;
-import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-import org.apache.hadoop.hbase.util.Pair;
-
-import com.google.common.base.Preconditions;
-
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Comparator;
@@ -37,6 +28,14 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
 
 
 /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/97fd647d/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBalancerStatusTagInJMXMetrics.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBalancerStatusTagInJMXMetrics.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBalancerStatusTagInJMXMetrics.java
index 9f56621..d23436d 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBalancerStatusTagInJMXMetrics.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBalancerStatusTagInJMXMetrics.java
@@ -1,22 +1,25 @@
 /**
- * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
- * agreements. See the NOTICE file distributed with this work for additional 
information regarding
- * copyright ownership. The ASF licenses this file to you under the Apache 
License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the 
License. You may obtain a
- * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless 
required by applicable
- * law or agreed to in writing, software distributed under the License is 
distributed on an "AS IS"
- * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License
- * for the specific language governing permissions and limitations under the 
License.
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
  */
-
 package org.apache.hadoop.hbase.master.balancer;
 
 import static org.junit.Assert.assertEquals;
 
 import java.util.Random;
-
-import org.apache.commons.logging.Log;
-import 

[22/47] hbase git commit: HBASE-21618 Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-31 Thread zhangduo
HBASE-21618 Scan with the same startRow(inclusive=true) and 
stopRow(inclusive=false) returns one result


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ad819380
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ad819380
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ad819380

Branch: refs/heads/HBASE-21512
Commit: ad819380c744678e719431fb8b1b5e1951bc31b6
Parents: 7875673
Author: Guanghao Zhang 
Authored: Thu Dec 20 11:03:54 2018 +0800
Committer: Guanghao Zhang 
Committed: Fri Dec 21 09:49:24 2018 +0800

--
 .../hadoop/hbase/protobuf/ProtobufUtil.java |  4 +--
 .../hbase/shaded/protobuf/ProtobufUtil.java |  4 +--
 .../hbase/shaded/protobuf/TestProtobufUtil.java |  1 +
 .../hbase/client/TestFromClientSide3.java   |  4 +--
 .../client/TestScannersFromClientSide.java  | 38 
 .../hadoop/hbase/protobuf/TestProtobufUtil.java |  1 +
 6 files changed, 44 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ad819380/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
index 4d54528..a3d49b5 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
@@ -937,9 +937,7 @@ public final class ProtobufUtil {
 if (!scan.includeStartRow()) {
   scanBuilder.setIncludeStartRow(false);
 }
-if (scan.includeStopRow()) {
-  scanBuilder.setIncludeStopRow(true);
-}
+scanBuilder.setIncludeStopRow(scan.includeStopRow());
 if (scan.getReadType() != Scan.ReadType.DEFAULT) {
   scanBuilder.setReadType(toReadType(scan.getReadType()));
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/ad819380/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
index cf4c831..fea81f1 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
@@ -1081,9 +1081,7 @@ public final class ProtobufUtil {
 if (!scan.includeStartRow()) {
   scanBuilder.setIncludeStartRow(false);
 }
-if (scan.includeStopRow()) {
-  scanBuilder.setIncludeStopRow(true);
-}
+scanBuilder.setIncludeStopRow(scan.includeStopRow());
 if (scan.getReadType() != Scan.ReadType.DEFAULT) {
   scanBuilder.setReadType(toReadType(scan.getReadType()));
 }
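
[Editor's note: one generic pitfall the unconditional `setIncludeStopRow` guards against — sketched here as a pattern, with a plain map standing in for the protobuf builder; this is an illustration under assumption, not HBase's actual wire handling. When a sender writes a field only for one of its values and the receiver substitutes its own default for absent fields, the two sides can silently disagree:]

```java
import java.util.HashMap;
import java.util.Map;

public class ConditionalSetPitfall {
  // Receiver falls back to its own default when the field is absent.
  static boolean read(Map<String, Boolean> msg, boolean receiverDefault) {
    return msg.getOrDefault("includeStopRow", receiverDefault);
  }

  public static void main(String[] args) {
    boolean clientValue = false; // caller explicitly wants the stop row excluded

    // Buggy encoder: writes the field only when it is true.
    Map<String, Boolean> buggy = new HashMap<>();
    if (clientValue) {
      buggy.put("includeStopRow", true);
    }

    // Fixed encoder: always writes the field.
    Map<String, Boolean> fixed = new HashMap<>();
    fixed.put("includeStopRow", clientValue);

    // If the receiver's default differs from the client's value, the
    // conditional encoder flips the semantics; the fixed one does not.
    System.out.println(read(buggy, true)); // true  -> wrong
    System.out.println(read(fixed, true)); // false -> matches the client
  }
}
```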

http://git-wip-us.apache.org/repos/asf/hbase/blob/ad819380/hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
--
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
index be51e96..2d8a74a 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/shaded/protobuf/TestProtobufUtil.java
@@ -246,6 +246,7 @@ public class TestProtobufUtil {
 scanBuilder.setCacheBlocks(false);
 scanBuilder.setCaching(1024);
 scanBuilder.setTimeRange(ProtobufUtil.toTimeRange(TimeRange.allTime()));
+scanBuilder.setIncludeStopRow(false);
 ClientProtos.Scan expectedProto = scanBuilder.build();
 
 ClientProtos.Scan actualProto = ProtobufUtil.toScan(

http://git-wip-us.apache.org/repos/asf/hbase/blob/ad819380/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
index 0dee20b..cbfa1bf 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
@@ -1094,7 +1094,7 @@ public class TestFromClientSide3 {
 }
 
 Scan scan = new Scan();
-scan.withStartRow(ROW).withStopRow(ROW).addFamily(FAMILY).setBatch(3)
+

[36/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
index 129ab2e..8450f5b 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
@@ -34,7 +34,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
public class THRegionInfo implements org.apache.thrift.TBase<THRegionInfo, THRegionInfo._Fields>, java.io.Serializable, Cloneable, Comparable<THRegionInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("THRegionInfo");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
index 94b25ff..b1146e9 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
@@ -34,7 +34,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
public class THRegionLocation implements org.apache.thrift.TBase<THRegionLocation, THRegionLocation._Fields>, java.io.Serializable, Cloneable, Comparable<THRegionLocation> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("THRegionLocation");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
index 2e50d3d..9569c3f 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
@@ -39,7 +39,7 @@ import org.slf4j.LoggerFactory;
  * to the HBase master or a HBase region server. Also used to return
  * more general HBase error conditions.
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
public class TIOError extends TException implements org.apache.thrift.TBase<TIOError, TIOError._Fields>, java.io.Serializable, Cloneable, Comparable<TIOError> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("TIOError");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
index 9387429..6734dec 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
@@ -38,7 +38,7 @@ import org.slf4j.LoggerFactory;
  * A TIllegalArgument exception indicates an illegal or invalid
  * argument was passed into a procedure.
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = 
"2018-12-27")
public class TIllegalArgument extends TException implements org.apache.thrift.TBase<TIllegalArgument, TIllegalArgument._Fields>, java.io.Serializable, Cloneable, Comparable<TIllegalArgument> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("TIllegalArgument");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java

[35/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
new file mode 100644
index 000..f2c0743
--- /dev/null
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTableName.java
@@ -0,0 +1,512 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hbase.thrift2.generated;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
+public class TTableName implements org.apache.thrift.TBase<TTableName, TTableName._Fields>, java.io.Serializable, Cloneable, Comparable<TTableName> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TTableName");
+
+  private static final org.apache.thrift.protocol.TField NS_FIELD_DESC = new org.apache.thrift.protocol.TField("ns", org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField QUALIFIER_FIELD_DESC = new org.apache.thrift.protocol.TField("qualifier", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+schemes.put(StandardScheme.class, new TTableNameStandardSchemeFactory());
+schemes.put(TupleScheme.class, new TTableNameTupleSchemeFactory());
+  }
+
+  public ByteBuffer ns; // required
+  public ByteBuffer qualifier; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+NS((short)1, "ns"),
+QUALIFIER((short)2, "qualifier");
+
+private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+static {
+  for (_Fields field : EnumSet.allOf(_Fields.class)) {
+byName.put(field.getFieldName(), field);
+  }
+}
+
+/**
+ * Find the _Fields constant that matches fieldId, or null if it's not found.
+ */
+public static _Fields findByThriftId(int fieldId) {
+  switch(fieldId) {
+case 1: // NS
+  return NS;
+case 2: // QUALIFIER
+  return QUALIFIER;
+default:
+  return null;
+  }
+}
+
+/**
+ * Find the _Fields constant that matches fieldId, throwing an exception
+ * if it is not found.
+ */
+public static _Fields findByThriftIdOrThrow(int fieldId) {
+  _Fields fields = findByThriftId(fieldId);
+  if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+  return fields;
+}
+
+/**
+ * Find the _Fields constant that matches name, or null if it's not found.
+ */
+public static _Fields findByName(String name) {
+  return byName.get(name);
+}
+
+private final short _thriftId;
+private final String _fieldName;
+
+_Fields(short thriftId, String fieldName) {
+  _thriftId = thriftId;
+  _fieldName = fieldName;
+}
+
+public short getThriftFieldId() {
+  return _thriftId;
+}
+
+public String getFieldName() {
+  return _fieldName;
+}
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+tmpMap.put(_Fields.NS, new org.apache.thrift.meta_data.FieldMetaData("ns", org.apache.thrift.TFieldRequirementType.REQUIRED,
+new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING, true)));
+   

[30/47] hbase git commit: HBASE-21631 (addendum) Fixed TestQuotasShell failure (quotas_test.rb)

2018-12-31 Thread zhangduo
HBASE-21631 (addendum) Fixed TestQuotasShell failure (quotas_test.rb)

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/44dec600
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/44dec600
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/44dec600

Branch: refs/heads/HBASE-21512
Commit: 44dec60054d1c45880d591c74a023f7a534e6d73
Parents: dbafa1b
Author: Sakthi 
Authored: Sun Dec 23 21:25:07 2018 -0800
Committer: Guanghao Zhang 
Committed: Mon Dec 24 14:15:59 2018 +0800

--
 hbase-shell/src/test/ruby/hbase/quotas_test.rb | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/44dec600/hbase-shell/src/test/ruby/hbase/quotas_test.rb
--
diff --git a/hbase-shell/src/test/ruby/hbase/quotas_test.rb 
b/hbase-shell/src/test/ruby/hbase/quotas_test.rb
index be6b238..295d545 100644
--- a/hbase-shell/src/test/ruby/hbase/quotas_test.rb
+++ b/hbase-shell/src/test/ruby/hbase/quotas_test.rb
@@ -99,8 +99,7 @@ module Hbase
 define_test 'can set and remove quota' do
   command(:set_quota, TYPE => SPACE, LIMIT => '1G', POLICY => NO_INSERTS, 
TABLE => @test_name)
   output = capture_stdout{ command(:list_quotas) }
-  size = 1024 * 1024 * 1024
-  assert(output.include?("LIMIT => #{size}"))
+  assert(output.include?("LIMIT => 1G"))
   assert(output.include?("VIOLATION_POLICY => NO_INSERTS"))
   assert(output.include?("TYPE => SPACE"))
   assert(output.include?("TABLE => #{@test_name}"))
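The addendum above keeps the shell's human-readable "1G" form instead of expanding the limit to a raw byte count. A minimal, self-contained Java sketch of that kind of size formatting (the class and method names here are hypothetical illustrations, not the actual HBase shell code):

```java
public final class HumanReadableSize {
  private static final char[] UNITS = {'B', 'K', 'M', 'G', 'T', 'P'};

  /**
   * Renders a byte count compactly, e.g. 1073741824 -> "1G".
   * Only steps up a unit while the value divides evenly by 1024,
   * so non-round counts fall back to plain bytes.
   */
  public static String format(long bytes) {
    int unit = 0;
    long value = bytes;
    while (value >= 1024 && value % 1024 == 0 && unit < UNITS.length - 1) {
      value /= 1024;
      unit++;
    }
    return value + (unit == 0 ? "B" : String.valueOf(UNITS[unit]));
  }

  public static void main(String[] args) {
    System.out.println(format(1024L * 1024 * 1024)); // 1G
    System.out.println(format(3 * 1024L));           // 3K
  }
}
```

Printing the unit-suffixed form is what lets the test assert on `LIMIT => 1G` directly rather than recomputing `1024 * 1024 * 1024`.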



[05/47] hbase git commit: HBASE-21582 If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-31 Thread zhangduo
HBASE-21582 If call HBaseAdmin#snapshotAsync but forget call 
isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f32d2618
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f32d2618
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f32d2618

Branch: refs/heads/HBASE-21512
Commit: f32d2618430f70e1b0db92785294b2c7892cc02b
Parents: 4640ff5
Author: huzheng 
Authored: Tue Dec 11 20:27:56 2018 +0800
Committer: huzheng 
Committed: Thu Dec 13 10:35:20 2018 +0800

--
 .../hbase/master/snapshot/SnapshotManager.java  | 48 ++--
 .../master/cleaner/TestSnapshotFromMaster.java  | 27 ++-
 .../master/snapshot/TestSnapshotManager.java| 36 +--
 3 files changed, 92 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f32d2618/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
index 2b963b2..05db4ab 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
@@ -28,7 +28,11 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
@@ -91,6 +95,8 @@ import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.NameStringP
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription;
 import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription.Type;
+import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import 
org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
 
 /**
  * This class manages the procedure of taking and restoring snapshots. There 
is only one
@@ -120,7 +126,9 @@ public class SnapshotManager extends MasterProcedureManager 
implements Stoppable
* At this point, if the user asks for the snapshot/restore status, the 
result will be
* snapshot done if exists or failed if it doesn't exists.
*/
-  private static final int SNAPSHOT_SENTINELS_CLEANUP_TIMEOUT = 60 * 1000;
+  public static final String HBASE_SNAPSHOT_SENTINELS_CLEANUP_TIMEOUT_MILLIS =
+  "hbase.snapshot.sentinels.cleanup.timeoutMillis";
+  public static final long SNAPSHOT_SENTINELS_CLEANUP_TIMEOUT_MILLS_DEFAULT = 
60 * 1000L;
 
   /** Enable or disable snapshot support */
   public static final String HBASE_SNAPSHOT_ENABLED = "hbase.snapshot.enabled";
@@ -151,7 +159,11 @@ public class SnapshotManager extends 
MasterProcedureManager implements Stoppable
   // The map is always accessed and modified under the object lock using 
synchronized.
  // snapshotTable() will insert a Handler in the table.
   // isSnapshotDone() will remove the handler requested if the operation is 
finished.
-  private Map<TableName, SnapshotSentinel> snapshotHandlers = new ConcurrentHashMap<>();
+  private final Map<TableName, SnapshotSentinel> snapshotHandlers = new ConcurrentHashMap<>();
+  private final ScheduledExecutorService scheduleThreadPool =
+      Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+          .setNameFormat("SnapshotHandlerChoreCleaner").setDaemon(true).build());
+  private ScheduledFuture<?> snapshotHandlerChoreCleanerTask;
 
   // Restore map, with table name as key, procedure ID as value.
   // The map is always accessed and modified under the object lock using 
synchronized.
@@ -181,17 +193,21 @@ public class SnapshotManager extends 
MasterProcedureManager implements Stoppable
* @param coordinator procedure coordinator instance.  exposed for testing.
* @param pool HBase ExecutorServcie instance, exposed for testing.
*/
-  public SnapshotManager(final MasterServices master, final MetricsMaster 
metricsMaster,
-  ProcedureCoordinator coordinator, ExecutorService pool)
+  @VisibleForTesting
+  SnapshotManager(final MasterServices master, ProcedureCoordinator 
coordinator,
+  ExecutorService pool, int sentinelCleanInterval)
   throws IOException, 
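The HBASE-21582 patch above wires sentinel cleanup to a single-thread ScheduledExecutorService built from a daemon thread factory, so finished snapshot handlers are dropped even when a client never calls isSnapshotFinished. A self-contained sketch of that scheduling pattern using only java.util.concurrent (class and method names here are illustrative, and a lambda stands in for Guava's ThreadFactoryBuilder):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public final class SentinelCleanerSketch {
  // snapshot name -> completion timestamp, mirroring the sentinel map idea
  private final Map<String, Long> sentinels = new ConcurrentHashMap<>();
  private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(1, r -> {
    Thread t = new Thread(r, "SnapshotHandlerChoreCleaner"); // name echoes the patch
    t.setDaemon(true); // daemon so the chore never blocks JVM shutdown
    return t;
  });
  private ScheduledFuture<?> cleanerTask;

  /** Periodically drop entries older than timeoutMillis, as the patch does for finished handlers. */
  public void start(long timeoutMillis, long periodMillis) {
    cleanerTask = pool.scheduleWithFixedDelay(() -> {
      long cutoff = System.currentTimeMillis() - timeoutMillis;
      sentinels.values().removeIf(completedAt -> completedAt < cutoff);
    }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
  }

  public void markFinished(String snapshot) {
    sentinels.put(snapshot, System.currentTimeMillis());
  }

  public int size() {
    return sentinels.size();
  }

  public void stop() {
    if (cleanerTask != null) {
      cleanerTask.cancel(false);
    }
    pool.shutdownNow();
  }
}
```

Making the cleanup a fixed-delay chore rather than a one-shot timeout is the key design choice: the cleaner keeps running regardless of client behavior, so SnapshotHFileCleaner is no longer held up by abandoned sentinels.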

[20/47] hbase git commit: HBASE-21514: Refactor CacheConfig(addendum)

2018-12-31 Thread zhangduo
HBASE-21514: Refactor CacheConfig(addendum)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8991877b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8991877b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8991877b

Branch: refs/heads/HBASE-21512
Commit: 8991877bb250ee1fe66c2b9a491645973927d674
Parents: fb58a23
Author: Guanghao Zhang 
Authored: Tue Dec 18 16:46:34 2018 +0800
Committer: Guanghao Zhang 
Committed: Wed Dec 19 13:55:13 2018 +0800

--
 .../java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8991877b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
index 0fc9576..d095ceb 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
@@ -1300,8 +1300,8 @@ public class HFileReaderImpl implements HFile.Reader, 
Configurable {
   boolean isCompaction, boolean updateCacheMetrics, BlockType 
expectedBlockType,
   DataBlockEncoding expectedDataBlockEncoding) throws IOException {
 // Check cache for block. If found return.
-if (cacheConf.getBlockCache().isPresent()) {
-  BlockCache cache = cacheConf.getBlockCache().get();
+BlockCache cache = cacheConf.getBlockCache().orElse(null);
+if (cache != null) {
   HFileBlock cachedBlock =
   (HFileBlock) cache.getBlock(cacheKey, cacheBlock, useLock, 
updateCacheMetrics);
   if (cachedBlock != null) {



[18/47] hbase git commit: HBASE-21565 Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-31 Thread zhangduo
HBASE-21565 Delete dead server from dead server list too early leads to 
concurrent Server Crash Procedures(SCP) for a same server


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c448604c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c448604c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c448604c

Branch: refs/heads/HBASE-21512
Commit: c448604ceb987d113913f0583452b2abce04db0d
Parents: f782846
Author: Jingyun Tian 
Authored: Mon Dec 17 19:32:23 2018 +0800
Committer: Jingyun Tian 
Committed: Tue Dec 18 16:57:11 2018 +0800

--
 .../hbase/master/RegionServerTracker.java   |  3 +
 .../hadoop/hbase/master/ServerManager.java  | 25 
 .../master/assignment/AssignmentManager.java| 28 ++---
 .../hbase/master/assignment/RegionStates.java   |  3 +-
 .../hbase/master/assignment/ServerState.java|  2 +-
 .../master/assignment/ServerStateNode.java  |  2 +-
 .../master/procedure/ServerCrashProcedure.java  | 16 ++---
 .../hadoop/hbase/HBaseTestingUtility.java   |  7 ++-
 .../hadoop/hbase/master/TestRestartCluster.java | 65 
 .../procedure/TestServerCrashProcedure.java | 38 
 10 files changed, 155 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c448604c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionServerTracker.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionServerTracker.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionServerTracker.java
index f419732..9d33a21 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionServerTracker.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionServerTracker.java
@@ -128,6 +128,9 @@ public class RegionServerTracker extends ZKListener {
 // '-SPLITTING'. Each splitting server should have a corresponding SCP. 
Log if not.
 splittingServersFromWALDir.stream().filter(s -> 
!deadServersFromPE.contains(s)).
   forEach(s -> LOG.error("{} has no matching ServerCrashProcedure", s));
+//create ServerNode for all possible live servers from wal directory
+liveServersFromWALDir.stream()
+.forEach(sn -> 
server.getAssignmentManager().getRegionStates().getOrCreateServer(sn));
 watcher.registerListener(this);
 synchronized (this) {
   List servers =

http://git-wip-us.apache.org/repos/asf/hbase/blob/c448604c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
index dc76d72..86d72d1 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
@@ -602,19 +602,22 @@ public class ServerManager {
   return false;
 }
 LOG.info("Processing expiration of " + serverName + " on " + 
this.master.getServerName());
-master.getAssignmentManager().submitServerCrash(serverName, true);
-
-// Tell our listeners that a server was removed
-if (!this.listeners.isEmpty()) {
-  for (ServerListener listener : this.listeners) {
-listener.serverRemoved(serverName);
+long pid = master.getAssignmentManager().submitServerCrash(serverName, 
true);
+if(pid <= 0) {
+  return false;
+} else {
+  // Tell our listeners that a server was removed
+  if (!this.listeners.isEmpty()) {
+for (ServerListener listener : this.listeners) {
+  listener.serverRemoved(serverName);
+}
   }
+  // trigger a persist of flushedSeqId
+  if (flushedSeqIdFlusher != null) {
+flushedSeqIdFlusher.triggerNow();
+  }
+  return true;
 }
-// trigger a persist of flushedSeqId
-if (flushedSeqIdFlusher != null) {
-  flushedSeqIdFlusher.triggerNow();
-}
-return true;
   }
 
   @VisibleForTesting

http://git-wip-us.apache.org/repos/asf/hbase/blob/c448604c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
index a564ea9..b7c2203 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
+++ 

[16/47] hbase git commit: HBASE-21514 Refactor CacheConfig

2018-12-31 Thread zhangduo
HBASE-21514 Refactor CacheConfig


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1971d02e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1971d02e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1971d02e

Branch: refs/heads/HBASE-21512
Commit: 1971d02e725341fdee79b7ee2308a9870debe2f6
Parents: 68b5df0
Author: Guanghao Zhang 
Authored: Thu Nov 29 10:30:45 2018 +0800
Committer: Guanghao Zhang 
Committed: Tue Dec 18 13:43:30 2018 +0800

--
 .../tmpl/regionserver/BlockCacheTmpl.jamon  |  36 +-
 .../tmpl/regionserver/BlockCacheViewTmpl.jamon  |   3 +-
 .../hbase/tmpl/regionserver/RSStatusTmpl.jamon  |   5 +-
 .../hbase/io/hfile/BlockCacheFactory.java   | 226 +
 .../hadoop/hbase/io/hfile/CacheConfig.java  | 499 ---
 .../hbase/io/hfile/CombinedBlockCache.java  |  12 +
 .../hadoop/hbase/io/hfile/HFileBlockIndex.java  |  17 +-
 .../hadoop/hbase/io/hfile/HFileReaderImpl.java  | 176 +++
 .../hadoop/hbase/io/hfile/HFileWriterImpl.java  |   9 +-
 .../hbase/io/hfile/bucket/BucketAllocator.java  |   4 +-
 .../assignment/MergeTableRegionsProcedure.java  |  15 +-
 .../assignment/SplitTableRegionProcedure.java   |  15 +-
 .../apache/hadoop/hbase/mob/MobCacheConfig.java |  64 ---
 .../apache/hadoop/hbase/mob/MobFileCache.java   |   5 +-
 .../hadoop/hbase/regionserver/HMobStore.java|  28 +-
 .../hadoop/hbase/regionserver/HRegion.java  |  40 +-
 .../hbase/regionserver/HRegionServer.java   |  55 +-
 .../hadoop/hbase/regionserver/HStore.java   |   2 +-
 .../hbase/regionserver/HeapMemoryManager.java   |  30 +-
 .../MetricsRegionServerWrapperImpl.java | 267 +++---
 .../hbase/regionserver/RSRpcServices.java   |   3 +-
 .../regionserver/RegionServerServices.java  |  15 +-
 .../hadoop/hbase/HBaseTestingUtility.java   |  48 +-
 .../hadoop/hbase/MockRegionServerServices.java  |  13 +
 ...estAvoidCellReferencesIntoShippedBlocks.java |   4 +-
 .../client/TestBlockEvictionFromClient.java |  20 +-
 .../hadoop/hbase/client/TestFromClientSide.java |   3 +-
 .../hbase/io/encoding/TestEncodedSeekers.java   |  26 +-
 .../hbase/io/hfile/TestBlockCacheReporting.java |  47 +-
 .../hadoop/hbase/io/hfile/TestCacheConfig.java  |  53 +-
 .../hadoop/hbase/io/hfile/TestCacheOnWrite.java |  34 +-
 .../io/hfile/TestForceCacheImportantBlocks.java |  22 +-
 .../apache/hadoop/hbase/io/hfile/TestHFile.java |   9 +-
 .../hadoop/hbase/io/hfile/TestHFileBlock.java   |   3 +-
 .../hbase/io/hfile/TestHFileBlockIndex.java |  10 +-
 .../hfile/TestLazyDataBlockDecompression.java   |  20 +-
 .../hadoop/hbase/io/hfile/TestPrefetch.java |  22 +-
 .../io/hfile/TestScannerFromBucketCache.java|  58 +--
 .../TestScannerSelectionUsingKeyRange.java  |   5 +-
 .../io/hfile/TestScannerSelectionUsingTTL.java  |  31 +-
 .../hadoop/hbase/master/MockRegionServer.java   |  13 +
 .../hbase/master/TestMasterNotCarryTable.java   |   8 +-
 .../hadoop/hbase/mob/TestMobFileCache.java  |  22 +-
 .../regionserver/DataBlockEncodingTool.java |   7 +-
 .../EncodedSeekPerformanceTest.java |   2 +-
 .../hbase/regionserver/TestAtomicOperation.java |  10 +-
 .../hbase/regionserver/TestBlocksRead.java  |  66 +--
 .../hbase/regionserver/TestBlocksScanned.java   |  38 +-
 .../regionserver/TestCacheOnWriteInSchema.java  |   6 +-
 .../regionserver/TestClearRegionBlockCache.java |  46 +-
 .../regionserver/TestCompoundBloomFilter.java   |  16 +-
 .../hbase/regionserver/TestHMobStore.java   |  84 ++--
 .../hbase/regionserver/TestHStoreFile.java  |  14 +-
 .../regionserver/TestMobStoreCompaction.java|  20 +-
 .../regionserver/TestMultiColumnScanner.java|  15 +-
 .../hbase/regionserver/TestRSStatusServlet.java |   9 +-
 .../hbase/regionserver/TestRecoveredEdits.java  |  38 +-
 .../regionserver/TestRowPrefixBloomFilter.java  |   2 +-
 .../regionserver/TestSecureBulkLoadManager.java |   2 +-
 59 files changed, 1096 insertions(+), 1276 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1971d02e/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheTmpl.jamon
--
diff --git 
a/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheTmpl.jamon
 
b/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheTmpl.jamon
index 5ea5bcc..a18e6d4 100644
--- 
a/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheTmpl.jamon
+++ 
b/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/BlockCacheTmpl.jamon
@@ -20,17 +20,12 @@ Template for rendering Block Cache tabs in RegionServer 
Status page.
 <%args>
 CacheConfig cacheConfig;
 Configuration config;
+BlockCache bc;

[04/47] hbase git commit: HBASE-21575 : memstore above high watermark message is logged too much

2018-12-31 Thread zhangduo
HBASE-21575 : memstore above high watermark message is logged too much


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4640ff59
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4640ff59
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4640ff59

Branch: refs/heads/HBASE-21512
Commit: 4640ff5959af4865966126a503a7cd15e26a7408
Parents: 67d6d50
Author: Sergey Shelukhin 
Authored: Wed Dec 12 11:02:25 2018 -0800
Committer: Sergey Shelukhin 
Committed: Wed Dec 12 11:02:25 2018 -0800

--
 .../apache/hadoop/hbase/regionserver/MemStoreFlusher.java| 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4640ff59/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
index 699c9b6..804a2f8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
@@ -703,6 +703,7 @@ class MemStoreFlusher implements FlushRequester {
 if (flushType != FlushType.NORMAL) {
   TraceUtil.addTimelineAnnotation("Force Flush. We're above high water 
mark.");
   long start = EnvironmentEdgeManager.currentTime();
+  long nextLogTimeMs = start;
   synchronized (this.blockSignal) {
 boolean blocked = false;
 long startTime = 0;
@@ -744,8 +745,11 @@ class MemStoreFlusher implements FlushRequester {
   LOG.warn("Interrupted while waiting");
   interrupted = true;
 }
-long took = EnvironmentEdgeManager.currentTime() - start;
-LOG.warn("Memstore is above high water mark and block " + took + 
"ms");
+long nowMs = EnvironmentEdgeManager.currentTime();
+if (nowMs >= nextLogTimeMs) {
+  LOG.warn("Memstore is above high water mark and block {} ms", 
nowMs - start);
+  nextLogTimeMs = nowMs + 1000;
+}
 flushType = isAboveHighWaterMark();
   }
 } finally {
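The patch above throttles the "above high water mark" warning by tracking a nextLogTimeMs deadline, logging at most once per second while the flush loop is blocked. A minimal sketch of that rate-limiting idea, extracted into a helper (this class is an illustration, not code from the patch):

```java
public final class ThrottledWarning {
  private final long intervalMs;
  private long nextLogTimeMs; // earliest time the next message may be emitted

  public ThrottledWarning(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Returns true when a warning should actually be emitted: at most once per interval. */
  public synchronized boolean shouldLog(long nowMs) {
    if (nowMs >= nextLogTimeMs) {
      nextLogTimeMs = nowMs + intervalMs;
      return true;
    }
    return false;
  }
}
```

In the patch the equivalent check sits inside the blocked-wait loop, so a long stall produces one warning per second instead of one per loop iteration.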



[13/47] hbase git commit: HBASE-21589 TestCleanupMetaWAL fails

2018-12-31 Thread zhangduo
HBASE-21589 TestCleanupMetaWAL fails


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/68b5df00
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/68b5df00
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/68b5df00

Branch: refs/heads/HBASE-21512
Commit: 68b5df00951d3ee55efaa6068f4530dca17eae1f
Parents: ac0b3bb
Author: stack 
Authored: Sun Dec 16 14:15:00 2018 -0800
Committer: stack 
Committed: Mon Dec 17 09:31:59 2018 -0800

--
 .../apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java  | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/68b5df00/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java
index 4a723c0..03b3316 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCleanupMetaWAL.java
@@ -66,10 +66,13 @@ public class TestCleanupMetaWAL {
 .getRegionServer(TEST_UTIL.getMiniHBaseCluster().getServerWithMeta());
 TEST_UTIL.getAdmin()
 .move(RegionInfoBuilder.FIRST_META_REGIONINFO.getEncodedNameAsBytes(), 
null);
+LOG.info("KILL");
 
TEST_UTIL.getMiniHBaseCluster().killRegionServer(serverWithMeta.getServerName());
-TEST_UTIL.waitFor(1, () ->
+LOG.info("WAIT");
+TEST_UTIL.waitFor(3, () ->
 TEST_UTIL.getMiniHBaseCluster().getMaster().getProcedures().stream()
 .filter(p -> p instanceof ServerCrashProcedure && 
p.isFinished()).count() > 0);
+LOG.info("DONE WAITING");
 MasterFileSystem fs = 
TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterFileSystem();
 Path walPath = new Path(fs.getWALRootDir(), 
HConstants.HREGION_LOGDIR_NAME);
 for (FileStatus status : FSUtils.listStatus(fs.getFileSystem(), walPath)) {
@@ -77,7 +80,5 @@ public class TestCleanupMetaWAL {
 fail("Should not have splitting wal dir here:" + status);
   }
 }
-
-
   }
 }



[23/47] hbase git commit: HBASE-21401 Sanity check when constructing the KeyValue

2018-12-31 Thread zhangduo
HBASE-21401 Sanity check when constructing the KeyValue


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/12786f80
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/12786f80
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/12786f80

Branch: refs/heads/HBASE-21512
Commit: 12786f80c14c6f2c3c111a55bbf431fb2e81e828
Parents: ad81938
Author: huzheng 
Authored: Sat Oct 27 16:57:01 2018 +0800
Committer: huzheng 
Committed: Fri Dec 21 18:01:35 2018 +0800

--
 .../java/org/apache/hadoop/hbase/KeyValue.java  |  12 +-
 .../org/apache/hadoop/hbase/KeyValueUtil.java   | 148 +-
 .../hadoop/hbase/codec/KeyValueCodec.java   |   3 +-
 .../hbase/codec/KeyValueCodecWithTags.java  |   2 +-
 .../org/apache/hadoop/hbase/TestKeyValue.java   | 295 +--
 .../hadoop/hbase/regionserver/HStore.java   |   1 -
 .../io/encoding/TestDataBlockEncoders.java  |   2 +-
 .../hadoop/hbase/io/hfile/TestCacheOnWrite.java |   4 +-
 .../hadoop/hbase/io/hfile/TestHFileSeek.java|   9 +-
 9 files changed, 290 insertions(+), 186 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/12786f80/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
index f7f6c0d..bdaefff 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
@@ -252,6 +252,15 @@ public class KeyValue implements ExtendedCell, Cloneable {
 }
 
 /**
+ * True to indicate that the byte b is a valid type.
+ * @param b byte to check
+ * @return true or false
+ */
+static boolean isValidType(byte b) {
+  return codeArray[b & 0xff] != null;
+}
+
+/**
  * Cannot rely on enum ordinals . They change if item is removed or moved.
  * Do our own codes.
  * @param b
@@ -331,7 +340,8 @@ public class KeyValue implements ExtendedCell, Cloneable {
* @param offset offset to start of the KeyValue
* @param length length of the KeyValue
*/
-  public KeyValue(final byte [] bytes, final int offset, final int length) {
+  public KeyValue(final byte[] bytes, final int offset, final int length) {
+KeyValueUtil.checkKeyValueBytes(bytes, offset, length, true);
 this.bytes = bytes;
 this.offset = offset;
 this.length = length;

http://git-wip-us.apache.org/repos/asf/hbase/blob/12786f80/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
index 1b61d1e..16ebdbf 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
@@ -518,17 +518,145 @@ public class KeyValueUtil {
 return (long) length + Bytes.SIZEOF_INT;
   }
 
+  static String bytesToHex(byte[] buf, int offset, int length) {
+return ", KeyValueBytesHex=" + Bytes.toStringBinary(buf, offset, length) + 
", offset=" + offset
++ ", length=" + length;
+  }
+
+  static void checkKeyValueBytes(byte[] buf, int offset, int length, boolean 
withTags) {
+int pos = offset, endOffset = offset + length;
+// check the key
+if (pos + Bytes.SIZEOF_INT > endOffset) {
+  throw new IllegalArgumentException(
+  "Overflow when reading key length at position=" + pos + 
bytesToHex(buf, offset, length));
+}
+int keyLen = Bytes.toInt(buf, pos, Bytes.SIZEOF_INT);
+pos += Bytes.SIZEOF_INT;
+if (keyLen <= 0 || pos + keyLen > endOffset) {
+  throw new IllegalArgumentException(
+  "Invalid key length in KeyValue. keyLength=" + keyLen + 
bytesToHex(buf, offset, length));
+}
+// check the value
+if (pos + Bytes.SIZEOF_INT > endOffset) {
+  throw new IllegalArgumentException("Overflow when reading value length 
at position=" + pos
+  + bytesToHex(buf, offset, length));
+}
+int valLen = Bytes.toInt(buf, pos, Bytes.SIZEOF_INT);
+pos += Bytes.SIZEOF_INT;
+if (valLen < 0 || pos + valLen > endOffset) {
+  throw new IllegalArgumentException("Invalid value length in KeyValue, 
valueLength=" + valLen
+  + bytesToHex(buf, offset, length));
+}
+// check the row
+if (pos + Bytes.SIZEOF_SHORT > endOffset) {
+  throw new IllegalArgumentException(
+  "Overflow when reading row length at position=" + pos + 
bytesToHex(buf, offset, length));
+}
+short rowLen = 
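checkKeyValueBytes above walks the serialized KeyValue with a cursor, validating each length field against the remaining buffer before advancing past it. A self-contained sketch of the same cursor-plus-bounds pattern over a simplified [keyLen][key][valLen][value] layout (the class and message strings are illustrative, not HBase's exact code):

```java
import java.nio.ByteBuffer;

public final class LengthPrefixedCheck {
  /** Walks [keyLen][key][valLen][value] and throws if any length field overflows the slice. */
  public static void check(byte[] buf, int offset, int length) {
    int pos = offset;
    int end = offset + length;
    // Check there is room for the 4-byte key length before reading it.
    if (pos + 4 > end) {
      throw new IllegalArgumentException("Overflow reading key length at position=" + pos);
    }
    int keyLen = ByteBuffer.wrap(buf, pos, 4).getInt();
    pos += 4;
    // A key must be non-empty and fit entirely inside the slice.
    if (keyLen <= 0 || pos + keyLen > end) {
      throw new IllegalArgumentException("Invalid key length=" + keyLen);
    }
    pos += keyLen;
    // Same pattern for the value: bounds-check the length field, then the payload.
    if (pos + 4 > end) {
      throw new IllegalArgumentException("Overflow reading value length at position=" + pos);
    }
    int valLen = ByteBuffer.wrap(buf, pos, 4).getInt();
    pos += 4;
    if (valLen < 0 || pos + valLen > end) {
      throw new IllegalArgumentException("Invalid value length=" + valLen);
    }
  }
}
```

Validating every length against `end` before trusting it is what turns a corrupt byte[] into an immediate IllegalArgumentException at construction time, rather than an out-of-bounds read deep in a scanner.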

[11/47] hbase git commit: HBASE-21590 Optimize trySkipToNextColumn in StoreScanner a bit. (addendum)

2018-12-31 Thread zhangduo
HBASE-21590 Optimize trySkipToNextColumn in StoreScanner a bit. (addendum)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/49115348
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/49115348
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/49115348

Branch: refs/heads/HBASE-21512
Commit: 491153488ee5b19de22fd72e55dd5039399bb727
Parents: 2b003c5
Author: Sean Busbey 
Authored: Fri Dec 14 11:23:36 2018 -0600
Committer: Sean Busbey 
Committed: Fri Dec 14 17:08:22 2018 -0600

--
 .../apache/hadoop/hbase/regionserver/StoreScanner.java| 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/49115348/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index e7a4528..91ca592 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -807,8 +807,9 @@ public class StoreScanner extends 
NonReversedNonLazyKeyValueScanner
 Cell previousIndexedKey = null;
 do {
   Cell nextIndexedKey = getNextIndexedKey();
-  if (nextIndexedKey != null && nextIndexedKey != 
KeyValueScanner.NO_NEXT_INDEXED_KEY
-  && (nextIndexedKey == previousIndexedKey || 
matcher.compareKeyForNextRow(nextIndexedKey, cell) >= 0)) {
+  if (nextIndexedKey != null && nextIndexedKey != 
KeyValueScanner.NO_NEXT_INDEXED_KEY &&
+  (nextIndexedKey == previousIndexedKey ||
+  matcher.compareKeyForNextRow(nextIndexedKey, cell) >= 0)) {
 this.heap.next();
 ++kvsScanned;
 previousIndexedKey = nextIndexedKey;
@@ -832,8 +833,9 @@ public class StoreScanner extends 
NonReversedNonLazyKeyValueScanner
 Cell previousIndexedKey = null;
 do {
   Cell nextIndexedKey = getNextIndexedKey();
-  if (nextIndexedKey != null && nextIndexedKey != 
KeyValueScanner.NO_NEXT_INDEXED_KEY
-  && (nextIndexedKey == previousIndexedKey || 
matcher.compareKeyForNextColumn(nextIndexedKey, cell) >= 0)) {
+  if (nextIndexedKey != null && nextIndexedKey != 
KeyValueScanner.NO_NEXT_INDEXED_KEY &&
+  (nextIndexedKey == previousIndexedKey ||
+  matcher.compareKeyForNextColumn(nextIndexedKey, cell) >= 0)) {
 this.heap.next();
 ++kvsScanned;
 previousIndexedKey = nextIndexedKey;



[17/47] hbase git commit: HBASE-21592 quota.addGetResult(r) throw NPE

2018-12-31 Thread zhangduo
HBASE-21592 quota.addGetResult(r) throw NPE

Signed-off-by: huzheng 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f7828468
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f7828468
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f7828468

Branch: refs/heads/HBASE-21512
Commit: f78284685fc533230a0395d297ebacff32632396
Parents: 1971d02
Author: xuqinya 
Authored: Tue Dec 18 08:19:47 2018 +0800
Committer: huzheng 
Committed: Tue Dec 18 16:15:51 2018 +0800

--
 .../hadoop/hbase/regionserver/RSRpcServices.java   |  3 ++-
 .../hadoop/hbase/quotas/TestQuotaThrottle.java | 17 +
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f7828468/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 31df37a..f788a86 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -2571,7 +2571,8 @@ public class RSRpcServices implements 
HBaseRPCErrorHandler,
 }
 builder.setResult(pbr);
   }
-  if (r != null) {
+  // r.cells is null for a table.exists(get) call
+  if (r != null && r.rawCells() != null) {
 quota.addGetResult(r);
   }
   return builder.build();

http://git-wip-us.apache.org/repos/asf/hbase/blob/f7828468/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java
index e506a08..c069403 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java
@@ -553,6 +553,23 @@ public class TestQuotaThrottle {
 triggerTableCacheRefresh(true, TABLE_NAMES[0]);
   }
 
+  @Test
+  public void testTableExistsGetThrottle() throws Exception {
+final Admin admin = TEST_UTIL.getAdmin();
+
+// Add throttle quota
+admin.setQuota(QuotaSettingsFactory.throttleTable(TABLE_NAMES[0],
+ThrottleType.REQUEST_NUMBER, 100, TimeUnit.MINUTES));
+triggerTableCacheRefresh(false, TABLE_NAMES[0]);
+
+Table table = TEST_UTIL.getConnection().getTable(TABLE_NAMES[0]);
+// An exists call when having throttle quota
+table.exists(new Get(Bytes.toBytes("abc")));
+
+admin.setQuota(QuotaSettingsFactory.unthrottleTable(TABLE_NAMES[0]));
+triggerTableCacheRefresh(true, TABLE_NAMES[0]);
+  }
+
   private int doPuts(int maxOps, final Table... tables) throws Exception {
 return doPuts(maxOps, -1, tables);
   }



[38/47] hbase git commit: HBASE-21650 Add DDL operation and some other miscellaneous to thrift2

2018-12-31 Thread zhangduo
http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
--
diff --git 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
index 2fb3f76..0f27519 100644
--- 
a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
+++ 
b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
@@ -37,7 +37,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Represents a single cell and the amount to increment it by
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
 public class TColumnIncrement implements org.apache.thrift.TBase<TColumnIncrement, TColumnIncrement._Fields>, java.io.Serializable, Cloneable, Comparable<TColumnIncrement> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TColumnIncrement");
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
--
diff --git a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
index 3ceb4c0..6cded1b 100644
--- a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
@@ -37,7 +37,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Represents a single cell and its value.
  */
-@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2016-05-25")
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)", date = "2018-12-27")
 public class TColumnValue implements org.apache.thrift.TBase<TColumnValue, TColumnValue._Fields>, java.io.Serializable, Cloneable, Comparable<TColumnValue> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TColumnValue");
 
@@ -46,6 +46,7 @@ public class TColumnValue implements org.apache.thrift.TBase, SchemeFactory> schemes = new HashMap, SchemeFactory>();
   static {
@@ -58,6 +59,7 @@ public class TColumnValue implements org.apache.thrift.TBase byName = new HashMap();
 
@@ -90,6 +93,8 @@ public class TColumnValue implements org.apache.thrift.TBase metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -146,6 +152,8 @@ public class TColumnValue implements org.apache.thrift.TBase

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCompressionAlgorithm.java
--
diff --git a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCompressionAlgorithm.java b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCompressionAlgorithm.java
new file mode 100644
index 000..46799be
--- /dev/null
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCompressionAlgorithm.java
@@ -0,0 +1,60 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hbase.thrift2.generated;
+
+
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.thrift.TEnum;
+
+public enum TCompressionAlgorithm implements org.apache.thrift.TEnum {
+  LZO(0),
+  GZ(1),
+  NONE(2),
+  SNAPPY(3),
+  LZ4(4),
+  BZIP2(5),
+  ZSTD(6);
+
+  private final int value;
+
+  private TCompressionAlgorithm(int value) {
+    this.value = value;
+  }
+
+  /**
+   * Get the integer value of this enum value, as defined in the Thrift IDL.
+   */
+  public int getValue() {
+    return value;
+  }
+
+  /**
+   * Find a the enum type by its integer value, as defined in the Thrift IDL.
+   * @return null if the value is not found.
+   */
+  public static TCompressionAlgorithm findByValue(int value) {
+    switch (value) {
+      case 0:
+        return LZO;
+      case 1:
+        return GZ;
+      case 2:
+        return NONE;
+      case 3:
+        return SNAPPY;
+      case 4:
+        return LZ4;
+      case 5:
+        return BZIP2;
+      case 6:
+        return ZSTD;
+      default:
+        return null;
+    }
+  }
+}
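[Editor's sketch, not part of the commit: the int-to-constant mapping of the generated enum above can be exercised standalone. The copy below omits the org.apache.thrift.TEnum interface so it compiles without the Thrift jar; the class name TCompressionAlgorithmDemo is hypothetical.]

```java
// Minimal standalone copy of the TCompressionAlgorithm mapping from the
// generated file above. The org.apache.thrift.TEnum interface is omitted
// so this sketch compiles without the Thrift library on the classpath.
public class TCompressionAlgorithmDemo {
  enum TCompressionAlgorithm {
    LZO(0), GZ(1), NONE(2), SNAPPY(3), LZ4(4), BZIP2(5), ZSTD(6);

    private final int value;

    TCompressionAlgorithm(int value) { this.value = value; }

    int getValue() { return value; }

    // Mirrors the generated findByValue: unknown wire values map to null
    // rather than throwing, so newer servers stay readable by old clients.
    static TCompressionAlgorithm findByValue(int value) {
      for (TCompressionAlgorithm a : values()) {
        if (a.getValue() == value) {
          return a;
        }
      }
      return null;
    }
  }

  public static void main(String[] args) {
    // Round-trip every constant through its IDL integer value.
    for (TCompressionAlgorithm a : TCompressionAlgorithm.values()) {
      if (TCompressionAlgorithm.findByValue(a.getValue()) != a) {
        throw new AssertionError("round-trip failed for " + a);
      }
    }
    System.out.println(TCompressionAlgorithmDemo.TCompressionAlgorithm.findByValue(6));  // ZSTD
    System.out.println(TCompressionAlgorithmDemo.TCompressionAlgorithm.findByValue(7));  // null
  }
}
```

Returning null for unknown values (instead of throwing) is the Thrift 0.9.x convention for forward compatibility of enums on the wire.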

http://git-wip-us.apache.org/repos/asf/hbase/blob/7820ba1d/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDataBlockEncoding.java

[09/47] hbase git commit: HBASE-21578 Fix wrong throttling exception for capacity unit

2018-12-31 Thread zhangduo
HBASE-21578 Fix wrong throttling exception for capacity unit

Signed-off-by: Guanghao Zhang 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1b08ba73
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1b08ba73
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1b08ba73

Branch: refs/heads/HBASE-21512
Commit: 1b08ba7385d0dd914a6fb9722b786e4ece116b28
Parents: 3ff274e
Author: meiyi 
Authored: Fri Dec 14 09:42:48 2018 +0800
Committer: Guanghao Zhang 
Committed: Fri Dec 14 18:17:47 2018 +0800

--
 .../hbase/quotas/RpcThrottlingException.java| 21 ++--
 .../hadoop/hbase/quotas/TimeBasedLimiter.java   |  8 
 2 files changed, 23 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1b08ba73/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/RpcThrottlingException.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/RpcThrottlingException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/RpcThrottlingException.java
index 9baf91f..4c48f65 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/RpcThrottlingException.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/RpcThrottlingException.java
@@ -29,13 +29,15 @@ public class RpcThrottlingException extends HBaseIOException {
   @InterfaceAudience.Public
   public enum Type {
     NumRequestsExceeded, RequestSizeExceeded, NumReadRequestsExceeded, NumWriteRequestsExceeded,
-    WriteSizeExceeded, ReadSizeExceeded,
+    WriteSizeExceeded, ReadSizeExceeded, RequestCapacityUnitExceeded, ReadCapacityUnitExceeded,
+    WriteCapacityUnitExceeded
   }
 
   private static final String[] MSG_TYPE =
       new String[] { "number of requests exceeded", "request size limit exceeded",
         "number of read requests exceeded", "number of write requests exceeded",
-        "write size limit exceeded", "read size limit exceeded", };
+        "write size limit exceeded", "read size limit exceeded", "request capacity unit exceeded",
+        "read capacity unit exceeded", "write capacity unit exceeded" };
 
   private static final String MSG_WAIT = " - wait ";
@@ -100,6 +102,21 @@ public class RpcThrottlingException extends HBaseIOException {
     throwThrottlingException(Type.ReadSizeExceeded, waitInterval);
   }
 
+  public static void throwRequestCapacityUnitExceeded(final long waitInterval)
+      throws RpcThrottlingException {
+    throwThrottlingException(Type.RequestCapacityUnitExceeded, waitInterval);
+  }
+
+  public static void throwReadCapacityUnitExceeded(final long waitInterval)
+      throws RpcThrottlingException {
+    throwThrottlingException(Type.ReadCapacityUnitExceeded, waitInterval);
+  }
+
+  public static void throwWriteCapacityUnitExceeded(final long waitInterval)
+      throws RpcThrottlingException {
+    throwThrottlingException(Type.WriteCapacityUnitExceeded, waitInterval);
+  }
+
   private static void throwThrottlingException(final Type type, final long waitInterval)
       throws RpcThrottlingException {
     String msg = MSG_TYPE[type.ordinal()] + MSG_WAIT + StringUtils.formatTime(waitInterval);
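[Editor's sketch, not part of the commit: the message scheme visible in this hunk uses Type.ordinal() to index MSG_TYPE, so the enum and the array must stay in lockstep. The standalone sketch below shows that contract; formatTime is a stand-in for Hadoop's StringUtils.formatTime, and the class name ThrottleMessageDemo is hypothetical.]

```java
// Standalone sketch of RpcThrottlingException's message scheme: each Type
// constant selects its text by ordinal, so Type and MSG_TYPE must be kept
// in the same order and length. formatTime is a simplified stand-in for
// org.apache.hadoop.util.StringUtils.formatTime.
public class ThrottleMessageDemo {
  enum Type {
    NumRequestsExceeded, RequestSizeExceeded, NumReadRequestsExceeded, NumWriteRequestsExceeded,
    WriteSizeExceeded, ReadSizeExceeded, RequestCapacityUnitExceeded, ReadCapacityUnitExceeded,
    WriteCapacityUnitExceeded
  }

  static final String[] MSG_TYPE = { "number of requests exceeded",
      "request size limit exceeded", "number of read requests exceeded",
      "number of write requests exceeded", "write size limit exceeded",
      "read size limit exceeded", "request capacity unit exceeded",
      "read capacity unit exceeded", "write capacity unit exceeded" };

  static final String MSG_WAIT = " - wait ";

  // Hypothetical stand-in for Hadoop's StringUtils.formatTime.
  static String formatTime(long millis) {
    return millis + "ms";
  }

  static String message(Type type, long waitInterval) {
    // Ordinal indexing: silently wrong if the two declarations drift apart.
    return MSG_TYPE[type.ordinal()] + MSG_WAIT + formatTime(waitInterval);
  }

  public static void main(String[] args) {
    if (Type.values().length != MSG_TYPE.length) {
      throw new AssertionError("Type and MSG_TYPE out of sync");
    }
    System.out.println(message(Type.RequestCapacityUnitExceeded, 10000));
  }
}
```

The length check in main is the kind of invariant this diff relies on: appending the three capacity-unit constants required appending exactly three matching message strings.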

http://git-wip-us.apache.org/repos/asf/hbase/blob/1b08ba73/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java
index 771eed1..6b5349f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java
@@ -148,7 +148,7 @@ public class TimeBasedLimiter implements QuotaLimiter {
         reqSizeLimiter.waitInterval(estimateWriteSize + estimateReadSize));
     }
     if (!reqCapacityUnitLimiter.canExecute(estimateWriteCapacityUnit + estimateReadCapacityUnit)) {
-      RpcThrottlingException.throwRequestSizeExceeded(
+      RpcThrottlingException.throwRequestCapacityUnitExceeded(
         reqCapacityUnitLimiter.waitInterval(estimateWriteCapacityUnit + estimateReadCapacityUnit));
     }
 
@@ -161,7 +161,7 @@ public class TimeBasedLimiter implements QuotaLimiter {
         writeSizeLimiter.waitInterval(estimateWriteSize));
     }
     if (!writeCapacityUnitLimiter.canExecute(estimateWriteCapacityUnit)) {
-      RpcThrottlingException.throwWriteSizeExceeded(
+      RpcThrottlingException.throwWriteCapacityUnitExceeded(
         writeCapacityUnitLimiter.waitInterval(estimateWriteCapacityUnit));
     }
   }
@@ -175,8 +175,8 @@ 
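[Editor's sketch, not part of the commit: the bug fixed here is that the capacity-unit limiters threw the size-limit exception types when they tripped. The minimal limiter below has the same canExecute/waitInterval shape as the calls in this hunk; it is NOT HBase's TimeBasedLimiter, and SimpleLimiter, its linear wait model, and the class name CapacityUnitLimiterDemo are all illustrative assumptions.]

```java
// Illustrative limiter with the canExecute/waitInterval shape used in the
// hunk above. This is a crude fixed-window model, not HBase's TimeBasedLimiter;
// it only shows why each limiter must pair with its own exception type.
public class CapacityUnitLimiterDemo {
  static class SimpleLimiter {
    private final long limitPerMinute;
    private long consumed;

    SimpleLimiter(long limitPerMinute) { this.limitPerMinute = limitPerMinute; }

    boolean canExecute(long amount) { return consumed + amount <= limitPerMinute; }

    // Suggested wait (ms) before retrying; simple linear model over a minute.
    long waitInterval(long amount) {
      long overflow = consumed + amount - limitPerMinute;
      return overflow <= 0 ? 0 : overflow * 60_000L / limitPerMinute;
    }

    void consume(long amount) { consumed += amount; }
  }

  public static void main(String[] args) {
    SimpleLimiter reqCapacityUnitLimiter = new SimpleLimiter(10);
    reqCapacityUnitLimiter.consume(8);

    long estimate = 4;  // next request's estimated capacity units
    if (!reqCapacityUnitLimiter.canExecute(estimate)) {
      // Before the fix this path reported "request size limit exceeded";
      // after it, the message names the capacity-unit limit that tripped.
      System.out.println("request capacity unit exceeded - wait "
          + reqCapacityUnitLimiter.waitInterval(estimate) + "ms");
    }
  }
}
```

The point of the fix is observable only in the exception type and message: the throttling decision itself was already correct, but clients retrying on a quota error were told the wrong limit had been hit.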

[46/47] hbase git commit: HBASE-21516 Use AsyncConnection instead of Connection in SecureBulkLoadManager

2018-12-31 Thread zhangduo
HBASE-21516 Use AsyncConnection instead of Connection in SecureBulkLoadManager


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a13292db
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a13292db
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a13292db

Branch: refs/heads/HBASE-21512
Commit: a13292db13ac8d09940fe8c1e9ec6b3e84b09271
Parents: f3caa01
Author: zhangduo 
Authored: Sat Dec 1 21:15:48 2018 +0800
Committer: zhangduo 
Committed: Mon Dec 31 20:34:24 2018 +0800

--
 .../hadoop/hbase/protobuf/ProtobufUtil.java |  5 +-
 .../hbase/shaded/protobuf/ProtobufUtil.java |  7 ++-
 .../hbase/regionserver/HRegionServer.java   |  2 +-
 .../regionserver/SecureBulkLoadManager.java | 24 +
 .../hadoop/hbase/security/token/TokenUtil.java  | 57 +++-
 .../hbase/security/token/TestTokenUtil.java | 42 +++
 6 files changed, 96 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a13292db/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
index a3d49b5..d9e620b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
@@ -261,13 +261,12 @@ public final class ProtobufUtil {
    * just {@link ServiceException}. Prefer this method to
    * {@link #getRemoteException(ServiceException)} because trying to
    * contain direct protobuf references.
-   * @param e
    */
-  public static IOException handleRemoteException(Exception e) {
+  public static IOException handleRemoteException(Throwable e) {
     return makeIOExceptionOfException(e);
   }
 
-  private static IOException makeIOExceptionOfException(Exception e) {
+  private static IOException makeIOExceptionOfException(Throwable e) {
     Throwable t = e;
     if (e instanceof ServiceException ||
         e instanceof org.apache.hbase.thirdparty.com.google.protobuf.ServiceException) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/a13292db/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
index fea81f1..de2fb7d 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
@@ -40,7 +40,6 @@ import java.util.concurrent.TimeUnit;
 import java.util.function.Function;
 import java.util.regex.Pattern;
 import java.util.stream.Collectors;
-
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.ByteBufferExtendedCell;
@@ -123,6 +122,7 @@ import org.apache.hbase.thirdparty.com.google.protobuf.Service;
 import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
 import org.apache.hbase.thirdparty.com.google.protobuf.TextFormat;
 import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations;
+
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.CloseRegionRequest;
 import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.GetOnlineRegionRequest;
@@ -343,13 +343,12 @@ public final class ProtobufUtil {
    * just {@link ServiceException}. Prefer this method to
    * {@link #getRemoteException(ServiceException)} because trying to
    * contain direct protobuf references.
-   * @param e
    */
-  public static IOException handleRemoteException(Exception e) {
+  public static IOException handleRemoteException(Throwable e) {
     return makeIOExceptionOfException(e);
   }
 
-  private static IOException makeIOExceptionOfException(Exception e) {
+  private static IOException makeIOExceptionOfException(Throwable e) {
     Throwable t = e;
     if (e instanceof ServiceException) {
       t = e.getCause();
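[Editor's sketch, not part of the commit: both ProtobufUtil hunks widen handleRemoteException from Exception to Throwable and unwrap a ServiceException to its cause. The standalone version below illustrates that unwrap-and-convert pattern; the local ServiceException class is a stand-in for the protobuf type so the sketch compiles without the protobuf jars, and the wrap-as-IOException fallback is a simplification of what HBase actually returns.]

```java
import java.io.IOException;

// Sketch of the unwrap logic changed in this hunk: accept any Throwable,
// peel a ServiceException wrapper off to reach the real remote failure,
// and surface the result as an IOException. ServiceException here is a
// local stand-in for the protobuf class.
public class RemoteExceptionDemo {
  static class ServiceException extends Exception {
    ServiceException(Throwable cause) { super(cause); }
  }

  public static IOException handleRemoteException(Throwable e) {
    return makeIOExceptionOfException(e);
  }

  private static IOException makeIOExceptionOfException(Throwable e) {
    Throwable t = e;
    if (e instanceof ServiceException) {
      t = e.getCause();  // unwrap to the real remote failure
    }
    if (t instanceof IOException) {
      return (IOException) t;  // already an IOException: pass through
    }
    return new IOException(t);  // otherwise wrap (simplified fallback)
  }

  public static void main(String[] args) {
    IOException cause = new IOException("remote failure");
    IOException unwrapped = handleRemoteException(new ServiceException(cause));
    System.out.println(unwrapped == cause);  // wrapper peeled off: true
  }
}
```

Widening the parameter to Throwable matters for the AsyncConnection work in this commit: async callbacks deliver failures as Throwable, so callers no longer need an unchecked cast before converting.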

http://git-wip-us.apache.org/repos/asf/hbase/blob/a13292db/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java