hbase git commit: HBASE-14939 Document bulk loaded hfile replication

2018-12-26 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 4281cb3b9 -> c55208887


HBASE-14939 Document bulk loaded hfile replication

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c5520888
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c5520888
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c5520888

Branch: refs/heads/master
Commit: c5520888779235a334583f7c369dcee49518e165
Parents: 4281cb3
Author: Wei-Chiu Chuang 
Authored: Wed Dec 26 20:14:18 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Dec 26 20:14:18 2018 +0530

--
 src/main/asciidoc/_chapters/architecture.adoc | 32 ++
 1 file changed, 26 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c5520888/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 17e9e13..27db26a 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2543,12 +2543,6 @@ The most straightforward method is to either use the `TableOutputFormat` class f
 The bulk load feature uses a MapReduce job to output table data in HBase's internal data format, and then directly loads the generated StoreFiles into a running cluster.
 Using bulk load will use less CPU and network resources than simply using the HBase API.
 
-[[arch.bulk.load.limitations]]
-=== Bulk Load Limitations
-
-As bulk loading bypasses the write path, the WAL doesn't get written to as part of the process.
-Replication works by reading the WAL files so it won't see the bulk loaded data – and the same goes for the edits that use `Put.setDurability(SKIP_WAL)`. One way to handle that is to ship the raw files or the HFiles to the other cluster and do the other processing there.
-
 [[arch.bulk.load.arch]]
 === Bulk Load Architecture
 
@@ -2601,6 +2595,32 @@ To get started doing so, dig into `ImportTsv.java` and check the JavaDoc for HFi
 The import step of the bulk load can also be done programmatically.
 See the `LoadIncrementalHFiles` class for more information.
 
+[[arch.bulk.load.replication]]
+=== Bulk Loading Replication
+HBASE-13153 adds replication support for bulk loaded HFiles, available since HBase 1.3/2.0. This feature is enabled by setting `hbase.replication.bulkload.enabled` to `true` (default is `false`).
+You also need to copy the source cluster configuration files to the destination cluster.
+
+Additional configurations are required too:
+
+. `hbase.replication.source.fs.conf.provider`
++
+This defines the class which loads the source cluster file system client configuration in the destination cluster. This should be configured for all the RS in the destination cluster. Default is `org.apache.hadoop.hbase.replication.regionserver.DefaultSourceFSConfigurationProvider`.
++
+. `hbase.replication.conf.dir`
++
+This represents the base directory where the file system client configurations of the source cluster are copied to the destination cluster. This should be configured for all the RS in the destination cluster. Default is `$HBASE_CONF_DIR`.
++
+. `hbase.replication.cluster.id`
++
+This configuration is required in the cluster where replication for bulk loaded data is enabled. A source cluster is uniquely identified by the destination cluster using this id. This should be configured in the configuration file of all the RS in the source cluster.
+
+For example: If source cluster FS client configurations are copied to the destination cluster under directory `/home/user/dc1/`, then `hbase.replication.cluster.id` should be configured as `dc1` and `hbase.replication.conf.dir` as `/home/user`.
+
+NOTE: `DefaultSourceFSConfigurationProvider` supports only `xml` type files. It loads the source cluster FS client configuration only once, so if the source cluster FS client configuration files are updated, every peer cluster RS must be restarted to reload the configuration.
+
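Taken together, the settings described above map onto `hbase-site.xml` entries like the following sketch. The `dc1` id and `/home/user` directory are the illustrative values from the example in the documentation, not defaults; adjust them to your clusters.

```xml
<!-- hbase-site.xml on every RegionServer in the DESTINATION cluster -->
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.replication.conf.dir</name>
  <value>/home/user</value>
</property>

<!-- hbase-site.xml on every RegionServer in the SOURCE cluster -->
<property>
  <name>hbase.replication.cluster.id</name>
  <value>dc1</value>
</property>
```

With this layout, the destination cluster resolves the source cluster's FS client configuration under `/home/user/dc1/`.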
 [[arch.hdfs]]
 == HDFS
 



hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 9cc38c0ba -> 35a19c5cf


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/35a19c5c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/35a19c5c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/35a19c5c

Branch: refs/heads/branch-1.2
Commit: 35a19c5cf354fb56b12de3f9f0e9ab630021a509
Parents: 9cc38c0
Author: Ashish Singhi 
Authored: Mon Jun 4 14:24:13 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:24:13 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 145 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  48 ++
 3 files changed, 197 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/35a19c5c/hbase-examples/pom.xml
--
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index db05b53..f6d87fc 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -151,6 +151,10 @@
       <artifactId>zookeeper</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-rest</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.protobuf</groupId>
       <artifactId>protobuf-java</artifactId>
     </dependency>

http://git-wip-us.apache.org/repos/asf/hbase/blob/35a19c5c/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
--
diff --git a/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java b/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
new file mode 100644
index 000..7a23554
--- /dev/null
+++ b/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
@@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import com.google.common.base.Preconditions;
+
+import java.security.PrivilegedExceptionAction;
+import java.util.HashMap;
+import java.util.Map;
+
+import javax.security.auth.Subject;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import javax.security.auth.login.LoginContext;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.RemoteHTable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+@InterfaceAudience.Private
+public class RESTDemoClient {
+
+  private static String host = "localhost";
+  private static int port = 9090;
+  private static boolean secure = false;
+  private static org.apache.hadoop.conf.Configuration conf = null;
+
+  public static void main(String[] args) throws Exception {
+    System.out.println("REST Demo");
+    System.out.println("Usage: RESTDemoClient [host=localhost] [port=9090] [secure=false]");
+    System.out.println("This demo assumes you have a table called \"example\""
+        + " with a column family called \"family1\"");
+
+    // use passed in arguments instead of defaults
+    if (args.length >= 1) {
+      host = args[0];
+    }
+    if (args.length >= 2) {
+      port = Integer.parseInt(args[1]);
+    }
+    conf = HBaseConfiguration.create();
+    String principal = conf.get(Constants.REST_KERBEROS_PRINCIPAL);
+    if (principal != null) {
+      secure = true;
+    }
+    if (args.length >= 3) {
+      secure = Boolean.parseBoolean(args[2]);
+    }
+
+    final RESTDemoClient client = new RESTDemoClient();
+    Subject.doAs(getSubject(), new PrivilegedExceptionAction<Void>() {
+      @Override
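The demo above (truncated by the mailer) wraps its client calls in `Subject.doAs(...)` so that the REST requests run under the JAAS identity produced by the class's `getSubject()` helper. A minimal, standalone illustration of that JDK pattern follows; no HBase or Kerberos is involved here, the `Subject` is empty, and the string-returning action is only a stand-in for the demo's REST calls:

```java
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;

public class DoAsPattern {
  public static void main(String[] args) throws Exception {
    // An empty Subject; RESTDemoClient would populate one via a Kerberos LoginContext.
    Subject subject = new Subject();

    // Code executed inside doAs runs with the subject's identity associated,
    // which is what lets a secure client negotiate with the server.
    String result = Subject.doAs(subject, new PrivilegedExceptionAction<String>() {
      @Override
      public String run() {
        return "action ran as subject";
      }
    });
    System.out.println(result);
  }
}
```

In the real client, everything that talks to the REST server goes inside `run()`, so the authenticated identity is in scope for the HTTP negotiation.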

hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 4f0d3ff74 -> bc2d66892


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bc2d6689
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bc2d6689
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bc2d6689

Branch: refs/heads/branch-1.3
Commit: bc2d6689293a2ac794f0e7e1855ee7ff36e22a06
Parents: 4f0d3ff
Author: Ashish Singhi 
Authored: Mon Jun 4 14:22:16 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:22:16 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 145 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  48 ++
 3 files changed, 197 insertions(+)
--



hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 cc53ab37b -> 95e5dee54


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/95e5dee5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/95e5dee5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/95e5dee5

Branch: refs/heads/branch-1.4
Commit: 95e5dee54ffa6fab4d4ac812b8f95771f8c63438
Parents: cc53ab3
Author: Ashish Singhi 
Authored: Mon Jun 4 14:19:12 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:19:12 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 145 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  48 ++
 3 files changed, 197 insertions(+)
--



hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 76b1fb77f -> 3130cf4dc


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3130cf4d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3130cf4d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3130cf4d

Branch: refs/heads/branch-2.0
Commit: 3130cf4dc8fe5d2718a0f781d882ab4694b827bb
Parents: 76b1fb7
Author: Ashish Singhi 
Authored: Mon Jun 4 14:15:25 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:15:25 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 144 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  55 ++-
 3 files changed, 200 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3130cf4d/hbase-examples/pom.xml
--
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index 53fa7e7..66c3c22 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -183,6 +183,10 @@
       <artifactId>findbugs-annotations</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-rest</artifactId>
+    </dependency>
+    <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
       <scope>test</scope>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3130cf4d/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
--
diff --git a/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java b/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
new file mode 100644
index 000..19fae47
--- /dev/null
+++ b/hbase-examples/src/main/java/org/apache/hadoop/hbase/rest/RESTDemoClient.java
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.security.PrivilegedExceptionAction;
+import java.util.HashMap;
+import java.util.Map;
+
+import javax.security.auth.Subject;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import javax.security.auth.login.LoginContext;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.RemoteHTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+@InterfaceAudience.Private
+public class RESTDemoClient {
+
+  private static String host = "localhost";
+  private static int port = 9090;
+  private static boolean secure = false;
+  private static org.apache.hadoop.conf.Configuration conf = null;
+
+  public static void main(String[] args) throws Exception {
+    System.out.println("REST Demo");
+    System.out.println("Usage: RESTDemoClient [host=localhost] [port=9090] [secure=false]");
+    System.out.println("This demo assumes you have a table called \"example\""
+        + " with a column family called \"family1\"");
+
+    // use passed in arguments instead of defaults
+    if (args.length >= 1) {
+      host = args[0];
+    }
+    if (args.length >= 2) {
+      port = Integer.parseInt(args[1]);
+    }
+    conf = HBaseConfiguration.create();
+    String principal = conf.get(Constants.REST_KERBEROS_PRINCIPAL);
+    if (principal != null) {
+      secure = true;
+    }
+    if (args.length >= 3) {
+      secure = Boolean.parseBoolean(args[2]);
+    }
+
+    final RESTDemoClient client = new RESTDemoClient();
+    Subject.doAs(getSubject(), new PrivilegedExceptionAction<Void>() {

hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 81937df3b -> 805e2db3e


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/805e2db3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/805e2db3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/805e2db3

Branch: refs/heads/branch-2
Commit: 805e2db3e2cddd47b9caa1bcd6ebea9b4159e365
Parents: 81937df
Author: Ashish Singhi 
Authored: Mon Jun 4 14:13:42 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:13:42 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 144 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  55 ++-
 3 files changed, 200 insertions(+), 3 deletions(-)
--



hbase git commit: HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

2018-06-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 1b716ad5c -> 7da0015a3


HBASE-20590 REST Java client is not able to negotiate with the server in the secure mode

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7da0015a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7da0015a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7da0015a

Branch: refs/heads/master
Commit: 7da0015a3b58a28ccbae0b03ba7de9ce62b751e1
Parents: 1b716ad
Author: Ashish Singhi 
Authored: Mon Jun 4 14:11:19 2018 +0530
Committer: Ashish Singhi 
Committed: Mon Jun 4 14:11:19 2018 +0530

--
 hbase-examples/pom.xml  |   4 +
 .../hadoop/hbase/rest/RESTDemoClient.java   | 144 +++
 .../apache/hadoop/hbase/rest/client/Client.java |  55 ++-
 3 files changed, 200 insertions(+), 3 deletions(-)
--



hbase git commit: Addendum HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 ef9fd29ca -> 797a35276


Addendum HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/797a3527
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/797a3527
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/797a3527

Branch: refs/heads/branch-1.2
Commit: 797a352763110413c4e806770ca13c74ef2a13ea
Parents: ef9fd29
Author: Ashish Singhi 
Authored: Thu May 10 23:19:56 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 23:19:56 2018 +0530

--
 .../org/apache/hadoop/hbase/rest/RESTServer.java |  8 +++-
 .../hbase/rest/HBaseRESTTestingUtility.java  |  2 +-
 .../apache/hadoop/hbase/util/HttpServerUtil.java | 19 ---
 3 files changed, 20 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/797a3527/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index 28d0bc2..d603331 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -67,6 +67,11 @@ import com.sun.jersey.spi.container.servlet.ServletContainer;
 @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
 public class RESTServer implements Constants {
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. It is disabled by
+  // default to maintain backward compatibility.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = false;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("bin/hbase rest start", "", options,
@@ -240,7 +245,8 @@ public class RESTServer implements Constants {
   filter = filter.trim();
   context.addFilter(Class.forName(filter), "/*", 0);
 }
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);
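The key added in this hunk is read through the standard Hadoop `Configuration`, so operators can re-enable the OPTIONS method from hbase-site.xml. A hypothetical snippet for illustration only (the property name comes from the diff above; `true` is not the default on this branch):

```xml
<!-- hbase-site.xml: allow the HTTP OPTIONS method on the REST gateway -->
<property>
  <name>hbase.rest.http.allow.options.method</name>
  <value>true</value>
</property>
```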

http://git-wip-us.apache.org/repos/asf/hbase/blob/797a3527/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
index 628b17c..5624c59 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
@@ -75,7 +75,7 @@ public class HBaseRESTTestingUtility {
   filter = filter.trim();
   context.addFilter(Class.forName(filter), "/*", 0);
 }
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, false);
 LOG.info("Loaded filter classes :" + filterClasses);
   // start the server
 server.start();

http://git-wip-us.apache.org/repos/asf/hbase/blob/797a3527/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
index a66251f..1811bac 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
@@ -29,8 +29,9 @@ public class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param context The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(Context context) {
+  public static void constrainHttpMethods(Context context, boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 c.setAuthenticate(true);
 
@@ -39,13 +40,17 @@ public class HttpServerUtil {
 

hbase git commit: Addendum HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 939ee7fd0 -> 75e7714d2


Addendum HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/75e7714d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/75e7714d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/75e7714d

Branch: refs/heads/branch-1.3
Commit: 75e7714d2057917523bb66464de921f180099f71
Parents: 939ee7f
Author: Ashish Singhi 
Authored: Thu May 10 23:18:33 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 23:18:33 2018 +0530

--
 .../java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/75e7714d/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
index 628b17c..5624c59 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
@@ -75,7 +75,7 @@ public class HBaseRESTTestingUtility {
   filter = filter.trim();
   context.addFilter(Class.forName(filter), "/*", 0);
 }
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, false);
 LOG.info("Loaded filter classes :" + filterClasses);
   // start the server
 server.start();



hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 d5f393852 -> 939ee7fd0


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/939ee7fd
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/939ee7fd
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/939ee7fd

Branch: refs/heads/branch-1.3
Commit: 939ee7fd0e5dfb3ab2c72a54c9929f28bc1c8331
Parents: d5f3938
Author: Ashish Singhi 
Authored: Thu May 10 23:14:46 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 23:14:46 2018 +0530

--
 .../org/apache/hadoop/hbase/rest/RESTServer.java |  8 +++-
 .../apache/hadoop/hbase/util/HttpServerUtil.java | 19 ---
 2 files changed, 19 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/939ee7fd/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index 28d0bc2..d603331 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -67,6 +67,11 @@ import com.sun.jersey.spi.container.servlet.ServletContainer;
 @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
 public class RESTServer implements Constants {
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. It is disabled by
+  // default to maintain backward compatibility.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = false;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("bin/hbase rest start", "", options,
@@ -240,7 +245,8 @@ public class RESTServer implements Constants {
   filter = filter.trim();
   context.addFilter(Class.forName(filter), "/*", 0);
 }
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);

http://git-wip-us.apache.org/repos/asf/hbase/blob/939ee7fd/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
index a66251f..1811bac 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
@@ -29,8 +29,9 @@ public class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param context The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(Context context) {
+  public static void constrainHttpMethods(Context context, boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 c.setAuthenticate(true);
 
@@ -39,13 +40,17 @@ public class HttpServerUtil {
 cmt.setMethod("TRACE");
 cmt.setPathSpec("/*");
 
-ConstraintMapping cmo = new ConstraintMapping();
-cmo.setConstraint(c);
-cmo.setMethod("OPTIONS");
-cmo.setPathSpec("/*");
-
 SecurityHandler sh = new SecurityHandler();
-sh.setConstraintMappings(new ConstraintMapping[]{ cmt, cmo });
+
+if (!allowOptionsMethod) {
+  ConstraintMapping cmo = new ConstraintMapping();
+  cmo.setConstraint(c);
+  cmo.setMethod("OPTIONS");
+  cmo.setPathSpec("/*");
+  sh.setConstraintMappings(new ConstraintMapping[] { cmt, cmo });
+} else {
+  sh.setConstraintMappings(new ConstraintMapping[] { cmt });
+}
 
 context.addHandler(sh);
   }



hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 dc7b33eb6 -> 4e5c534ce


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4e5c534c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4e5c534c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4e5c534c

Branch: refs/heads/branch-1.4
Commit: 4e5c534ce53fe0cfbe18c79c6b202903bca37fcc
Parents: dc7b33e
Author: Ashish Singhi 
Authored: Thu May 10 23:11:51 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 23:11:51 2018 +0530

--
 .../org/apache/hadoop/hbase/rest/RESTServer.java |  8 +++-
 .../hbase/rest/HBaseRESTTestingUtility.java  |  2 +-
 .../apache/hadoop/hbase/util/HttpServerUtil.java | 19 ---
 3 files changed, 20 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4e5c534c/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index d25af1e..bd12bc8 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -80,6 +80,11 @@ public class RESTServer implements Constants {
  static String REST_CSRF_METHODS_TO_IGNORE_KEY = "hbase.rest.csrf.methods.to.ignore";
   static String REST_CSRF_METHODS_TO_IGNORE_DEFAULT = "GET,OPTIONS,HEAD,TRACE";
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. It is disabled by
+  // default to maintain backward compatibility.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = false;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("bin/hbase rest start", "", options,
@@ -294,7 +299,8 @@ public class RESTServer implements Constants {
   context.addFilter(Class.forName(filter), "/*", 0);
 }
 addCSRFFilter(context, conf);
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);

http://git-wip-us.apache.org/repos/asf/hbase/blob/4e5c534c/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
index e319704..200c519 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
@@ -79,7 +79,7 @@ public class HBaseRESTTestingUtility {
 }
 conf.set(RESTServer.REST_CSRF_BROWSER_USERAGENTS_REGEX_KEY, ".*");
 RESTServer.addCSRFFilter(context, conf);
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, false);
 LOG.info("Loaded filter classes :" + Arrays.toString(filterClasses));
   // start the server
 server.start();

http://git-wip-us.apache.org/repos/asf/hbase/blob/4e5c534c/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
index a66251f..1811bac 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
@@ -29,8 +29,9 @@ public class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param context The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(Context context) {
+  public static void constrainHttpMethods(Context context, boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 

hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 c191462ac -> ca544a155


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ca544a15
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ca544a15
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ca544a15

Branch: refs/heads/branch-1
Commit: ca544a155c7053a09ec0fee4c494e770a209a38b
Parents: c191462
Author: Ashish Singhi 
Authored: Thu May 10 22:49:08 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 22:49:08 2018 +0530

--
 .../org/apache/hadoop/hbase/rest/RESTServer.java |  7 ++-
 .../hbase/rest/HBaseRESTTestingUtility.java  |  2 +-
 .../apache/hadoop/hbase/util/HttpServerUtil.java | 19 ---
 .../hadoop/hbase/thrift/ThriftServerRunner.java  |  6 +-
 4 files changed, 24 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ca544a15/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index d25af1e..be4b130 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -79,6 +79,10 @@ public class RESTServer implements Constants {
   static String REST_CSRF_CUSTOM_HEADER_DEFAULT = "X-XSRF-HEADER";
  static String REST_CSRF_METHODS_TO_IGNORE_KEY = "hbase.rest.csrf.methods.to.ignore";
   static String REST_CSRF_METHODS_TO_IGNORE_DEFAULT = "GET,OPTIONS,HEAD,TRACE";
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. It is disabled by
+  // default to maintain backward compatibility.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = false;
 
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
@@ -294,7 +298,8 @@ public class RESTServer implements Constants {
   context.addFilter(Class.forName(filter), "/*", 0);
 }
 addCSRFFilter(context, conf);
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);

http://git-wip-us.apache.org/repos/asf/hbase/blob/ca544a15/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
index e319704..200c519 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
@@ -79,7 +79,7 @@ public class HBaseRESTTestingUtility {
 }
 conf.set(RESTServer.REST_CSRF_BROWSER_USERAGENTS_REGEX_KEY, ".*");
 RESTServer.addCSRFFilter(context, conf);
-HttpServerUtil.constrainHttpMethods(context);
+HttpServerUtil.constrainHttpMethods(context, false);
 LOG.info("Loaded filter classes :" + Arrays.toString(filterClasses));
   // start the server
 server.start();

http://git-wip-us.apache.org/repos/asf/hbase/blob/ca544a15/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
index a66251f..1811bac 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HttpServerUtil.java
@@ -29,8 +29,9 @@ public class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param context The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(Context context) {
+  public static void constrainHttpMethods(Context context, boolean allowOptionsMethod) {

hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 61f96b6ff -> 32b114e86


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/32b114e8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/32b114e8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/32b114e8

Branch: refs/heads/branch-2
Commit: 32b114e86b071ff199867cd7173a0964364c3984
Parents: 61f96b6
Author: Ashish Singhi 
Authored: Thu May 10 22:47:44 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 22:47:44 2018 +0530

--
 .../hadoop/hbase/http/HttpServerUtil.java   | 20 +---
 .../apache/hadoop/hbase/rest/RESTServer.java|  7 ++-
 .../hbase/rest/HBaseRESTTestingUtility.java |  2 +-
 .../hadoop/hbase/thrift/ThriftServerRunner.java |  6 +-
 4 files changed, 25 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/32b114e8/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
--
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
index 777ced0..e41daf3 100644
--- a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
+++ b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
@@ -31,8 +31,10 @@ public final class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param ctxHandler The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(ServletContextHandler ctxHandler) {
+  public static void constrainHttpMethods(ServletContextHandler ctxHandler,
+  boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 c.setAuthenticate(true);
 
@@ -41,13 +43,17 @@ public final class HttpServerUtil {
 cmt.setMethod("TRACE");
 cmt.setPathSpec("/*");
 
-ConstraintMapping cmo = new ConstraintMapping();
-cmo.setConstraint(c);
-cmo.setMethod("OPTIONS");
-cmo.setPathSpec("/*");
-
 ConstraintSecurityHandler securityHandler = new ConstraintSecurityHandler();
-securityHandler.setConstraintMappings(new ConstraintMapping[]{ cmt, cmo });
+
+if (!allowOptionsMethod) {
+  ConstraintMapping cmo = new ConstraintMapping();
+  cmo.setConstraint(c);
+  cmo.setMethod("OPTIONS");
+  cmo.setPathSpec("/*");
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt, cmo });
+} else {
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt });
+}
 
 ctxHandler.setSecurityHandler(securityHandler);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/32b114e8/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index 15c988f..63c9e42 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -95,6 +95,10 @@ public class RESTServer implements Constants {
 
   private static final String PATH_SPEC_ANY = "/*";
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. So it is enabled by default.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = true;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("hbase rest start", "", options,
@@ -343,7 +347,8 @@ public class RESTServer implements Constants {
   ctxHandler.addFilter(filter, PATH_SPEC_ANY, EnumSet.of(DispatcherType.REQUEST));
 }
 addCSRFFilter(ctxHandler, conf);
-HttpServerUtil.constrainHttpMethods(ctxHandler);
+HttpServerUtil.constrainHttpMethods(ctxHandler, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);

http://git-wip-us.apache.org/repos/asf/hbase/blob/32b114e8/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java

hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 f82b284fd -> f46f70921


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f46f7092
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f46f7092
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f46f7092

Branch: refs/heads/branch-2.0
Commit: f46f70921cd4bd0a3f5af027f1bd0f786a9e51d6
Parents: f82b284
Author: Ashish Singhi 
Authored: Thu May 10 22:41:48 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 22:41:48 2018 +0530

--
 .../hadoop/hbase/http/HttpServerUtil.java   | 20 +---
 .../apache/hadoop/hbase/rest/RESTServer.java|  8 +++-
 .../hbase/rest/HBaseRESTTestingUtility.java |  2 +-
 3 files changed, 21 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f46f7092/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
--
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
index 777ced0..e41daf3 100644
--- a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
+++ b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
@@ -31,8 +31,10 @@ public final class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param ctxHandler The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(ServletContextHandler ctxHandler) {
+  public static void constrainHttpMethods(ServletContextHandler ctxHandler,
+  boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 c.setAuthenticate(true);
 
@@ -41,13 +43,17 @@ public final class HttpServerUtil {
 cmt.setMethod("TRACE");
 cmt.setPathSpec("/*");
 
-ConstraintMapping cmo = new ConstraintMapping();
-cmo.setConstraint(c);
-cmo.setMethod("OPTIONS");
-cmo.setPathSpec("/*");
-
 ConstraintSecurityHandler securityHandler = new ConstraintSecurityHandler();
-securityHandler.setConstraintMappings(new ConstraintMapping[]{ cmt, cmo });
+
+if (!allowOptionsMethod) {
+  ConstraintMapping cmo = new ConstraintMapping();
+  cmo.setConstraint(c);
+  cmo.setMethod("OPTIONS");
+  cmo.setPathSpec("/*");
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt, cmo });
+} else {
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt });
+}
 
 ctxHandler.setSecurityHandler(securityHandler);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/f46f7092/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index 15c988f..e5cfe32 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -95,6 +95,11 @@ public class RESTServer implements Constants {
 
   private static final String PATH_SPEC_ANY = "/*";
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. It is disabled by
+  // default to maintain backward compatibility.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = false;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("hbase rest start", "", options,
@@ -343,7 +348,8 @@ public class RESTServer implements Constants {
   ctxHandler.addFilter(filter, PATH_SPEC_ANY, EnumSet.of(DispatcherType.REQUEST));
 }
 addCSRFFilter(ctxHandler, conf);
-HttpServerUtil.constrainHttpMethods(ctxHandler);
+HttpServerUtil.constrainHttpMethods(ctxHandler, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);

http://git-wip-us.apache.org/repos/asf/hbase/blob/f46f7092/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java

hbase git commit: HBASE-20004 Client is not able to execute REST queries in a secure cluster

2018-05-10 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 8ba2a7eeb -> c60578d98


HBASE-20004 Client is not able to execute REST queries in a secure cluster

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c60578d9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c60578d9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c60578d9

Branch: refs/heads/master
Commit: c60578d9829f29cf77b250d238a9e38dc0b513d7
Parents: 8ba2a7e
Author: Ashish Singhi 
Authored: Thu May 10 22:39:43 2018 +0530
Committer: Ashish Singhi 
Committed: Thu May 10 22:39:43 2018 +0530

--
 .../hadoop/hbase/http/HttpServerUtil.java   | 20 +---
 .../apache/hadoop/hbase/rest/RESTServer.java|  7 ++-
 .../hbase/rest/HBaseRESTTestingUtility.java |  2 +-
 .../hadoop/hbase/thrift/ThriftServerRunner.java |  6 +-
 4 files changed, 25 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c60578d9/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
--
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
index 777ced0..e41daf3 100644
--- a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
+++ b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServerUtil.java
@@ -31,8 +31,10 @@ public final class HttpServerUtil {
   /**
* Add constraints to a Jetty Context to disallow undesirable Http methods.
* @param ctxHandler The context to modify
+   * @param allowOptionsMethod if true then OPTIONS method will not be set in constraint mapping
*/
-  public static void constrainHttpMethods(ServletContextHandler ctxHandler) {
+  public static void constrainHttpMethods(ServletContextHandler ctxHandler,
+  boolean allowOptionsMethod) {
 Constraint c = new Constraint();
 c.setAuthenticate(true);
 
@@ -41,13 +43,17 @@ public final class HttpServerUtil {
 cmt.setMethod("TRACE");
 cmt.setPathSpec("/*");
 
-ConstraintMapping cmo = new ConstraintMapping();
-cmo.setConstraint(c);
-cmo.setMethod("OPTIONS");
-cmo.setPathSpec("/*");
-
 ConstraintSecurityHandler securityHandler = new ConstraintSecurityHandler();
-securityHandler.setConstraintMappings(new ConstraintMapping[]{ cmt, cmo });
+
+if (!allowOptionsMethod) {
+  ConstraintMapping cmo = new ConstraintMapping();
+  cmo.setConstraint(c);
+  cmo.setMethod("OPTIONS");
+  cmo.setPathSpec("/*");
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt, cmo });
+} else {
+  securityHandler.setConstraintMappings(new ConstraintMapping[] { cmt });
+}
 
 ctxHandler.setSecurityHandler(securityHandler);
   }
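The behavioral change in the hunk above boils down to which HTTP methods receive a deny-all constraint: TRACE is always blocked, and OPTIONS is blocked only when the new flag is false. A minimal stand-alone sketch of that selection logic (plain Java, not the Jetty API; `methodsToConstrain` is a hypothetical helper mirroring the `allowOptionsMethod` parameter):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the decision inside constrainHttpMethods:
// TRACE is always constrained; OPTIONS only when allowOptionsMethod is false.
public class ConstrainedMethods {
    public static List<String> methodsToConstrain(boolean allowOptionsMethod) {
        List<String> methods = new ArrayList<>();
        methods.add("TRACE");            // always denied
        if (!allowOptionsMethod) {
            methods.add("OPTIONS");      // denied only when not allowed
        }
        return methods;
    }

    public static void main(String[] args) {
        System.out.println(methodsToConstrain(true));   // [TRACE]
        System.out.println(methodsToConstrain(false));  // [TRACE, OPTIONS]
    }
}
```

In the real patch each selected method gets a `ConstraintMapping` with an authenticating `Constraint` and path spec `/*` before being handed to the `ConstraintSecurityHandler`.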

http://git-wip-us.apache.org/repos/asf/hbase/blob/c60578d9/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
index 591eae9..cad63a4 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
@@ -95,6 +95,10 @@ public class RESTServer implements Constants {
 
   private static final String PATH_SPEC_ANY = "/*";
 
+  static String REST_HTTP_ALLOW_OPTIONS_METHOD = "hbase.rest.http.allow.options.method";
+  // HTTP OPTIONS method is commonly used in REST APIs for negotiation. So it is enabled by default.
+  private static boolean REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT = true;
+
   private static void printUsageAndExit(Options options, int exitCode) {
 HelpFormatter formatter = new HelpFormatter();
 formatter.printHelp("hbase rest start", "", options,
@@ -341,7 +345,8 @@ public class RESTServer implements Constants {
  ctxHandler.addFilter(filter, PATH_SPEC_ANY, EnumSet.of(DispatcherType.REQUEST));
 }
 addCSRFFilter(ctxHandler, conf);
-HttpServerUtil.constrainHttpMethods(ctxHandler);
+HttpServerUtil.constrainHttpMethods(ctxHandler, servlet.getConfiguration()
+.getBoolean(REST_HTTP_ALLOW_OPTIONS_METHOD, REST_HTTP_ALLOW_OPTIONS_METHOD_DEFAULT));
 
 // Put up info server.
 int port = conf.getInt("hbase.rest.info.port", 8085);
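The key-with-default lookup above is what keeps OPTIONS enabled unless an operator explicitly sets `hbase.rest.http.allow.options.method` to `false`. A toy mirror of that `getBoolean(key, default)` semantics (`ConfSketch` is a hypothetical Map-backed stand-in, not Hadoop's `Configuration`):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mirror of Configuration.getBoolean(key, defaultValue):
// an absent key yields the default (true here, since REST clients commonly
// rely on OPTIONS); a present key is parsed as a boolean.
public class ConfSketch {
    private final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        props.put(key, value);
    }

    public boolean getBoolean(String key, boolean defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v.trim());
    }
}
```

With this model, `getBoolean("hbase.rest.http.allow.options.method", true)` stays `true` until the key is explicitly set to `"false"`, matching the commit's "enabled by default" comment.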

http://git-wip-us.apache.org/repos/asf/hbase/blob/c60578d9/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 1ea3a8bc8 -> 2954aeae2


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2954aeae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2954aeae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2954aeae

Branch: refs/heads/branch-1.2
Commit: 2954aeae2d10a35fdde0d619640a563dcc33f79c
Parents: 1ea3a8b
Author: Ashish Singhi 
Authored: Wed Apr 11 14:50:07 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 14:51:01 2018 +0530

--
 .../security/access/SecureBulkLoadEndpoint.java | 57 ++--
 1 file changed, 40 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2954aeae/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
index bd88b6c..7496e4e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
@@ -204,6 +204,15 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   done.run(CleanupBulkLoadResponse.newBuilder().build());
 } catch (IOException e) {
   ResponseConverter.setControllerException(controller, e);
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
+  }
 }
 done.run(null);
   }
@@ -374,7 +383,7 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -401,26 +410,40 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
-  LOG.debug("Moving " + stageP + " back to " + p);
-  if(!fs.rename(stageP, p))
-throw new IOException("Failed to move HFile: " + stageP + " to " + p);
-
-  // restore original permission
-  if (origPermissions.containsKey(srcPath)) {
-fs.setPermission(p, origPermissions.get(srcPath));
-  } else {
-LOG.warn("Can't find previous permission for path=" + srcPath);
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));
+LOG.debug("Moving " + stageP + " back to " + p);
+if (!fs.rename(stageP, p))
+  throw new IOException("Failed to move HFile: " + stageP + " to " + p);
+
+// restore original permission
+if (origPermissions.containsKey(srcPath)) {
+  fs.setPermission(p, origPermissions.get(srcPath));
+} else {
+  LOG.warn("Can't find previous permission for path=" + srcPath);
+}
+  } finally {
+closeSrcFs();
   }
 }
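The recurring one-line change in these commits, `FileSystem.get(...)` to `FileSystem.newInstance(...)`, matters because `get()` returns a process-wide cached handle: closing it (directly or via `closeAllForUGI`) invalidates it for every other caller sharing the cache key, while `newInstance()` returns a private handle that is safe to close in `doneBulkLoad`/`failedBulkLoad`. A toy model of that caching distinction (illustrative names only, not the Hadoop API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Hadoop's FileSystem cache: get() hands back a shared cached
// handle per URI, so close() affects all users of that URI; newInstance()
// returns a fresh, private handle.
public class ToyFileSystem {
    private static final Map<String, ToyFileSystem> CACHE = new HashMap<>();
    public boolean closed = false;

    public static ToyFileSystem get(String uri) {          // shared, cached
        return CACHE.computeIfAbsent(uri, u -> new ToyFileSystem());
    }

    public static ToyFileSystem newInstance(String uri) {  // private handle
        return new ToyFileSystem();
    }

    public void close() {
        closed = true;
    }
}
```

With `get()`, closing the handle after one bulk load would leave the cached entry closed for the next request; `newInstance()` plus the new `closeSrcFs()` avoids both the leak and the shared-handle breakage.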
 



hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 fe8bd3ff0 -> e0536bfc5


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e0536bfc
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e0536bfc
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e0536bfc

Branch: refs/heads/branch-1.3
Commit: e0536bfc554a6a5081005c6fd0d8741081ff647a
Parents: fe8bd3f
Author: Ashish Singhi 
Authored: Wed Apr 11 13:30:12 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 13:30:12 2018 +0530

--
 .../security/access/SecureBulkLoadEndpoint.java | 67 +---
 1 file changed, 45 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e0536bfc/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
index 7401317..349747a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
@@ -204,6 +204,15 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   done.run(CleanupBulkLoadResponse.newBuilder().build());
 } catch (IOException e) {
   ResponseConverter.setControllerException(controller, e);
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
+  }
 }
 done.run(null);
   }
@@ -382,7 +391,7 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -409,34 +418,48 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  if(!fs.rename(stageP, p))
-throw new IOException("Failed to move HFile: " + stageP + " to " + p);
+LOG.debug("Moving " + stageP + " back to " + p);
+if (!fs.rename(stageP, p))
+  throw new IOException("Failed to move HFile: " + stageP + " to " + p);
 
-  // restore original permission
-  if (origPermissions.containsKey(srcPath)) {
-fs.setPermission(p, origPermissions.get(srcPath));
-  } else {
-LOG.warn("Can't find previous permission for path=" + srcPath);
+// restore original permission
+

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 a5c456de9 -> 3c98b3d63


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3c98b3d6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3c98b3d6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3c98b3d6

Branch: refs/heads/branch-1.4
Commit: 3c98b3d6363b95e3b2b47dcce68a096e9fcc2416
Parents: a5c456d
Author: Ashish Singhi 
Authored: Wed Apr 11 13:12:25 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 13:12:25 2018 +0530

--
 .../security/access/SecureBulkLoadEndpoint.java | 67 +---
 1 file changed, 45 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3c98b3d6/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
index 37d66e5..68f31cc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
@@ -236,6 +236,15 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   done.run(CleanupBulkLoadResponse.newBuilder().build());
 } catch (IOException e) {
   ResponseConverter.setControllerException(controller, e);
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
+  }
 }
 done.run(null);
   }
@@ -425,7 +434,7 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -452,34 +461,48 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  if(!fs.rename(stageP, p))
-throw new IOException("Failed to move HFile: " + stageP + " to " + p);
+LOG.debug("Moving " + stageP + " back to " + p);
+if (!fs.rename(stageP, p))
+  throw new IOException("Failed to move HFile: " + stageP + " to " + p);
 
-  // restore original permission
-  if (origPermissions.containsKey(srcPath)) {
-fs.setPermission(p, origPermissions.get(srcPath));
-  } else {
-LOG.warn("Can't find previous permission for path=" + srcPath);
+// restore original permission
+

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 ff3e56293 -> a817f196a


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a817f196
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a817f196
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a817f196

Branch: refs/heads/branch-1
Commit: a817f196a19fdbe94d302e5f0e0e652457bc746d
Parents: ff3e562
Author: Ashish Singhi 
Authored: Wed Apr 11 12:59:52 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 12:59:52 2018 +0530

--
 .../security/access/SecureBulkLoadEndpoint.java | 67 +---
 1 file changed, 45 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a817f196/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
index 37d66e5..68f31cc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
@@ -236,6 +236,15 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   done.run(CleanupBulkLoadResponse.newBuilder().build());
 } catch (IOException e) {
   ResponseConverter.setControllerException(controller, e);
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
+  }
 }
 done.run(null);
   }
@@ -425,7 +434,7 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -452,34 +461,48 @@ public class SecureBulkLoadEndpoint extends SecureBulkLoadService
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  if(!fs.rename(stageP, p))
-throw new IOException("Failed to move HFile: " + stageP + " to " + p);
+LOG.debug("Moving " + stageP + " back to " + p);
+if (!fs.rename(stageP, p))
+  throw new IOException("Failed to move HFile: " + stageP + " to " + p);
 
-  // restore original permission
-  if (origPermissions.containsKey(srcPath)) {
-fs.setPermission(p, origPermissions.get(srcPath));
-  } else {
-LOG.warn("Can't find previous permission for path=" + srcPath);
+// restore original permission
+if 

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 2e6eff085 -> b3ec5f0ab


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b3ec5f0a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b3ec5f0a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b3ec5f0a

Branch: refs/heads/branch-2.0
Commit: b3ec5f0ab4e086b90a14df8ebda8849122ac7a70
Parents: 2e6eff0
Author: Ashish Singhi 
Authored: Wed Apr 11 12:22:46 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 12:24:16 2018 +0530

--
 .../regionserver/SecureBulkLoadManager.java | 82 +---
 1 file changed, 54 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b3ec5f0a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
index 264d985..a4ee517 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
@@ -145,15 +145,26 @@ public class SecureBulkLoadManager {
 
  public void cleanupBulkLoad(final HRegion region, final CleanupBulkLoadRequest request)
   throws IOException {
-region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
+try {
+  region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
 
-Path path = new Path(request.getBulkToken());
-if (!fs.delete(path, true)) {
-  if (fs.exists(path)) {
-throw new IOException("Failed to clean up " + path);
+  Path path = new Path(request.getBulkToken());
+  if (!fs.delete(path, true)) {
+if (fs.exists(path)) {
+  throw new IOException("Failed to clean up " + path);
+}
+  }
+  LOG.info("Cleaned up " + path + " successfully.");
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
   }
 }
-LOG.info("Cleaned up " + path + " successfully.");
   }
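The restructured `cleanupBulkLoad` above follows a try/finally ordering: the staging-directory delete may throw, but the per-user FileSystem handles must be released either way, and a failure during the close is logged rather than allowed to mask the original error. A minimal sketch of that pattern (the `Runnable` parameters are stand-ins for `fs.delete(...)` and `FileSystem.closeAllForUGI(ugi)`; this is not the HBase API):

```java
// Sketch of the cleanup-ordering pattern introduced by the patch:
// the primary action runs first, the resource release always runs,
// and a release failure is logged instead of rethrown.
public class CleanupSketch {
    public static void cleanupBulkLoad(Runnable delete, Runnable closeAllForUser) {
        try {
            delete.run();                  // may throw
        } finally {
            try {
                closeAllForUser.run();     // always attempted
            } catch (RuntimeException e) {
                // Logged, not rethrown, so it cannot shadow a delete failure.
                System.err.println("Failed to close FileSystem: " + e);
            }
        }
    }
}
```

This mirrors why the `LOG.info("Cleaned up ...")` line moved inside the try block: it should only report success, while the finally block runs on every path.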
 
   public Map secureBulkLoadHFiles(final HRegion region,
@@ -304,7 +315,7 @@ public class SecureBulkLoadManager {
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -334,34 +345,49 @@ public class SecureBulkLoadManager {
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 9bf087d28 -> 4bcb560e2


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4bcb560e
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4bcb560e
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4bcb560e

Branch: refs/heads/branch-2
Commit: 4bcb560e226088b036ef935768ed3b7ec6986789
Parents: 9bf087d
Author: Ashish Singhi 
Authored: Wed Apr 11 12:11:41 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 12:11:41 2018 +0530

--
 .../regionserver/SecureBulkLoadManager.java | 82 +---
 1 file changed, 54 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4bcb560e/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
index 264d985..a4ee517 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
@@ -145,15 +145,26 @@ public class SecureBulkLoadManager {
 
  public void cleanupBulkLoad(final HRegion region, final CleanupBulkLoadRequest request)
   throws IOException {
-region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
+try {
+  region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
 
-Path path = new Path(request.getBulkToken());
-if (!fs.delete(path, true)) {
-  if (fs.exists(path)) {
-throw new IOException("Failed to clean up " + path);
+  Path path = new Path(request.getBulkToken());
+  if (!fs.delete(path, true)) {
+if (fs.exists(path)) {
+  throw new IOException("Failed to clean up " + path);
+}
+  }
+  LOG.info("Cleaned up " + path + " successfully.");
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
   }
 }
-LOG.info("Cleaned up " + path + " successfully.");
   }
 
   public Map secureBulkLoadHFiles(final HRegion region,
@@ -304,7 +315,7 @@ public class SecureBulkLoadManager {
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -334,34 +345,49 @@ public class SecureBulkLoadManager {
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  

hbase git commit: HBASE-15291 FileSystem not closed in secure bulkLoad

2018-04-11 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 95ca38a53 -> 828a1c76c


HBASE-15291 FileSystem not closed in secure bulkLoad

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/828a1c76
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/828a1c76
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/828a1c76

Branch: refs/heads/master
Commit: 828a1c76c71b0179bd9709e3da5d988b18fea631
Parents: 95ca38a
Author: Ashish Singhi 
Authored: Wed Apr 11 12:01:28 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 11 12:01:28 2018 +0530

--
 .../regionserver/SecureBulkLoadManager.java | 82 +---
 1 file changed, 54 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/828a1c76/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
index 264d985..a4ee517 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java
@@ -145,15 +145,26 @@ public class SecureBulkLoadManager {
 
  public void cleanupBulkLoad(final HRegion region, final CleanupBulkLoadRequest request)
   throws IOException {
-region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
+try {
+  region.getCoprocessorHost().preCleanupBulkLoad(getActiveUser());
 
-Path path = new Path(request.getBulkToken());
-if (!fs.delete(path, true)) {
-  if (fs.exists(path)) {
-throw new IOException("Failed to clean up " + path);
+  Path path = new Path(request.getBulkToken());
+  if (!fs.delete(path, true)) {
+if (fs.exists(path)) {
+  throw new IOException("Failed to clean up " + path);
+}
+  }
+  LOG.info("Cleaned up " + path + " successfully.");
+} finally {
+  UserGroupInformation ugi = getActiveUser().getUGI();
+  try {
+if (!UserGroupInformation.getLoginUser().equals(ugi)) {
+  FileSystem.closeAllForUGI(ugi);
+}
+  } catch (IOException e) {
+LOG.error("Failed to close FileSystem for: " + ugi, e);
   }
 }
-LOG.info("Cleaned up " + path + " successfully.");
   }
 
   public Map secureBulkLoadHFiles(final HRegion region,
@@ -304,7 +315,7 @@ public class SecureBulkLoadManager {
   }
 
   if (srcFs == null) {
-srcFs = FileSystem.get(p.toUri(), conf);
+srcFs = FileSystem.newInstance(p.toUri(), conf);
   }
 
   if(!isFile(p)) {
@@ -334,34 +345,49 @@ public class SecureBulkLoadManager {
 @Override
public void doneBulkLoad(byte[] family, String srcPath) throws IOException {
   LOG.debug("Bulk Load done for: " + srcPath);
+  closeSrcFs();
+}
+
+private void closeSrcFs() throws IOException {
+  if (srcFs != null) {
+srcFs.close();
+srcFs = null;
+  }
 }
 
 @Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
-  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
-// files are copied so no need to move them back
-return;
-  }
-  Path p = new Path(srcPath);
-  Path stageP = new Path(stagingDir,
-  new Path(Bytes.toString(family), p.getName()));
+  try {
+Path p = new Path(srcPath);
+if (srcFs == null) {
+  srcFs = FileSystem.newInstance(p.toUri(), conf);
+}
+if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
+  // files are copied so no need to move them back
+  return;
+}
+Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

-  // In case of Replication for bulk load files, hfiles are not renamed by end point during
-  // prepare stage, so no need of rename here again
-  if (p.equals(stageP)) {
-LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
-return;
-  }
+// In case of Replication for bulk load files, hfiles are not renamed by end point during
+// prepare stage, so no need of rename here again
+if (p.equals(stageP)) {
+  LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
+  return;
+}
 
-  LOG.debug("Moving " + stageP + " back to " + p);
-  

hbase git commit: HBASE-16499 slow replication for small HBase clusters - addendum for updating in the document

2018-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 5fed7fd3d -> e2b0490d1


HBASE-16499 slow replication for small HBase clusters - addendum for updating in the document

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e2b0490d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e2b0490d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e2b0490d

Branch: refs/heads/master
Commit: e2b0490d18f7cc03aa59475a1b423597ddc481fb
Parents: 5fed7fd
Author: Ashish Singhi 
Authored: Thu Apr 5 11:16:52 2018 +0530
Committer: Ashish Singhi 
Committed: Thu Apr 5 11:16:52 2018 +0530

--
 src/main/asciidoc/_chapters/upgrading.adoc | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e2b0490d/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 68adb14..f5cdff3 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -390,6 +390,7 @@ The following configuration settings changed their default value. Where applicable
 * hbase.client.max.perserver.tasks is now 2. Previously it was 5.
 * hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
 * hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was IncreasingToUpperBoundRegionSplitPolicy.
+* replication.source.ratio is now 0.5. Previously it was 0.1.
 
 [[upgrade2.0.regions.on.master]]
 ."Master hosting regions" feature broken and unsupported



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 76f599de9 -> 8eac32fe9


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8eac32fe
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8eac32fe
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8eac32fe

Branch: refs/heads/branch-1.2
Commit: 8eac32fe92cc960490a9f560133b5be2c05558b4
Parents: 76f599d
Author: Pankaj Kumar 
Authored: Wed Apr 4 14:44:12 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 14:44:12 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8eac32fe/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 8c5c168..e878794 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -109,13 +109,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(Bytes.toStringBinary((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(Bytes.toStringBinary((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(Bytes.toStringBinary(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(Bytes.toStringBinary(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/8eac32fe/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 121ff65..cd33edd 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -330,18 +330,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -371,6 +380,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
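The fix in this commit is in how the REST client builds the column portion of the delete URI: before the patch, the family was always followed by `':'`, so a whole-family `Delete` (which carries an empty qualifier) was sent as `family:` — an empty qualifier — instead of just `family`. A simplified, illustrative sketch of the patched logic; the helper name `columnSpec` is ours, not HBase's:

```java
// Sketch of the patched column-spec construction from
// RemoteHTable.buildRowSpec(): only append ":qualifier" when a
// qualifier is actually present, so a family-wide delete produces
// "cf" rather than the (wrong) "cf:".
class DeleteSpec {
    static String columnSpec(String family, String qualifier) {
        StringBuilder sb = new StringBuilder(family);
        if (qualifier != null && !qualifier.isEmpty()) {
            sb.append(':').append(qualifier);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(columnSpec("cf3", "q1")); // cf3:q1  (single-column delete)
        System.out.println(columnSpec("cf3", ""));   // cf3     (whole-family delete)
    }
}
```

The new `getQualifierLength() != 0` check in the diff plays exactly this role: it distinguishes "delete this one column" from "delete the whole family" when serializing the row spec.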
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 0db4bd3aa -> 090adcd37


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/090adcd3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/090adcd3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/090adcd3

Branch: refs/heads/branch-1.3
Commit: 090adcd375e5df8d24e16f88c15cc2bfda383808
Parents: 0db4bd3
Author: Pankaj Kumar 
Authored: Wed Apr 4 14:43:02 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 14:43:02 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/090adcd3/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 8fa1b8a..6b0aad1 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -109,13 +109,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(Bytes.toStringBinary((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(Bytes.toStringBinary((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(Bytes.toStringBinary(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(Bytes.toStringBinary(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/090adcd3/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 121ff65..cd33edd 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -330,18 +330,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -371,6 +380,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 98c6f8a3f -> 0ccdffe95


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0ccdffe9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0ccdffe9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0ccdffe9

Branch: refs/heads/branch-1.4
Commit: 0ccdffe95236617678bf09f5bf670524cb2ae666
Parents: 98c6f8a
Author: Pankaj Kumar 
Authored: Wed Apr 4 10:16:58 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:16:58 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0ccdffe9/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 463b232..fc6a90f 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -112,13 +112,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(toURLEncodedBytes((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(toURLEncodedBytes((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(toURLEncodedBytes(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(toURLEncodedBytes(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/0ccdffe9/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 342fc4e..28f3798 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -349,18 +349,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -390,6 +399,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 9ced0c936 -> 2eae8104d


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2eae8104
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2eae8104
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2eae8104

Branch: refs/heads/branch-1
Commit: 2eae8104d19cc8be1b69f4969623b9a9f15e2593
Parents: 9ced0c9
Author: Pankaj Kumar 
Authored: Wed Apr 4 10:16:11 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:16:11 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2eae8104/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 463b232..fc6a90f 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -112,13 +112,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(toURLEncodedBytes((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(toURLEncodedBytes((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(toURLEncodedBytes(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(toURLEncodedBytes(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/2eae8104/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 342fc4e..28f3798 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -349,18 +349,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -390,6 +399,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 79bb54ddf -> d7cb0bd41


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d7cb0bd4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d7cb0bd4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d7cb0bd4

Branch: refs/heads/branch-2.0
Commit: d7cb0bd4179951d973d60eff6ad68b4a5822f507
Parents: 79bb54d
Author: Pankaj Kumar 
Authored: Wed Apr 4 10:14:46 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:14:46 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  9 +---
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 28 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d7cb0bd4/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index cc3efdd..29b48e1 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -115,13 +115,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(toURLEncodedBytes((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
-  sb.append(toURLEncodedBytes((byte[])o));
+  sb.append(':');
+  sb.append(toURLEncodedBytes((byte[]) o));
 } else if (o instanceof KeyValue) {
-  sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/d7cb0bd4/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 5053d91..c6f5195 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -353,18 +353,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.addColumn(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.addColumn(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.addColumn(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.addColumn(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -394,6 +403,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 b8a13ba10 -> a761f175a


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a761f175
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a761f175
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a761f175

Branch: refs/heads/branch-2
Commit: a761f175ab1ab48be3462b4a2161a1663a719620
Parents: b8a13ba
Author: Pankaj Kumar 
Authored: Wed Apr 4 10:13:34 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:13:34 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  9 +---
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 28 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a761f175/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index cc3efdd..29b48e1 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -115,13 +115,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(toURLEncodedBytes((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
-  sb.append(toURLEncodedBytes((byte[])o));
+  sb.append(':');
+  sb.append(toURLEncodedBytes((byte[]) o));
 } else if (o instanceof KeyValue) {
-  sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/a761f175/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 5053d91..c6f5195 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -353,18 +353,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.addColumn(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.addColumn(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.addColumn(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.addColumn(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -394,6 +403,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 5937202fd -> 7abaf22a1


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7abaf22a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7abaf22a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7abaf22a

Branch: refs/heads/master
Commit: 7abaf22a12cc9e2655ff57ad46f66e2189fd52e2
Parents: 5937202
Author: Pankaj Kumar 
Authored: Wed Apr 4 10:11:09 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:11:09 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  9 +---
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 28 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7abaf22a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index cc3efdd..29b48e1 100644
--- a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -115,13 +115,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(toURLEncodedBytes((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
-  sb.append(toURLEncodedBytes((byte[])o));
+  sb.append(':');
+  sb.append(toURLEncodedBytes((byte[]) o));
 } else if (o instanceof KeyValue) {
-  sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/7abaf22a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 5053d91..c6f5195 100644
--- a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -353,18 +353,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.addColumn(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.addColumn(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.addColumn(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.addColumn(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -394,6 +403,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-16499 slow replication for small HBase clusters

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 fb2a0eb66 -> 79bb54ddf


HBASE-16499 slow replication for small HBase clusters

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/79bb54dd
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/79bb54dd
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/79bb54dd

Branch: refs/heads/branch-2.0
Commit: 79bb54ddf4306796ab508a0612ff17c2e4ab863c
Parents: fb2a0eb
Author: Ashish Singhi 
Authored: Wed Apr 4 10:00:44 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 10:00:44 2018 +0530

--
 .../regionserver/ReplicationSinkManager.java|  2 +-
 .../TestReplicationSinkManager.java | 36 +++-
 2 files changed, 21 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/79bb54dd/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java
index af6888c..3cd7884 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkManager.java
@@ -58,7 +58,7 @@ public class ReplicationSinkManager {
* Default ratio of the total number of peer cluster region servers to 
consider
* replicating to.
*/
-  static final float DEFAULT_REPLICATION_SOURCE_RATIO = 0.1f;
+  static final float DEFAULT_REPLICATION_SOURCE_RATIO = 0.5f;
 
 
   private final Connection conn;

http://git-wip-us.apache.org/repos/asf/hbase/blob/79bb54dd/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
index 3be3bfb..39dabb4 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java
@@ -27,7 +27,6 @@ import org.apache.hadoop.hbase.HBaseClassTestRule;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.client.ClusterConnection;
 import org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint;
-import org.apache.hadoop.hbase.replication.ReplicationPeers;
 import 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSinkManager.SinkPeer;
 import org.apache.hadoop.hbase.testclassification.ReplicationTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
@@ -49,13 +48,11 @@ public class TestReplicationSinkManager {
 
   private static final String PEER_CLUSTER_ID = "PEER_CLUSTER_ID";
 
-  private ReplicationPeers replicationPeers;
   private HBaseReplicationEndpoint replicationEndpoint;
   private ReplicationSinkManager sinkManager;
 
   @Before
   public void setUp() {
-replicationPeers = mock(ReplicationPeers.class);
 replicationEndpoint = mock(HBaseReplicationEndpoint.class);
 sinkManager = new ReplicationSinkManager(mock(ClusterConnection.class),
   PEER_CLUSTER_ID, replicationEndpoint, new 
Configuration());
@@ -64,7 +61,8 @@ public class TestReplicationSinkManager {
   @Test
   public void testChooseSinks() {
 List<ServerName> serverNames = Lists.newArrayList();
-for (int i = 0; i < 20; i++) {
+int totalServers = 20;
+for (int i = 0; i < totalServers; i++) {
   serverNames.add(mock(ServerName.class));
 }
 
@@ -73,7 +71,8 @@ public class TestReplicationSinkManager {
 
 sinkManager.chooseSinks();
 
-assertEquals(2, sinkManager.getNumSinks());
+int expected = (int) (totalServers * 
ReplicationSinkManager.DEFAULT_REPLICATION_SOURCE_RATIO);
+assertEquals(expected, sinkManager.getNumSinks());
 
   }
 
@@ -117,7 +116,8 @@ public class TestReplicationSinkManager {
   @Test
   public void testReportBadSink_PastThreshold() {
 List<ServerName> serverNames = Lists.newArrayList();
-for (int i = 0; i < 30; i++) {
+int totalServers = 30;
+for (int i = 0; i < totalServers; i++) {
   serverNames.add(mock(ServerName.class));
 }
 when(replicationEndpoint.getRegionServers())
@@ -126,7 +126,8 @@ 
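The HBASE-16499 change above raises the default sink ratio from 0.1 to 0.5 so that small peer clusters get more replication sinks. A minimal sketch of the arithmetic the updated test asserts, i.e. `(int)(totalServers * ratio)`; `SinkRatioSketch` and `computeSinkCount` are hypothetical names for illustration, not HBase API:

```java
// Sketch of the sink-count arithmetic asserted by the updated test.
// computeSinkCount is a hypothetical helper, not part of HBase.
public class SinkRatioSketch {
    static final float OLD_RATIO = 0.1f; // default before HBASE-16499
    static final float NEW_RATIO = 0.5f; // default after HBASE-16499

    // Number of peer region servers chosen as replication sinks,
    // mirroring the test's expectation: (int)(totalServers * ratio).
    static int computeSinkCount(int totalServers, float ratio) {
        return (int) (totalServers * ratio);
    }

    public static void main(String[] args) {
        // With 20 peer region servers, the old default picked only 2 sinks;
        // the new default picks 10, spreading load on small clusters.
        System.out.println(computeSinkCount(20, OLD_RATIO)); // 2
        System.out.println(computeSinkCount(20, NEW_RATIO)); // 10
    }
}
```

This also explains why the test was rewritten to derive the expected sink count from the constant rather than hard-coding `2`: the assertion now tracks any future change to the default ratio.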

hbase git commit: HBASE-16499 slow replication for small HBase clusters

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 ed21f2617 -> 9a3488072


HBASE-16499 slow replication for small HBase clusters

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9a348807
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9a348807
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9a348807

Branch: refs/heads/branch-2
Commit: 9a3488072456809dbd139c343f849410df4cc0ee
Parents: ed21f26
Author: Ashish Singhi 
Authored: Wed Apr 4 09:59:50 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 09:59:50 2018 +0530

--
 .../regionserver/ReplicationSinkManager.java|  2 +-
 .../TestReplicationSinkManager.java | 36 +++-
 2 files changed, 21 insertions(+), 17 deletions(-)
--



hbase git commit: HBASE-16499 slow replication for small HBase clusters

2018-04-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master b1b0db319 -> 5937202fd


HBASE-16499 slow replication for small HBase clusters

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5937202f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5937202f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5937202f

Branch: refs/heads/master
Commit: 5937202fd5d6c5fba74bae21846f62da4ee35583
Parents: b1b0db3
Author: Ashish Singhi 
Authored: Wed Apr 4 09:54:41 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 09:54:41 2018 +0530

--
 .../regionserver/ReplicationSinkManager.java|  2 +-
 .../TestReplicationSinkManager.java | 36 +++-
 2 files changed, 21 insertions(+), 17 deletions(-)
--



hbase git commit: HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is disabled

2018-02-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 a55f2c759 -> bdeab9319


HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is 
disabled

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bdeab931
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bdeab931
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bdeab931

Branch: refs/heads/branch-1
Commit: bdeab93196a247c7e3dcb090f8288de0050c5f24
Parents: a55f2c7
Author: Ashish Singhi 
Authored: Sun Feb 4 18:24:32 2018 +0530
Committer: Ashish Singhi 
Committed: Sun Feb 4 18:24:32 2018 +0530

--
 .../replication/regionserver/ReplicationSourceManager.java| 7 +++
 1 file changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/bdeab931/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index 77fd837..6ec30de 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -63,6 +63,7 @@ import 
org.apache.hadoop.hbase.replication.ReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationListener;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
+import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
 import org.apache.hadoop.hbase.replication.ReplicationPeers;
 import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
@@ -754,6 +755,12 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 replicationQueues.removeQueue(peerId);
 continue;
   }
+  if (server instanceof ReplicationSyncUp.DummyServer
+  && peer.getPeerState().equals(PeerState.DISABLED)) {
+LOG.warn("Peer " + actualPeerId + " is disabled. ReplicationSyncUp 
tool will skip "
++ "replicating data to this peer.");
+continue;
+  }
   // track sources in walsByIdRecoveredQueues
   Map<String, SortedSet<String>> walsByGroup = new HashMap<String, SortedSet<String>>();
   walsByIdRecoveredQueues.put(peerId, walsByGroup);
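The guard added above makes the one-shot ReplicationSyncUp run skip queues belonging to disabled peers instead of waiting on them forever, which is what kept the tool from exiting. A self-contained sketch of that filtering decision (the enum and method names here are hypothetical stand-ins, not the HBase types):

```java
// Sketch of the skip-disabled-peer decision added for ReplicationSyncUp.
// PeerState and shouldSkipPeer are hypothetical stand-ins for HBase types.
public class SyncUpSkipSketch {
    enum PeerState { ENABLED, DISABLED }

    // In a long-running region server a disabled peer's queue is simply
    // retried later; in the one-shot sync-up tool it must be skipped,
    // otherwise the tool blocks forever and never exits.
    static boolean shouldSkipPeer(boolean runningAsSyncUpTool, PeerState state) {
        return runningAsSyncUpTool && state == PeerState.DISABLED;
    }

    public static void main(String[] args) {
        System.out.println(shouldSkipPeer(true, PeerState.DISABLED));  // true: tool skips queue
        System.out.println(shouldSkipPeer(false, PeerState.DISABLED)); // false: server keeps it
        System.out.println(shouldSkipPeer(true, PeerState.ENABLED));   // false: tool replicates
    }
}
```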



hbase git commit: HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is disabled

2018-02-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-2 3b603d2c0 -> 2d5b36d19


HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is 
disabled

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2d5b36d1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2d5b36d1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2d5b36d1

Branch: refs/heads/branch-2
Commit: 2d5b36d194b90d4a43505c094464130506a079f6
Parents: 3b603d2
Author: Ashish Singhi 
Authored: Sun Feb 4 18:12:46 2018 +0530
Committer: Ashish Singhi 
Committed: Sun Feb 4 18:12:46 2018 +0530

--
 .../replication/regionserver/ReplicationSourceManager.java   | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2d5b36d1/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index cbbfca0..c0c2333 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -56,6 +56,7 @@ import 
org.apache.hadoop.hbase.replication.ReplicationEndpoint;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationListener;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
+import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState;
 import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
 import org.apache.hadoop.hbase.replication.ReplicationPeers;
 import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
@@ -739,6 +740,13 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 replicationQueues.removeQueue(peerId);
 continue;
   }
+  if (server instanceof ReplicationSyncUp.DummyServer
+  && peer.getPeerState().equals(PeerState.DISABLED)) {
+LOG.warn("Peer {} is disabled. ReplicationSyncUp tool will skip "
++ "replicating data to this peer.",
+  actualPeerId);
+continue;
+  }
   // track sources in walsByIdRecoveredQueues
   Map walsByGroup = new HashMap<>();
   walsByIdRecoveredQueues.put(peerId, walsByGroup);



hbase git commit: HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is disabled

2018-02-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master b0e998f2a -> 397d34736


HBASE-19905 ReplicationSyncUp tool will not exit if a peer replication is 
disabled

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/397d3473
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/397d3473
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/397d3473

Branch: refs/heads/master
Commit: 397d34736e63d7661a2f01524f8b302e1309d40f
Parents: b0e998f
Author: Ashish Singhi 
Authored: Sun Feb 4 17:52:38 2018 +0530
Committer: Ashish Singhi 
Committed: Sun Feb 4 17:52:38 2018 +0530

--
 .../replication/regionserver/ReplicationSourceManager.java   | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/397d3473/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
index 2147214..6e87563 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.replication.ReplicationException;
 import org.apache.hadoop.hbase.replication.ReplicationListener;
 import org.apache.hadoop.hbase.replication.ReplicationPeer;
+import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState;
 import org.apache.hadoop.hbase.replication.ReplicationPeers;
 import org.apache.hadoop.hbase.replication.ReplicationQueueInfo;
 import org.apache.hadoop.hbase.replication.ReplicationQueueStorage;
@@ -747,6 +748,13 @@ public class ReplicationSourceManager implements 
ReplicationListener {
 abortWhenFail(() -> 
queueStorage.removeQueue(server.getServerName(), queueId));
 continue;
   }
+  if (server instanceof ReplicationSyncUp.DummyServer
+  && peer.getPeerState().equals(PeerState.DISABLED)) {
+LOG.warn("Peer {} is disabled. ReplicationSyncUp tool will skip "
++ "replicating data to this peer.",
+  actualPeerId);
+continue;
+  }
   // track sources in walsByIdRecoveredQueues
   Map walsByGroup = new HashMap<>();
   walsByIdRecoveredQueues.put(queueId, walsByGroup);



hbase git commit: HBASE-19796 ReplicationSynUp tool is not replicating the data if the WAL is moved to splitting directory

2018-01-16 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 1a97b33e1 -> 45e99ffa6


HBASE-19796 ReplicationSynUp tool is not replicating the data if the WAL is 
moved to splitting directory

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/45e99ffa
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/45e99ffa
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/45e99ffa

Branch: refs/heads/branch-1.2
Commit: 45e99ffa68c9a7dd71173ffcb707110898950802
Parents: 1a97b33
Author: Ashish Singhi 
Authored: Wed Jan 17 10:47:04 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Jan 17 10:48:14 2018 +0530

--
 .../hbase/replication/regionserver/ReplicationSource.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/45e99ffa/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index 4175ad2..ff79976 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -780,8 +780,13 @@ public class ReplicationSource extends Thread
   // We found the right new location
   LOG.info("Log " + this.currentPath + " still exists at " +
   possibleLogLocation);
-  // Breaking here will make us sleep since reader is null
-  // TODO why don't we need to set currentPath and call 
openReader here?
+  // When running ReplicationSyncUp tool, we should replicate 
the data from WAL
+  // which is moved to WAL splitting directory also.
+  if (stopper instanceof ReplicationSyncUp.DummyServer) {
+// Open the log at this location
+this.currentPath = possibleLogLocation;
+this.openReader(sleepMultiplier);
+  }
   return true;
 }
   }



hbase git commit: HBASE-19796 ReplicationSynUp tool is not replicating the data if the WAL is moved to splitting directory

2018-01-16 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 28f811420 -> 04bb40824


HBASE-19796 ReplicationSynUp tool is not replicating the data if the WAL is 
moved to splitting directory

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/04bb4082
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/04bb4082
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/04bb4082

Branch: refs/heads/branch-1.3
Commit: 04bb4082438faf87c19627a7109c714bf17113b1
Parents: 28f8114
Author: Ashish Singhi 
Authored: Wed Jan 17 10:43:42 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Jan 17 10:43:42 2018 +0530

--
 .../hbase/replication/regionserver/ReplicationSource.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/04bb4082/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
index d156a36..78b465c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -864,8 +864,13 @@ public class ReplicationSource extends Thread
   // We found the right new location
   LOG.info("Log " + this.currentPath + " still exists at " +
   possibleLogLocation);
-  // Breaking here will make us sleep since reader is null
-  // TODO why don't we need to set currentPath and call 
openReader here?
+  // When running ReplicationSyncUp tool, we should replicate 
the data from WAL
+  // which is moved to WAL splitting directory also.
+  if (stopper instanceof ReplicationSyncUp.DummyServer) {
+// Open the log at this location
+this.currentPath = possibleLogLocation;
+this.openReader(sleepMultiplier);
+  }
   return true;
 }
   }



hbase git commit: HBASE-18939 Backport HBASE-16538 to branch-1.3

2017-10-05 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 82aee4ba3 -> af9de6ed8


HBASE-18939 Backport HBASE-16538 to branch-1.3

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/af9de6ed
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/af9de6ed
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/af9de6ed

Branch: refs/heads/branch-1.3
Commit: af9de6ed8b3b33d0f87103d98b194bd7e9ddb5d5
Parents: 82aee4b
Author: Ashish Singhi 
Authored: Thu Oct 5 21:47:35 2017 +0530
Committer: Ashish Singhi 
Committed: Thu Oct 5 21:47:35 2017 +0530

--
 .../apache/hadoop/hbase/VersionAnnotation.java  | 66 
 .../apache/hadoop/hbase/util/VersionInfo.java   | 32 +++---
 hbase-common/src/saveVersion.sh | 14 +++--
 3 files changed, 18 insertions(+), 94 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/af9de6ed/hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
deleted file mode 100644
index f3137ae..000
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase;
-
-import java.lang.annotation.*;
-
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-
-/**
- * A package attribute that captures the version of hbase that was compiled.
- * Copied down from hadoop.  All is same except name of interface.
- */
-@Retention(RetentionPolicy.RUNTIME)
-@Target(ElementType.PACKAGE)
-@InterfaceAudience.Private
-public @interface VersionAnnotation {
-
-  /**
-   * Get the Hadoop version
-   * @return the version string "0.6.3-dev"
-   */
-  String version();
-
-  /**
-   * Get the username that compiled Hadoop.
-   */
-  String user();
-
-  /**
-   * Get the date when Hadoop was compiled.
-   * @return the date in unix 'date' format
-   */
-  String date();
-
-  /**
-   * Get the url for the subversion repository.
-   */
-  String url();
-
-  /**
-   * Get the subversion revision.
-   * @return the revision number as a string (eg. "451451")
-   */
-  String revision();
-
-  /**
-   * Get a checksum of the source files from which HBase was compiled.
-   * @return a string that uniquely identifies the source
-   **/
-  String srcChecksum();
-}

http://git-wip-us.apache.org/repos/asf/hbase/blob/af9de6ed/hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
index 8061b4d..dc242d0 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
@@ -24,39 +24,23 @@ import java.io.PrintWriter;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.classification.InterfaceStability;
-import org.apache.hadoop.hbase.VersionAnnotation;
+import org.apache.hadoop.hbase.Version;
 import org.apache.commons.logging.Log;
 
 /**
- * This class finds the package info for hbase and the VersionAnnotation
- * information.  Taken from hadoop.  Only name of annotation is different.
+ * This class finds the Version information for HBase.
  */
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class VersionInfo {
   private static final Log LOG = 
LogFactory.getLog(VersionInfo.class.getName());
-  private static Package myPackage;
-  private static VersionAnnotation version;
-
-  static {
-myPackage = 
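The HBASE-18939 diff above deletes the reflective `VersionAnnotation` machinery in favor of a generated `Version` class. For reference, a minimal self-contained example of declaring and reading a runtime annotation like the deleted one (declared on a class here for brevity, where the original was package-level; all names below are illustrative, not the HBase classes):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Minimal sketch of a runtime build-version annotation, modeled on the
// deleted VersionAnnotation (attached to a class instead of a package).
public class VersionAnnotationSketch {
    @Retention(RetentionPolicy.RUNTIME) // must be RUNTIME to read reflectively
    @interface BuildVersion {
        String version();
        String user();
    }

    @BuildVersion(version = "0.6.3-dev", user = "builder")
    static class Annotated {}

    // Read the version string back via reflection, as the old
    // VersionInfo did with its package annotation.
    static String readVersion(Class<?> c) {
        BuildVersion v = c.getAnnotation(BuildVersion.class);
        return v == null ? "unknown" : v.version();
    }

    public static void main(String[] args) {
        System.out.println(readVersion(Annotated.class)); // 0.6.3-dev
    }
}
```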

hbase git commit: HBASE-14925 (Addendum) Develop HBase shell command/tool to list table's region info through command line

2017-05-05 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 2026540ea -> 7d819eb72


HBASE-14925 (Addendum) Develop HBase shell command/tool to list table's region 
info through command line

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7d819eb7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7d819eb7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7d819eb7

Branch: refs/heads/master
Commit: 7d819eb722dc7d027f98357f8b12d166a3f7723b
Parents: 2026540
Author: Karan Mehta 
Authored: Fri May 5 23:33:30 2017 +0530
Committer: Ashish Singhi 
Committed: Fri May 5 23:33:30 2017 +0530

--
 .../main/ruby/shell/commands/list_regions.rb| 110 +--
 1 file changed, 98 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7d819eb7/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
index 892653e..f2d4b41 100644
--- a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
+++ b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
@@ -25,17 +25,24 @@ module Shell
 return <<-EOF
 hbase> list_regions 'table_name'
 hbase> list_regions 'table_name', 'server_name'
 hbase> list_regions 'table_name', {SERVER_NAME => 'server_name', 
LOCALITY_THRESHOLD => 0.8}
+hbase> list_regions 'table_name', {SERVER_NAME => 'server_name', 
LOCALITY_THRESHOLD => 0.8}, ['SERVER_NAME']
+hbase> list_regions 'table_name', {}, ['SERVER_NAME', 'start_key']
+hbase> list_regions 'table_name', '', ['SERVER_NAME', 'start_key']
 
 EOF
 return
   end
 
-  def command(table_name, options = nil)
+  def command(table_name, options = nil, cols = nil)
 if options.nil?
   options = {}
 elsif not options.is_a? Hash
@@ -43,6 +50,34 @@ EOF
   # and create the hash internally
   options = {SERVER_NAME => options}
 end
+
+size_hash = Hash.new
+if cols.nil?
+size_hash = { "SERVER_NAME" => 12, "REGION_NAME" => 12, 
"START_KEY" => 10, "END_KEY" => 10, "SIZE" => 5, "REQ" => 5, "LOCALITY" => 10 }
+elsif cols.is_a?(Array)
+  cols.each do |col|
+if col.upcase.eql?("SERVER_NAME")
+  size_hash.store("SERVER_NAME", 12)
+elsif col.upcase.eql?("REGION_NAME")
+  size_hash.store("REGION_NAME", 12)
+elsif col.upcase.eql?("START_KEY")
+  size_hash.store("START_KEY", 10)
+elsif col.upcase.eql?("END_KEY")
+  size_hash.store("END_KEY", 10)
+elsif col.upcase.eql?("SIZE")
+  size_hash.store("SIZE", 5)
+elsif col.upcase.eql?("REQ")
+  size_hash.store("REQ", 5)
+elsif col.upcase.eql?("LOCALITY")
+  size_hash.store("LOCALITY", 10)
+else
+  raise "#{col} is not a valid column. Possible values are 
SERVER_NAME, REGION_NAME, START_KEY, END_KEY, SIZE, REQ, LOCALITY."
+end
+  end
+else
+  raise "#{cols} must be an array of strings. Possible values are 
SERVER_NAME, REGION_NAME, START_KEY, END_KEY, SIZE, REQ, LOCALITY."
+end
+
 admin_instance = admin.instance_variable_get("@admin")
 conn_instance = admin_instance.getConnection()
 cluster_status = admin_instance.getClusterStatus()
@@ -64,19 +99,58 @@ EOF
 raise "#{LOCALITY_THRESHOLD} must be between 0 and 1.0, inclusive" 
unless valid_locality_threshold? value
 locality_threshold = value
   end
+
   regions.each do |hregion|
 hregion_info = hregion.getRegionInfo()
 server_name = hregion.getServerName()
 region_load_map = 
cluster_status.getLoad(server_name).getRegionsLoad()
 region_load = region_load_map.get(hregion_info.getRegionName())
+
 # Ignore regions which exceed our locality threshold
 if accept_region_for_locality? region_load.getDataLocality(), 
locality_threshold
-  startKey = Bytes.toString(hregion_info.getStartKey())
-  endKey = Bytes.toString(hregion_info.getEndKey())
-  region_store_file_size = region_load.getStorefileSizeMB()
-  region_requests = region_load.getRequestsCount()
-  results << { "server" => hregion.getServerName().toString(), 
"name" => hregion_info.getRegionNameAsString(), "startkey" => startKey, 
"endkey" => endKey,
- "size" => region_store_file_size, 
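The long `elsif` chain that the addendum adds to `list_regions.rb` maps each requested column name to a printf width and rejects anything else. The same validation can be sketched more compactly as a lookup table. This is a standalone sketch, not the shipped shell code; the method name `column_widths` is mine, and the widths are the ones from the diff above.

```ruby
# Default printf column widths keyed by column name, mirroring the
# size_hash built in list_regions.rb.
COLUMN_WIDTHS = {
  "SERVER_NAME" => 12, "REGION_NAME" => 12, "START_KEY" => 10,
  "END_KEY" => 10, "SIZE" => 5, "REQ" => 5, "LOCALITY" => 10
}.freeze

# Validate a user-supplied projection list (case-insensitive) and
# return the width map for just those columns; nil means "all columns".
def column_widths(cols)
  return COLUMN_WIDTHS.dup if cols.nil?
  raise "#{cols} must be an array of strings." unless cols.is_a?(Array)
  cols.each_with_object({}) do |col, sizes|
    key = col.upcase
    unless COLUMN_WIDTHS.key?(key)
      raise "#{col} is not a valid column. Possible values are #{COLUMN_WIDTHS.keys.join(', ')}."
    end
    sizes[key] = COLUMN_WIDTHS[key]
  end
end
```

A table lookup keeps the valid-column list and the error message in one place, so adding a column is a one-line change instead of a new `elsif` branch.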

hbase git commit: HBASE-14925 Develop HBase shell command/tool to list table's region info through command line

2017-04-28 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 cdda1d030 -> 3765e7bed


HBASE-14925 Develop HBase shell command/tool to list table's region info 
through command line

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3765e7be
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3765e7be
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3765e7be

Branch: refs/heads/branch-1
Commit: 3765e7bedb937044c8e0a416a7b44d41165ee48c
Parents: cdda1d0
Author: Karan Mehta 
Authored: Fri Apr 28 14:08:04 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 28 14:08:04 2017 +0530

--
 hbase-shell/src/main/ruby/shell.rb  |  1 +
 .../main/ruby/shell/commands/list_regions.rb| 76 
 2 files changed, 77 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3765e7be/hbase-shell/src/main/ruby/shell.rb
--
diff --git a/hbase-shell/src/main/ruby/shell.rb 
b/hbase-shell/src/main/ruby/shell.rb
index 9576cc7..99adf73 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -272,6 +272,7 @@ Shell.load_command_group(
 alter_async
 get_table
 locate_region
+list_regions
   ],
   :aliases => {
 'describe' => ['desc']

http://git-wip-us.apache.org/repos/asf/hbase/blob/3765e7be/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
new file mode 100644
index 000..527a6cb
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
@@ -0,0 +1,76 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+class ListRegions < Command
+  def help
+
+return <<-EOF
+hbase> list_regions 'table_name'
+hbase> list_regions 'table_name', 'server_name'
+
+EOF
+return
+  end
+
+  def command(table_name, region_server_name = "")
+admin_instance = admin.instance_variable_get("@admin")
+conn_instance = admin_instance.getConnection()
+cluster_status = admin_instance.getClusterStatus()
+hregion_locator_instance = 
conn_instance.getRegionLocator(TableName.valueOf(table_name))
+hregion_locator_list = hregion_locator_instance.getAllRegionLocations()
+results = Array.new
+
+begin
+  hregion_locator_list.each do |hregion|
+hregion_info = hregion.getRegionInfo()
+server_name = hregion.getServerName()
+if hregion.getServerName().toString.start_with? region_server_name
+  startKey = Bytes.toString(hregion.getRegionInfo().getStartKey())
+  endKey = Bytes.toString(hregion.getRegionInfo().getEndKey())
+  region_load_map = 
cluster_status.getLoad(server_name).getRegionsLoad()
+  region_load = region_load_map.get(hregion_info.getRegionName())
+  region_store_file_size = region_load.getStorefileSizeMB()
+  region_requests = region_load.getRequestsCount()
+  results << { "server" => hregion.getServerName().toString(), 
"name" => hregion_info.getRegionNameAsString(), "startkey" => startKey, 
"endkey" => endKey, "size" => region_store_file_size, "requests" => 
region_requests }
+end
+  end
+ensure
+  hregion_locator_instance.close()
+end
+
+@end_time = Time.now
+
+printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", "SERVER_NAME", 
"REGION_NAME", "START_KEY", "END_KEY", "SIZE", "REQ");
+printf("\n")
+for result in results
+  printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", 
result["server"], result["name"], result["startkey"], result["endkey"], 
result["size"], result["requests"]);
+printf("\n")
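The command renders its output with paired `printf` calls: one format string prints the header, and the same string prints every row, so the columns line up. A minimal sketch of that layout idea (not the HBase code; `region_table` and the sample row are mine, and it returns a string rather than printing so the result can be inspected):

```ruby
# Build a fixed-width table in the style of list_regions' printf calls.
# Header and rows share one format string, which keeps columns aligned.
def region_table(rows)
  fmt = "%-25s | %-30s | %-10s | %-10s | %-6s | %-6s"
  lines = [format(fmt, "SERVER_NAME", "REGION_NAME", "START_KEY", "END_KEY", "SIZE", "REQ")]
  rows.each do |r|
    lines << format(fmt, r["server"], r["name"], r["startkey"], r["endkey"],
                    r["size"], r["requests"])
  end
  lines.join("\n")
end

sample = [{ "server" => "rs1.example.com,16020", "name" => "t1,,1.deadbeef.",
            "startkey" => "", "endkey" => "row-500", "size" => 12, "requests" => 340 }]
puts region_table(sample)
```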
+ 

hbase git commit: HBase-14925 Develop HBase shell command/tool to list table's region info through command line

2017-04-28 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master c4cbb419a -> 68b2e0f7d


HBase-14925 Develop HBase shell command/tool to list table's region info 
through command line

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/68b2e0f7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/68b2e0f7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/68b2e0f7

Branch: refs/heads/master
Commit: 68b2e0f7d94c02aa82ac89f2ec2f052bdcd58704
Parents: c4cbb41
Author: Karan Mehta 
Authored: Fri Apr 28 14:06:03 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 28 14:06:03 2017 +0530

--
 hbase-shell/src/main/ruby/shell.rb  |  1 +
 .../main/ruby/shell/commands/list_regions.rb| 76 
 2 files changed, 77 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/68b2e0f7/hbase-shell/src/main/ruby/shell.rb
--
diff --git a/hbase-shell/src/main/ruby/shell.rb 
b/hbase-shell/src/main/ruby/shell.rb
index fc55f94..a6aba76 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -285,6 +285,7 @@ Shell.load_command_group(
 alter_async
 get_table
 locate_region
+list_regions
   ],
   :aliases => {
 'describe' => ['desc']

http://git-wip-us.apache.org/repos/asf/hbase/blob/68b2e0f7/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
--
diff --git a/hbase-shell/src/main/ruby/shell/commands/list_regions.rb 
b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
new file mode 100644
index 000..527a6cb
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/list_regions.rb
@@ -0,0 +1,76 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+class ListRegions < Command
+  def help
+
+return <<-EOF
+hbase> list_regions 'table_name'
+hbase> list_regions 'table_name', 'server_name'
+
+EOF
+return
+  end
+
+  def command(table_name, region_server_name = "")
+admin_instance = admin.instance_variable_get("@admin")
+conn_instance = admin_instance.getConnection()
+cluster_status = admin_instance.getClusterStatus()
+hregion_locator_instance = 
conn_instance.getRegionLocator(TableName.valueOf(table_name))
+hregion_locator_list = hregion_locator_instance.getAllRegionLocations()
+results = Array.new
+
+begin
+  hregion_locator_list.each do |hregion|
+hregion_info = hregion.getRegionInfo()
+server_name = hregion.getServerName()
+if hregion.getServerName().toString.start_with? region_server_name
+  startKey = Bytes.toString(hregion.getRegionInfo().getStartKey())
+  endKey = Bytes.toString(hregion.getRegionInfo().getEndKey())
+  region_load_map = 
cluster_status.getLoad(server_name).getRegionsLoad()
+  region_load = region_load_map.get(hregion_info.getRegionName())
+  region_store_file_size = region_load.getStorefileSizeMB()
+  region_requests = region_load.getRequestsCount()
+  results << { "server" => hregion.getServerName().toString(), 
"name" => hregion_info.getRegionNameAsString(), "startkey" => startKey, 
"endkey" => endKey, "size" => region_store_file_size, "requests" => 
region_requests }
+end
+  end
+ensure
+  hregion_locator_instance.close()
+end
+
+@end_time = Time.now
+
+printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", "SERVER_NAME", 
"REGION_NAME", "START_KEY", "END_KEY", "SIZE", "REQ");
+printf("\n")
+for result in results
+  printf("%-60s | %-60s | %-15s | %-15s | %-20s | %-20s", 
result["server"], result["name"], result["startkey"], result["endkey"], 
result["size"], result["requests"]);
+printf("\n")
+

hbase git commit: HBASE-17290 Potential loss of data for replication of bulk loaded hfiles

2017-01-06 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 667c5eb3a -> e8e40d862


HBASE-17290 Potential loss of data for replication of bulk loaded hfiles


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e8e40d86
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e8e40d86
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e8e40d86

Branch: refs/heads/branch-1
Commit: e8e40d86258a2f19838f3aea55dafbbc1b860942
Parents: 667c5eb
Author: Ashish Singhi 
Authored: Fri Jan 6 16:57:52 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Jan 6 16:57:52 2017 +0530

--
 .../hbase/replication/ReplicationQueues.java|  6 +-
 .../replication/ReplicationQueuesZKImpl.java| 11 ++--
 .../hbase/regionserver/HRegionServer.java   |  4 ++
 .../regionserver/HFileReplicator.java   |  2 +-
 .../replication/regionserver/Replication.java   | 54 +++--
 .../regionserver/ReplicationObserver.java   | 63 
 .../regionserver/ReplicationSource.java | 11 ++--
 .../ReplicationSourceInterface.java |  6 +-
 .../regionserver/ReplicationSourceManager.java  |  4 +-
 .../cleaner/TestReplicationHFileCleaner.java|  9 +--
 .../replication/ReplicationSourceDummy.java |  3 +-
 .../replication/TestReplicationStateBasic.java  | 33 +-
 12 files changed, 138 insertions(+), 68 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e8e40d86/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
index 1b1c770..2409111 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
@@ -22,6 +22,7 @@ import java.util.List;
 import java.util.SortedMap;
 import java.util.SortedSet;
 
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.util.Pair;
 
@@ -151,10 +152,11 @@ public interface ReplicationQueues {
   /**
* Add new hfile references to the queue.
* @param peerId peer cluster id to which the hfiles need to be replicated
-   * @param files list of hfile references to be added
+   * @param pairs list of pairs of { HFile location in staging dir, HFile path 
in region dir which
+   *  will be added in the queue }
* @throws ReplicationException if fails to add a hfile reference
*/
-  void addHFileRefs(String peerId, List<String> files) throws ReplicationException;
+  void addHFileRefs(String peerId, List<Pair<Path, Path>> pairs) throws ReplicationException;
 
   /**
* Remove hfile references from the queue.

http://git-wip-us.apache.org/repos/asf/hbase/blob/e8e40d86/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
index a903159..a1bd829 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
@@ -30,6 +30,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
@@ -508,16 +509,18 @@ public class ReplicationQueuesZKImpl extends 
ReplicationStateZKBase implements R
   }
 
   @Override
-  public void addHFileRefs(String peerId, List<String> files) throws ReplicationException {
+  public void addHFileRefs(String peerId, List<Pair<Path, Path>> pairs)
+  throws ReplicationException {
 String peerZnode = ZKUtil.joinZNode(this.hfileRefsZNode, peerId);
 boolean debugEnabled = LOG.isDebugEnabled();
 if (debugEnabled) {
-  LOG.debug("Adding hfile references " + files + " in queue " + peerZnode);
+  LOG.debug("Adding hfile references " + pairs + " in queue " + peerZnode);
 }
 List<ZKUtilOp> listOfOps = new ArrayList<ZKUtilOp>();
-int size = files.size();
+int size = pairs.size();
 for (int i = 0; 
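The interface change above is the heart of HBASE-17290: each queued hfile reference now carries two candidate locations, its path in the bulk-load staging directory and its final path in the region directory, so the replication source can fall back to the second if the file has already been moved. A sketch of that idea (not HBase code; `HFileRefQueue` and its method names are mine):

```ruby
# Sketch: an in-memory hfile-reference queue keyed by peer id. Each entry
# is a [staging_path, region_dir_path] pair; a reader resolves each entry
# to whichever candidate location still exists.
class HFileRefQueue
  def initialize
    @refs = Hash.new { |h, k| h[k] = [] }
  end

  # pairs: array of [staging_path, region_dir_path] two-element arrays
  def add_hfile_refs(peer_id, pairs)
    unless pairs.all? { |p| p.is_a?(Array) && p.size == 2 }
      raise "pairs must be an array of [staging, region] path pairs"
    end
    @refs[peer_id].concat(pairs)
  end

  # exists: a callable that reports whether a path is still present.
  def resolve(peer_id, exists)
    @refs[peer_id].map { |staging, region| exists.call(staging) ? staging : region }
  end
end
```

Queuing only the staging path (as before this change) loses the reference once the hfile is moved into the region directory, which is exactly the data-loss window the commit title describes.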

hbase git commit: HBASE-17290 Potential loss of data for replication of bulk loaded hfiles

2017-01-06 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 629b04f44 -> 5f631b965


HBASE-17290 Potential loss of data for replication of bulk loaded hfiles


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5f631b96
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5f631b96
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5f631b96

Branch: refs/heads/master
Commit: 5f631b9653a4bf86a2bebed58abed747c04b704f
Parents: 629b04f
Author: Ashish Singhi 
Authored: Fri Jan 6 16:15:49 2017 +0530
Committer: Ashish Singhi 
Committed: Fri Jan 6 16:18:20 2017 +0530

--
 .../hbase/replication/ReplicationQueues.java|  6 +-
 .../replication/ReplicationQueuesZKImpl.java| 11 ++--
 .../TableBasedReplicationQueuesImpl.java|  4 +-
 .../hbase/regionserver/HRegionServer.java   |  4 ++
 .../regionserver/HFileReplicator.java   |  2 +-
 .../replication/regionserver/Replication.java   | 55 +++--
 .../regionserver/ReplicationObserver.java   | 62 
 .../regionserver/ReplicationSource.java | 11 ++--
 .../ReplicationSourceInterface.java |  6 +-
 .../regionserver/ReplicationSourceManager.java  |  4 +-
 .../cleaner/TestReplicationHFileCleaner.java|  9 +--
 .../replication/ReplicationSourceDummy.java |  3 +-
 .../replication/TestReplicationStateBasic.java  | 33 ++-
 13 files changed, 140 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5f631b96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
index 0ae27d0..be5a590 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.replication;
 import java.util.List;
 import java.util.SortedSet;
 
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.util.Pair;
 
@@ -144,10 +145,11 @@ public interface ReplicationQueues {
   /**
* Add new hfile references to the queue.
* @param peerId peer cluster id to which the hfiles need to be replicated
-   * @param files list of hfile references to be added
+   * @param pairs list of pairs of { HFile location in staging dir, HFile path 
in region dir which
+   *  will be added in the queue }
* @throws ReplicationException if fails to add a hfile reference
*/
-  void addHFileRefs(String peerId, List<String> files) throws ReplicationException;
+  void addHFileRefs(String peerId, List<Pair<Path, Path>> pairs) throws ReplicationException;
 
   /**
* Remove hfile references from the queue.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5f631b96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
index 7c548d9..1de1315 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
@@ -27,6 +27,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.Abortable;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.exceptions.DeserializationException;
@@ -319,16 +320,18 @@ public class ReplicationQueuesZKImpl extends 
ReplicationStateZKBase implements R
   }
 
   @Override
-  public void addHFileRefs(String peerId, List<String> files) throws ReplicationException {
+  public void addHFileRefs(String peerId, List<Pair<Path, Path>> pairs)
+  throws ReplicationException {
 String peerZnode = ZKUtil.joinZNode(this.hfileRefsZNode, peerId);
 boolean debugEnabled = LOG.isDebugEnabled();
 if (debugEnabled) {
-  LOG.debug("Adding hfile references " + files + " in queue " + peerZnode);
+  LOG.debug("Adding hfile references " + pairs + " in queue " + peerZnode);
 }
 List<ZKUtilOp> listOfOps = new ArrayList<ZKUtilOp>();
-int 

hbase git commit: HBASE-16302 age of last shipped op and age of last applied op should be histograms

2016-11-29 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 7b2673db1 -> b8da9f83c


HBASE-16302 age of last shipped op and age of last applied op should be 
histograms

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b8da9f83
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b8da9f83
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b8da9f83

Branch: refs/heads/branch-1
Commit: b8da9f83cbbaf8a1257e5abb1ac438b21ba5507e
Parents: 7b2673d
Author: Ashu Pachauri 
Authored: Tue Nov 29 13:54:28 2016 +0530
Committer: Ashish Singhi 
Committed: Tue Nov 29 13:54:28 2016 +0530

--
 .../regionserver/MetricsReplicationGlobalSourceSource.java  | 9 +
 .../regionserver/MetricsReplicationSinkSourceImpl.java  | 9 +
 .../regionserver/MetricsReplicationSourceSourceImpl.java| 9 +
 .../org/apache/hadoop/metrics2/lib/MutableHistogram.java| 4 
 .../hbase/replication/regionserver/MetricsSource.java   | 2 +-
 5 files changed, 20 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b8da9f83/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
--
diff --git 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
index 0a67663..7a34e45 100644
--- 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
+++ 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
@@ -20,11 +20,12 @@ package org.apache.hadoop.hbase.replication.regionserver;
 
 import org.apache.hadoop.metrics2.lib.MutableFastCounter;
 import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+import org.apache.hadoop.metrics2.lib.MutableHistogram;
 
 public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationSourceSource{
   private final MetricsReplicationSourceImpl rms;
 
-  private final MutableGaugeLong ageOfLastShippedOpGauge;
+  private final MutableHistogram ageOfLastShippedOpHist;
   private final MutableGaugeLong sizeOfLogQueueGauge;
   private final MutableFastCounter logReadInEditsCounter;
   private final MutableFastCounter logEditsFilteredCounter;
@@ -47,7 +48,7 @@ public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationS
   public MetricsReplicationGlobalSourceSource(MetricsReplicationSourceImpl 
rms) {
 this.rms = rms;
 
-ageOfLastShippedOpGauge = 
rms.getMetricsRegistry().getGauge(SOURCE_AGE_OF_LAST_SHIPPED_OP, 0L);
+ageOfLastShippedOpHist = 
rms.getMetricsRegistry().getHistogram(SOURCE_AGE_OF_LAST_SHIPPED_OP);
 
 sizeOfLogQueueGauge = 
rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_LOG_QUEUE, 0L);
 
@@ -80,7 +81,7 @@ public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationS
   }
 
   @Override public void setLastShippedAge(long age) {
-ageOfLastShippedOpGauge.set(age);
+ageOfLastShippedOpHist.add(age);
   }
 
   @Override public void incrSizeOfLogQueue(int size) {
@@ -137,7 +138,7 @@ public class MetricsReplicationGlobalSourceSource 
implements MetricsReplicationS
 
   @Override
   public long getLastShippedAge() {
-return ageOfLastShippedOpGauge.value();
+return ageOfLastShippedOpHist.getMax();
   }
 
   @Override public void incrHFilesShipped(long hfiles) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/b8da9f83/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
--
diff --git 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
index 540212a..74592d9 100644
--- 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
+++ 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
@@ -20,23 +20,24 @@ package org.apache.hadoop.hbase.replication.regionserver;
 
 import org.apache.hadoop.metrics2.lib.MutableFastCounter;
 import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+import 
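The diff above swaps a `MutableGaugeLong` for a `MutableHistogram` for the age-of-last-shipped-op metric. The motivation is that a gauge only remembers the last value written, so a latency spike between two metric scrapes vanishes, while a histogram keeps the distribution. A toy illustration of the difference (these `Gauge`/`Histogram` classes are mine, not the hadoop metrics2 ones):

```ruby
# Minimal stand-ins for a last-value gauge and a distribution histogram.
class Gauge
  def initialize; @value = 0; end
  def set(v); @value = v; end      # overwrite: earlier values are lost
  def value; @value; end
end

class Histogram
  def initialize; @samples = []; end
  def add(v); @samples << v; end   # accumulate: every sample is kept
  def max; @samples.max || 0; end
end

gauge = Gauge.new
hist  = Histogram.new
# Ages (ms) of three shipped ops between two metric scrapes:
[120, 9000, 80].each { |age| gauge.set(age); hist.add(age) }
# The gauge only remembers 80; the histogram still exposes the 9000 ms spike.
```

This also explains the `getLastShippedAge()` change in the diff: with a histogram backing the metric, the most conservative single number to report is the maximum.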

hbase git commit: HBASE-16302 age of last shipped op and age of last applied op should be histograms

2016-11-29 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 346e904a2 -> 7bcbac91a


HBASE-16302 age of last shipped op and age of last applied op should be 
histograms

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/7bcbac91
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/7bcbac91
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/7bcbac91

Branch: refs/heads/master
Commit: 7bcbac91a2385cd3009bcc277bb0f4d94084c926
Parents: 346e904
Author: Ashu Pachauri 
Authored: Tue Nov 29 13:51:32 2016 +0530
Committer: Ashish Singhi 
Committed: Tue Nov 29 13:51:32 2016 +0530

--
 .../regionserver/MetricsReplicationGlobalSourceSource.java  | 9 +
 .../regionserver/MetricsReplicationSinkSourceImpl.java  | 9 +
 .../regionserver/MetricsReplicationSourceSourceImpl.java| 9 +
 .../org/apache/hadoop/metrics2/lib/MutableHistogram.java| 4 
 .../hbase/replication/regionserver/MetricsSource.java   | 2 +-
 5 files changed, 20 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/7bcbac91/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
--
diff --git 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
index 0a67663..7a34e45 100644
--- 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
+++ 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java
@@ -20,11 +20,12 @@ package org.apache.hadoop.hbase.replication.regionserver;
 
 import org.apache.hadoop.metrics2.lib.MutableFastCounter;
 import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+import org.apache.hadoop.metrics2.lib.MutableHistogram;
 
 public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationSourceSource{
   private final MetricsReplicationSourceImpl rms;
 
-  private final MutableGaugeLong ageOfLastShippedOpGauge;
+  private final MutableHistogram ageOfLastShippedOpHist;
   private final MutableGaugeLong sizeOfLogQueueGauge;
   private final MutableFastCounter logReadInEditsCounter;
   private final MutableFastCounter logEditsFilteredCounter;
@@ -47,7 +48,7 @@ public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationS
   public MetricsReplicationGlobalSourceSource(MetricsReplicationSourceImpl 
rms) {
 this.rms = rms;
 
-ageOfLastShippedOpGauge = 
rms.getMetricsRegistry().getGauge(SOURCE_AGE_OF_LAST_SHIPPED_OP, 0L);
+ageOfLastShippedOpHist = 
rms.getMetricsRegistry().getHistogram(SOURCE_AGE_OF_LAST_SHIPPED_OP);
 
 sizeOfLogQueueGauge = 
rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_LOG_QUEUE, 0L);
 
@@ -80,7 +81,7 @@ public class MetricsReplicationGlobalSourceSource implements 
MetricsReplicationS
   }
 
   @Override public void setLastShippedAge(long age) {
-ageOfLastShippedOpGauge.set(age);
+ageOfLastShippedOpHist.add(age);
   }
 
   @Override public void incrSizeOfLogQueue(int size) {
@@ -137,7 +138,7 @@ public class MetricsReplicationGlobalSourceSource 
implements MetricsReplicationS
 
   @Override
   public long getLastShippedAge() {
-return ageOfLastShippedOpGauge.value();
+return ageOfLastShippedOpHist.getMax();
   }
 
   @Override public void incrHFilesShipped(long hfiles) {

http://git-wip-us.apache.org/repos/asf/hbase/blob/7bcbac91/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
--
diff --git 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
index 540212a..74592d9 100644
--- 
a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
+++ 
b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java
@@ -20,23 +20,24 @@ package org.apache.hadoop.hbase.replication.regionserver;
 
 import org.apache.hadoop.metrics2.lib.MutableFastCounter;
 import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
+import 

hbase git commit: HBASE-16910 Avoid NPE when starting StochasticLoadBalancer

2016-10-25 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 16823ff55 -> ae502a9d5


HBASE-16910 Avoid NPE when starting StochasticLoadBalancer

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ae502a9d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ae502a9d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ae502a9d

Branch: refs/heads/branch-1
Commit: ae502a9d5ce3dc5c4a485c3ff364d433bdf29a10
Parents: 16823ff
Author: Guanghao Zhang 
Authored: Tue Oct 25 11:58:41 2016 +0530
Committer: Ashish Singhi 
Committed: Tue Oct 25 11:58:41 2016 +0530

--
 .../src/main/java/org/apache/hadoop/hbase/master/HMaster.java  | 2 +-
 .../hadoop/hbase/master/balancer/StochasticLoadBalancer.java   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ae502a9d/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index e079b3b..ba067e7 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -774,8 +774,8 @@ public class HMaster extends HRegionServer implements 
MasterServices, Server {
 }
 
 //initialize load balancer
-this.balancer.setClusterStatus(getClusterStatus());
 this.balancer.setMasterServices(this);
+this.balancer.setClusterStatus(getClusterStatus());
 this.balancer.initialize();
 
 // Check if master is shutting down because of some issue
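The two-line HMaster change above is purely an ordering fix: `setClusterStatus` on the `StochasticLoadBalancer` touches state that is only available once `setMasterServices` has run, so calling them in the old order could dereference null. A sketch of that initialization-order dependency (not HBase code; this `Balancer` class and its checks are mine):

```ruby
# Sketch of an object whose setters must run in a fixed order:
# set_cluster_status depends on state installed by set_master_services.
class Balancer
  def set_master_services(services)
    @services = services
  end

  def set_cluster_status(status)
    # In the Java original this dereference is what raised the NPE.
    raise "NullPointerException: master services not set" if @services.nil?
    @status = status
  end
end

b = Balancer.new
b.set_master_services(:master_services)  # must come first (the HBASE-16910 fix)
b.set_cluster_status(:cluster_status)    # safe now
```

When two setters have a hidden dependency like this, the safer long-term fix is to take both values in the constructor or to lazily fetch the status; reordering the calls is the minimal patch the commit chose.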

http://git-wip-us.apache.org/repos/asf/hbase/blob/ae502a9d/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
index d497d42..7d7dc8e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
@@ -232,7 +232,7 @@ public class StochasticLoadBalancer extends BaseLoadBalancer {
 
   updateMetricsSize(tablesCount * (functionsCount + 1)); // +1 for overall
 } catch (Exception e) {
-  LOG.error("failed to get the size of all tables, exception = " + e.getMessage());
+  LOG.error("failed to get the size of all tables", e);
 }
   }
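The reordering above matters because `setClusterStatus()` relies on collaborators that `setMasterServices()` installs, so calling it first can dereference a null field. A minimal, self-contained Java sketch of the same initialization-order hazard (the `Balancer` class and its fields are illustrative, not HBase's actual types):

```java
// Illustrative sketch of the initialization-order bug fixed above: the
// status setter uses a collaborator installed by the services setter, so
// the services setter must run first. Names are hypothetical, not HBase's.
final class Balancer {
    private Object services; // installed by setMasterServices()

    void setMasterServices(Object services) {
        this.services = services;
    }

    void setClusterStatus(Object status) {
        // Dereferences 'services': throws NPE if setMasterServices()
        // was not called first.
        services.toString();
    }
}

public class InitOrder {
    public static void main(String[] args) {
        Balancer bad = new Balancer();
        boolean npe = false;
        try {
            bad.setClusterStatus(new Object()); // wrong order, as before the fix
        } catch (NullPointerException e) {
            npe = true;
        }
        Balancer good = new Balancer();
        good.setMasterServices(new Object()); // fixed order
        good.setClusterStatus(new Object());
        System.out.println(npe ? "NPE in wrong order; fixed order OK" : "unexpected");
    }
}
```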
 



hbase git commit: HBASE-16724 Snapshot owner can't clone

2016-10-15 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 05b010cac -> b7f283c6f


HBASE-16724 Snapshot owner can't clone

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b7f283c6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b7f283c6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b7f283c6

Branch: refs/heads/branch-1
Commit: b7f283c6f6728238bb553c80aa6eafce0df0d650
Parents: 05b010c
Author: Pankaj Kumar 
Authored: Sat Oct 15 11:57:00 2016 +0530
Committer: Ashish Singhi 
Committed: Sat Oct 15 11:57:00 2016 +0530

--
 .../hadoop/hbase/security/access/AccessController.java   | 11 ++-
 .../hbase/security/access/TestAccessController.java  | 10 --
 src/main/asciidoc/_chapters/appendix_acl_matrix.adoc |  2 +-
 3 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b7f283c6/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
index 2152440..7be4540 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
@@ -1342,7 +1342,16 @@ public class AccessController extends BaseMasterAndRegionObserver
  public void preCloneSnapshot(final ObserverContext ctx,
  final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor)
   throws IOException {
-requirePermission("cloneSnapshot " + snapshot.getName(), Action.ADMIN);
+User user = getActiveUser();
+if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)
+&& hTableDescriptor.getNameAsString().equals(snapshot.getTable())) {
+  // Snapshot owner is allowed to create a table with the same name as the snapshot he took
+  AuthResult result = AuthResult.allow("cloneSnapshot " + snapshot.getName(),
+    "Snapshot owner check allowed", user, null, hTableDescriptor.getTableName(), null);
+  logResult(result);
+} else {
+  requirePermission("cloneSnapshot " + snapshot.getName(), Action.ADMIN);
+}
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hbase/blob/b7f283c6/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
index 221241e..79d65cd 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
@@ -2122,15 +2122,13 @@ public class TestAccessController extends SecureTestUtil {
   @Override
   public Object run() throws Exception {
 
ACCESS_CONTROLLER.preCloneSnapshot(ObserverContext.createAndPrepare(CP_ENV, null),
-  snapshot, null);
+  snapshot, htd);
 return null;
   }
 };
-// Clone by snapshot owner is not allowed , because clone operation creates a new table,
-// which needs global admin permission.
-verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN, USER_GROUP_ADMIN);
-verifyDenied(cloneAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER,
-  USER_GROUP_READ, USER_GROUP_WRITE, USER_GROUP_CREATE);
+verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN, USER_GROUP_ADMIN, USER_OWNER);
+verifyDenied(cloneAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_GROUP_READ,
+  USER_GROUP_WRITE, USER_GROUP_CREATE);
   }
 
   @Test (timeout=18)

http://git-wip-us.apache.org/repos/asf/hbase/blob/b7f283c6/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
--
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index cb285f3..adc2b1f 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -100,7 +100,7 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 || stopMaster | superuser\|global(A)
 || snapshot | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
 || 
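The check added above grants the clone when two conditions hold: the caller owns the snapshot, and the clone target table name equals the snapshot's source table; any other clone still requires global admin permission. A minimal sketch of that decision logic in plain Java (the `Snapshot` class and `ownerAllowedToClone` helper are hypothetical stand-ins for HBase's SnapshotDescription and ACL machinery):

```java
// Sketch of the ownership check introduced by HBASE-16724. Types here are
// hypothetical; HBase's real check lives in AccessController.preCloneSnapshot.
final class Snapshot {
    final String name;
    final String table;
    final String owner;

    Snapshot(String name, String table, String owner) {
        this.name = name;
        this.table = table;
        this.owner = owner;
    }
}

final class CloneAcl {
    // Owner may clone only back to a table with the snapshot's original name;
    // any other clone target still needs global admin permission.
    static boolean ownerAllowedToClone(Snapshot s, String user, String targetTable) {
        return user.equals(s.owner) && targetTable.equals(s.table);
    }
}

public class CloneDemo {
    public static void main(String[] args) {
        Snapshot s = new Snapshot("snap1", "t1", "alice");
        System.out.println(CloneAcl.ownerAllowedToClone(s, "alice", "t1")); // true
        System.out.println(CloneAcl.ownerAllowedToClone(s, "alice", "t2")); // false
        System.out.println(CloneAcl.ownerAllowedToClone(s, "bob", "t1"));   // false
    }
}
```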

hbase git commit: HBASE-16724 Snapshot owner can't clone

2016-10-13 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 90d83d5b3 -> c9c67d1a9


HBASE-16724 Snapshot owner can't clone

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c9c67d1a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c9c67d1a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c9c67d1a

Branch: refs/heads/master
Commit: c9c67d1a946d19bed96c92f2ff2142ac15770696
Parents: 90d83d5
Author: Pankaj Kumar 
Authored: Thu Oct 13 17:20:52 2016 +0530
Committer: Ashish Singhi 
Committed: Thu Oct 13 17:20:52 2016 +0530

--
 .../hadoop/hbase/security/access/AccessController.java   | 11 ++-
 .../hbase/security/access/TestAccessController.java  | 10 --
 src/main/asciidoc/_chapters/appendix_acl_matrix.adoc |  2 +-
 3 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c9c67d1a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
index d8e61a4..3fc2ef5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
@@ -1341,7 +1341,16 @@ public class AccessController extends BaseMasterAndRegionObserver
  public void preCloneSnapshot(final ObserverContext ctx,
  final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor)
   throws IOException {
-requirePermission(getActiveUser(ctx), "cloneSnapshot " + snapshot.getName(), Action.ADMIN);
+User user = getActiveUser(ctx);
+if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)
+&& hTableDescriptor.getNameAsString().equals(snapshot.getTable())) {
+  // Snapshot owner is allowed to create a table with the same name as the snapshot he took
+  AuthResult result = AuthResult.allow("cloneSnapshot " + snapshot.getName(),
+    "Snapshot owner check allowed", user, null, hTableDescriptor.getTableName(), null);
+  logResult(result);
+} else {
+  requirePermission(user, "cloneSnapshot " + snapshot.getName(), Action.ADMIN);
+}
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hbase/blob/c9c67d1a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
index 9ba0d0e..ef44693 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
@@ -2124,15 +2124,13 @@ public class TestAccessController extends SecureTestUtil {
   @Override
   public Object run() throws Exception {
 
ACCESS_CONTROLLER.preCloneSnapshot(ObserverContext.createAndPrepare(CP_ENV, null),
-  snapshot, null);
+  snapshot, htd);
 return null;
   }
 };
-// Clone by snapshot owner is not allowed , because clone operation creates a new table,
-// which needs global admin permission.
-verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN, USER_GROUP_ADMIN);
-verifyDenied(cloneAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER,
-  USER_GROUP_READ, USER_GROUP_WRITE, USER_GROUP_CREATE);
+verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN, USER_GROUP_ADMIN, USER_OWNER);
+verifyDenied(cloneAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_GROUP_READ,
+  USER_GROUP_WRITE, USER_GROUP_CREATE);
   }
 
   @Test (timeout=18)

http://git-wip-us.apache.org/repos/asf/hbase/blob/c9c67d1a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
--
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index 698ae82..e222875 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -100,7 +100,7 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 || stopMaster | superuser\|global(A)
 || snapshot | 

hbase git commit: HBASE-16723 RMI registry is not destroyed after stopping JMX Connector Server

2016-10-07 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 c7007ac57 -> 017bc3337


HBASE-16723 RMI registry is not destroyed after stopping JMX Connector Server

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/017bc333
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/017bc333
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/017bc333

Branch: refs/heads/branch-1.2
Commit: 017bc3337eb8c1ab56c388b7302de231cafba6f7
Parents: c7007ac
Author: Pankaj Kumar 
Authored: Fri Oct 7 12:13:48 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Oct 7 12:13:48 2016 +0530

--
 .../java/org/apache/hadoop/hbase/JMXListener.java  | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/017bc333/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
index 2872cfa..9265fb8 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
@@ -27,8 +27,10 @@ import org.apache.hadoop.hbase.coprocessor.*;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.rmi.registry.LocateRegistry;
+import java.rmi.registry.Registry;
 import java.rmi.server.RMIClientSocketFactory;
 import java.rmi.server.RMIServerSocketFactory;
+import java.rmi.server.UnicastRemoteObject;
 import java.util.HashMap;
 
 import javax.management.MBeanServer;
@@ -36,8 +38,6 @@ import javax.management.remote.JMXConnectorServer;
 import javax.management.remote.JMXConnectorServerFactory;
 import javax.management.remote.JMXServiceURL;
 import javax.management.remote.rmi.RMIConnectorServer;
-import javax.rmi.ssl.SslRMIClientSocketFactory;
-import javax.rmi.ssl.SslRMIServerSocketFactory;
 
 /**
  * Pluggable JMX Agent for HBase(to fix the 2 random TCP ports issue
@@ -61,6 +61,7 @@ public class JMXListener implements Coprocessor {
* we only load regionserver coprocessor on master
*/
   private static JMXConnectorServer JMX_CS = null;
+  private Registry rmiRegistry = null;
 
   public static JMXServiceURL buildJMXServiceURL(int rmiRegistryPort,
   int rmiConnectorPort) throws IOException {
@@ -128,7 +129,7 @@ public class JMXListener implements Coprocessor {
 }
 
 // Create the RMI registry
-LocateRegistry.createRegistry(rmiRegistryPort);
+rmiRegistry = LocateRegistry.createRegistry(rmiRegistryPort);
 // Retrieve the PlatformMBeanServer.
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 
@@ -147,17 +148,25 @@ public class JMXListener implements Coprocessor {
   LOG.info("ConnectorServer started!");
 } catch (IOException e) {
   LOG.error("fail to start connector server!", e);
+  // deregister the RMI registry
+  if (rmiRegistry != null) {
+UnicastRemoteObject.unexportObject(rmiRegistry, true);
+  }
 }
 
   }
 
   public void stopConnectorServer() throws IOException {
-synchronized(JMXListener.class) {
+synchronized (JMXListener.class) {
   if (JMX_CS != null) {
 JMX_CS.stop();
 LOG.info("ConnectorServer stopped!");
 JMX_CS = null;
   }
+  // deregister the RMI registry
+  if (rmiRegistry != null) {
+UnicastRemoteObject.unexportObject(rmiRegistry, true);
+  }
 }
   }
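The fix above keeps a reference to the `Registry` returned by `LocateRegistry.createRegistry(...)` so it can later be force-unexported; without that, the exported registry can keep a non-daemon RMI transport thread (and its port) alive after the connector server stops. A standalone sketch of the same create/unexport lifecycle (port 0 asks RMI for an anonymous port, so this example does not assume a free fixed registry port):

```java
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RegistryLifecycle {
    public static void main(String[] args) throws Exception {
        // Create (and export) an RMI registry; keep the reference, as the fix does.
        Registry rmiRegistry = LocateRegistry.createRegistry(0);

        // ... a JMXConnectorServer would be started against this registry here ...

        // Force-unexport the registry so its transport thread and port are released,
        // mirroring the stopConnectorServer() change above.
        boolean released = UnicastRemoteObject.unexportObject(rmiRegistry, true);
        System.out.println("registry unexported: " + released);
    }
}
```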
 



hbase git commit: HBASE-16723 RMI registry is not destroyed after stopping JMX Connector Server

2016-10-07 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 e8f0ccc81 -> a52188f97


HBASE-16723 RMI registry is not destroyed after stopping JMX Connector Server

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a52188f9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a52188f9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a52188f9

Branch: refs/heads/branch-1.3
Commit: a52188f97e7e32fc79b35d7155a44a5ad31bbd6f
Parents: e8f0ccc
Author: Pankaj Kumar 
Authored: Fri Oct 7 12:08:50 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Oct 7 12:08:50 2016 +0530

--
 .../java/org/apache/hadoop/hbase/JMXListener.java  | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a52188f9/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
index 2872cfa..9265fb8 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
@@ -27,8 +27,10 @@ import org.apache.hadoop.hbase.coprocessor.*;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.rmi.registry.LocateRegistry;
+import java.rmi.registry.Registry;
 import java.rmi.server.RMIClientSocketFactory;
 import java.rmi.server.RMIServerSocketFactory;
+import java.rmi.server.UnicastRemoteObject;
 import java.util.HashMap;
 
 import javax.management.MBeanServer;
@@ -36,8 +38,6 @@ import javax.management.remote.JMXConnectorServer;
 import javax.management.remote.JMXConnectorServerFactory;
 import javax.management.remote.JMXServiceURL;
 import javax.management.remote.rmi.RMIConnectorServer;
-import javax.rmi.ssl.SslRMIClientSocketFactory;
-import javax.rmi.ssl.SslRMIServerSocketFactory;
 
 /**
  * Pluggable JMX Agent for HBase(to fix the 2 random TCP ports issue
@@ -61,6 +61,7 @@ public class JMXListener implements Coprocessor {
* we only load regionserver coprocessor on master
*/
   private static JMXConnectorServer JMX_CS = null;
+  private Registry rmiRegistry = null;
 
   public static JMXServiceURL buildJMXServiceURL(int rmiRegistryPort,
   int rmiConnectorPort) throws IOException {
@@ -128,7 +129,7 @@ public class JMXListener implements Coprocessor {
 }
 
 // Create the RMI registry
-LocateRegistry.createRegistry(rmiRegistryPort);
+rmiRegistry = LocateRegistry.createRegistry(rmiRegistryPort);
 // Retrieve the PlatformMBeanServer.
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 
@@ -147,17 +148,25 @@ public class JMXListener implements Coprocessor {
   LOG.info("ConnectorServer started!");
 } catch (IOException e) {
   LOG.error("fail to start connector server!", e);
+  // deregister the RMI registry
+  if (rmiRegistry != null) {
+UnicastRemoteObject.unexportObject(rmiRegistry, true);
+  }
 }
 
   }
 
   public void stopConnectorServer() throws IOException {
-synchronized(JMXListener.class) {
+synchronized (JMXListener.class) {
   if (JMX_CS != null) {
 JMX_CS.stop();
 LOG.info("ConnectorServer stopped!");
 JMX_CS = null;
   }
+  // deregister the RMI registry
+  if (rmiRegistry != null) {
+UnicastRemoteObject.unexportObject(rmiRegistry, true);
+  }
 }
   }
 



hbase git commit: HBASE-16880 Reduce garbage in BufferChain (binlijin)

2016-09-22 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 2ff2c0ba6 -> ce493642c


HBASE-16880 Reduce garbage in BufferChain (binlijin)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ce493642
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ce493642
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ce493642

Branch: refs/heads/master
Commit: ce493642c0e295a08701cdcfe3ddc6755cdd7718
Parents: 2ff2c0b
Author: Ashish Singhi 
Authored: Thu Sep 22 13:59:18 2016 +0530
Committer: Ashish Singhi 
Committed: Thu Sep 22 13:59:18 2016 +0530

--
 .../apache/hadoop/hbase/ipc/BufferChain.java| 17 ++--
 .../org/apache/hadoop/hbase/ipc/RpcServer.java  | 29 ++--
 2 files changed, 23 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ce493642/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
index 7adc94d..26bc56c 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
@@ -20,8 +20,6 @@ package org.apache.hadoop.hbase.ipc;
 import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.channels.GatheringByteChannel;
-import java.util.ArrayList;
-import java.util.List;
 
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 
@@ -35,22 +33,11 @@ class BufferChain {
   private int remaining = 0;
   private int bufferOffset = 0;
 
-  BufferChain(ByteBuffer ... buffers) {
-// Some of the incoming buffers can be null
-List bbs = new ArrayList(buffers.length);
+  BufferChain(ByteBuffer[] buffers) {
 for (ByteBuffer b : buffers) {
-  if (b == null) continue;
-  bbs.add(b);
   this.remaining += b.remaining();
 }
-this.buffers = bbs.toArray(new ByteBuffer[bbs.size()]);
-  }
-
-  BufferChain(List buffers) {
-for (ByteBuffer b : buffers) {
-  this.remaining += b.remaining();
-}
-this.buffers = buffers.toArray(new ByteBuffer[buffers.size()]);
+this.buffers = buffers;
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hbase/blob/ce493642/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
index 12c21d9..0dbaf04 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
@@ -411,7 +411,9 @@ public class RpcServer implements RpcServerInterface, ConfigurationObserver {
 }
 
 protected synchronized void setSaslTokenResponse(ByteBuffer response) {
-  this.response = new BufferChain(response);
+  ByteBuffer[] responseBufs = new ByteBuffer[1];
+  responseBufs[0] = response;
+  this.response = new BufferChain(responseBufs);
 }
 
 protected synchronized void setResponse(Object m, final CellScanner cells,
@@ -458,10 +460,20 @@ public class RpcServer implements RpcServerInterface, ConfigurationObserver {
 }
 Message header = headerBuilder.build();
 byte[] b = createHeaderAndMessageBytes(result, header, cellBlockSize);
-List responseBufs = new ArrayList((cellBlock == null ? 1 : cellBlock.size()) + 1);
-responseBufs.add(ByteBuffer.wrap(b));
-if (cellBlock != null) responseBufs.addAll(cellBlock);
+ByteBuffer[] responseBufs = null;
+int cellBlockBufferSize = 0;
+if (cellBlock != null) {
+  cellBlockBufferSize = cellBlock.size();
+  responseBufs = new ByteBuffer[1 + cellBlockBufferSize];
+} else {
+  responseBufs = new ByteBuffer[1];
+}
+responseBufs[0] = ByteBuffer.wrap(b);
+if (cellBlock != null) {
+  for (int i = 0; i < cellBlockBufferSize; i++) {
+responseBufs[i + 1] = cellBlock.get(i);
+  }
+}
 bc = new BufferChain(responseBufs);
 if (connection.useWrap) {
   bc = wrapWithSasl(bc);
@@ -555,9 +567,10 @@ public class RpcServer implements RpcServerInterface, ConfigurationObserver {
 + " as call response.");
   }
 
-  ByteBuffer bbTokenLength = ByteBuffer.wrap(Bytes.toBytes(token.length));
-  ByteBuffer bbTokenBytes = ByteBuffer.wrap(token);
-  return new BufferChain(bbTokenLength, bbTokenBytes);
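The change above replaces the varargs and `List` constructors with a single array-taking constructor, so callers pre-size a `ByteBuffer[]` (header first, then cell-block buffers) and no intermediate `ArrayList` or null-filtering pass is allocated per RPC response. A self-contained sketch of that array-backed chain (simplified: HBase's real `BufferChain` also tracks write progress for gathering socket writes):

```java
import java.nio.ByteBuffer;

// Simplified sketch of the array-backed BufferChain after HBASE-16880: the
// caller builds the ByteBuffer[] directly, so no temporary List is allocated
// on the response path.
final class BufferChainSketch {
    private final ByteBuffer[] buffers;
    private int remaining = 0;

    BufferChainSketch(ByteBuffer[] buffers) {
        // Sum the readable bytes across the chain, as the real class does.
        for (ByteBuffer b : buffers) {
            this.remaining += b.remaining();
        }
        this.buffers = buffers;
    }

    int remaining() {
        return remaining;
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        ByteBuffer header = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer cells = ByteBuffer.wrap(new byte[] {4, 5});
        // Pre-sized array, as in the patched RpcServer.setResponse().
        ByteBuffer[] bufs = new ByteBuffer[] {header, cells};
        System.out.println(new BufferChainSketch(bufs).remaining()); // 5
    }
}
```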

hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/0.98 7ca58503b -> 02badbdf9


HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/02badbdf
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/02badbdf
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/02badbdf

Branch: refs/heads/0.98
Commit: 02badbdf9d069f515d3198c799d5575539bd0459
Parents: 7ca5850
Author: Ashish Singhi 
Authored: Wed Aug 24 19:07:47 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 19:07:47 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 26 
 2 files changed, 28 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/02badbdf/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index 4ad9f33..66c477b 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
* @return The Camel Cased context name.
*/
   protected static String createContextName(String serverName) {
-if (serverName.contains("HMaster")) {
+if (serverName.startsWith("HMaster")) {
   return "Master";
-} else if (serverName.contains("HRegion")) {
+} else if (serverName.startsWith("HRegion")) {
   return "RegionServer";
 }
 return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/02badbdf/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index e81f47a..6f3f732 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -136,5 +136,31 @@ public class TestRpcMetrics {
 HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+String[] masterServerNames =
+{ "HMaster/node-xyz/10.19.250.253:16020", "HMaster/node-HRegion-xyz/10.19.250.253:16020" };
+
+String[] regionServerNames = { "HRegionserver/node-xyz/10.19.250.253:16020",
+"HRegionserver/node-HMaster1-xyz/10.19.250.253:16020" };
+
+MetricsHBaseServerSource masterSource = null;
+for (String serverName : masterServerNames) {
+  masterSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("master", masterSource.getMetricsContext());
+  assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+  assertEquals("IPC", masterSource.getMetricsName());
+}
+
+MetricsHBaseServerSource rsSource = null;
+for (String serverName : regionServerNames) {
+  rsSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("regionserver", rsSource.getMetricsContext());
+  assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+  assertEquals("IPC", rsSource.getMetricsName());
+}
+  }
 }
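The one-line fix above switches `contains` to `startsWith`: the server name begins with the process name (`HMaster/...` or `HRegionserver/...`), while the hostname portion in the middle may legitimately contain "master" or "HRegion", as the new test's fixture names show. A minimal sketch of the corrected matching, with the buggy variant kept alongside for contrast (mirrors `createContextName`; the helper class name is illustrative):

```java
// Sketch of HBASE-16471: classify a metrics context from a server name of the
// form "<processName>/<hostname>/<ip:port>". Matching with contains() misfires
// when the hostname itself embeds "HMaster"/"HRegion"; startsWith() does not.
final class ContextName {
    // Fixed behavior: only the leading process name decides the context.
    static String createContextName(String serverName) {
        if (serverName.startsWith("HMaster")) {
            return "Master";
        } else if (serverName.startsWith("HRegion")) {
            return "RegionServer";
        }
        return "IPC";
    }

    // Pre-fix behavior, shown for contrast: any substring match wins.
    static String buggyContextName(String serverName) {
        if (serverName.contains("HMaster")) {
            return "Master";
        } else if (serverName.contains("HRegion")) {
            return "RegionServer";
        }
        return "IPC";
    }
}

public class ContextDemo {
    public static void main(String[] args) {
        String rs = "HRegionserver/node-HMaster1-xyz/10.19.250.253:16020";
        System.out.println(ContextName.buggyContextName(rs));  // "Master" (wrong)
        System.out.println(ContextName.createContextName(rs)); // "RegionServer"
    }
}
```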
 



hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.1 ecedea0b0 -> 43c93d4c6


HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/43c93d4c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/43c93d4c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/43c93d4c

Branch: refs/heads/branch-1.1
Commit: 43c93d4c69a490723f176c1e1e4aed181cc6ef85
Parents: ecedea0
Author: Ashish Singhi 
Authored: Wed Aug 24 18:59:44 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 19:05:18 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 29 
 2 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/43c93d4c/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index d6b1392..e9a3348 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
* @return The Camel Cased context name.
*/
   protected static String createContextName(String serverName) {
-if (serverName.contains("HMaster") || serverName.contains("master")) {
+if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
   return "Master";
-} else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
+} else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
   return "RegionServer";
 }
 return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/43c93d4c/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index e33a0d7..dd8f226 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -132,5 +132,34 @@ public class TestRpcMetrics {
 HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+String[] masterServerNames = { "master/node-xyz/10.19.250.253:16020",
+"master/node-regionserver-xyz/10.19.250.253:16020", "HMaster/node-xyz/10.19.250.253:16020",
+"HMaster/node-regionserver-xyz/10.19.250.253:16020" };
+
+String[] regionServerNames = { "regionserver/node-xyz/10.19.250.253:16020",
+"regionserver/node-master1-xyz/10.19.250.253:16020",
+"HRegionserver/node-xyz/10.19.250.253:16020",
+"HRegionserver/node-master1-xyz/10.19.250.253:16020" };
+
+MetricsHBaseServerSource masterSource = null;
+for (String serverName : masterServerNames) {
+  masterSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("master", masterSource.getMetricsContext());
+  assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+  assertEquals("Master", masterSource.getMetricsName());
+}
+
+MetricsHBaseServerSource rsSource = null;
+for (String serverName : regionServerNames) {
+  rsSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("regionserver", rsSource.getMetricsContext());
+  assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+  assertEquals("RegionServer", rsSource.getMetricsName());
+}
+  }
 }
 



hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 46f3f5bf7 -> 4dcd7fb6d


HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/4dcd7fb6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/4dcd7fb6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/4dcd7fb6

Branch: refs/heads/branch-1.2
Commit: 4dcd7fb6d1ea670073ce80bad0dd6a372b5836f1
Parents: 46f3f5b
Author: Ashish Singhi 
Authored: Wed Aug 24 18:59:44 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 19:04:28 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 29 
 2 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/4dcd7fb6/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index d6b1392..e9a3348 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
* @return The Camel Cased context name.
*/
   protected static String createContextName(String serverName) {
-if (serverName.contains("HMaster") || serverName.contains("master")) {
+if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
   return "Master";
-} else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
+} else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
   return "RegionServer";
 }
 return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/4dcd7fb6/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index 52518f8..2b8bdd7 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -137,5 +137,34 @@ public class TestRpcMetrics {
 HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+String[] masterServerNames = { "master/node-xyz/10.19.250.253:16020",
+"master/node-regionserver-xyz/10.19.250.253:16020", 
"HMaster/node-xyz/10.19.250.253:16020",
+"HMaster/node-regionserver-xyz/10.19.250.253:16020" };
+
+String[] regionServerNames = { "regionserver/node-xyz/10.19.250.253:16020",
+"regionserver/node-master1-xyz/10.19.250.253:16020",
+"HRegionserver/node-xyz/10.19.250.253:16020",
+"HRegionserver/node-master1-xyz/10.19.250.253:16020" };
+
+MetricsHBaseServerSource masterSource = null;
+for (String serverName : masterServerNames) {
+  masterSource = new MetricsHBaseServer(serverName, new 
MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("master", masterSource.getMetricsContext());
+  assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+  assertEquals("Master", masterSource.getMetricsName());
+}
+
+MetricsHBaseServerSource rsSource = null;
+for (String serverName : regionServerNames) {
+  rsSource = new MetricsHBaseServer(serverName, new 
MetricsHBaseServerWrapperStub())
+  .getMetricsSource();
+  assertEquals("regionserver", rsSource.getMetricsContext());
+  assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+  assertEquals("RegionServer", rsSource.getMetricsName());
+}
+  }
 }
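Because the same one-line fix lands on several branches, the behavioral change is easy to miss in the diff noise. The sketch below is a hypothetical standalone reduction (the class wrapper and the `buggyContextName` name are mine; both method bodies mirror the `-`/`+` lines of the patch). It shows why `contains()` misclassifies a region server whose hostname merely contains the word "master", while the patched `startsWith()` check keys off the process-name prefix of `"<processName>/<hostname>/<ip>:<port>"`.

```java
public class ContextNameDemo {
    // Patched logic: match the process-name prefix of the server name.
    static String createContextName(String serverName) {
        if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
            return "Master";
        } else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
            return "RegionServer";
        }
        return "IPC";
    }

    // Pre-patch logic, kept for comparison: a substring match anywhere.
    static String buggyContextName(String serverName) {
        if (serverName.contains("HMaster") || serverName.contains("master")) {
            return "Master";
        } else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
            return "RegionServer";
        }
        return "IPC";
    }

    public static void main(String[] args) {
        // A region server on a host whose name contains "master".
        String rs = "regionserver/node-master1-xyz/10.19.250.253:16020";
        System.out.println(buggyContextName(rs));   // misclassified as Master
        System.out.println(createContextName(rs));  // correctly RegionServer
    }
}
```

The new `testServerContextNameWithHostName` test above exercises exactly these hostname shapes on both the master and region-server sides.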
 



hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 dae97549e -> 6b9a0b3b0


HBASE-16471 Region Server metrics context will be wrong when machine hostname 
contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6b9a0b3b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6b9a0b3b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6b9a0b3b

Branch: refs/heads/branch-1.3
Commit: 6b9a0b3b0259c360e33d6171e26898b166ad8863
Parents: dae9754
Author: Ashish Singhi 
Authored: Wed Aug 24 18:59:44 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 19:03:27 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 29 
 2 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6b9a0b3b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index d6b1392..e9a3348 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
    * @return The Camel Cased context name.
    */
   protected static String createContextName(String serverName) {
-    if (serverName.contains("HMaster") || serverName.contains("master")) {
+    if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
       return "Master";
-    } else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
+    } else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
       return "RegionServer";
     }
     return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/6b9a0b3b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index 52518f8..2b8bdd7 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -137,5 +137,34 @@ public class TestRpcMetrics {
     HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+    String[] masterServerNames = { "master/node-xyz/10.19.250.253:16020",
+        "master/node-regionserver-xyz/10.19.250.253:16020", "HMaster/node-xyz/10.19.250.253:16020",
+        "HMaster/node-regionserver-xyz/10.19.250.253:16020" };
+
+    String[] regionServerNames = { "regionserver/node-xyz/10.19.250.253:16020",
+        "regionserver/node-master1-xyz/10.19.250.253:16020",
+        "HRegionserver/node-xyz/10.19.250.253:16020",
+        "HRegionserver/node-master1-xyz/10.19.250.253:16020" };
+
+    MetricsHBaseServerSource masterSource = null;
+    for (String serverName : masterServerNames) {
+      masterSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("master", masterSource.getMetricsContext());
+      assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+      assertEquals("Master", masterSource.getMetricsName());
+    }
+
+    MetricsHBaseServerSource rsSource = null;
+    for (String serverName : regionServerNames) {
+      rsSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("regionserver", rsSource.getMetricsContext());
+      assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+      assertEquals("RegionServer", rsSource.getMetricsName());
+    }
+  }
 }
 



hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 c64c0e85c -> 3606b890f


HBASE-16471 Region Server metrics context will be wrong when machine hostname 
contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3606b890
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3606b890
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3606b890

Branch: refs/heads/branch-1
Commit: 3606b890f86a826462e42da62e6244515a1710c9
Parents: c64c0e8
Author: Ashish Singhi 
Authored: Wed Aug 24 18:59:44 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 19:01:58 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 29 
 2 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/3606b890/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index d6b1392..e9a3348 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
    * @return The Camel Cased context name.
    */
   protected static String createContextName(String serverName) {
-    if (serverName.contains("HMaster") || serverName.contains("master")) {
+    if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
       return "Master";
-    } else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
+    } else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
       return "RegionServer";
     }
     return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/3606b890/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index 52518f8..2b8bdd7 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -137,5 +137,34 @@ public class TestRpcMetrics {
     HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+    String[] masterServerNames = { "master/node-xyz/10.19.250.253:16020",
+        "master/node-regionserver-xyz/10.19.250.253:16020", "HMaster/node-xyz/10.19.250.253:16020",
+        "HMaster/node-regionserver-xyz/10.19.250.253:16020" };
+
+    String[] regionServerNames = { "regionserver/node-xyz/10.19.250.253:16020",
+        "regionserver/node-master1-xyz/10.19.250.253:16020",
+        "HRegionserver/node-xyz/10.19.250.253:16020",
+        "HRegionserver/node-master1-xyz/10.19.250.253:16020" };
+
+    MetricsHBaseServerSource masterSource = null;
+    for (String serverName : masterServerNames) {
+      masterSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("master", masterSource.getMetricsContext());
+      assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+      assertEquals("Master", masterSource.getMetricsName());
+    }
+
+    MetricsHBaseServerSource rsSource = null;
+    for (String serverName : regionServerNames) {
+      rsSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("regionserver", rsSource.getMetricsContext());
+      assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+      assertEquals("RegionServer", rsSource.getMetricsName());
+    }
+  }
 }
 



hbase git commit: HBASE-16471 Region Server metrics context will be wrong when machine hostname contain "master" word (Pankaj Kumar)

2016-08-24 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 1ca849269 -> 31f16d6ae


HBASE-16471 Region Server metrics context will be wrong when machine hostname 
contain "master" word (Pankaj Kumar)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/31f16d6a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/31f16d6a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/31f16d6a

Branch: refs/heads/master
Commit: 31f16d6aec7d09f0a4102ed7651db9ff0d190cf9
Parents: 1ca8492
Author: Ashish Singhi 
Authored: Wed Aug 24 18:59:44 2016 +0530
Committer: Ashish Singhi 
Committed: Wed Aug 24 18:59:44 2016 +0530

--
 .../ipc/MetricsHBaseServerSourceFactory.java|  4 +--
 .../apache/hadoop/hbase/ipc/TestRpcMetrics.java | 29 
 2 files changed, 31 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/31f16d6a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
--
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
index d6b1392..e9a3348 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceFactory.java
@@ -47,9 +47,9 @@ public abstract class MetricsHBaseServerSourceFactory {
    * @return The Camel Cased context name.
    */
   protected static String createContextName(String serverName) {
-    if (serverName.contains("HMaster") || serverName.contains("master")) {
+    if (serverName.startsWith("HMaster") || serverName.startsWith("master")) {
       return "Master";
-    } else if (serverName.contains("HRegion") || serverName.contains("regionserver")) {
+    } else if (serverName.startsWith("HRegion") || serverName.startsWith("regionserver")) {
       return "RegionServer";
     }
     return "IPC";

http://git-wip-us.apache.org/repos/asf/hbase/blob/31f16d6a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
index 9f1b63a..4de618f 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
@@ -138,5 +138,34 @@ public class TestRpcMetrics {
     HELPER.assertCounter("exceptions", 5, serverSource);
   }
 
+  @Test
+  public void testServerContextNameWithHostName() {
+    String[] masterServerNames = { "master/node-xyz/10.19.250.253:16020",
+        "master/node-regionserver-xyz/10.19.250.253:16020", "HMaster/node-xyz/10.19.250.253:16020",
+        "HMaster/node-regionserver-xyz/10.19.250.253:16020" };
+
+    String[] regionServerNames = { "regionserver/node-xyz/10.19.250.253:16020",
+        "regionserver/node-master1-xyz/10.19.250.253:16020",
+        "HRegionserver/node-xyz/10.19.250.253:16020",
+        "HRegionserver/node-master1-xyz/10.19.250.253:16020" };
+
+    MetricsHBaseServerSource masterSource = null;
+    for (String serverName : masterServerNames) {
+      masterSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("master", masterSource.getMetricsContext());
+      assertEquals("Master,sub=IPC", masterSource.getMetricsJmxContext());
+      assertEquals("Master", masterSource.getMetricsName());
+    }
+
+    MetricsHBaseServerSource rsSource = null;
+    for (String serverName : regionServerNames) {
+      rsSource = new MetricsHBaseServer(serverName, new MetricsHBaseServerWrapperStub())
+          .getMetricsSource();
+      assertEquals("regionserver", rsSource.getMetricsContext());
+      assertEquals("RegionServer,sub=IPC", rsSource.getMetricsJmxContext());
+      assertEquals("RegionServer", rsSource.getMetricsName());
+    }
+  }
 }
 



hbase git commit: HBASE-16446 append_peer_tableCFs failed when there already have this table's partial cfs in the peer (Guanghao Zhang)

2016-08-23 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 2f7b9b542 -> 77a7394f1


HBASE-16446 append_peer_tableCFs failed when there already have this table's 
partial cfs in the peer (Guanghao Zhang)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/77a7394f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/77a7394f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/77a7394f

Branch: refs/heads/master
Commit: 77a7394f1770249f33d07df6bac6cf16ef34140e
Parents: 2f7b9b5
Author: Ashish Singhi 
Authored: Tue Aug 23 15:28:33 2016 +0530
Committer: Ashish Singhi 
Committed: Tue Aug 23 15:28:33 2016 +0530

--
 .../client/replication/ReplicationAdmin.java| 11 +++---
 .../replication/TestReplicationAdmin.java   | 38 
 2 files changed, 43 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/77a7394f/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
index ee26e38..de6cb7f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
@@ -289,13 +289,12 @@ public class ReplicationAdmin implements Closeable {
       Collection<String> appendCfs = entry.getValue();
       if (preTableCfs.containsKey(table)) {
         List<String> cfs = preTableCfs.get(table);
-        if (cfs == null || appendCfs == null) {
+        if (cfs == null || appendCfs == null || appendCfs.isEmpty()) {
           preTableCfs.put(table, null);
         } else {
           Set<String> cfSet = new HashSet<String>(cfs);
           cfSet.addAll(appendCfs);
           preTableCfs.put(table, Lists.newArrayList(cfSet));
-
         }
       } else {
         if (appendCfs == null || appendCfs.isEmpty()) {
@@ -342,9 +341,9 @@ public class ReplicationAdmin implements Closeable {
       Collection<String> removeCfs = entry.getValue();
       if (preTableCfs.containsKey(table)) {
         List<String> cfs = preTableCfs.get(table);
-        if (cfs == null && removeCfs == null) {
+        if (cfs == null && (removeCfs == null || removeCfs.isEmpty())) {
           preTableCfs.remove(table);
-        } else if (cfs != null && removeCfs != null) {
+        } else if (cfs != null && (removeCfs != null && !removeCfs.isEmpty())) {
           Set<String> cfSet = new HashSet<String>(cfs);
           cfSet.removeAll(removeCfs);
           if (cfSet.isEmpty()) {
@@ -352,10 +351,10 @@ public class ReplicationAdmin implements Closeable {
           } else {
             preTableCfs.put(table, Lists.newArrayList(cfSet));
           }
-        } else if (cfs == null && removeCfs != null) {
+        } else if (cfs == null && (removeCfs != null && !removeCfs.isEmpty())) {
           throw new ReplicationException("Cannot remove cf of table: " + table
               + " which doesn't specify cfs from table-cfs config in peer: " + id);
-        } else if (cfs != null && removeCfs == null) {
+        } else if (cfs != null && (removeCfs == null || removeCfs.isEmpty())) {
           throw new ReplicationException("Cannot remove table: " + table
               + " which has specified cfs from table-cfs config in peer: " + id);
         }
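The patch makes an empty column-family list behave like a null one on both the append and remove paths. The sketch below is a hypothetical standalone model of the append side only (plain `String` table names instead of `TableName`, a `Map` instead of peer config, all names mine), reproducing the scenario from the bug title: a peer already holds partial CFs for a table, and appending that table with an empty CF list should widen it to the whole table.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TableCfsModel {
    // preTableCfs maps table name -> CF list; a null value means "all CFs".
    static void appendTableCfs(Map<String, List<String>> preTableCfs,
                               String table, Collection<String> appendCfs) {
        if (preTableCfs.containsKey(table)) {
            List<String> cfs = preTableCfs.get(table);
            // Patched condition: an empty append list also widens to the whole table.
            if (cfs == null || appendCfs == null || appendCfs.isEmpty()) {
                preTableCfs.put(table, null);
            } else {
                Set<String> cfSet = new HashSet<>(cfs);
                cfSet.addAll(appendCfs);
                preTableCfs.put(table, new ArrayList<>(cfSet));
            }
        } else if (appendCfs == null || appendCfs.isEmpty()) {
            preTableCfs.put(table, null);
        } else {
            preTableCfs.put(table, new ArrayList<>(appendCfs));
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> cfg = new HashMap<>();
        appendTableCfs(cfg, "t5", Arrays.asList("f1"));      // peer has partial CFs
        appendTableCfs(cfg, "t5", Collections.emptyList());  // append "t5" => []
        // null now means all CFs of t5; pre-patch the entry stayed [f1].
        System.out.println(cfg.containsKey("t5") && cfg.get("t5") == null);
    }
}
```

This mirrors the new test scenario above ("append \"table5\" => [], then append \"table5\" => [\"f1\"]") in reverse order: once the entry is widened to the whole table, later partial appends cannot narrow it.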

http://git-wip-us.apache.org/repos/asf/hbase/blob/77a7394f/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
index 9c3f23a..85820af 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
@@ -216,6 +216,8 @@ public class TestReplicationAdmin {
     TableName tab2 = TableName.valueOf("t2");
     TableName tab3 = TableName.valueOf("t3");
     TableName tab4 = TableName.valueOf("t4");
+    TableName tab5 = TableName.valueOf("t5");
+    TableName tab6 = TableName.valueOf("t6");
 
     // Add a valid peer
     admin.addPeer(ID_ONE, rpc1, null);
@@ -275,6 +277,34 @@ public class TestReplicationAdmin {
     assertEquals("f1", result.get(tab4).get(0));
     assertEquals("f2", result.get(tab4).get(1));
 
+    // append "table5" => [], then append "table5" => ["f1"]
+   

hbase git commit: HBASE-15952 Bulk load data replication is not working when RS user does not have permission on hfile-refs node

2016-06-09 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 ba9c11ef1 -> d320ac28e


HBASE-15952 Bulk load data replication is not working when RS user does not 
have permission on hfile-refs node


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d320ac28
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d320ac28
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d320ac28

Branch: refs/heads/branch-1.3
Commit: d320ac28ee04e0e7cbac86281f04876ca3a905e5
Parents: ba9c11e
Author: Ashish Singhi 
Authored: Thu Jun 9 18:50:03 2016 +0530
Committer: Ashish Singhi 
Committed: Thu Jun 9 18:50:11 2016 +0530

--
 .../replication/ReplicationPeersZKImpl.java | 21 -
 .../hbase/replication/ReplicationQueues.java|  6 
 .../replication/ReplicationQueuesZKImpl.java| 33 
 .../regionserver/ReplicationSourceManager.java  | 11 +--
 .../cleaner/TestReplicationHFileCleaner.java|  1 +
 .../replication/TestReplicationStateBasic.java  |  5 +++
 6 files changed, 47 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d320ac28/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index b0d6e83..d5445ed 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -124,17 +124,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
 
       ZKUtil.createWithParents(this.zookeeper, this.peersZNode);
 
-      // Irrespective of bulk load hfile replication is enabled or not we add peerId node to
-      // hfile-refs node -- HBASE-15397
-      try {
-        String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-        LOG.info("Adding peer " + peerId + " to hfile reference queue.");
-        ZKUtil.createWithParents(this.zookeeper, peerId);
-      } catch (KeeperException e) {
-        throw new ReplicationException("Failed to add peer with id=" + id
-            + ", node under hfile references node.", e);
-      }
-
       List<ZKUtilOp> listOfOps = new ArrayList<ZKUtilOp>();
       ZKUtilOp op1 = ZKUtilOp.createAndFailSilent(ZKUtil.joinZNode(this.peersZNode, id),
           toByteArray(peerConfig));
@@ -164,16 +153,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
             + " because that id does not exist.");
       }
       ZKUtil.deleteNodeRecursively(this.zookeeper, ZKUtil.joinZNode(this.peersZNode, id));
-      // Delete peerId node from hfile-refs node irrespective of whether bulk loaded hfile
-      // replication is enabled or not
-
-      String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-      try {
-        LOG.info("Removing peer " + peerId + " from hfile reference queue.");
-        ZKUtil.deleteNodeRecursively(this.zookeeper, peerId);
-      } catch (NoNodeException e) {
-        LOG.info("Did not find node " + peerId + " to delete.", e);
-      }
     } catch (KeeperException e) {
       throw new ReplicationException("Could not remove peer with id=" + id, e);
     }
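The deleted blocks above created and removed the peer's `hfile-refs` child znode from the client-visible `ReplicationPeersZKImpl`, and the new `ReplicationQueues.removePeerFromHFileRefs` interface method below suggests that responsibility moved to the region-server-side queue. A minimal in-memory model of that split (plain Java sets standing in for znodes; no real ZooKeeper; all class and path names here are mine, assumed for illustration only):

```java
import java.util.Set;
import java.util.TreeSet;

// In-memory stand-in for the znode layout touched by this patch.
// It models only the refactor: addPeer() no longer touches hfile-refs;
// the RS-side queue adds/removes the peer's hfile-refs node itself.
public class HFileRefsModel {
    static final String PEERS = "/hbase/replication/peers";
    static final String HFILE_REFS = "/hbase/replication/hfile-refs";

    final Set<String> znodes = new TreeSet<>();

    // Client-side addPeer after the patch: only the peers znode is created.
    void addPeer(String id) {
        znodes.add(PEERS + "/" + id);
    }

    // RS-side equivalent of ReplicationQueues.addPeerToHFileRefs.
    void addPeerToHFileRefs(String peerId) {
        znodes.add(HFILE_REFS + "/" + peerId);
    }

    // RS-side counterpart introduced by this patch.
    void removePeerFromHFileRefs(String peerId) {
        znodes.remove(HFILE_REFS + "/" + peerId);
    }

    public static void main(String[] args) {
        HFileRefsModel zk = new HFileRefsModel();
        zk.addPeer("1");  // no hfile-refs child created here any more
        System.out.println(zk.znodes.contains(HFILE_REFS + "/1"));  // false
        zk.addPeerToHFileRefs("1");
        System.out.println(zk.znodes.contains(HFILE_REFS + "/1"));  // true
        zk.removePeerFromHFileRefs("1");
        System.out.println(zk.znodes.contains(HFILE_REFS + "/1"));  // false
    }
}
```

The point of the split, per the commit title, is that the `hfile-refs` node is no longer manipulated on a code path where the acting user may lack ZooKeeper permission on it.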

http://git-wip-us.apache.org/repos/asf/hbase/blob/d320ac28/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
index 0d47a88..507367b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
@@ -123,6 +123,12 @@ public interface ReplicationQueues {
   void addPeerToHFileRefs(String peerId) throws ReplicationException;
 
   /**
+   * Remove a peer from hfile reference queue.
+   * @param peerId peer cluster id to be removed
+   */
+  void removePeerFromHFileRefs(String peerId);
+
+  /**
    * Add new hfile references to the queue.
    * @param peerId peer cluster id to which the hfiles need to be replicated
   * @param files list of hfile references to be added
http://git-wip-us.apache.org/repos/asf/hbase/blob/d320ac28/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java

hbase git commit: HBASE-15952 Bulk load data replication is not working when RS user does not have permission on hfile-refs node

2016-06-09 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 13d06a2cc -> a40ec70da


HBASE-15952 Bulk load data replication is not working when RS user does not 
have permission on hfile-refs node


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a40ec70d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a40ec70d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a40ec70d

Branch: refs/heads/branch-1
Commit: a40ec70da9d9891f5af074535717fe20658a00cc
Parents: 13d06a2
Author: Ashish Singhi 
Authored: Thu Jun 9 18:46:07 2016 +0530
Committer: Ashish Singhi 
Committed: Thu Jun 9 18:46:07 2016 +0530

--
 .../replication/ReplicationPeersZKImpl.java | 21 -
 .../hbase/replication/ReplicationQueues.java|  6 
 .../replication/ReplicationQueuesZKImpl.java| 33 
 .../regionserver/ReplicationSourceManager.java  | 11 +--
 .../cleaner/TestReplicationHFileCleaner.java|  1 +
 .../replication/TestReplicationStateBasic.java  |  5 +++
 6 files changed, 47 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/a40ec70d/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index 076167e..d717b0b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -128,17 +128,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
 
       ZKUtil.createWithParents(this.zookeeper, this.peersZNode);
 
-      // Irrespective of bulk load hfile replication is enabled or not we add peerId node to
-      // hfile-refs node -- HBASE-15397
-      try {
-        String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-        LOG.info("Adding peer " + peerId + " to hfile reference queue.");
-        ZKUtil.createWithParents(this.zookeeper, peerId);
-      } catch (KeeperException e) {
-        throw new ReplicationException("Failed to add peer with id=" + id
-            + ", node under hfile references node.", e);
-      }
-
       List<ZKUtilOp> listOfOps = new ArrayList<ZKUtilOp>();
       ZKUtilOp op1 = ZKUtilOp.createAndFailSilent(ZKUtil.joinZNode(this.peersZNode, id),
           ReplicationSerDeHelper.toByteArray(peerConfig));
@@ -168,16 +157,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
             + " because that id does not exist.");
       }
       ZKUtil.deleteNodeRecursively(this.zookeeper, ZKUtil.joinZNode(this.peersZNode, id));
-      // Delete peerId node from hfile-refs node irrespective of whether bulk loaded hfile
-      // replication is enabled or not
-
-      String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-      try {
-        LOG.info("Removing peer " + peerId + " from hfile reference queue.");
-        ZKUtil.deleteNodeRecursively(this.zookeeper, peerId);
-      } catch (NoNodeException e) {
-        LOG.info("Did not find node " + peerId + " to delete.", e);
-      }
     } catch (KeeperException e) {
       throw new ReplicationException("Could not remove peer with id=" + id, e);
     }

http://git-wip-us.apache.org/repos/asf/hbase/blob/a40ec70d/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
index 0d47a88..507367b 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
@@ -123,6 +123,12 @@ public interface ReplicationQueues {
   void addPeerToHFileRefs(String peerId) throws ReplicationException;
 
   /**
+   * Remove a peer from hfile reference queue.
+   * @param peerId peer cluster id to be removed
+   */
+  void removePeerFromHFileRefs(String peerId);
+
+  /**
    * Add new hfile references to the queue.
    * @param peerId peer cluster id to which the hfiles need to be replicated
   * @param files list of hfile references to be added

http://git-wip-us.apache.org/repos/asf/hbase/blob/a40ec70d/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java

hbase git commit: HBASE-15952 Bulk load data replication is not working when RS user does not have permission on hfile-refs node

2016-06-09 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 41cc21554 -> 9012a0b12


HBASE-15952 Bulk load data replication is not working when RS user does not 
have permission on hfile-refs node


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9012a0b1
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9012a0b1
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9012a0b1

Branch: refs/heads/master
Commit: 9012a0b123b3eea8b08c8687cef812e83e9b491d
Parents: 41cc215
Author: Ashish Singhi 
Authored: Thu Jun 9 18:44:29 2016 +0530
Committer: Ashish Singhi 
Committed: Thu Jun 9 18:44:29 2016 +0530

--
 .../replication/ReplicationPeersZKImpl.java | 21 -
 .../hbase/replication/ReplicationQueues.java|  6 
 .../replication/ReplicationQueuesHBaseImpl.java |  6 
 .../replication/ReplicationQueuesZKImpl.java| 33 
 .../regionserver/ReplicationSourceManager.java  | 11 +--
 .../cleaner/TestReplicationHFileCleaner.java|  1 +
 .../replication/TestReplicationStateBasic.java  |  5 +++
 7 files changed, 53 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/9012a0b1/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index 15265d9..5af97c2 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -129,17 +129,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
 
       ZKUtil.createWithParents(this.zookeeper, this.peersZNode);
 
-      // Irrespective of bulk load hfile replication is enabled or not we add peerId node to
-      // hfile-refs node -- HBASE-15397
-      try {
-        String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-        LOG.info("Adding peer " + peerId + " to hfile reference queue.");
-        ZKUtil.createWithParents(this.zookeeper, peerId);
-      } catch (KeeperException e) {
-        throw new ReplicationException("Failed to add peer with id=" + id
-            + ", node under hfile references node.", e);
-      }
-
       List<ZKUtilOp> listOfOps = new ArrayList<ZKUtilOp>();
       ZKUtilOp op1 = ZKUtilOp.createAndFailSilent(getPeerNode(id),
           ReplicationSerDeHelper.toByteArray(peerConfig));
@@ -166,16 +155,6 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
             + " because that id does not exist.");
       }
       ZKUtil.deleteNodeRecursively(this.zookeeper, ZKUtil.joinZNode(this.peersZNode, id));
-      // Delete peerId node from hfile-refs node irrespective of whether bulk loaded hfile
-      // replication is enabled or not
-
-      String peerId = ZKUtil.joinZNode(this.hfileRefsZNode, id);
-      try {
-        LOG.info("Removing peer " + peerId + " from hfile reference queue.");
-        ZKUtil.deleteNodeRecursively(this.zookeeper, peerId);
-      } catch (NoNodeException e) {
-        LOG.info("Did not find node " + peerId + " to delete.", e);
-      }
     } catch (KeeperException e) {
       throw new ReplicationException("Could not remove peer with id=" + id, e);
     }

http://git-wip-us.apache.org/repos/asf/hbase/blob/9012a0b1/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
index db6da91..809b122 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
@@ -123,6 +123,12 @@ public interface ReplicationQueues {
   void addPeerToHFileRefs(String peerId) throws ReplicationException;
 
   /**
+   * Remove a peer from hfile reference queue.
+   * @param peerId peer cluster id to be removed
+   */
+  void removePeerFromHFileRefs(String peerId);
+
+  /**
    * Add new hfile references to the queue.
    * @param peerId peer cluster id to which the hfiles need to be replicated
   * @param files list of hfile references to be added

http://git-wip-us.apache.org/repos/asf/hbase/blob/9012a0b1/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesHBaseImpl.java

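The `removePeerFromHFileRefs` contract added above is, in effect, an idempotent delete of a peer's hfile-refs node: removal must succeed whether or not the node exists (the ZK-backed implementation catches `NoNodeException`, logs, and moves on). A minimal self-contained sketch of that idempotent-removal pattern, with plain Java collections standing in for ZooKeeper nodes (all names here are hypothetical, not HBase classes):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the hfile-refs queue: peerId -> set of pending hfile names.
public class HFileRefsQueueSketch {
  private final Map<String, Set<String>> refsByPeer = new HashMap<>();

  public void addPeer(String peerId) {
    refsByPeer.putIfAbsent(peerId, new HashSet<>());
  }

  public void addHFileRefs(String peerId, Set<String> files) {
    refsByPeer.computeIfAbsent(peerId, k -> new HashSet<>()).addAll(files);
  }

  // Mirrors removePeerFromHFileRefs: no error if the peer was never registered,
  // matching the log-and-swallow handling of NoNodeException in the real code.
  public void removePeerFromHFileRefs(String peerId) {
    if (refsByPeer.remove(peerId) == null) {
      System.out.println("Did not find node " + peerId + " to delete.");
    }
  }

  public boolean hasPeer(String peerId) {
    return refsByPeer.containsKey(peerId);
  }
}
```

The design point is that peer removal and hfile-refs cleanup can race, so cleanup has to tolerate an already-missing node.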
hbase git commit: HBASE-15888 Extend HBASE-12769 for bulk load data replication

2016-06-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 5824f2236 -> b0e1fdae3


HBASE-15888 Extend HBASE-12769 for bulk load data replication


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b0e1fdae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b0e1fdae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b0e1fdae

Branch: refs/heads/branch-1.3
Commit: b0e1fdae346b64af4188cf5df29488617753416f
Parents: 5824f22
Author: Ashish Singhi 
Authored: Fri Jun 3 18:48:47 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Jun 3 18:48:47 2016 +0530

--
 .../replication/ReplicationPeersZKImpl.java |  6 ++
 .../hbase/util/hbck/ReplicationChecker.java | 59 ++--
 2 files changed, 61 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b0e1fdae/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index ad634fa..b0d6e83 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -601,6 +601,12 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
   }
 }
   }
+  // Check for hfile-refs queue
+  if (-1 != ZKUtil.checkExists(zookeeper, hfileRefsZNode)
+  && queuesClient.getAllPeersFromHFileRefsQueue().contains(peerId)) {
+throw new ReplicationException("Undeleted queue for peerId: " + peerId
++ ", found in hfile-refs node path " + hfileRefsZNode);
+  }
 } catch (KeeperException e) {
      throw new ReplicationException("Could not check queues deleted with id=" + peerId, e);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/b0e1fdae/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
index bf44a50..64212c9 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
@@ -51,16 +51,21 @@ import org.apache.zookeeper.KeeperException;
 @InterfaceAudience.Private
 public class ReplicationChecker {
   private static final Log LOG = LogFactory.getLog(ReplicationChecker.class);
+  private final ZooKeeperWatcher zkw;
   private ErrorReporter errorReporter;
   private ReplicationQueuesClient queuesClient;
   private ReplicationPeers replicationPeers;
   private ReplicationQueueDeletor queueDeletor;
   // replicator with its queueIds for removed peers
  private Map<String, List<String>> undeletedQueueIds = new HashMap<String, List<String>>();
-  
+  // Set of undeleted hfile refs queue Ids
+  private Set<String> undeletedHFileRefsQueueIds = new HashSet<>();
+  private final String hfileRefsZNode;
+
  public ReplicationChecker(Configuration conf, ZooKeeperWatcher zkw, HConnection connection,
   ErrorReporter errorReporter) throws IOException {
 try {
+  this.zkw = zkw;
   this.errorReporter = errorReporter;
      this.queuesClient = ReplicationFactory.getReplicationQueuesClient(zkw, conf, connection);
   this.queuesClient.init();
@@ -71,6 +76,13 @@ public class ReplicationChecker {
 } catch (ReplicationException e) {
   throw new IOException("failed to construct ReplicationChecker", e);
 }
+
+    String replicationZNodeName = conf.get("zookeeper.znode.replication", "replication");
+    String replicationZNode = ZKUtil.joinZNode(this.zkw.baseZNode, replicationZNodeName);
+    String hfileRefsZNodeName =
+        conf.get(ReplicationStateZKBase.ZOOKEEPER_ZNODE_REPLICATION_HFILE_REFS_KEY,
+          ReplicationStateZKBase.ZOOKEEPER_ZNODE_REPLICATION_HFILE_REFS_DEFAULT);
+    hfileRefsZNode = ZKUtil.joinZNode(replicationZNode, hfileRefsZNodeName);
   }
 
   public boolean hasUnDeletedQueues() {
@@ -103,13 +115,37 @@ public class ReplicationChecker {
 } catch (KeeperException ke) {
   throw new IOException(ke);
 }
+
+checkUnDeletedHFileRefsQueues(peerIds);
+  }
+
+  private void checkUnDeletedHFileRefsQueues(Set<String> peerIds) throws IOException {
+try {
+  if (-1 == ZKUtil.checkExists(zkw, hfileRefsZNode)) {
+   

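The constructor change in this commit derives the hfile-refs znode path from configuration: base znode, then the replication znode name, then the hfile-refs znode name. A tiny standalone sketch of that two-level path join (default key values copied from the diff; `joinZNode` semantics simplified from ZKUtil):

```java
// Builds the replication hfile-refs znode path the same way the patch does:
// base znode + replication znode name + hfile-refs znode name.
public class HFileRefsPathSketch {
  // Simplified stand-in for ZKUtil.joinZNode.
  static String joinZNode(String prefix, String suffix) {
    return prefix + "/" + suffix;
  }

  public static String hfileRefsZNode(String baseZNode, String replicationZNodeName,
      String hfileRefsZNodeName) {
    String replicationZNode = joinZNode(baseZNode, replicationZNodeName);
    return joinZNode(replicationZNode, hfileRefsZNodeName);
  }
}
```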
hbase git commit: HBASE-15888 Extend HBASE-12769 for bulk load data replication

2016-06-03 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 a8c8bfd5e -> 950a09b03


HBASE-15888 Extend HBASE-12769 for bulk load data replication


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/950a09b0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/950a09b0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/950a09b0

Branch: refs/heads/branch-1
Commit: 950a09b03cf1a306ed5a2754c11457daf30e7d23
Parents: a8c8bfd
Author: Ashish Singhi 
Authored: Fri Jun 3 18:46:41 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Jun 3 18:47:31 2016 +0530

--
 .../replication/ReplicationPeersZKImpl.java |  6 ++
 .../hbase/util/hbck/ReplicationChecker.java | 59 ++--
 2 files changed, 61 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/950a09b0/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
index c0c3f7e..076167e 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
@@ -567,6 +567,12 @@ public class ReplicationPeersZKImpl extends ReplicationStateZKBase implements Re
   }
 }
   }
+  // Check for hfile-refs queue
+  if (-1 != ZKUtil.checkExists(zookeeper, hfileRefsZNode)
+  && queuesClient.getAllPeersFromHFileRefsQueue().contains(peerId)) {
+throw new ReplicationException("Undeleted queue for peerId: " + peerId
++ ", found in hfile-refs node path " + hfileRefsZNode);
+  }
 } catch (KeeperException e) {
      throw new ReplicationException("Could not check queues deleted with id=" + peerId, e);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/950a09b0/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
index bf44a50..64212c9 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/ReplicationChecker.java
@@ -51,16 +51,21 @@ import org.apache.zookeeper.KeeperException;
 @InterfaceAudience.Private
 public class ReplicationChecker {
   private static final Log LOG = LogFactory.getLog(ReplicationChecker.class);
+  private final ZooKeeperWatcher zkw;
   private ErrorReporter errorReporter;
   private ReplicationQueuesClient queuesClient;
   private ReplicationPeers replicationPeers;
   private ReplicationQueueDeletor queueDeletor;
   // replicator with its queueIds for removed peers
  private Map<String, List<String>> undeletedQueueIds = new HashMap<String, List<String>>();
-  
+  // Set of undeleted hfile refs queue Ids
+  private Set<String> undeletedHFileRefsQueueIds = new HashSet<>();
+  private final String hfileRefsZNode;
+
  public ReplicationChecker(Configuration conf, ZooKeeperWatcher zkw, HConnection connection,
   ErrorReporter errorReporter) throws IOException {
 try {
+  this.zkw = zkw;
   this.errorReporter = errorReporter;
      this.queuesClient = ReplicationFactory.getReplicationQueuesClient(zkw, conf, connection);
   this.queuesClient.init();
@@ -71,6 +76,13 @@ public class ReplicationChecker {
 } catch (ReplicationException e) {
   throw new IOException("failed to construct ReplicationChecker", e);
 }
+
+    String replicationZNodeName = conf.get("zookeeper.znode.replication", "replication");
+    String replicationZNode = ZKUtil.joinZNode(this.zkw.baseZNode, replicationZNodeName);
+    String hfileRefsZNodeName =
+        conf.get(ReplicationStateZKBase.ZOOKEEPER_ZNODE_REPLICATION_HFILE_REFS_KEY,
+          ReplicationStateZKBase.ZOOKEEPER_ZNODE_REPLICATION_HFILE_REFS_DEFAULT);
+    hfileRefsZNode = ZKUtil.joinZNode(replicationZNode, hfileRefsZNodeName);
   }
 
   public boolean hasUnDeletedQueues() {
@@ -103,13 +115,37 @@ public class ReplicationChecker {
 } catch (KeeperException ke) {
   throw new IOException(ke);
 }
+
+checkUnDeletedHFileRefsQueues(peerIds);
+  }
+
+  private void checkUnDeletedHFileRefsQueues(Set<String> peerIds) throws IOException {
+try {
+  if (-1 == ZKUtil.checkExists(zkw, hfileRefsZNode)) {
+

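Both branch commits above add the same hbck-side consistency check: any peer id still present under the hfile-refs znode but no longer in the configured peer set is reported as an undeleted queue. A minimal sketch of that check over plain sets (hypothetical names; the real ReplicationChecker walks ZooKeeper):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the ReplicationChecker hfile-refs check: peers present in the
// hfile-refs "queue" but absent from the configured peer set are leftovers
// that hbck should report (and offer to clean up).
public class UndeletedHFileRefsCheck {
  public static Set<String> findUndeleted(Set<String> configuredPeers,
      Set<String> hfileRefsPeers) {
    Set<String> undeleted = new HashSet<>(hfileRefsPeers);
    undeleted.removeAll(configuredPeers); // leftovers have no live peer
    return undeleted;
  }
}
```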
hbase git commit: HBASE-15669 HFile size is not considered correctly in a replication request

2016-05-06 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 f5b57cd04 -> 5ef9d4752


HBASE-15669 HFile size is not considered correctly in a replication request


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5ef9d475
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5ef9d475
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5ef9d475

Branch: refs/heads/branch-1.3
Commit: 5ef9d475281b498a3c97b3842aa15699965109a7
Parents: f5b57cd
Author: Ashish Singhi 
Authored: Fri May 6 17:28:06 2016 +0530
Committer: Ashish Singhi 
Committed: Fri May 6 17:30:22 2016 +0530

--
 .../hadoop/hbase/protobuf/ProtobufUtil.java |  12 +-
 .../hbase/protobuf/generated/WALProtos.java | 159 ---
 hbase-protocol/src/main/protobuf/WAL.proto  |   1 +
 .../hadoop/hbase/regionserver/HRegion.java  |  18 ++-
 .../regionserver/ReplicationSource.java |  44 -
 .../regionserver/TestReplicationSink.java   |   4 +-
 .../TestReplicationSourceManager.java   |  25 ++-
 7 files changed, 229 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5ef9d475/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
index 82f5f0d..08cf6fa 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
@@ -3128,13 +3128,16 @@ public final class ProtobufUtil {
   * @param tableName The tableName into which the bulk load is being imported.
   * @param encodedRegionName Encoded region name of the region which is being bulk loaded.
   * @param storeFiles A set of store files of a column family that are bulk loaded.
+   * @param storeFilesSize  Map of store files and their lengths
   * @param bulkloadSeqId sequence ID (by a force flush) used to create the bulk load hfile name
   * @return The WAL log marker for bulk loads.
   */
  public static WALProtos.BulkLoadDescriptor toBulkLoadDescriptor(TableName tableName,
-      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles, long bulkloadSeqId) {
-    BulkLoadDescriptor.Builder desc = BulkLoadDescriptor.newBuilder()
+      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles,
+      Map<String, Long> storeFilesSize, long bulkloadSeqId) {
+    BulkLoadDescriptor.Builder desc =
+        BulkLoadDescriptor.newBuilder()
        .setTableName(ProtobufUtil.toProtoTableName(tableName))
        .setEncodedRegionName(encodedRegionName).setBulkloadSeqNum(bulkloadSeqId);
 
@@ -3143,7 +3146,10 @@ public final class ProtobufUtil {
          .setFamilyName(ByteStringer.wrap(entry.getKey()))
          .setStoreHomeDir(Bytes.toString(entry.getKey())); // relative to region
      for (Path path : entry.getValue()) {
-        builder.addStoreFile(path.getName());
+        String name = path.getName();
+        builder.addStoreFile(name);
+        Long size = storeFilesSize.get(name) == null ? (Long) 0L : storeFilesSize.get(name);
+        builder.setStoreFileSize(size);
      }
   desc.addStores(builder);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/5ef9d475/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
index d74688e..6252d51 100644
--- a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
@@ -7821,6 +7821,24 @@ public final class WALProtos {
  */
 com.google.protobuf.ByteString
 getStoreFileBytes(int index);
+
+    // optional uint64 store_file_size = 4;
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    boolean hasStoreFileSize();
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    long getStoreFileSize();
   }
   /**
* Protobuf type {@code hbase.pb.StoreDescriptor}
@@ -7891,6 +7909,11 @@ public final class WALProtos {
   storeFile_.add(input.readBytes());
   break;
 }
+case 32: {
+   

hbase git commit: HBASE-15669 HFile size is not considered correctly in a replication request

2016-05-06 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 b9df7978f -> 0964884b9


HBASE-15669 HFile size is not considered correctly in a replication request


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0964884b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0964884b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0964884b

Branch: refs/heads/branch-1
Commit: 0964884b925f251725bcd101f23f77a5d3d829e1
Parents: b9df797
Author: Ashish Singhi 
Authored: Fri May 6 17:28:06 2016 +0530
Committer: Ashish Singhi 
Committed: Fri May 6 17:28:06 2016 +0530

--
 .../hadoop/hbase/protobuf/ProtobufUtil.java |  12 +-
 .../hbase/protobuf/generated/WALProtos.java | 159 ---
 hbase-protocol/src/main/protobuf/WAL.proto  |   1 +
 .../hadoop/hbase/regionserver/HRegion.java  |  18 ++-
 .../regionserver/ReplicationSource.java |  44 -
 .../regionserver/TestReplicationSink.java   |   4 +-
 .../TestReplicationSourceManager.java   |  25 ++-
 7 files changed, 229 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0964884b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
index 82f5f0d..08cf6fa 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
@@ -3128,13 +3128,16 @@ public final class ProtobufUtil {
   * @param tableName The tableName into which the bulk load is being imported.
   * @param encodedRegionName Encoded region name of the region which is being bulk loaded.
   * @param storeFiles A set of store files of a column family that are bulk loaded.
+   * @param storeFilesSize  Map of store files and their lengths
   * @param bulkloadSeqId sequence ID (by a force flush) used to create the bulk load hfile name
   * @return The WAL log marker for bulk loads.
   */
  public static WALProtos.BulkLoadDescriptor toBulkLoadDescriptor(TableName tableName,
-      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles, long bulkloadSeqId) {
-    BulkLoadDescriptor.Builder desc = BulkLoadDescriptor.newBuilder()
+      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles,
+      Map<String, Long> storeFilesSize, long bulkloadSeqId) {
+    BulkLoadDescriptor.Builder desc =
+        BulkLoadDescriptor.newBuilder()
        .setTableName(ProtobufUtil.toProtoTableName(tableName))
        .setEncodedRegionName(encodedRegionName).setBulkloadSeqNum(bulkloadSeqId);
 
@@ -3143,7 +3146,10 @@ public final class ProtobufUtil {
          .setFamilyName(ByteStringer.wrap(entry.getKey()))
          .setStoreHomeDir(Bytes.toString(entry.getKey())); // relative to region
      for (Path path : entry.getValue()) {
-        builder.addStoreFile(path.getName());
+        String name = path.getName();
+        builder.addStoreFile(name);
+        Long size = storeFilesSize.get(name) == null ? (Long) 0L : storeFilesSize.get(name);
+        builder.setStoreFileSize(size);
      }
   desc.addStores(builder);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/0964884b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
index d74688e..6252d51 100644
--- a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
@@ -7821,6 +7821,24 @@ public final class WALProtos {
  */
 com.google.protobuf.ByteString
 getStoreFileBytes(int index);
+
+    // optional uint64 store_file_size = 4;
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    boolean hasStoreFileSize();
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    long getStoreFileSize();
   }
   /**
* Protobuf type {@code hbase.pb.StoreDescriptor}
@@ -7891,6 +7909,11 @@ public final class WALProtos {
   storeFile_.add(input.readBytes());
   break;
 }
+case 32: {
+   

hbase git commit: HBASE-15669 HFile size is not considered correctly in a replication request

2016-05-06 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master bec81b197 -> 34e9a6ff3


HBASE-15669 HFile size is not considered correctly in a replication request


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/34e9a6ff
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/34e9a6ff
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/34e9a6ff

Branch: refs/heads/master
Commit: 34e9a6ff301f40aa3f6ce33ac1b86f9e50fa6694
Parents: bec81b1
Author: Ashish Singhi 
Authored: Fri May 6 17:26:17 2016 +0530
Committer: Ashish Singhi 
Committed: Fri May 6 17:26:17 2016 +0530

--
 .../hadoop/hbase/protobuf/ProtobufUtil.java |  12 +-
 .../hbase/protobuf/generated/WALProtos.java | 159 ---
 hbase-protocol/src/main/protobuf/WAL.proto  |   1 +
 .../hadoop/hbase/regionserver/HRegion.java  |  18 ++-
 .../regionserver/ReplicationSource.java |  44 -
 .../regionserver/TestReplicationSink.java   |   4 +-
 .../TestReplicationSourceManager.java   |  25 ++-
 7 files changed, 229 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/34e9a6ff/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
--
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
index 50a4920..62dfd45 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
@@ -3063,13 +3063,16 @@ public final class ProtobufUtil {
   * @param tableName The tableName into which the bulk load is being imported.
   * @param encodedRegionName Encoded region name of the region which is being bulk loaded.
   * @param storeFiles A set of store files of a column family that are bulk loaded.
+   * @param storeFilesSize  Map of store files and their lengths
   * @param bulkloadSeqId sequence ID (by a force flush) used to create the bulk load hfile name
   * @return The WAL log marker for bulk loads.
   */
  public static WALProtos.BulkLoadDescriptor toBulkLoadDescriptor(TableName tableName,
-      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles, long bulkloadSeqId) {
-    BulkLoadDescriptor.Builder desc = BulkLoadDescriptor.newBuilder()
+      ByteString encodedRegionName, Map<byte[], List<Path>> storeFiles,
+      Map<String, Long> storeFilesSize, long bulkloadSeqId) {
+    BulkLoadDescriptor.Builder desc =
+        BulkLoadDescriptor.newBuilder()
        .setTableName(ProtobufUtil.toProtoTableName(tableName))
        .setEncodedRegionName(encodedRegionName).setBulkloadSeqNum(bulkloadSeqId);
 
@@ -3078,7 +3081,10 @@ public final class ProtobufUtil {
          .setFamilyName(ByteStringer.wrap(entry.getKey()))
          .setStoreHomeDir(Bytes.toString(entry.getKey())); // relative to region
      for (Path path : entry.getValue()) {
-        builder.addStoreFile(path.getName());
+        String name = path.getName();
+        builder.addStoreFile(name);
+        Long size = storeFilesSize.get(name) == null ? (Long) 0L : storeFilesSize.get(name);
+        builder.setStoreFileSize(size);
      }
   desc.addStores(builder);
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/34e9a6ff/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
--
diff --git a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
index d74688e..6252d51 100644
--- a/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
+++ b/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
@@ -7821,6 +7821,24 @@ public final class WALProtos {
  */
 com.google.protobuf.ByteString
 getStoreFileBytes(int index);
+
+    // optional uint64 store_file_size = 4;
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    boolean hasStoreFileSize();
+    /**
+     * <code>optional uint64 store_file_size = 4;</code>
+     *
+     * <pre>
+     * size of store file
+     * </pre>
+     */
+    long getStoreFileSize();
   }
   /**
* Protobuf type {@code hbase.pb.StoreDescriptor}
@@ -7891,6 +7909,11 @@ public final class WALProtos {
   storeFile_.add(input.readBytes());
   break;
 }
+case 32: {
+   

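The core of the HBASE-15669 change across all three branches is the null-safe size lookup when building the WAL bulk load descriptor: a file missing from `storeFilesSize` contributes size 0 rather than a `NullPointerException`. A tiny standalone illustration of that defaulting pattern (class and method names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Null-safe lookup used when attaching store file sizes to the WAL bulk load
// marker: unknown files get size 0 instead of a NullPointerException.
public class StoreFileSizeDefault {
  public static long sizeOrZero(Map<String, Long> storeFilesSize, String name) {
    Long size = storeFilesSize.get(name);
    return size == null ? 0L : size;
  }
}
```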
hbase git commit: HBASE-15668 HFileReplicator fails to replicate other hfiles in the request when a hfile in not found in FS anywhere

2016-04-18 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 ee78b6da7 -> 6d40b7a0e


HBASE-15668 HFileReplicator fails to replicate other hfiles in the request when 
a hfile in not found in FS anywhere


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6d40b7a0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6d40b7a0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6d40b7a0

Branch: refs/heads/branch-1
Commit: 6d40b7a0e4b8fb0bb3ada214e790aaf496070989
Parents: ee78b6d
Author: Ashish Singhi 
Authored: Mon Apr 18 22:17:02 2016 +0530
Committer: Ashish Singhi 
Committed: Mon Apr 18 22:18:46 2016 +0530

--
 .../hadoop/hbase/replication/regionserver/HFileReplicator.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6d40b7a0/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
index 17f6780..1a1044d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
@@ -378,11 +378,11 @@ public class HFileReplicator {
        FileUtil.copy(sourceFs, sourceHFilePath, sinkFs, localHFilePath, false, conf);
      } catch (FileNotFoundException e1) {
        // This will mean that the hfile does not exists any where in source cluster FS. So we
-        // cannot do anything here just log and return.
+        // cannot do anything here just log and continue.
        LOG.error("Failed to copy hfile from " + sourceHFilePath + " to " + localHFilePath
            + ". Hence ignoring this hfile from replication..", e1);
-        return null;
+        continue;
      }
    }
    sinkFs.setPermission(localHFilePath, PERM_ALL_ACCESS);



hbase git commit: HBASE-15668 HFileReplicator fails to replicate other hfiles in the request when a hfile in not found in FS anywhere

2016-04-18 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master f2e0aca2b -> 70687c18b


HBASE-15668 HFileReplicator fails to replicate other hfiles in the request when 
a hfile in not found in FS anywhere


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/70687c18
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/70687c18
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/70687c18

Branch: refs/heads/master
Commit: 70687c18bbebf86235091a2b0cbf89600e52ec63
Parents: f2e0aca
Author: Ashish Singhi 
Authored: Mon Apr 18 22:17:02 2016 +0530
Committer: Ashish Singhi 
Committed: Mon Apr 18 22:17:02 2016 +0530

--
 .../hadoop/hbase/replication/regionserver/HFileReplicator.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/70687c18/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
index 17f6780..1a1044d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
@@ -378,11 +378,11 @@ public class HFileReplicator {
        FileUtil.copy(sourceFs, sourceHFilePath, sinkFs, localHFilePath, false, conf);
      } catch (FileNotFoundException e1) {
        // This will mean that the hfile does not exists any where in source cluster FS. So we
-        // cannot do anything here just log and return.
+        // cannot do anything here just log and continue.
        LOG.error("Failed to copy hfile from " + sourceHFilePath + " to " + localHFilePath
            + ". Hence ignoring this hfile from replication..", e1);
-        return null;
+        continue;
      }
    }
    sinkFs.setPermission(localHFilePath, PERM_ALL_ACCESS);



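The one-word change in HBASE-15668 (return becomes continue) is what lets the copy loop survive a single missing hfile instead of silently dropping the rest of the batch. A self-contained sketch of that skip-and-continue pattern (all names hypothetical; the real code copies files between filesystems and catches `FileNotFoundException`):

```java
import java.util.ArrayList;
import java.util.List;

// Skip-and-continue copy loop: one "file not found" no longer aborts the
// whole replication batch (the old code returned, dropping later hfiles).
public class CopyLoopSketch {
  public static List<String> copyAll(List<String> hfiles, List<String> missing) {
    List<String> copied = new ArrayList<>();
    for (String hfile : hfiles) {
      if (missing.contains(hfile)) {
        // stand-in for catching FileNotFoundException: log and continue
        System.err.println("Failed to copy hfile " + hfile + "; ignoring it");
        continue;
      }
      copied.add(hfile);
    }
    return copied;
  }
}
```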
hbase git commit: HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner

2016-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 643116d0b -> dc89473fa


HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dc89473f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dc89473f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dc89473f

Branch: refs/heads/branch-1.3
Commit: dc89473faf902897441e41552d727e081e5a94f5
Parents: 643116d
Author: Ashish Singhi 
Authored: Mon Apr 4 15:02:19 2016 +0530
Committer: Ashish Singhi 
Committed: Mon Apr 4 15:10:31 2016 +0530

--
 .../master/ReplicationHFileCleaner.java | 48 +-
 .../cleaner/TestReplicationHFileCleaner.java| 70 
 2 files changed, 100 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dc89473f/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
index 9bfea4b..5df9379 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
@@ -10,6 +10,7 @@
  */
 package org.apache.hadoop.hbase.replication.master;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
@@ -41,12 +42,11 @@ import org.apache.zookeeper.KeeperException;
  * deleting it from hfile archive directory.
  */
 @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
-public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate implements Abortable {
+public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate {
  private static final Log LOG = LogFactory.getLog(ReplicationHFileCleaner.class);
   private ZooKeeperWatcher zkw;
   private ReplicationQueuesClient rqc;
   private boolean stopped = false;
-  private boolean aborted;
 
   @Override
  public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
@@ -129,18 +129,27 @@ public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate implements
 // Make my own Configuration. Then I'll have my own connection to zk that
 // I can close myself when time comes.
 Configuration conf = new Configuration(config);
+    try {
+      setConf(conf, new ZooKeeperWatcher(conf, "replicationHFileCleaner", null));
+    } catch (IOException e) {
+      LOG.error("Error while configuring " + this.getClass().getName(), e);
+    }
+  }
+
+  @VisibleForTesting
+  public void setConf(Configuration conf, ZooKeeperWatcher zk) {
 super.setConf(conf);
 try {
-  initReplicationQueuesClient(conf);
+  initReplicationQueuesClient(conf, zk);
 } catch (IOException e) {
   LOG.error("Error while configuring " + this.getClass().getName(), e);
 }
   }
 
-  private void initReplicationQueuesClient(Configuration conf)
+  private void initReplicationQueuesClient(Configuration conf, ZooKeeperWatcher zk)
   throws ZooKeeperConnectionException, IOException {
-this.zkw = new ZooKeeperWatcher(conf, "replicationHFileCleaner", null);
-this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, this);
+this.zkw = zk;
+    this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, new WarnOnlyAbortable());
   }
 
   @Override
@@ -161,18 +170,6 @@ public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate implements
   }
 
   @Override
-  public void abort(String why, Throwable e) {
-LOG.warn("Aborting ReplicationHFileCleaner because " + why, e);
-this.aborted = true;
-stop(why);
-  }
-
-  @Override
-  public boolean isAborted() {
-return this.aborted;
-  }
-
-  @Override
   public boolean isFileDeletable(FileStatus fStat) {
    Set<String> hfileRefsFromQueue;
 // all members of this class are null if replication is disabled,
@@ -190,4 +187,19 @@ public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate implements
 }
 return !hfileRefsFromQueue.contains(fStat.getPath().getName());
   }
+
+  private static class WarnOnlyAbortable implements Abortable {
+@Override
+public void abort(String why, Throwable e) {
+      LOG.warn("ReplicationHFileCleaner received abort, ignoring.  Reason: " + why);
+  if (LOG.isDebugEnabled()) {
+LOG.debug(e);
+  }
+}
+
+@Override
+public 

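The refactor above replaces the cleaner's own `Abortable` implementation with a private warn-only one, so a transient ZooKeeper error inside the cleaner chore can no longer flag an abort. A minimal standalone sketch of that warn-only-Abortable pattern (interface simplified from HBase's `Abortable`; names hypothetical):

```java
// Warn-only Abortable: instead of recording aborted state (which previously
// let a transient ZK error poison the cleaner), it just logs and moves on.
public class WarnOnlyAbortableSketch {
  public interface Abortable {
    void abort(String why, Throwable e);
    boolean isAborted();
  }

  public static class WarnOnlyAbortable implements Abortable {
    @Override
    public void abort(String why, Throwable e) {
      System.err.println("Received abort, ignoring. Reason: " + why);
    }

    @Override
    public boolean isAborted() {
      return false; // never actually aborts
    }
  }
}
```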
hbase git commit: HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner

2016-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 4bae771b6 -> e5fb045aa


HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e5fb045a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e5fb045a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e5fb045a

Branch: refs/heads/branch-1
Commit: e5fb045aa9f56969f9ac0f444be90f92bde37af0
Parents: 4bae771
Author: Ashish Singhi 
Authored: Mon Apr 4 15:02:19 2016 +0530
Committer: Ashish Singhi 
Committed: Mon Apr 4 15:07:56 2016 +0530

--
 .../master/ReplicationHFileCleaner.java | 48 +-
 .../cleaner/TestReplicationHFileCleaner.java| 70 
 2 files changed, 100 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e5fb045a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
index 9bfea4b..5df9379 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
@@ -10,6 +10,7 @@
  */
 package org.apache.hadoop.hbase.replication.master;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
@@ -41,12 +42,11 @@ import org.apache.zookeeper.KeeperException;
  * deleting it from hfile archive directory.
  */
 @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
-public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate 
implements Abortable {
+public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate {
   private static final Log LOG = 
LogFactory.getLog(ReplicationHFileCleaner.class);
   private ZooKeeperWatcher zkw;
   private ReplicationQueuesClient rqc;
   private boolean stopped = false;
-  private boolean aborted;
 
   @Override
  public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
@@ -129,18 +129,27 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
 // Make my own Configuration. Then I'll have my own connection to zk that
 // I can close myself when time comes.
 Configuration conf = new Configuration(config);
+try {
+  setConf(conf, new ZooKeeperWatcher(conf, "replicationHFileCleaner", 
null));
+} catch (IOException e) {
+  LOG.error("Error while configuring " + this.getClass().getName(), e);
+}
+  }
+
+  @VisibleForTesting
+  public void setConf(Configuration conf, ZooKeeperWatcher zk) {
 super.setConf(conf);
 try {
-  initReplicationQueuesClient(conf);
+  initReplicationQueuesClient(conf, zk);
 } catch (IOException e) {
   LOG.error("Error while configuring " + this.getClass().getName(), e);
 }
   }
 
-  private void initReplicationQueuesClient(Configuration conf)
+  private void initReplicationQueuesClient(Configuration conf, 
ZooKeeperWatcher zk)
   throws ZooKeeperConnectionException, IOException {
-this.zkw = new ZooKeeperWatcher(conf, "replicationHFileCleaner", null);
-this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, this);
+this.zkw = zk;
+this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, new 
WarnOnlyAbortable());
   }
 
   @Override
@@ -161,18 +170,6 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
   }
 
   @Override
-  public void abort(String why, Throwable e) {
-LOG.warn("Aborting ReplicationHFileCleaner because " + why, e);
-this.aborted = true;
-stop(why);
-  }
-
-  @Override
-  public boolean isAborted() {
-return this.aborted;
-  }
-
-  @Override
   public boolean isFileDeletable(FileStatus fStat) {
 Set<String> hfileRefsFromQueue;
 // all members of this class are null if replication is disabled,
@@ -190,4 +187,19 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
 }
 return !hfileRefsFromQueue.contains(fStat.getPath().getName());
   }
+
+  private static class WarnOnlyAbortable implements Abortable {
+@Override
+public void abort(String why, Throwable e) {
+  LOG.warn("ReplicationHFileCleaner received abort, ignoring.  Reason: " + 
why);
+  if (LOG.isDebugEnabled()) {
+LOG.debug(e);
+  }
+}
+
+@Override
+public 
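Besides the warn-only `Abortable`, the patch above splits `setConf` in two: the production override builds its own `ZooKeeperWatcher`, then delegates to a `@VisibleForTesting` overload that accepts one, so tests can inject a watcher they control. A minimal sketch of that injection seam; `Watcher`, `Cleaner`, and the use of `Properties` for configuration are simplified assumptions, not HBase APIs:

```java
import java.util.Properties;

// Sketch of the test-injection overload from the diff above. Watcher and
// Cleaner are stand-ins for ZooKeeperWatcher and ReplicationHFileCleaner.
public class SetConfInjectionDemo {
    static class Watcher {
        final String id;
        Watcher(String id) { this.id = id; }
    }

    static class Cleaner {
        Watcher zkw;

        /** Production path: builds its own watcher, then delegates. */
        public void setConf(Properties conf) {
            setConf(conf, new Watcher("production"));
        }

        /** Test-visible overload (marked @VisibleForTesting in the real patch). */
        public void setConf(Properties conf, Watcher zk) {
            this.zkw = zk;  // real code also wires up a ReplicationQueuesClient here
        }
    }

    public static void main(String[] args) {
        Cleaner c = new Cleaner();
        c.setConf(new Properties(), new Watcher("fake-for-test"));
        if (!"fake-for-test".equals(c.zkw.id)) throw new AssertionError();
    }
}
```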

hbase git commit: HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner

2016-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 79868bd39 -> 33396c362


HBASE-15578 Handle HBASE-15234 for ReplicationHFileCleaner


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/33396c36
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/33396c36
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/33396c36

Branch: refs/heads/master
Commit: 33396c3629a83f2379a69f3a3b493ae8e6ee0a13
Parents: 79868bd
Author: Ashish Singhi 
Authored: Mon Apr 4 15:02:19 2016 +0530
Committer: Ashish Singhi 
Committed: Mon Apr 4 15:02:19 2016 +0530

--
 .../master/ReplicationHFileCleaner.java | 48 +-
 .../cleaner/TestReplicationHFileCleaner.java| 70 
 2 files changed, 100 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/33396c36/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
index 9bfea4b..5df9379 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
@@ -10,6 +10,7 @@
  */
 package org.apache.hadoop.hbase.replication.master;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Iterables;
@@ -41,12 +42,11 @@ import org.apache.zookeeper.KeeperException;
  * deleting it from hfile archive directory.
  */
 @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
-public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate 
implements Abortable {
+public class ReplicationHFileCleaner extends BaseHFileCleanerDelegate {
   private static final Log LOG = 
LogFactory.getLog(ReplicationHFileCleaner.class);
   private ZooKeeperWatcher zkw;
   private ReplicationQueuesClient rqc;
   private boolean stopped = false;
-  private boolean aborted;
 
   @Override
  public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
@@ -129,18 +129,27 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
 // Make my own Configuration. Then I'll have my own connection to zk that
 // I can close myself when time comes.
 Configuration conf = new Configuration(config);
+try {
+  setConf(conf, new ZooKeeperWatcher(conf, "replicationHFileCleaner", 
null));
+} catch (IOException e) {
+  LOG.error("Error while configuring " + this.getClass().getName(), e);
+}
+  }
+
+  @VisibleForTesting
+  public void setConf(Configuration conf, ZooKeeperWatcher zk) {
 super.setConf(conf);
 try {
-  initReplicationQueuesClient(conf);
+  initReplicationQueuesClient(conf, zk);
 } catch (IOException e) {
   LOG.error("Error while configuring " + this.getClass().getName(), e);
 }
   }
 
-  private void initReplicationQueuesClient(Configuration conf)
+  private void initReplicationQueuesClient(Configuration conf, 
ZooKeeperWatcher zk)
   throws ZooKeeperConnectionException, IOException {
-this.zkw = new ZooKeeperWatcher(conf, "replicationHFileCleaner", null);
-this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, this);
+this.zkw = zk;
+this.rqc = ReplicationFactory.getReplicationQueuesClient(zkw, conf, new 
WarnOnlyAbortable());
   }
 
   @Override
@@ -161,18 +170,6 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
   }
 
   @Override
-  public void abort(String why, Throwable e) {
-LOG.warn("Aborting ReplicationHFileCleaner because " + why, e);
-this.aborted = true;
-stop(why);
-  }
-
-  @Override
-  public boolean isAborted() {
-return this.aborted;
-  }
-
-  @Override
   public boolean isFileDeletable(FileStatus fStat) {
 Set hfileRefsFromQueue;
 // all members of this class are null if replication is disabled,
@@ -190,4 +187,19 @@ public class ReplicationHFileCleaner extends 
BaseHFileCleanerDelegate implements
 }
 return !hfileRefsFromQueue.contains(fStat.getPath().getName());
   }
+
+  private static class WarnOnlyAbortable implements Abortable {
+@Override
+public void abort(String why, Throwable e) {
+  LOG.warn("ReplicationHFileCleaner received abort, ignoring.  Reason: " + 
why);
+  if (LOG.isDebugEnabled()) {
+LOG.debug(e);
+  }
+}
+
+@Override
+public boolean 
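The core decision the cleaner makes, visible in `isFileDeletable` above, is: an archived hfile may be deleted only if no replication peer still references it in the hfile-refs queue. A minimal sketch of that check; the in-memory set here is a stand-in for the ZooKeeper-backed queue the real cleaner consults:

```java
import java.util.Set;

// Sketch of the isFileDeletable() decision in the diff above: a file in the
// hfile archive is deletable only when no replication peer still references
// it. The refs set is an in-memory stand-in for the ZooKeeper hfile-refs queue.
public class HFileRefsCleanerDemo {
    private final Set<String> hfileRefsFromQueue;

    HFileRefsCleanerDemo(Set<String> refs) {
        this.hfileRefsFromQueue = refs;
    }

    /** Mirrors: return !hfileRefsFromQueue.contains(fStat.getPath().getName()); */
    boolean isFileDeletable(String fileName) {
        return !hfileRefsFromQueue.contains(fileName);
    }

    public static void main(String[] args) {
        HFileRefsCleanerDemo cleaner =
            new HFileRefsCleanerDemo(Set.of("hfile-being-replicated"));
        if (cleaner.isFileDeletable("hfile-being-replicated"))
            throw new AssertionError("a referenced hfile must be kept");
        if (!cleaner.isFileDeletable("old-hfile"))
            throw new AssertionError("an unreferenced hfile is deletable");
    }
}
```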

hbase git commit: HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-04-01 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 8a572b92b -> ec2822c00


HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is 
appended in the WAL

(cherry picked from commit bcbef7b401e211ad7cfcdcd176abafe7f30dbbe8)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ec2822c0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ec2822c0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ec2822c0

Branch: refs/heads/branch-1.3
Commit: ec2822c002d30a081590c2431af2f0dd522452a9
Parents: 8a572b9
Author: Ashish Singhi 
Authored: Fri Apr 1 15:55:08 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 1 16:00:32 2016 +0530

--
 .../hadoop/hbase/regionserver/wal/FSHLog.java   |  4 +--
 .../hbase/regionserver/wal/MetricsWAL.java  |  7 +++-
 .../regionserver/wal/WALActionsListener.java| 10 --
 .../replication/regionserver/Replication.java   | 35 +---
 .../hadoop/hbase/wal/DisabledWALProvider.java   |  4 +--
 .../hbase/regionserver/wal/TestMetricsWAL.java  |  4 +--
 .../hbase/wal/WALPerformanceEvaluation.java |  3 +-
 7 files changed, 53 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ec2822c0/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
index eb1cf57..c01cc1c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
@@ -1399,14 +1399,14 @@ public class FSHLog implements WAL {
 }
   }
 
-  private long postAppend(final Entry e, final long elapsedTime) {
+  private long postAppend(final Entry e, final long elapsedTime) throws 
IOException {
 long len = 0;
 if (!listeners.isEmpty()) {
   for (Cell cell : e.getEdit().getCells()) {
 len += CellUtil.estimatedSerializedSizeOf(cell);
   }
   for (WALActionsListener listener : listeners) {
-listener.postAppend(len, elapsedTime);
+listener.postAppend(len, elapsedTime, e.getKey(), e.getEdit());
   }
 }
 return len;

http://git-wip-us.apache.org/repos/asf/hbase/blob/ec2822c0/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
index 99792e5..69a31cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
@@ -20,9 +20,13 @@
 package org.apache.hadoop.hbase.regionserver.wal;
 
 import com.google.common.annotations.VisibleForTesting;
+
+import java.io.IOException;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.wal.WALKey;
 import org.apache.hadoop.hbase.CompatibilitySingletonFactory;
 import org.apache.hadoop.util.StringUtils;
 
@@ -51,7 +55,8 @@ public class MetricsWAL extends WALActionsListener.Base {
   }
 
   @Override
-  public void postAppend(final long size, final long time) {
+  public void postAppend(final long size, final long time, final WALKey logkey,
+  final WALEdit logEdit) throws IOException {
 source.incrementAppendCount();
 source.incrementAppendTime(time);
 source.incrementAppendSize(size);

http://git-wip-us.apache.org/repos/asf/hbase/blob/ec2822c0/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
index db98083..60ab7b8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
@@ -101,8 +101,12 @@ public interface WALActionsListener {
* TODO: Combine this with above.
* @param entryLen approx length of cells in this append.
* @param elapsedTimeMillis elapsed time in milliseconds.
+   * @param logKey A WAL 
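The key change in HBASE-15424, shown across the diffs above, is widening `WALActionsListener.postAppend` from `(entryLen, elapsedTimeMillis)` to also pass the `WALKey` and `WALEdit`, so a listener can inspect the appended entry's content (e.g. bulk-load markers) after it is durably in the WAL. A minimal sketch of the widened callback; `WALKey`/`WALEdit` here are trivial stand-ins for the real classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the widened postAppend(...) callback from the diffs above. The
// real replication listener would scan the edit for bulk-load descriptors and
// register hfile-refs in ZooKeeper only after the entry is appended.
public class PostAppendListenerDemo {
    static class WALKey { final String region; WALKey(String r) { region = r; } }
    static class WALEdit { final List<String> cells = new ArrayList<>(); }

    interface WALActionsListener {
        // Old form was postAppend(long entryLen, long elapsedTimeMillis);
        // the patch adds the key and edit so listeners can inspect content.
        void postAppend(long entryLen, long elapsedTimeMillis,
                        WALKey logKey, WALEdit logEdit);
    }

    static class RecordingListener implements WALActionsListener {
        long seenLen; String seenRegion; int seenCells;

        @Override
        public void postAppend(long len, long ms, WALKey k, WALEdit e) {
            seenLen = len; seenRegion = k.region; seenCells = e.cells.size();
        }
    }

    public static void main(String[] args) {
        WALEdit edit = new WALEdit();
        edit.cells.add("cell-1");
        edit.cells.add("cell-2");
        RecordingListener l = new RecordingListener();
        l.postAppend(42, 3, new WALKey("region-a"), edit);
        if (l.seenCells != 2 || !"region-a".equals(l.seenRegion))
            throw new AssertionError();
    }
}
```

Passing the key and edit after the append (rather than before) is what guarantees the hfile-refs are registered only for events that actually made it into the WAL.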

hbase git commit: HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-04-01 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1 d7d12aedd -> bcbef7b40


HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is 
appended in the WAL


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/bcbef7b4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/bcbef7b4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/bcbef7b4

Branch: refs/heads/branch-1
Commit: bcbef7b401e211ad7cfcdcd176abafe7f30dbbe8
Parents: d7d12ae
Author: Ashish Singhi 
Authored: Fri Apr 1 15:55:08 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 1 15:55:08 2016 +0530

--
 .../hadoop/hbase/regionserver/wal/FSHLog.java   |  4 +--
 .../hbase/regionserver/wal/MetricsWAL.java  |  7 +++-
 .../regionserver/wal/WALActionsListener.java| 10 --
 .../replication/regionserver/Replication.java   | 35 +---
 .../hadoop/hbase/wal/DisabledWALProvider.java   |  4 +--
 .../hbase/regionserver/wal/TestMetricsWAL.java  |  4 +--
 .../hbase/wal/WALPerformanceEvaluation.java |  3 +-
 7 files changed, 53 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/bcbef7b4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
index eb1cf57..c01cc1c 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
@@ -1399,14 +1399,14 @@ public class FSHLog implements WAL {
 }
   }
 
-  private long postAppend(final Entry e, final long elapsedTime) {
+  private long postAppend(final Entry e, final long elapsedTime) throws 
IOException {
 long len = 0;
 if (!listeners.isEmpty()) {
   for (Cell cell : e.getEdit().getCells()) {
 len += CellUtil.estimatedSerializedSizeOf(cell);
   }
   for (WALActionsListener listener : listeners) {
-listener.postAppend(len, elapsedTime);
+listener.postAppend(len, elapsedTime, e.getKey(), e.getEdit());
   }
 }
 return len;

http://git-wip-us.apache.org/repos/asf/hbase/blob/bcbef7b4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
index 99792e5..69a31cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
@@ -20,9 +20,13 @@
 package org.apache.hadoop.hbase.regionserver.wal;
 
 import com.google.common.annotations.VisibleForTesting;
+
+import java.io.IOException;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.wal.WALKey;
 import org.apache.hadoop.hbase.CompatibilitySingletonFactory;
 import org.apache.hadoop.util.StringUtils;
 
@@ -51,7 +55,8 @@ public class MetricsWAL extends WALActionsListener.Base {
   }
 
   @Override
-  public void postAppend(final long size, final long time) {
+  public void postAppend(final long size, final long time, final WALKey logkey,
+  final WALEdit logEdit) throws IOException {
 source.incrementAppendCount();
 source.incrementAppendTime(time);
 source.incrementAppendSize(size);

http://git-wip-us.apache.org/repos/asf/hbase/blob/bcbef7b4/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
index db98083..60ab7b8 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
@@ -101,8 +101,12 @@ public interface WALActionsListener {
* TODO: Combine this with above.
* @param entryLen approx length of cells in this append.
* @param elapsedTimeMillis elapsed time in milliseconds.
+   * @param logKey A WAL key
+   * @param logEdit A WAL edit containing list of cells.
+   * @throws 
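The `MetricsWAL` hunk above shows that existing listeners only need a signature update: the metrics path accepts the new `logkey`/`logEdit` arguments but keeps aggregating just append count, time, and size. A sketch of that listener with plain counters standing in for the `CompatibilitySingletonFactory`-backed metrics source:

```java
// Sketch of the MetricsWAL-style listener from the diff above: postAppend now
// receives the key and edit, but the metrics path ignores them and only
// aggregates size and time. Plain fields stand in for the real metrics source.
public class MetricsWALDemo {
    static class Source {
        long appendCount, appendTimeMs, appendSizeBytes;
    }

    final Source source = new Source();

    void postAppend(long size, long time, Object logKey, Object logEdit) {
        source.appendCount++;            // source.incrementAppendCount()
        source.appendTimeMs += time;     // source.incrementAppendTime(time)
        source.appendSizeBytes += size;  // source.incrementAppendSize(size)
    }

    public static void main(String[] args) {
        MetricsWALDemo m = new MetricsWALDemo();
        m.postAppend(100, 5, null, null);
        m.postAppend(50, 2, null, null);
        if (m.source.appendCount != 2) throw new AssertionError();
        if (m.source.appendTimeMs != 7) throw new AssertionError();
        if (m.source.appendSizeBytes != 150) throw new AssertionError();
    }
}
```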

hbase git commit: HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-04-01 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master 5d79790c5 -> 25419d8b1


HBASE-15424 Add bulk load hfile-refs for replication in ZK after the event is 
appended in the WAL


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/25419d8b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/25419d8b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/25419d8b

Branch: refs/heads/master
Commit: 25419d8b18dd8f35a102614cd31b274659f747ef
Parents: 5d79790
Author: Ashish Singhi 
Authored: Fri Apr 1 15:40:36 2016 +0530
Committer: Ashish Singhi 
Committed: Fri Apr 1 15:40:36 2016 +0530

--
 .../hbase/regionserver/wal/AbstractFSWAL.java   |  4 +-
 .../hbase/regionserver/wal/MetricsWAL.java  |  7 ++-
 .../regionserver/wal/WALActionsListener.java| 10 +++-
 .../replication/regionserver/Replication.java   | 50 
 .../hadoop/hbase/wal/DisabledWALProvider.java   |  7 +--
 .../hbase/regionserver/wal/TestMetricsWAL.java  | 10 ++--
 .../hbase/wal/WALPerformanceEvaluation.java |  3 +-
 7 files changed, 58 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/25419d8b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
index f189ff1..b89488a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
@@ -840,14 +840,14 @@ public abstract class AbstractFSWAL implements WAL {
 return true;
   }
 
-  private long postAppend(final Entry e, final long elapsedTime) {
+  private long postAppend(final Entry e, final long elapsedTime) throws 
IOException {
 long len = 0;
 if (!listeners.isEmpty()) {
   for (Cell cell : e.getEdit().getCells()) {
 len += CellUtil.estimatedSerializedSizeOf(cell);
   }
   for (WALActionsListener listener : listeners) {
-listener.postAppend(len, elapsedTime);
+listener.postAppend(len, elapsedTime, e.getKey(), e.getEdit());
   }
 }
 return len;

http://git-wip-us.apache.org/repos/asf/hbase/blob/25419d8b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
index 99792e5..69a31cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/MetricsWAL.java
@@ -20,9 +20,13 @@
 package org.apache.hadoop.hbase.regionserver.wal;
 
 import com.google.common.annotations.VisibleForTesting;
+
+import java.io.IOException;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.wal.WALKey;
 import org.apache.hadoop.hbase.CompatibilitySingletonFactory;
 import org.apache.hadoop.util.StringUtils;
 
@@ -51,7 +55,8 @@ public class MetricsWAL extends WALActionsListener.Base {
   }
 
   @Override
-  public void postAppend(final long size, final long time) {
+  public void postAppend(final long size, final long time, final WALKey logkey,
+  final WALEdit logEdit) throws IOException {
 source.incrementAppendCount();
 source.incrementAppendTime(time);
 source.incrementAppendSize(size);

http://git-wip-us.apache.org/repos/asf/hbase/blob/25419d8b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
index a6452e2..adcc6eb 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALActionsListener.java
@@ -98,8 +98,12 @@ public interface WALActionsListener {
* TODO: Combine this with above.
* @param entryLen approx length of cells in this append.
* @param elapsedTimeMillis elapsed time in milliseconds.
+   * @param logKey A WAL key
+   * @param logEdit 
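On the producer side, the `AbstractFSWAL`/`FSHLog` hunks above show `postAppend` summing an estimated serialized size over the entry's cells and then notifying every registered listener with the length, elapsed time, key, and edit. A sketch of that loop; string lengths stand in for `CellUtil.estimatedSerializedSizeOf`, and the key/edit types are simplified:

```java
import java.util.List;

// Sketch of the postAppend(...) loop from the AbstractFSWAL/FSHLog hunks
// above: compute the entry's estimated size over its cells, then fan the
// (len, elapsed, key, edit) tuple out to every listener. Cell sizes here are
// string lengths standing in for CellUtil.estimatedSerializedSizeOf(cell).
public class PostAppendLoopDemo {
    interface Listener {
        void postAppend(long len, long ms, String key, List<String> edit);
    }

    static long postAppend(List<String> cells, long elapsed, String key,
                           List<Listener> listeners) {
        long len = 0;
        if (!listeners.isEmpty()) {
            for (String cell : cells) {
                len += cell.length();  // stand-in for estimatedSerializedSizeOf
            }
            for (Listener l : listeners) {
                l.postAppend(len, elapsed, key, cells);
            }
        }
        return len;
    }

    public static void main(String[] args) {
        long[] seen = new long[1];
        Listener l = (len, ms, key, edit) -> seen[0] = len;
        long len = postAppend(List.of("abc", "de"), 1, "k", List.of(l));
        if (len != 5 || seen[0] != 5) throw new AssertionError();
    }
}
```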

hbase git commit: Add ashishsinghi to pom.xml

2016-03-29 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/master afdfd1bd9 -> f1fc5208a


Add ashishsinghi to pom.xml

Change-Id: Ib0709d92622350c50bee7e8a0bae0554d40df882


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f1fc5208
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f1fc5208
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f1fc5208

Branch: refs/heads/master
Commit: f1fc5208aa724de7e31cdd4e2e4a696cf823929c
Parents: afdfd1b
Author: Ashish Singhi <ashishsin...@apache.org>
Authored: Wed Mar 30 11:08:21 2016 +0530
Committer: Ashish Singhi <ashish.sin...@huawei.com>
Committed: Wed Mar 30 11:09:30 2016 +0530

--
 pom.xml | 8 
 1 file changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f1fc5208/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 450275c..0324c1c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -169,6 +169,14 @@
       <organizationUrl>http://www.facebook.com/</organizationUrl>
     </developer>
     <developer>
+      <id>ashishsinghi</id>
+      <name>Ashish Singhi</name>
+      <email>ashishsin...@apache.org</email>
+      <timezone>+5</timezone>
+      <organization>Huawei</organization>
+      <organizationUrl>http://www.huawei.com/en/</organizationUrl>
+    </developer>
+    <developer>
       <id>busbey</id>
       <name>Sean Busbey</name>
       <email>bus...@apache.org</email>