[jira] [Commented] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351295#comment-16351295
 ] 

stack commented on HBASE-19528:
---

Looks to be getting further on this retry, [~churromorales]...

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> You input a table, the desired concurrency, and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (the default is the current 
> time).  It takes that list of stores requiring compaction and executes the 
> requests concurrently, with at most N distinct RegionServers compacting at a 
> given time.  Each thread waits for its compaction to complete before moving to 
> the next queue.  If a region split, merge or move happens, the tool ensures 
> those regions get major compacted as well. 
> This helps us in two ways: we can limit how much I/O bandwidth we use for 
> major compaction cluster-wide, and we are guaranteed that once the tool 
> completes, all requested compactions have completed regardless of moves, 
> merges and splits. 
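For illustration, here is a minimal sketch of the same idea against the HBase 2.x 
client API (hypothetical class, not the attached tool): it major compacts every 
region of a table while keeping at most a fixed number of compactions in flight, 
polling until each finishes. The real tool additionally filters stores by 
timestamp, throttles per RegionServer rather than per region, and tracks 
splits/merges/moves, all of which is omitted here.

{code:java}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactionState;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MajorCompactSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf(args[0]);
    int concurrency = Integer.parseInt(args[1]);
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      List<HRegionLocation> locations =
          conn.getRegionLocator(table).getAllRegionLocations();
      ExecutorService pool = Executors.newFixedThreadPool(concurrency);
      for (HRegionLocation loc : locations) {
        byte[] regionName = loc.getRegionInfo().getRegionName();
        pool.submit(() -> {
          // Connection is thread-safe; take a lightweight Admin per task.
          try (Admin admin = conn.getAdmin()) {
            admin.majorCompactRegion(regionName);
            // Hold this pool slot until the region reports no running compaction.
            // The request is queued asynchronously, so a production tool would
            // also confirm it was picked up (e.g. by checking store file ages).
            do {
              Thread.sleep(5000);
            } while (admin.getCompactionStateForRegion(regionName)
                != CompactionState.NONE);
          } catch (Exception e) {
            e.printStackTrace();
          }
        });
      }
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.DAYS);
    }
  }
}
{code}

Bounding the thread pool is what caps cluster-wide compaction I/O: each worker 
blocks on its own region, so at most the requested number of compactions run at 
once.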



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19528:
--
Attachment: HBASE-19528.v2.branch-1.patch

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> You input a table, the desired concurrency, and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (the default is the current 
> time).  It takes that list of stores requiring compaction and executes the 
> requests concurrently, with at most N distinct RegionServers compacting at a 
> given time.  Each thread waits for its compaction to complete before moving to 
> the next queue.  If a region split, merge or move happens, the tool ensures 
> those regions get major compacted as well. 
> This helps us in two ways: we can limit how much I/O bandwidth we use for 
> major compaction cluster-wide, and we are guaranteed that once the tool 
> completes, all requested compactions have completed regardless of moves, 
> merges and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-19919) Tidying up logging

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-19919.
---
   Resolution: Fixed
 Assignee: stack
Fix Version/s: 2.0.0-beta-2

Pushed to branch-2 and master.

> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19919-Tidying-up-logging.patch, 
> HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19919) Tidying up logging

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351285#comment-16351285
 ] 

stack commented on HBASE-19919:
---

I went a bit further. Attached is what I pushed to master and branch-2.

There is still a bunch of work to be done cutting down thread counts, 
connections, etc. Our defaults are sloppy and in need of another tidying. The 
log narrative needs more passes than this one, but lines are a bit more curt 
now. That will do for now. There are more failing unit tests to fix!

Thanks for the review, [~appy].


> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 0001-HBASE-19919-Tidying-up-logging.patch, 
> HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19919) Tidying up logging

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19919:
--
Attachment: 0001-HBASE-19919-Tidying-up-logging.patch

> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 0001-HBASE-19919-Tidying-up-logging.patch, 
> HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351275#comment-16351275
 ] 

Hadoop QA commented on HBASE-19925:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}114m 
48s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}154m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909079/HBASE-19925.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e9329571bae2 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 41974efa85 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11371/testReport/ |
| Max. process+thread count | 4950 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11371/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Delete an unreachable peer will 

[jira] [Commented] (HBASE-19917) Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351271#comment-16351271
 ] 

Hadoop QA commented on HBASE-19917:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19917 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909082/HBASE-19917.master.000.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 87d66b2e93af 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 41974efa85 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372/testReport/ |
| Max. process+thread count | 1528 (vs. ulimit of 1) |
| modules | C: hbase-rsgroup U: hbase-rsgroup |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Improve 

[jira] [Commented] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests

2018-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351258#comment-16351258
 ] 

Hudson commented on HBASE-19918:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4516 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4516/])
HBASE-19918 Promote TestAsyncClusterAdminApi to LargeTests (zghao: rev 
ad580acc893bc87845a0f65e5d32b2b18cccf9c6)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncClusterAdminApi.java


> Promote TestAsyncClusterAdminApi to LargeTests
> --
>
> Key: HBASE-19918
> URL: https://issues.apache.org/jira/browse/HBASE-19918
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0-beta-1
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19918.master.001.patch
>
>
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2/221/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncClusterAdminApi/org_apache_hadoop_hbase_client_TestAsyncClusterAdminApi/
> org.junit.runners.model.TestTimedOutException: test timed out after 180 
> seconds
> Found this timeout in our branch-2 nightly jobs, and this test runs for more 
> than 110 seconds on my local computer.
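For reference, a promotion like this is just a change to the JUnit category 
annotation so the test runs in the large-tests bucket with its longer timeout. A 
minimal sketch (hypothetical class body, not the attached patch):

{code:java}
import org.apache.hadoop.hbase.testclassification.ClientTests;
import org.apache.hadoop.hbase.testclassification.LargeTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Previously tagged with a smaller size category (MediumTests); LargeTests
// moves the class into the large-test run, which allows a longer timeout.
@Category({ ClientTests.class, LargeTests.class })
public class TestAsyncClusterAdminApiExample {
  @Test
  public void slowClusterAdminCheck() throws Exception {
    // long-running cluster admin assertions would go here
  }
}
{code}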



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19917) Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient

2018-02-02 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-19917:
-
Status: Patch Available  (was: Open)

> Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient
> -
>
> Key: HBASE-19917
> URL: https://issues.apache.org/jira/browse/HBASE-19917
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-19917.master.000.patch
>
>
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java|borderStyle=solid}
> private List<ServerName> filterServers(Collection<Address> servers,
> Collection<ServerName> onlineServers) {
>   ArrayList<ServerName> finalList = new ArrayList<ServerName>();
>   for (Address server : servers) {
> for(ServerName curr: onlineServers) {
>   if(curr.getAddress().equals(server)) {
> finalList.add(curr);
>   }
> }
>   }
>   return finalList;
> }
> {code}
> filterServers() returns the intersection of servers and onlineServers. The 
> current implementation has O(m * n) time complexity (two nested loops); it 
> could run in O(m + n) if a HashSet were used, at the cost of some extra space.
> Another point that could be improved: filterServers() is only called from 
> filterOfflineServers(), which passes a Set and a List, so the current 
> filterServers(Collection, Collection) signature could be tightened as well.
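A minimal sketch of the O(m + n) variant described above (illustrative only, not 
the attached patch; java.util imports omitted as in the quoted snippet, and it 
relies on Address implementing equals/hashCode):

{code:java}
private List<ServerName> filterServers(Collection<Address> servers,
    Collection<ServerName> onlineServers) {
  // One pass to build the lookup set, one pass over the online servers:
  // O(m + n) time at the cost of O(m) extra space.
  Set<Address> serverSet = new HashSet<>(servers);
  List<ServerName> finalList = new ArrayList<>();
  for (ServerName online : onlineServers) {
    if (serverSet.contains(online.getAddress())) {
      finalList.add(online);
    }
  }
  return finalList;
}
{code}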



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19917) Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient

2018-02-02 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351247#comment-16351247
 ] 

Xiang Li commented on HBASE-19917:
--

Uploaded the very first patch. All UTs under hbase-rsgroup pass on my local 
machine. Running the full UT suite now.

> Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient
> -
>
> Key: HBASE-19917
> URL: https://issues.apache.org/jira/browse/HBASE-19917
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-19917.master.000.patch
>
>
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java|borderStyle=solid}
> private List<ServerName> filterServers(Collection<Address> servers,
> Collection<ServerName> onlineServers) {
>   ArrayList<ServerName> finalList = new ArrayList<ServerName>();
>   for (Address server : servers) {
> for(ServerName curr: onlineServers) {
>   if(curr.getAddress().equals(server)) {
> finalList.add(curr);
>   }
> }
>   }
>   return finalList;
> }
> {code}
> filterServers() returns the intersection of servers and onlineServers. The 
> current implementation has O(m * n) time complexity (two nested loops); it 
> could run in O(m + n) if a HashSet were used, at the cost of some extra space.
> Another point that could be improved: filterServers() is only called from 
> filterOfflineServers(), which passes a Set and a List, so the current 
> filterServers(Collection, Collection) signature could be tightened as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19917) Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient

2018-02-02 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-19917:
-
Attachment: HBASE-19917.master.000.patch

> Improve RSGroupBasedLoadBalancer#filterServers() to be more efficient
> -
>
> Key: HBASE-19917
> URL: https://issues.apache.org/jira/browse/HBASE-19917
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-19917.master.000.patch
>
>
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java|borderStyle=solid}
> private List<ServerName> filterServers(Collection<Address> servers,
> Collection<ServerName> onlineServers) {
>   ArrayList<ServerName> finalList = new ArrayList<ServerName>();
>   for (Address server : servers) {
> for(ServerName curr: onlineServers) {
>   if(curr.getAddress().equals(server)) {
> finalList.add(curr);
>   }
> }
>   }
>   return finalList;
> }
> {code}
> filterServers() returns the intersection of servers and onlineServers. The 
> current implementation has O(m * n) time complexity (two nested loops); it 
> could run in O(m + n) if a HashSet were used, at the cost of some extra space.
> Another point that could be improved: filterServers() is only called from 
> filterOfflineServers(), which passes a Set and a List, so the current 
> filterServers(Collection, Collection) signature could be tightened as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351237#comment-16351237
 ] 

Hadoop QA commented on HBASE-19855:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} hbase-server: The patch generated 0 new + 253 
unchanged - 2 fixed = 253 total (was 255) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 
38s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19855 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909072/HBASE-19855.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 9e9f10123546 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / ad580acc89 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11370/testReport/ |
| Max. process+thread count | 5070 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11370/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor RegionScannerImpl.nextInternal method
> 

[jira] [Updated] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-19925:
-
Assignee: Yun Zhao
  Status: Patch Available  (was: Open)

> Delete an unreachable peer will trigger all regionservers to abort
> 
>
> Key: HBASE-19925
> URL: https://issues.apache.org/jira/browse/HBASE-19925
> Project: HBase
>  Issue Type: Bug
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Critical
> Attachments: HBASE-19925.master.001.patch
>
>
> Add an unreachable peer
> {code:java}
> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
> After a while, delete it; the RegionServer will then log the following and 
> stop.
> {code:java}
> 2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
> regionserver.ReplicationSource: Replicating 
> 5467de52-dc46-45be-902c-110dd7a83e06 -> null
> 2018-02-02 20:04:25,960 ERROR 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, currentPath=null
> java.lang.IllegalArgumentException: Peer with id= 4 is not connected
>  at 
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
> 2018-02-02 20:04:25,960 INFO 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.HRegionServer: STOPPED: Unexpected exception in 
> ReplicationSourceWorkerThread{code}
>  
> HBase 1.2.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-19925:
-
Attachment: HBASE-19925.master.001.patch

> Delete an unreachable peer will trigger all regionservers to abort
> 
>
> Key: HBASE-19925
> URL: https://issues.apache.org/jira/browse/HBASE-19925
> Project: HBase
>  Issue Type: Bug
>Reporter: Yun Zhao
>Priority: Critical
> Attachments: HBASE-19925.master.001.patch
>
>
> Add an unreachable peer
> {code:java}
> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
> After a while, delete it; the RegionServer will then log the following and 
> stop.
> {code:java}
> 2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
> regionserver.ReplicationSource: Replicating 
> 5467de52-dc46-45be-902c-110dd7a83e06 -> null
> 2018-02-02 20:04:25,960 ERROR 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, currentPath=null
> java.lang.IllegalArgumentException: Peer with id= 4 is not connected
>  at 
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
> 2018-02-02 20:04:25,960 INFO 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.HRegionServer: STOPPED: Unexpected exception in 
> ReplicationSourceWorkerThread{code}
>  
> HBase 1.2.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Yun Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351235#comment-16351235
 ] 

Yun Zhao commented on HBASE-19925:
--

[~yuzhih...@gmail.com]   

Yes, that is indeed the cause.
However, when the ReplicationSource is terminating, it should not start a new 
ReplicationSourceWorkerThread / ReplicationSourceShipper thread.
That thread reconnects to ZooKeeper, so the ZooKeeper SendThread cannot stop.

> Delete an unreachable peer will trigger all regionservers to abort
> 
>
> Key: HBASE-19925
> URL: https://issues.apache.org/jira/browse/HBASE-19925
> Project: HBase
>  Issue Type: Bug
>Reporter: Yun Zhao
>Priority: Critical
> Attachments: HBASE-19925.master.001.patch
>
>
> Add an unreachable peer
> {code:java}
> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
> After a while, delete it; the RegionServer will then log the following and 
> stop.
> {code:java}
> 2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
> regionserver.ReplicationSource: Replicating 
> 5467de52-dc46-45be-902c-110dd7a83e06 -> null
> 2018-02-02 20:04:25,960 ERROR 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, currentPath=null
> java.lang.IllegalArgumentException: Peer with id= 4 is not connected
>  at 
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
> 2018-02-02 20:04:25,960 INFO 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.HRegionServer: STOPPED: Unexpected exception in 
> ReplicationSourceWorkerThread{code}
>  
> HBase 1.2.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351232#comment-16351232
 ] 

Hadoop QA commented on HBASE-19703:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  8m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
53s{color} | {color:red} hbase-server: The patch generated 2 new + 150 
unchanged - 2 fixed = 152 total (was 152) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}185m 51s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}238m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestReplicasClient |
|   | hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |
|   | hadoop.hbase.regionserver.TestEncryptionKeyRotation |
|   | hadoop.hbase.TestPartialResultsFromClientSide |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
|   | hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL |
|   | hadoop.hbase.replication.TestReplicationEndpoint |
|   | hadoop.hbase.TestIOFencing |
|   | hadoop.hbase.io.encoding.TestEncodedSeekers |
|   | hadoop.hbase.regionserver.TestMajorCompaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-19703 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909050/HBASE-19703.branch-2.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cc51df42234b 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / b0bf6f504e |
| maven | 

[jira] [Updated] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-19925:
-
Description: 
Add an unreachable peer
{code:java}
add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
After a while, delete it; the RegionServer will then log the following and 
stop.
{code:java}
2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
regionserver.ReplicationSource: Replicating 
5467de52-dc46-45be-902c-110dd7a83e06 -> null
2018-02-02 20:04:25,960 ERROR 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.ReplicationSource: Unexpected exception in 
ReplicationSourceWorkerThread, currentPath=null
java.lang.IllegalArgumentException: Peer with id= 4 is not connected
 at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
2018-02-02 20:04:25,960 INFO 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.HRegionServer: STOPPED: Unexpected exception in 
ReplicationSourceWorkerThread{code}
 

HBase 1.2.6

  was:
Add an unreachable peer
{code:java}
add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
After a while, delete it; the RegionServer will then log the following and 
stop.
{code:java}
2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
regionserver.ReplicationSource: Replicating 
5467de52-dc46-45be-902c-110dd7a83e06 -> null
2018-02-02 20:04:25,960 ERROR 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.ReplicationSource: Unexpected exception in 
ReplicationSourceWorkerThread, currentPath=null
java.lang.IllegalArgumentException: Peer with id= 4 is not connected
 at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
2018-02-02 20:04:25,960 INFO 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.HRegionServer: STOPPED: Unexpected exception in 
ReplicationSourceWorkerThread{code}
 


> Delete an unreachable peer will trigger all regionservers to abort
> 
>
> Key: HBASE-19925
> URL: https://issues.apache.org/jira/browse/HBASE-19925
> Project: HBase
>  Issue Type: Bug
>Reporter: Yun Zhao
>Priority: Critical
>
> Add an unreachable peer
> {code:java}
> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
> After a while, delete it; the RegionServer will then log the following and 
> stop.
> {code:java}
> 2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
> regionserver.ReplicationSource: Replicating 
> 5467de52-dc46-45be-902c-110dd7a83e06 -> null
> 2018-02-02 20:04:25,960 ERROR 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, currentPath=null
> java.lang.IllegalArgumentException: Peer with id= 4 is not connected
>  at 
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
> 2018-02-02 20:04:25,960 INFO 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.HRegionServer: STOPPED: Unexpected exception in 
> ReplicationSourceWorkerThread{code}
>  
> HBase 1.2.6



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351229#comment-16351229
 ] 

Ted Yu commented on HBASE-19925:


It seems isPeerEnabled() could call getPeer() first; if the return value is 
null, don't proceed to calling getStatusOfPeer().

Do you want to provide a patch?

Thanks
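
A sketch of that guard (names are assumed from the comment and the stack trace 
above: the peer lookup is written as getPeer() and the fields as those of 
ReplicationSource; this is not a patch):

{code:java}
// If the peer has already been removed (e.g. delete_peer on an unreachable
// peer), report it as disabled instead of letting getStatusOfPeer() throw
// IllegalArgumentException and abort the regionserver.
protected boolean isPeerEnabled() {
  if (this.replicationPeers.getPeer(this.peerId) == null) {
    return false;
  }
  return this.replicationPeers.getStatusOfPeer(this.peerId);
}
{code}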

> Delete an unreachable peer will trigger all regionservers to abort
> 
>
> Key: HBASE-19925
> URL: https://issues.apache.org/jira/browse/HBASE-19925
> Project: HBase
>  Issue Type: Bug
>Reporter: Yun Zhao
>Priority: Critical
>
> Add an unreachable peer
> {code:java}
> add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
> After a while to delete it,Regionserver will appear in the following log and 
> stop.
> {code:java}
> 2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
> regionserver.ReplicationSource: Replicating 
> 5467de52-dc46-45be-902c-110dd7a83e06 -> null
> 2018-02-02 20:04:25,960 ERROR 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, currentPath=null
> java.lang.IllegalArgumentException: Peer with id= 4 is not connected
>  at 
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
> 2018-02-02 20:04:25,960 INFO 
> [main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
>  regionserver.HRegionServer: STOPPED: Unexpected exception in 
> ReplicationSourceWorkerThread{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19925) Delete an unreachable peer will trigger all regionservers to abort

2018-02-02 Thread Yun Zhao (JIRA)
Yun Zhao created HBASE-19925:


 Summary: Delete an unreachable peer will trigger all 
regionservers to abort
 Key: HBASE-19925
 URL: https://issues.apache.org/jira/browse/HBASE-19925
 Project: HBase
  Issue Type: Bug
Reporter: Yun Zhao


Add an unreachable peer
{code:java}
add_peer '4', CLUSTER_KEY => "server1.cie.com:2181:/hbase"{code}
After a while, delete it; the RegionServer will then log the following and 
stop.
{code:java}
2018-02-02 20:04:25,959 INFO [main-EventThread.replicationSource,4] 
regionserver.ReplicationSource: Replicating 
5467de52-dc46-45be-902c-110dd7a83e06 -> null
2018-02-02 20:04:25,960 ERROR 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.ReplicationSource: Unexpected exception in 
ReplicationSourceWorkerThread, currentPath=null
java.lang.IllegalArgumentException: Peer with id= 4 is not connected
 at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getStatusOfPeer(ReplicationPeersZKImpl.java:207)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.isPeerEnabled(ReplicationSource.java:327)
 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:512)
2018-02-02 20:04:25,960 INFO 
[main-EventThread.replicationSource,4.replicationSource..com%2C16020%2C1515498473547.default,4]
 regionserver.HRegionServer: STOPPED: Unexpected exception in 
ReplicationSourceWorkerThread{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351221#comment-16351221
 ] 

Hadoop QA commented on HBASE-19920:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
25s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
32s{color} | {color:red} hbase-client: The patch generated 6 new + 374 
unchanged - 2 fixed = 380 total (was 376) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hbase-client generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
56s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 58s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure.TestFailedProcCleanup |
|   | hadoop.hbase.TestJMXListener |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909057/HBASE-19920.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 9a888c7e6bbc 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HBASE-19924) hbase rpc throttling does not work for multi() with request count rater.

2018-02-02 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-19924:


 Summary: hbase rpc throttling does not work for multi() with 
request count rater.
 Key: HBASE-19924
 URL: https://issues.apache.org/jira/browse/HBASE-19924
 Project: HBase
  Issue Type: Bug
  Components: rpc
Affects Versions: 1.2.6, 2.0
Reporter: huaxiang sun
Assignee: huaxiang sun


Basically, RPC throttling does not work with a request-count-based rater for 
multi(). In the following code, by the time each limiter's checkQuota() is 
called, numWrites/numReads has been lost.
{code:java}

@Override
public void checkQuota(int numWrites, int numReads, int numScans) throws 
ThrottlingException {
  writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);
  readConsumed = estimateConsume(OperationType.GET, numReads, 100);
  readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000);

  writeAvailable = Long.MAX_VALUE;
  readAvailable = Long.MAX_VALUE;
  for (final QuotaLimiter limiter : limiters) {
if (limiter.isBypass()) continue;

limiter.checkQuota(writeConsumed, readConsumed);
readAvailable = Math.min(readAvailable, limiter.getReadAvailable());
writeAvailable = Math.min(writeAvailable, limiter.getWriteAvailable());
  }

  for (final QuotaLimiter limiter : limiters) {
limiter.grabQuota(writeConsumed, readConsumed);
  }
}{code}
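
One possible shape of a fix, sketched against a hypothetical limiter interface 
(this is not the HBase QuotaLimiter API and not a proposed patch): keep the raw 
operation counts visible to the limiter instead of collapsing them into byte 
estimates before the rater runs.

{code:java}
// Hypothetical interface; only the idea matters: the per-request counts must
// reach the rater, not just the estimated byte sizes derived from them.
interface CountAwareLimiter {
  void checkQuota(long writeReqs, long estimatedWriteSize,
      long readReqs, long estimatedReadSize) throws ThrottlingException;
}

// Corresponding change in the loop above (sketch):
void checkQuota(int numWrites, int numReads, int numScans,
    Iterable<CountAwareLimiter> limiters,
    long writeConsumed, long readConsumed) throws ThrottlingException {
  for (CountAwareLimiter limiter : limiters) {
    // Pass the counts alongside the size estimates so a request-count rater
    // can throttle multi() on the number of operations it carries.
    limiter.checkQuota(numWrites, writeConsumed,
        numReads + numScans, readConsumed);
  }
}
{code}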



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19726:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

> Failed to start HMaster due to infinite retrying on meta assign
> ---
>
> Key: HBASE-19726
> URL: https://issues.apache.org/jira/browse/HBASE-19726
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19726.patch
>
>
> This is what I got at first: an exception when trying to write something to 
> meta before meta had been onlined.
> {noformat}
> 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running 
> RecoverMetaProcedure to ensure proper hbase:meta deploy.
> 2018-01-07,21:03:14,637 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, 
> state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true
> 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896
>  belongs to an existing region server
> 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232
>  belongs to an existing region server
> 2018-01-07,21:03:14,648 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, 
> state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null
> 2018-01-07,21:03:14,653 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; 
> AssignProcedure table=hbase:meta, region=1588230740}]
> 2018-01-07,21:03:14,660 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740
> 2018-01-07,21:03:14,663 INFO 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; 
> forceNewPlan=false, retain=false
> 2018-01-07,21:03:14,831 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,841 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch 
> pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OPENING, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,992 INFO 
> org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using 
> procedure batch rpc execution for 
> serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728
> 2018-01-07,21:03:15,593 ERROR 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 
> location for 
> {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514}
> 2018-01-07,21:03:15,594 WARN 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: 
> Retryable error trying to transition: pid=2, ppid=1, 
> state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, 
> region=1588230740; rit=OPEN, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: IOException: 1 time, servers with issues: null
> at 
> org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54)
> at 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1439)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1785)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1151)
> at 
> 

[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19915:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

Thanks [~uagashe]. This fixes what would be some awkward, hard-to-find bugs... 

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: hbase-19915.master.001.patch, 
> hbase-19915.master.001.patch
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED.
> The bug was discovered while debugging the flaky test
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps
> set to 4. If the master is restarted after the daughter regions are updated in
> meta, the master's startup sequence assigns all OFFLINE regions. Because the
> daughter regions are stored in the OFFLINE state, they get assigned, and this is
> then followed by a re-assignment of the daughter regions from the resumed
> SplitTableRegionProcedure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351198#comment-16351198
 ] 

Hadoop QA commented on HBASE-19915:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
12s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909066/hbase-19915.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b28921dbbd7c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ad580acc89 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11369/testReport/ |
| Max. process+thread count | 266 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11369/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> From split/ merge procedures daughter/ 

[jira] [Commented] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351197#comment-16351197
 ] 

Hadoop QA commented on HBASE-19726:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
59s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
34s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19726 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909044/19726.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f54c041d6b59 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8143d5afa4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11367/testReport/ |
| Max. process+thread count | 5094 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11367/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Failed to start HMaster due to infinite retrying on meta 

[jira] [Updated] (HBASE-19855) Refactor RegionScannerImpl.nextInternal method

2018-02-02 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19855:
---
Attachment: HBASE-19855.master.004.patch

> Refactor RegionScannerImpl.nextInternal method
> --
>
> Key: HBASE-19855
> URL: https://issues.apache.org/jira/browse/HBASE-19855
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19855.master.002.patch, 
> HBASE-19855.master.003.patch, HBASE-19855.master.004.patch, 
> HBASE-19855.master.004.patch
>
>
> Now this method is too complicated and confusing...
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19923) Reset peer state and config when refresh replication source failed

2018-02-02 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19923:
---
Component/s: Replication

> Reset peer state and config when refresh replication source failed
> --
>
> Key: HBASE-19923
> URL: https://issues.apache.org/jira/browse/HBASE-19923
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Guanghao Zhang
>Priority: Major
>
> Now we use a procedure for replication. When the peer state changes, the RS
> reads the peer state from storage into its cache. If the RS finds that the peer
> state changed, it refreshes the replication source. If the refresh fails, the
> Master retries the procedure. The RS then reads the peer state again, but now the
> peer state in the cache is already correct, so it does not refresh the
> replication source. So we need to reset the peer state to the old peer state
> when the refresh fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19923) Reset peer state and config when refresh replication source failed

2018-02-02 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19923:
---
Affects Version/s: 3.0.0

> Reset peer state and config when refresh replication source failed
> --
>
> Key: HBASE-19923
> URL: https://issues.apache.org/jira/browse/HBASE-19923
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Guanghao Zhang
>Priority: Major
>
> Now we use a procedure for replication. When the peer state changes, the RS
> reads the peer state from storage into its cache. If the RS finds that the peer
> state changed, it refreshes the replication source. If the refresh fails, the
> Master retries the procedure. The RS then reads the peer state again, but now the
> peer state in the cache is already correct, so it does not refresh the
> replication source. So we need to reset the peer state to the old peer state
> when the refresh fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19923) Reset peer state and config when refresh replication source failed

2018-02-02 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19923:
--

 Summary: Reset peer state and config when refresh replication 
source failed
 Key: HBASE-19923
 URL: https://issues.apache.org/jira/browse/HBASE-19923
 Project: HBase
  Issue Type: Bug
Reporter: Guanghao Zhang


Now we use a procedure for replication. When the peer state changes, the RS reads 
the peer state from storage into its cache. If the RS finds that the peer state 
changed, it refreshes the replication source. If the refresh fails, the Master 
retries the procedure. The RS then reads the peer state again, but now the peer 
state in the cache is already correct, so it does not refresh the replication 
source. So we need to reset the peer state to the old peer state when the refresh 
fails.
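
A minimal sketch of the proposed behaviour, using hypothetical names (the actual replication peer tracking classes in HBase are structured differently): roll the cached state back if the refresh throws, so the retried procedure still sees a change and triggers the refresh again.

{code:java}
class PeerStateCache {
  private boolean peerEnabled;

  synchronized void onPeerStateChange(boolean newEnabled, Runnable refreshSources) {
    boolean oldEnabled = this.peerEnabled;
    this.peerEnabled = newEnabled;   // update the cached state first
    try {
      refreshSources.run();          // refresh the replication sources
    } catch (RuntimeException e) {
      // Refresh failed: roll the cache back to the old state so that the
      // retried procedure sees a difference and refreshes again.
      this.peerEnabled = oldEnabled;
      throw e;
    }
  }
}{code}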



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests

2018-02-02 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19918:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

> Promote TestAsyncClusterAdminApi to LargeTests
> --
>
> Key: HBASE-19918
> URL: https://issues.apache.org/jira/browse/HBASE-19918
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0-beta-1
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19918.master.001.patch
>
>
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2/221/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncClusterAdminApi/org_apache_hadoop_hbase_client_TestAsyncClusterAdminApi/
> org.junit.runners.model.TestTimedOutException: test timed out after 180 
> seconds
> Found this timeout in our branch-2 nightly jobs, and this test takes more than 
> 110 seconds on my local computer.
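
For reference, "promoting" a test to LargeTests in HBase typically means changing the JUnit category annotation on the test class; the sketch below uses an illustrative marker interface rather than the real org.apache.hadoop.hbase.testclassification classes.

{code:java}
import org.junit.experimental.categories.Category;

@Category(LargeTests.class)          // was: a Medium-sized category
class TestAsyncClusterAdminApiExample {
  // Large tests get a longer timeout budget in the HBase test harness,
  // which suits a test observed to run well over 100 seconds.
}

// Marker interface standing in for the real LargeTests classification class.
interface LargeTests {}{code}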



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19918) Promote TestAsyncClusterAdminApi to LargeTests

2018-02-02 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19918:
---
Fix Version/s: 2.0.0-beta-2

> Promote TestAsyncClusterAdminApi to LargeTests
> --
>
> Key: HBASE-19918
> URL: https://issues.apache.org/jira/browse/HBASE-19918
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0-beta-1
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19918.master.001.patch
>
>
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2/221/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncClusterAdminApi/org_apache_hadoop_hbase_client_TestAsyncClusterAdminApi/
> org.junit.runners.model.TestTimedOutException: test timed out after 180 
> seconds
> Found this timeout in our branch-2 nightly jobs, and this test takes more than 
> 110 seconds on my local computer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19919) Tidying up logging

2018-02-02 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351151#comment-16351151
 ] 

Appy commented on HBASE-19919:
--

That looks pretty. +1

> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-19915:
-
Attachment: hbase-19915.master.001.patch

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: hbase-19915.master.001.patch, 
> hbase-19915.master.001.patch
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED.
> The bug was discovered while debugging the flaky test
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps
> set to 4. If the master is restarted after the daughter regions are updated in
> meta, the master's startup sequence assigns all OFFLINE regions. Because the
> daughter regions are stored in the OFFLINE state, they get assigned, and this is
> then followed by a re-assignment of the daughter regions from the resumed
> SplitTableRegionProcedure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351135#comment-16351135
 ] 

Umesh Agashe commented on HBASE-19915:
--

retry

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: hbase-19915.master.001.patch, 
> hbase-19915.master.001.patch
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED.
> The bug was discovered while debugging the flaky test
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps
> set to 4. If the master is restarted after the daughter regions are updated in
> meta, the master's startup sequence assigns all OFFLINE regions. Because the
> daughter regions are stored in the OFFLINE state, they get assigned, and this is
> then followed by a re-assignment of the daughter regions from the resumed
> SplitTableRegionProcedure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19919) Tidying up logging

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351132#comment-16351132
 ] 

stack commented on HBASE-19919:
---

Thanks for the review [~appy]... the toStringWithoutDomain is for thread naming. 
Here is what a log line looks like now on a cluster:

2018-02-01 21:17:29,832 INFO  
[master/ve0524.halxg.cloudera.com/10.17.240.20:16000] cleaner.CleanerChore: 
Cleaner pool size is 24

Almost 50% of the log line is the thread name, of which 3/4 is the host name.

With toStringWithoutDomain, the thread name becomes master/ve0524:16000

I've been doing some parametrization. Let me do some more.

Thanks. Reading the logs and trying to tell a story from what is in there is a bit 
tough with all the noise.
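
For illustration, a small sketch in the spirit of the toStringWithoutDomain() helper mentioned above; the real helper referenced in the patch may live elsewhere and format things differently.

{code:java}
final class ThreadNames {
  private ThreadNames() {}

  // "ve0524.halxg.cloudera.com" + 16000 -> "ve0524:16000"
  static String shortName(String hostname, int port) {
    int dot = hostname.indexOf('.');
    String host = dot > 0 ? hostname.substring(0, dot) : hostname;
    return host + ":" + port;
  }

  public static void main(String[] args) {
    // Prints "master/ve0524:16000", the shortened thread-name form.
    System.out.println("master/" + shortName("ve0524.halxg.cloudera.com", 16000));
  }
}{code}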

> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351113#comment-16351113
 ] 

Hadoop QA commented on HBASE-19915:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
26s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 1s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  5m 
59s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
59s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m  
4s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909031/hbase-19915.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 418371191a31 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8143d5afa4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11365/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 

[jira] [Commented] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351114#comment-16351114
 ] 

Hadoop QA commented on HBASE-19922:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
26s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} hbase-client: The patch generated 2 new + 295 
unchanged - 0 fixed = 297 total (was 295) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
0s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m  
0s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m  
5s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
52s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909049/HBASE-19922.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a4011871bc2a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8143d5afa4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 

[jira] [Commented] (HBASE-19919) Tidying up logging

2018-02-02 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351082#comment-16351082
 ] 

Appy commented on HBASE-19919:
--

toStringWithoutDomain(): Unless all logging of the server name changes to this, 
it'll be hard to debug tests where one needs to search for the RS name and iterate 
over all occurrences to get a high-level story of what's going on... since they'll 
miss (part/all?) of the rpc-related stuff (that name is used by the rpc server).

Some log.debug can be converted to parameterized style ("xyz {}").
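
For reference, the parameterized style mentioned above, using the SLF4J API that HBase 2 logs through; the message is only formatted when the level is actually enabled.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CleanerChoreLoggingExample {
  private static final Logger LOG = LoggerFactory.getLogger(CleanerChoreLoggingExample.class);

  void reportPoolSize(int poolSize) {
    // Before: string concatenation is evaluated even when DEBUG is off.
    // LOG.debug("Cleaner pool size is " + poolSize);
    // After: the argument is only formatted if DEBUG is enabled.
    LOG.debug("Cleaner pool size is {}", poolSize);
  }
}{code}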

> Tidying up logging
> --
>
> Key: HBASE-19919
> URL: https://issues.apache.org/jira/browse/HBASE-19919
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: HBASE-19919.branch-2.001.patch
>
>
> Reading logs, there is a bunch of stuff we don't need, thread names are too 
> long, etc. Doing a little tidying.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19920:
--
Attachment: HBASE-19920.patch

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
> Attachments: HBASE-19920.patch
>
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351039#comment-16351039
 ] 

Mike Drob commented on HBASE-19920:
---

Attaching a patch that makes an attempt at this.

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19920.patch
>
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19920:
--
Fix Version/s: 2.0

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19920.patch
>
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19920:
--
Assignee: Mike Drob
  Status: Patch Available  (was: Open)

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19920.patch
>
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351018#comment-16351018
 ] 

Mike Drob commented on HBASE-19922:
---

I accidentally left an unused import in, I think; I plan to fix that on commit, 
assuming everything else is good in QA and I get a favorable review.

> ProtobufUtils::PRIMITIVES is unused
> ---
>
> Key: HBASE-19922
> URL: https://issues.apache.org/jira/browse/HBASE-19922
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19922.patch
>
>
> It looks like ProtobufUtils::PRIMITIVES is never read in both the shaded and 
> non-shaded versions of the class. Is it safe to remove?
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128
> We populate the map in a static initializer but never read any values from 
> it...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351012#comment-16351012
 ] 

Rohini Palaniswamy commented on HBASE-19920:


There was no failure because of this. We just noticed that this was happening 
while debugging a different issue.

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-02-02 Thread Key Hutu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351007#comment-16351007
 ] 

Key Hutu commented on HBASE-19848:
--

Thanks for your help, Ted Yu and huaxiang sun.

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Assignee: Key Hutu
>Priority: Major
>  Labels: performance
> Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, 
> HBaseContext.patch, HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for loading 
> Spark RDD data into HBase easily. But when I use it frequently, the program 
> throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; jstack shows many threads named "main-SendThread" and "main-EventThread".
> It seems that a connection is created before the bulk load, but its close method 
> is never invoked.
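
A minimal sketch of the fix pattern described above, using the standard HBase client API (the hbase-spark HBaseContext code itself is Scala and organized differently): make sure the Connection created for the bulk load is always closed, otherwise its ZooKeeper client threads leak.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

class BulkLoadConnectionExample {
  interface ConnectionWork {
    void run(Connection connection) throws IOException;
  }

  static void withConnection(ConnectionWork work) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    Connection connection = ConnectionFactory.createConnection(conf);
    try {
      work.run(connection);   // e.g. drive the bulk load here
    } finally {
      connection.close();     // always close, even if the bulk load fails
    }
  }
}{code}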



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351005#comment-16351005
 ] 

Mike Drob commented on HBASE-19920:
---

[~rohini] - thanks for clarifying, yep that makes perfect sense and is 
something that we can address, I think. Do you have any code you can share from 
around the failure that we can use to create a test case?

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Rohini Palaniswamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohini Palaniswamy updated HBASE-19920:
---
Description: 
On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
which in its static block initializes DynamicClassLoader and that creates the 
directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class to 
access hbase.dynamic.jars.dir.

https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127

Since this is region server specific code, not expecting this to happen when 
one accesses hbase as a client.

  was:
On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
which in its static block initializes DynamicClassLoader and that creates the 
directory ${hbase.rootdir}/lib

https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127

Since this is region server specific code, not expecting this to happen when 
one accesses hbase as a client.


> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.local.dir}/jars/ and also instantiates a filesystem class 
> to access hbase.dynamic.jars.dir.
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L109-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19703:
--
Status: Patch Available  (was: Open)

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, 
> HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, 
> HBASE-19703_v5.patch
>
>
> As part of HBASE-12583 we pass the split policy to 
> HRegionFileSystem#splitStoreFile so that reference files can be created even 
> when the split key is outside the HFile key range. This is needed for the Local 
> Indexing implementation in Phoenix. But now, after moving the split code to 
> master, we just pass null for the split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}
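
A hypothetical sketch of the intended change, with illustrative names and types rather than the real HBase internals: the table's split policy is forwarded instead of null, so policies that permit reference files outside the HFile key range (Phoenix local indexes) still take effect.

{code:java}
// Illustrative stand-in for HRegionFileSystem; the real signature differs.
interface StoreFileSplitter {
  void splitStoreFile(Object daughterRegionInfo, String familyName, Object storeFile,
      byte[] splitRow, boolean top, Object splitPolicy);
}

class SplitWithPolicyExample {
  static void splitBothDaughters(StoreFileSplitter regionFs, Object daughter1, Object daughter2,
      String familyName, Object storeFile, byte[] splitRow, Object splitPolicy) {
    // Previously the last argument was null; now the policy is forwarded.
    regionFs.splitStoreFile(daughter1, familyName, storeFile, splitRow, false, splitPolicy);
    regionFs.splitStoreFile(daughter2, familyName, storeFile, splitRow, true, splitPolicy);
  }
}{code}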



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350993#comment-16350993
 ] 

stack commented on HBASE-19703:
---

.001 Adds more doc explaining that this is doubling down on a hack for Phoenix local 
indices. It's the only user. Made that clear. Yeah, this needs cleanup. Did you 
get a chance to file an issue [~rajeshbabu]? Thanks. Will push this after a 
hadoopqa run.

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, 
> HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, 
> HBASE-19703_v5.patch
>
>
> As part of HBASE-12583 we pass the split policy to 
> HRegionFileSystem#splitStoreFile so that reference files can be created even 
> when the split key is outside the HFile key range. This is needed for the Local 
> Indexing implementation in Phoenix. But now, after moving the split code to 
> master, we just pass null for the split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19703:
--
Attachment: HBASE-19703.branch-2.001.patch

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch, HBASE-19703.branch-2.001.patch, 
> HBASE-19703_v2.patch, HBASE-19703_v3.patch, HBASE-19703_v4.patch, 
> HBASE-19703_v5.patch
>
>
> As part of HBASE-12583 we pass the split policy to 
> HRegionFileSystem#splitStoreFile so that reference files can be created even 
> when the split key is outside the HFile key range. This is needed for the Local 
> Indexing implementation in Phoenix. But now, after moving the split code to 
> master, we just pass null for the split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350991#comment-16350991
 ] 

Rohini Palaniswamy commented on HBASE-19920:


bq. Do you want to submit a patch ?
No

From my perspective, a call to get a delegation token should not be:
   1) Creating a local directory
   2) Instantiating a filesystem class, be it local or remote. It is worse when 
it is remote because of the overhead involved with instantiating a DFSClient 
(opening sockets, etc.).

I do not have a problem if DynamicClassLoader actually does those things when 
the client intends to use coprocessors. I would just prefer it to be taken out of 
the code path of getting delegation tokens.



> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> On client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil 
> which in its static block initializes DynamicClassLoader and that creates the 
> directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region server specific code, not expecting this to happen when 
> one accesses hbase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused

2018-02-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19922:
--
 Assignee: Mike Drob
Fix Version/s: 2.0
   Status: Patch Available  (was: Open)

> ProtobufUtils::PRIMITIVES is unused
> ---
>
> Key: HBASE-19922
> URL: https://issues.apache.org/jira/browse/HBASE-19922
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19922.patch
>
>
> It looks like ProtobufUtils::PRIMITIVES is never read in both the shaded and 
> non-shaded versions of the class. Is it safe to remove?
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128
> We populate the map in a static initializer but never read any values from 
> it...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused

2018-02-02 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19922:
--
Attachment: HBASE-19922.patch

> ProtobufUtils::PRIMITIVES is unused
> ---
>
> Key: HBASE-19922
> URL: https://issues.apache.org/jira/browse/HBASE-19922
> Project: HBase
>  Issue Type: Task
>  Components: Protobufs
>Reporter: Mike Drob
>Priority: Major
> Fix For: 2.0
>
> Attachments: HBASE-19922.patch
>
>
> It looks like ProtobufUtils::PRIMITIVES is never read in both the shaded and 
> non-shaded versions of the class. Is it safe to remove?
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128
> We populate the map in a static initializer but never read any values from 
> it...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19922) ProtobufUtils::PRIMITIVES is unused

2018-02-02 Thread Mike Drob (JIRA)
Mike Drob created HBASE-19922:
-

 Summary: ProtobufUtils::PRIMITIVES is unused
 Key: HBASE-19922
 URL: https://issues.apache.org/jira/browse/HBASE-19922
 Project: HBase
  Issue Type: Task
  Components: Protobufs
Reporter: Mike Drob


It looks like ProtobufUtils::PRIMITIVES is never read in either the shaded or 
the non-shaded version of the class. Is it safe to remove?

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java#L128

We populate the map in a static initializer but never read any values from it...
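For illustration only, the pattern being described looks roughly like the following made-up class (not the real ProtobufUtil code): a map eagerly filled in a static initializer that no other code ever reads.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Made-up example of the pattern, not the real ProtobufUtil.
public class UnusedPrimitivesExample {
  // Populated eagerly when the class is loaded...
  private static final Map<String, Class<?>> PRIMITIVES = new HashMap<>();

  static {
    PRIMITIVES.put(Integer.TYPE.getName(), Integer.TYPE);
    PRIMITIVES.put(Long.TYPE.getName(), Long.TYPE);
    // ... and so on for the other primitive types ...
  }

  // ...but nothing in the class ever calls PRIMITIVES.get(...), so the field
  // and the static-initializer work to fill it can likely be removed safely.
}
{code}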



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350943#comment-16350943
 ] 

Mike Drob commented on HBASE-19920:
---

It looks like there are a couple of subtle things going on here.

# Clients could use dynamic jars for coprocessors, for example if they are 
making a request that involves endpoint coprocessors. So I think I disagree 
with the implied solution.
# Dynamic jars shouldn't be getting loaded from the local file system; the 
intended use is to load them from a shared file system like HDFS. This might 
break in use cases where HBase is running on LocalFS instead of HDFS, though I 
suspect that is mostly seen in test environments.
# Maybe we shouldn't be creating this directory, but instead limit ourselves to 
checking whether it exists and is readable. Not having the directory there 
shouldn't be a fatal error; it is probably sufficient to log a warning and move 
on (see the sketch below).
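A minimal sketch of point 3, with hypothetical names (not the actual DynamicClassLoader code): check the directory instead of creating it, and treat its absence as a non-fatal condition.

{code:java}
import java.io.File;
import java.util.logging.Logger;

// Hypothetical sketch of "check, don't create"; not the real DynamicClassLoader.
public class DynamicJarDirCheck {
  private static final Logger LOG = Logger.getLogger(DynamicJarDirCheck.class.getName());

  /** Returns the directory if it is usable, or null if dynamic loading should be skipped. */
  static File checkLocalJarDir(File localDir) {
    if (!localDir.exists() || !localDir.canRead()) {
      // Missing or unreadable directory is not fatal for clients that never load
      // dynamic coprocessor jars; log a warning and move on instead of creating it.
      LOG.warning("Dynamic jar directory " + localDir
          + " is missing or unreadable; skipping dynamic class loading");
      return null;
    }
    return localDir;
  }
}
{code}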

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> In client code, when one calls TokenUtil.obtainToken it loads ProtobufUtil, 
> whose static block initializes DynamicClassLoader, which in turn creates the 
> directory ${hbase.rootdir}/lib:
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region-server-specific code, this should not happen when one 
> accesses HBase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19726:
--
Status: Patch Available  (was: Open)

As suggested above by [~Apache9], there is no need to set hbase:meta to ENABLED; it 
is always ENABLED. Short-circuit all calls to ENABLE hbase:meta. This saves an 
RPC and avoids a possible deadlock.

The other issue in here, where we were stuck in an RPC, has been addressed 
elsewhere; shutdown now closes the Master connection, which breaks the 
Connection shown hung in the original thread dump in the description.
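A rough sketch of the short-circuit (names and structure are assumptions, not the committed patch): answer table-state questions about hbase:meta locally instead of issuing the RPC.

{code:java}
import org.apache.hadoop.hbase.TableName;

// Sketch only; not the committed patch.
public class MetaStateShortCircuit {
  enum State { ENABLED, DISABLED }

  static State getTableState(TableName table) {
    if (TableName.META_TABLE_NAME.equals(table)) {
      // hbase:meta is always ENABLED, so skip the RPC (and the potential
      // deadlock of asking meta about itself) and answer locally.
      return State.ENABLED;
    }
    return readStateFromMeta(table);
  }

  private static State readStateFromMeta(TableName table) {
    // Placeholder for the normal path that reads table state from hbase:meta.
    return State.ENABLED;
  }
}
{code}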

> Failed to start HMaster due to infinite retrying on meta assign
> ---
>
> Key: HBASE-19726
> URL: https://issues.apache.org/jira/browse/HBASE-19726
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19726.patch
>
>
> This is what I got at first, an exception when trying to write something to 
> meta when meta has not been onlined yet.
> {noformat}
> 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running 
> RecoverMetaProcedure to ensure proper hbase:meta deploy.
> 2018-01-07,21:03:14,637 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, 
> state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true
> 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896
>  belongs to an existing region server
> 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232
>  belongs to an existing region server
> 2018-01-07,21:03:14,648 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, 
> state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null
> 2018-01-07,21:03:14,653 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; 
> AssignProcedure table=hbase:meta, region=1588230740}]
> 2018-01-07,21:03:14,660 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740
> 2018-01-07,21:03:14,663 INFO 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; 
> forceNewPlan=false, retain=false
> 2018-01-07,21:03:14,831 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,841 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch 
> pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OPENING, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,992 INFO 
> org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using 
> procedure batch rpc execution for 
> serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728
> 2018-01-07,21:03:15,593 ERROR 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 
> location for 
> {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514}
> 2018-01-07,21:03:15,594 WARN 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: 
> Retryable error trying to transition: pid=2, ppid=1, 
> state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, 
> region=1588230740; rit=OPEN, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: IOException: 1 time, servers with issues: null
> at 
> org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54)
> at 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450)
> at 
> 

[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350936#comment-16350936
 ] 

Hudson commented on HBASE-19848:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4515 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4515/])
HBASE-19848 Zookeeper thread leaks in hbase-spark bulkLoad method (Key (tedyu: 
rev 8143d5afa4a34c5f06a22e30b5017958b8c3f60c)
* (edit) 
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala


> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Assignee: Key Hutu
>Priority: Major
>  Labels: performance
> Attachments: HBASE-19848-V2.patch, HBASE-19848-V3.patch, 
> HBaseContext.patch, HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for loading 
> Spark RDD data into HBase easily. But when using it frequently, the program 
> throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; with jstack, many threads named "main-SendThread" and "main-EventThread" are visible.
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked at the end.
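The fix direction, sketched here in Java for brevity (the real change is in the Scala HBaseContext; this is not that code): make sure the Connection opened for a bulk load is always closed so its ZooKeeper client threads go away.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Java sketch of the idea only; the actual fix lives in HBaseContext.scala.
public class BulkLoadConnectionSketch {
  static void bulkLoadOnce(Configuration conf) throws IOException {
    // try-with-resources guarantees Connection.close() runs even if the bulk
    // load fails, shutting down the ZooKeeper client threads
    // ("main-SendThread"/"main-EventThread") the connection created.
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // ... perform the bulk load using conn ...
    }
  }

  public static void main(String[] args) throws IOException {
    bulkLoadOnce(HBaseConfiguration.create());
  }
}
{code}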



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19726:
--
Attachment: 19726.patch

> Failed to start HMaster due to infinite retrying on meta assign
> ---
>
> Key: HBASE-19726
> URL: https://issues.apache.org/jira/browse/HBASE-19726
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19726.patch
>
>
> This is what I got at first, an exception when trying to write something to 
> meta when meta has not been onlined yet.
> {noformat}
> 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running 
> RecoverMetaProcedure to ensure proper hbase:meta deploy.
> 2018-01-07,21:03:14,637 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, 
> state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true
> 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896
>  belongs to an existing region server
> 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: 
> Log folder 
> hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232
>  belongs to an existing region server
> 2018-01-07,21:03:14,648 INFO 
> org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, 
> state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure 
> failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null
> 2018-01-07,21:03:14,653 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; 
> AssignProcedure table=hbase:meta, region=1588230740}]
> 2018-01-07,21:03:14,660 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740
> 2018-01-07,21:03:14,663 INFO 
> org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, 
> ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; 
> forceNewPlan=false, retain=false
> 2018-01-07,21:03:14,831 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,841 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch 
> pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure 
> table=hbase:meta, region=1588230740; rit=OPENING, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> 2018-01-07,21:03:14,992 INFO 
> org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using 
> procedure batch rpc execution for 
> serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728
> 2018-01-07,21:03:15,593 ERROR 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 
> location for 
> {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514}
> 2018-01-07,21:03:15,594 WARN 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: 
> Retryable error trying to transition: pid=2, ppid=1, 
> state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, 
> region=1588230740; rit=OPEN, 
> location=c4-hadoop-tst-st27.bj,38900,1515330173896
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: IOException: 1 time, servers with issues: null
> at 
> org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54)
> at 
> org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1439)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1785)
> at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1151)
> at 
> org.apache.hadoop.hbase.master.TableStateManager.udpateMetaState(TableStateManager.java:183)
> at 
> 

[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350915#comment-16350915
 ] 

stack commented on HBASE-19767:
---

Is the metric rubbish then? Should we just remove it?

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350908#comment-16350908
 ] 

Hadoop QA commented on HBASE-19876:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
47s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 32s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestReplicationDroppedTables |
|   | hadoop.hbase.TestFullLogReconstruction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909023/HBASE-19876.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux edd281aedb18 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8143d5afa4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11363/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11363/testReport/ |
| Max. process+thread count | 4797 (vs. ulimit of 1) |
| modules | 

[jira] [Updated] (HBASE-19786) acl table is created by coprocessor inside Master start procedure; broke TestJMXConnectorServer

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19786:
--
Fix Version/s: (was: 2.0.0-beta-2)
   2.0.0

> acl table is created by coprocessor inside Master start procedure; broke 
> TestJMXConnectorServer
> ---
>
> Key: HBASE-19786
> URL: https://issues.apache.org/jira/browse/HBASE-19786
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
>
> Parent reordering of startup broke TestJMXConnectorServer. It's failing 
> because we start the cluster and then near-immediately go down. Meantime, the acl 
> table is trying to get created, but the servers have been pulled out from 
> under it so it can't complete; the test gets stuck.
> Creating tables inside the Master startup process is a bit dodgy. Fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19786) acl table is created by coprocessor inside Master start procedure; broke TestJMXConnectorServer

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350890#comment-16350890
 ] 

stack commented on HBASE-19786:
---

Moved out of beta-2. Not happening.

> acl table is created by coprocessor inside Master start procedure; broke 
> TestJMXConnectorServer
> ---
>
> Key: HBASE-19786
> URL: https://issues.apache.org/jira/browse/HBASE-19786
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
>
> Parent reordering of startup broke TestJMXConnectorServer. It's failing 
> because we start the cluster and then near-immediately go down. Meantime, the acl 
> table is trying to get created, but the servers have been pulled out from 
> under it so it can't complete; the test gets stuck.
> Creating tables inside the Master startup process is a bit dodgy. Fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19904) Break dependency of WAL constructor on Replication

2018-02-02 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350884#comment-16350884
 ] 

Appy commented on HBASE-19904:
--

Wohoo..since QA is happy...commit it.
There's one more improvement suggestion up in RB, but can be done in 
addendum/separate jira.


> Break dependency of WAL constructor on Replication
> --
>
> Key: HBASE-19904
> URL: https://issues.apache.org/jira/browse/HBASE-19904
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication, wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19904-branch-2.patch, HBASE-19904-branch-2.patch, 
> HBASE-19904-v3.patch, HBASE-19904-v3.patch, HBASE-19904-v4.patch, 
> HBASE-19904-v4.patch, HBASE-19904-v5.patch
>
>
> When implementing synchronous replication, I found that we need to depend 
> even more on replication in the WAL, so it is even more painful...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19921) Disable 1.1 nightly builds.

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350877#comment-16350877
 ] 

Mike Drob commented on HBASE-19921:
---

Would it have been better to delete the whole branch?

> Disable 1.1 nightly builds.
> ---
>
> Key: HBASE-19921
> URL: https://issues.apache.org/jira/browse/HBASE-19921
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 1.1.13
>
> Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch
>
>
> As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run 
> for all branches in hbase).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19921) Disable 1.1 nightly builds.

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350874#comment-16350874
 ] 

stack commented on HBASE-19921:
---

Turned off test runs on jenkins too.

> Disable 1.1 nightly builds.
> ---
>
> Key: HBASE-19921
> URL: https://issues.apache.org/jira/browse/HBASE-19921
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 1.1.13
>
> Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch
>
>
> As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run 
> for all branches in hbase).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-19921) Disable 1.1 nightly builds.

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-19921.
---
   Resolution: Fixed
 Assignee: stack
Fix Version/s: 1.1.13
 Release Note: Disabled nightly build on branch-1.1 since it EOL'd.

Removed JenkinsFile from under dev-support. Pushed to branch-1.1.



> Disable 1.1 nightly builds.
> ---
>
> Key: HBASE-19921
> URL: https://issues.apache.org/jira/browse/HBASE-19921
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 1.1.13
>
> Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch
>
>
> As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run 
> for all branches in hbase).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2018-02-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19663:
---
Fix Version/s: 1.5.0

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: site
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0, 1.5.0, 1.4.2
>
> Attachments: script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens when building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   <doclet>
>   3512     org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   </doclet>
>   3514   <docletArtifact>
>   3515     <groupId>org.apache.yetus</groupId>
>   3516     <artifactId>audience-annotations</artifactId>
>   3517     <version>${audience-annotations.version}</version>
>   3518   </docletArtifact>
> + 3519   <useStandardDocletOptions>true</useStandardDocletOptions>
> {code}
> Tried upgrading to a newer mvn site (ours is three years old) but that hit a 
> different set of problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19921) Disable 1.1 nightly builds.

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19921:
--
Attachment: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch

> Disable 1.1 nightly builds.
> ---
>
> Key: HBASE-19921
> URL: https://issues.apache.org/jira/browse/HBASE-19921
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: 0001-HBASE-19921-Disable-1.1-nightly-builds.patch
>
>
> As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run 
> for all branches in hbase).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19921) Disable 1.1 nightly builds.

2018-02-02 Thread stack (JIRA)
stack created HBASE-19921:
-

 Summary: Disable 1.1 nightly builds.
 Key: HBASE-19921
 URL: https://issues.apache.org/jira/browse/HBASE-19921
 Project: HBase
  Issue Type: Sub-task
Reporter: stack


As suggested by [~mdrob], removing the JenkinsFile is the trick (Nightlies run 
for all branches in hbase).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19901) Up yetus proclimit on nightlies

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19901:
--
   Resolution: Fixed
Fix Version/s: 1.1.13
   1.4.2
   1.2.8
   2.0.0-beta-2
   1.5.0
   1.3.2
 Release Note: 
Pass to yetus a dockermemlimit of 20G and a proclimit of 10000. Defaults are 4G 
and 1000 respectively.

   Status: Resolved  (was: Patch Available)

Pushed change to 1.1+. All now have a hardcoded docker memlimit of 20G and a 
proclimit of 10k.

TODO: How to get hbase_nightly_yetus to use defined globals in 
hbase-personality instead of hardcoding.

> Up yetus proclimit on nightlies
> ---
>
> Key: HBASE-19901
> URL: https://issues.apache.org/jira/browse/HBASE-19901
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 2.0.0-beta-2, 1.2.8, 1.4.2, 1.1.13
>
> Attachments: HBASE-19901.master.001.patch, 
> HBASE-19901.master.002.patch
>
>
> We're on 0.7.0 now which enforces limits meant to protect against runaway 
> processes. Default is 1000 procs. HBase test runs seem to consume almost 4k. 
> Up our proclimit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-19528:
---
Status: Patch Available  (was: Open)

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 
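As a rough illustration of the concurrency model described above (a standalone sketch, not the tool's actual code), one can think of it as a fixed pool of N workers, each draining one RegionServer's queue of stores and waiting for each compaction to finish before requesting the next:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Standalone sketch of the concurrency pattern only; not the tool's code.
public class CompactionConcurrencySketch {
  /** storesByServer: one queue of store names per RegionServer; N == clusterConcurrency. */
  static void run(Map<String, List<String>> storesByServer, int clusterConcurrency)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(clusterConcurrency);
    for (Map.Entry<String, List<String>> entry : storesByServer.entrySet()) {
      // At most N of these server tasks run at once, so at most N distinct
      // RegionServers are compacting at any given time.
      pool.submit(() -> {
        for (String store : entry.getValue()) {
          requestMajorCompaction(entry.getKey(), store);
          waitForCompactionToFinish(entry.getKey(), store); // block before the next store
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.DAYS);
  }

  static void requestMajorCompaction(String server, String store) { /* admin call */ }

  static void waitForCompactionToFinish(String server, String store) { /* poll compaction state */ }
}
{code}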



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-19528:
---
Status: Open  (was: Patch Available)

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19805) NPE in HMaster while issuing a sequence of table splits

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350786#comment-16350786
 ] 

stack commented on HBASE-19805:
---

Any luck [~sergey.soldatov] ?

> NPE in HMaster while issuing a sequence of table splits
> ---
>
> Key: HBASE-19805
> URL: https://issues.apache.org/jira/browse/HBASE-19805
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0-beta-1
>Reporter: Josh Elser
>Assignee: Sergey Soldatov
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
>
> I wrote a toy program to test the client tarball in HBASE-19735. After the 
> first few region splits, I see the following error in the Master log. 
> {noformat}
> 2018-01-16 14:07:52,797 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] master.HMaster: 
> Client=jelser//192.168.1.23 split 
> myTestTable,1,1516129669054.8313b755f74092118f9dd30a4190ee23.
> 2018-01-16 14:07:52,797 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] ipc.RpcServer: 
> Unexpected throwable object
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:229)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getAdmin(ConnectionImplementation.java:1175)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getAdmin(ConnectionUtils.java:149)
>   at 
> org.apache.hadoop.hbase.master.assignment.Util.getRegionInfoResponse(Util.java:59)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:146)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:103)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:761)
>   at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1626)
>   at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:134)
>   at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1618)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:778)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {noformat}
> {code}
>   public static void main(String[] args) throws Exception {
> Configuration conf = HBaseConfiguration.create();
> try (Connection conn = ConnectionFactory.createConnection(conf);
> Admin admin = conn.getAdmin()) {
>   final TableName tn = TableName.valueOf("myTestTable");
>   if (admin.tableExists(tn)) {
> admin.disableTable(tn);
> admin.deleteTable(tn);
>   }
>   final TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
>   
> .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1")).build())
>   .build();
>   admin.createTable(desc);
>   List<String> splitPoints = new ArrayList<>(16);
>   for (int i = 1; i <= 16; i++) {
> splitPoints.add(Integer.toString(i, 16));
>   }
>   
>   System.out.println("Splits: " + splitPoints);
>   int numRegions = admin.getRegions(tn).size();
>   for (String splitPoint : splitPoints) {
> System.out.println("Splitting on " + splitPoint);
> admin.split(tn, Bytes.toBytes(splitPoint));
> Thread.sleep(200);
> int newRegionSize = admin.getRegions(tn).size();
> while (numRegions == newRegionSize) {
>   Thread.sleep(50);
>   newRegionSize = admin.getRegions(tn).size();
> }
>   }
> {code}
> At a quick glance, it looks like {{Util.getRegionInfoResponse}} is to blame.
> {code}
>   static GetRegionInfoResponse getRegionInfoResponse(final MasterProcedureEnv 
> env,
>   final ServerName regionLocation, final RegionInfo hri, boolean 
> includeBestSplitRow)
>   throws IOException {
> // TODO: There is no timeout on this controller. Set one!
> HBaseRpcController controller = 
> env.getMasterServices().getClusterConnection().
> getRpcControllerFactory().newController();
> final AdminService.BlockingInterface admin =
> 
> env.getMasterServices().getClusterConnection().getAdmin(regionLocation);
> {code}
> We don't validate that we have a non-null {{ServerName 

[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-19915:
-
Status: Patch Available  (was: In Progress)

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: hbase-19915.master.001.patch
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED. The bug 
> was discovered while debugging the flaky test 
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps 
> set to 4. After the daughter regions are updated in meta, a master restart's 
> startup sequence assigns all OFFLINE regions. As the daughter regions 
> are stored in the OFFLINE state, they get assigned. This is then 
> followed by re-assignment of the daughter regions from the resumed 
> SplitTableRegionProcedure.
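A toy sketch of the effect (made-up names; not the procedure code): if the daughters are recorded as CLOSED rather than OFFLINE, the master's restart-time pass that assigns everything OFFLINE skips them, and only the resumed procedure assigns them.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model with made-up names; not the SplitTableRegionProcedure code.
public class DaughterRegionStateSketch {
  enum RegionState { OFFLINE, CLOSED, OPEN }

  public static void main(String[] args) {
    Map<String, RegionState> regionStates = new LinkedHashMap<>();
    // Daughter regions written to meta during the split:
    regionStates.put("daughterA", RegionState.CLOSED); // was OFFLINE before the fix
    regionStates.put("daughterB", RegionState.CLOSED);

    // Master restart: the startup sequence assigns only OFFLINE regions.
    regionStates.forEach((region, state) -> {
      if (state == RegionState.OFFLINE) {
        System.out.println("startup assigns " + region);
      }
    });
    // With CLOSED daughters nothing is printed here, so there is no double
    // assignment when the resumed procedure assigns them later.
  }
}
{code}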



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-19915:
-
Attachment: hbase-19915.master.001.patch

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: hbase-19915.master.001.patch
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED. The bug 
> was discovered while debugging the flaky test 
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps 
> set to 4. After the daughter regions are updated in meta, a master restart's 
> startup sequence assigns all OFFLINE regions. As the daughter regions 
> are stored in the OFFLINE state, they get assigned. This is then 
> followed by re-assignment of the daughter regions from the resumed 
> SplitTableRegionProcedure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-19915) From split/ merge procedures daughter/ merged regions get created in OFFLINE state

2018-02-02 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19915 started by Umesh Agashe.

> From split/ merge procedures daughter/ merged regions get created in OFFLINE 
> state
> --
>
> Key: HBASE-19915
> URL: https://issues.apache.org/jira/browse/HBASE-19915
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-beta-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
>
> See HBASE-19530. When regions are created, their initial state should be CLOSED. The bug 
> was discovered while debugging the flaky test 
> TestSplitTableRegionProcedure#testRollbackAndDoubleExecution with numOfSteps 
> set to 4. After the daughter regions are updated in meta, a master restart's 
> startup sequence assigns all OFFLINE regions. As the daughter regions 
> are stored in the OFFLINE state, they get assigned. This is then 
> followed by re-assignment of the daughter regions from the resumed 
> SplitTableRegionProcedure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350768#comment-16350768
 ] 

churro morales commented on HBASE-19528:


[~stack] I've retried a bunch of times but can't get that docker container not to 
fail. Any ideas, or should I just wait and try again some other time?

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350759#comment-16350759
 ] 

Hadoop QA commented on HBASE-19528:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 15m 
37s{color} | {color:red} Docker failed to build yetus/hbase:36a7029. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-19528 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909024/HBASE-19528.v2.branch-1.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11362/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19663:
--
Fix Version/s: (was: 2.0.0-beta-2)
   2.0.0

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: site
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0, 1.4.2
>
> Attachments: script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens when building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   <doclet>
>   3512     org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   </doclet>
>   3514   <docletArtifact>
>   3515     <groupId>org.apache.yetus</groupId>
>   3516     <artifactId>audience-annotations</artifactId>
>   3517     <version>${audience-annotations.version}</version>
>   3518   </docletArtifact>
> + 3519   <useStandardDocletOptions>true</useStandardDocletOptions>
> {code}
> Tried upgrading to a newer mvn site (ours is three years old) but that hit a 
> different set of problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Fix Version/s: 1.4.2
   2.0.0-beta-2
   1.2.7
   1.5.0
   1.3.2

> Region-level exception destroy the result of batch
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2
>
>
> 1) The action count is decremented repeatedly.
> If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
> incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
> because the count never reaches 0 (see the toy example below).
> 2) Successful results will be overwritten.
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly.
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.
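A small self-contained toy for point 1 (not HBase code): counting the same action's completion twice releases the waiter before all actions are actually done.

{code:java}
import java.util.concurrent.CountDownLatch;

// Toy only; not the AsyncRequestFutureImpl code.
public class DoubleDecrementToy {
  public static void main(String[] args) throws InterruptedException {
    int actions = 3;
    CountDownLatch remaining = new CountDownLatch(actions);

    remaining.countDown(); // action 0 completes
    remaining.countDown(); // a region-level error path counts action 0 again (the bug)
    remaining.countDown(); // action 1 completes

    // Action 2 has not completed, yet the latch is already at zero, so a
    // "waitUntilDone" built on it returns prematurely with incomplete results.
    remaining.await();
    System.out.println("waiter released, remaining=" + remaining.getCount());
  }
}
{code}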



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19528:
--
Attachment: HBASE-19528.v2.branch-1.patch

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner

2018-02-02 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350734#comment-16350734
 ] 

Chia-Ping Tsai commented on HBASE-19876:


Thanks [~stack]. I forgot to say that this issue is blocked by HBASE-19900. If 
HBASE-19900 is not resolved, the tests in the patch are unstable because of the 
region-level exception.

> The exception happening in converting pb mutation to hbase.mutation messes up 
> the CellScanner
> -
>
> Key: HBASE-19876
> URL: https://issues.apache.org/jira/browse/HBASE-19876
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2
>
> Attachments: HBASE-19876.master.001.patch, HBASE-19876.v0.patch, 
> HBASE-19876.v1.patch, HBASE-19876.v2.patch, HBASE-19876.v3.patch, 
> HBASE-19876.v3.patch, HBASE-19876.v3.patch, HBASE-19876.v3.patch
>
>
> {code:java}
> 2018-01-27 22:51:43,794 INFO  [hconnection-0x3291b443-shared-pool11-t6] 
> client.AsyncRequestFutureImpl(778): id=5, table=testQuotaStatusFromMaster3, 
> attempt=6/16 failed=20ops, last 
> exception=org.apache.hadoop.hbase.client.WrongRowIOException: 
> org.apache.hadoop.hbase.client.WrongRowIOException: The row in xxx doesn't 
> match the original one aaa
>   at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:776)
>   at org.apache.hadoop.hbase.client.Put.add(Put.java:282)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:642)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:952)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2591)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){code}
> I noticed this bug when testing the table space quota.
> When the RS converts a pb mutation to an hbase mutation, a quota exception or 
> cell exception may be thrown.
> {code:java}
> for (ClientProtos.Action action: mutations) {
> MutationProto m = action.getMutation();
> Mutation mutation;
> if (m.getMutateType() == MutationType.PUT) {
>   mutation = ProtobufUtil.toPut(m, cells);
>   batchContainsPuts = true;
> } else {
>   mutation = ProtobufUtil.toDelete(m, cells);
>   batchContainsDelete = true;
> }
> mutationActionMap.put(mutation, action);
> mArray[i++] = mutation;
> checkCellSizeLimit(region, mutation);
> // Check if a space quota disallows this mutation
> spaceQuotaEnforcement.getPolicyEnforcement(region).check(mutation);
> quota.addMutation(mutation);
>   }
> {code}
> The RS catches the exception, but it doesn't have the CellScanner skip the 
> failed cells.
> {code:java}
> } catch (IOException ie) {
>   if (atomic) {
> throw ie;
>   }
>   for (Action mutation : mutations) {
> builder.addResultOrException(getResultOrException(ie, 
> mutation.getIndex()));
>   }
> }
> {code}
> The bug results in WrongRowIOException for the remaining mutations since they 
> refer to invalid cells.
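
For context on the WrongRowIOException seen in the log above: it comes from the row 
check in Mutation#add on the pb-to-mutation conversion path. A minimal standalone 
sketch (assuming only the public hbase-client API; the row/family/qualifier values are 
made up) of that same check firing when a mutation is handed a cell for a different 
row, which is what happens once the CellScanner is left mis-aligned:

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WrongRowSketch {
  public static void main(String[] args) {
    Put put = new Put(Bytes.toBytes("aaa"));
    // A cell whose row ("xxx") differs from the Put's row ("aaa"), i.e. the kind
    // of cell a mis-advanced CellScanner would hand to the next mutation.
    Cell wrongRowCell = new KeyValue(Bytes.toBytes("xxx"), Bytes.toBytes("f"),
        Bytes.toBytes("q"), Bytes.toBytes("v"));
    try {
      put.add(wrongRowCell);
    } catch (IOException e) {
      // Prints the same "The row in ... doesn't match the original one ..." text.
      System.out.println("Rejected as expected: " + e.getMessage());
    }
  }
}
{code}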



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19528) Major Compaction Tool

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19528:
--
Fix Version/s: (was: 3.0.0)

> Major Compaction Tool 
> --
>
> Key: HBASE-19528
> URL: https://issues.apache.org/jira/browse/HBASE-19528
> Project: HBase
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 0001-HBASE-19528-Major-Compaction-Tool-ADDENDUM.patch, 
> HBASE-19528.branch-1.patch, HBASE-19528.patch, HBASE-19528.v1.branch-1.patch, 
> HBASE-19528.v1.patch, HBASE-19528.v2.branch-1.patch, 
> HBASE-19528.v2.branch-1.patch, HBASE-19528.v8.patch
>
>
> The basic overview of how this tool works is:
> Parameters:
> Table
> Stores
> ClusterConcurrency
> Timestamp
> So you input a table, desired concurrency and the list of stores you wish to 
> major compact.  The tool first checks the filesystem to see which stores need 
> compaction based on the timestamp you provide (default is current time).  It 
> takes that list of stores that require compaction and executes those requests 
> concurrently with at most N distinct RegionServers compacting at a given 
> time.  Each thread waits for the compaction to complete before moving to the 
> next queue.  If a region split, merge or move happens this tool ensures those 
> regions get major compacted as well. 
> This helps us in two ways, we can limit how much I/O bandwidth we are using 
> for major compaction cluster wide and we are guaranteed after the tool 
> completes that all requested compactions complete regardless of moves, merges 
> and splits. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner

2018-02-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350730#comment-16350730
 ] 

stack commented on HBASE-19876:
---

.001 rebase of [~chia7712] patch.

> The exception happening in converting pb mutation to hbase.mutation messes up 
> the CellScanner
> -
>
> Key: HBASE-19876
> URL: https://issues.apache.org/jira/browse/HBASE-19876
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2
>
> Attachments: HBASE-19876.master.001.patch, HBASE-19876.v0.patch, 
> HBASE-19876.v1.patch, HBASE-19876.v2.patch, HBASE-19876.v3.patch, 
> HBASE-19876.v3.patch, HBASE-19876.v3.patch, HBASE-19876.v3.patch
>
>
> {code:java}
> 2018-01-27 22:51:43,794 INFO  [hconnection-0x3291b443-shared-pool11-t6] 
> client.AsyncRequestFutureImpl(778): id=5, table=testQuotaStatusFromMaster3, 
> attempt=6/16 failed=20ops, last 
> exception=org.apache.hadoop.hbase.client.WrongRowIOException: 
> org.apache.hadoop.hbase.client.WrongRowIOException: The row in xxx doesn't 
> match the original one aaa
>   at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:776)
>   at org.apache.hadoop.hbase.client.Put.add(Put.java:282)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:642)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:952)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2591)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){code}
> I noticed this bug when testing the table space quota.
> When the RS converts a pb mutation to an hbase mutation, a quota exception or 
> cell exception may be thrown.
> {code:java}
> for (ClientProtos.Action action: mutations) {
> MutationProto m = action.getMutation();
> Mutation mutation;
> if (m.getMutateType() == MutationType.PUT) {
>   mutation = ProtobufUtil.toPut(m, cells);
>   batchContainsPuts = true;
> } else {
>   mutation = ProtobufUtil.toDelete(m, cells);
>   batchContainsDelete = true;
> }
> mutationActionMap.put(mutation, action);
> mArray[i++] = mutation;
> checkCellSizeLimit(region, mutation);
> // Check if a space quota disallows this mutation
> spaceQuotaEnforcement.getPolicyEnforcement(region).check(mutation);
> quota.addMutation(mutation);
>   }
> {code}
> The RS catches the exception, but it doesn't have the CellScanner skip the 
> failed cells.
> {code:java}
> } catch (IOException ie) {
>   if (atomic) {
> throw ie;
>   }
>   for (Action mutation : mutations) {
> builder.addResultOrException(getResultOrException(ie, 
> mutation.getIndex()));
>   }
> }
> {code}
> The bug results in WrongRowIOException for the remaining mutations since they 
> refer to invalid cells.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19876) The exception happening in converting pb mutation to hbase.mutation messes up the CellScanner

2018-02-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19876:
--
Attachment: HBASE-19876.master.001.patch

> The exception happening in converting pb mutation to hbase.mutation messes up 
> the CellScanner
> -
>
> Key: HBASE-19876
> URL: https://issues.apache.org/jira/browse/HBASE-19876
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-2, 1.4.2
>
> Attachments: HBASE-19876.master.001.patch, HBASE-19876.v0.patch, 
> HBASE-19876.v1.patch, HBASE-19876.v2.patch, HBASE-19876.v3.patch, 
> HBASE-19876.v3.patch, HBASE-19876.v3.patch, HBASE-19876.v3.patch
>
>
> {code:java}
> 2018-01-27 22:51:43,794 INFO  [hconnection-0x3291b443-shared-pool11-t6] 
> client.AsyncRequestFutureImpl(778): id=5, table=testQuotaStatusFromMaster3, 
> attempt=6/16 failed=20ops, last 
> exception=org.apache.hadoop.hbase.client.WrongRowIOException: 
> org.apache.hadoop.hbase.client.WrongRowIOException: The row in xxx doesn't 
> match the original one aaa
>   at org.apache.hadoop.hbase.client.Mutation.add(Mutation.java:776)
>   at org.apache.hadoop.hbase.client.Put.add(Put.java:282)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(ProtobufUtil.java:642)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:952)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:896)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2591)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){code}
> I noticed this bug when testing the table space quota.
> When the RS converts a pb mutation to an hbase mutation, a quota exception or 
> cell exception may be thrown.
> {code:java}
> for (ClientProtos.Action action: mutations) {
> MutationProto m = action.getMutation();
> Mutation mutation;
> if (m.getMutateType() == MutationType.PUT) {
>   mutation = ProtobufUtil.toPut(m, cells);
>   batchContainsPuts = true;
> } else {
>   mutation = ProtobufUtil.toDelete(m, cells);
>   batchContainsDelete = true;
> }
> mutationActionMap.put(mutation, action);
> mArray[i++] = mutation;
> checkCellSizeLimit(region, mutation);
> // Check if a space quota disallows this mutation
> spaceQuotaEnforcement.getPolicyEnforcement(region).check(mutation);
> quota.addMutation(mutation);
>   }
> {code}
> The RS catches the exception, but it doesn't have the CellScanner skip the 
> failed cells.
> {code:java}
> } catch (IOException ie) {
>   if (atomic) {
> throw ie;
>   }
>   for (Action mutation : mutations) {
> builder.addResultOrException(getResultOrException(ie, 
> mutation.getIndex()));
>   }
> }
> {code}
> The bug results in WrongRowIOException for the remaining mutations since they 
> refer to invalid cells.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Description: 
1) the action count is decreased repeatedly

If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
because the count never reaches 0.

2) the successive result will be overwritten

3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly

AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.

  was:
The inconsistency includes the following bug.

1) the action count is decreased repeatedly

If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
because the count never reaches 0.

2) the successive result will be overwritten

3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly

AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.


> Region-level exception destroy the result of batch
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> 1) the action count is decreased repeatedly
> If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
> incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
> because the count never reaches 0.
> 2) the successive result will be overwritten
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.
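
To illustrate why duplicated ops are confusing, here is a minimal sketch of the usual 
client-side inspection of this exception (only public hbase-client methods are used; 
the class name is made up). With the bug above, the same row can appear in several 
entries of this loop:

{code:java}
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;

public final class BatchErrorInspector {
  private BatchErrorInspector() {}

  // Walk the per-operation details carried by the exception. With the bug
  // described above, a single failed op can show up in more than one entry.
  public static void report(RetriesExhaustedWithDetailsException e) {
    for (int i = 0; i < e.getNumExceptions(); i++) {
      System.out.println("row=" + e.getRow(i)            // the failed mutation
          + " server=" + e.getHostnamePort(i)            // where it failed
          + " cause=" + e.getCause(i));                   // why it failed
    }
  }
}
{code}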



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Summary: Region-level exception destroy the result of batch  (was: The 
failed op is added to RetriesExhaustedWithDetailsException repeatedly )

> Region-level exception destroy the result of batch
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> The inconsistency includes the following bug.
> 1) the action count is decreased repeatedly
> If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
> incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
> because the count never reaches 0.
> 2) the successive result will be overwritten
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) Region-level exception destroy the result of batch

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Priority: Critical  (was: Minor)

> Region-level exception destroy the result of batch
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
>
> 1) the action count is decreased repeatedly
> If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
> incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
> because the count never reaches 0.
> 2) the successive result will be overwritten
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) The failed op is added to RetriesExhaustedWithDetailsException repeatedly

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Description: 
The inconsistency includes the following bug.

1) the action count is decreased repeatedly

If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
because the count never reaches 0.

2) the successive result will be overwritten

3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly

AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.

  was:
The inconsistency includes the following bug.

 

3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly 

 

AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.


> The failed op is added to RetriesExhaustedWithDetailsException repeatedly 
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> The inconsistency includes the following bug.
> 1) the action count is decreased repeatedly
> If AsyncRequestFuture#waitUntilDone returns prematurely, the user will get 
> incorrect results. Or the user will be blocked by AsyncRequestFuture#waitUntilDone 
> because the count never reaches 0.
> 2) the successive result will be overwritten
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19900) The failed op is added to RetriesExhaustedWithDetailsException repeatedly

2018-02-02 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19900:
---
Description: 
The inconsistency includes the following bug.

 

3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly 

 

AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.

  was:
AsyncRequestFutureImpl#receiveMultiAction processes the action-level error first, 
and then adds the region-level exception to each action. Hence, the user may get 
various exceptions for the same action (row op) from the 
RetriesExhaustedWithDetailsException.

In fact, if both an action-level exception and a region-level exception exist, 
they always have the same context. I'm not sure whether that is what 
RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
users who catch the RetriesExhaustedWithDetailsException to check the 
invalid operations.


> The failed op is added to RetriesExhaustedWithDetailsException repeatedly 
> --
>
> Key: HBASE-19900
> URL: https://issues.apache.org/jira/browse/HBASE-19900
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> The inconsistency includes the following bug.
>  
> 3) The failed op is added to RetriesExhaustedWithDetailsException repeatedly 
>  
> AsyncRequestFutureImpl#receiveMultiAction processes the action-level error 
> first, and then adds the region-level exception to each action. Hence, the user 
> may get various exceptions for the same action (row op) from the 
> RetriesExhaustedWithDetailsException.
> In fact, if both an action-level exception and a region-level exception exist, 
> they always have the same context. I'm not sure whether that is what 
> RetriesExhaustedWithDetailsException wants. As I see it, we shouldn't have 
> duplicate ops in RetriesExhaustedWithDetailsException since that may confuse 
> users who catch the RetriesExhaustedWithDetailsException to check the 
> invalid operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350662#comment-16350662
 ] 

Ted Yu commented on HBASE-19920:


Rohini:
Do you want to submit a patch ?

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> In client code, when one calls TokenUtil.obtainToken, it loads ProtobufUtil, 
> which in its static block initializes DynamicClassLoader, and that creates the 
> directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region-server-specific code, this is not expected to happen when 
> one accesses HBase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName

2018-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350650#comment-16350650
 ] 

Hudson commented on HBASE-19720:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4514 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4514/])
HBASE-19720 Rename WALKey#getTabnename to WALKey#getTableName (chia7712: rev 
2f4d0b94bc61b00f1d7c549e8dafb4cc420fab18)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKeyImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestProtobufLog.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/NamespaceTableCfWALEntryFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ReaderBase.java
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java
* (edit) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/SystemTableWALEntryFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java


> Rename WALKey#getTabnename to WALKey#getTableName
> -
>
> Key: HBASE-19720
> URL: https://issues.apache.org/jira/browse/HBASE-19720
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-19720.v0.patch
>
>
> WALKey is denoted as LP so its naming should obey the common rule in our 
> codebase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350623#comment-16350623
 ] 

Mike Drob commented on HBASE-19920:
---

Ah, ok a few lines up there is the mkdir

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> In client code, when one calls TokenUtil.obtainToken, it loads ProtobufUtil, 
> which in its static block initializes DynamicClassLoader, and that creates the 
> directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region-server-specific code, this is not expected to happen when 
> one accesses HBase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19133) Transfer big cells or upserted/appended cells into MSLAB upon flattening to CellChunkMap

2018-02-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350621#comment-16350621
 ] 

Ted Yu commented on HBASE-19133:


In CellChunkImmutableSegment#copyCellIntoMSLAB, we shouldn't pass true for 
forceCloneOfBigCell:
{code}
long oldHeapSize = heapSizeChange(cell, true);
long oldCellSize = getCellLength(cell);
cell = maybeCloneWithAllocator(cell, true);
{code}
We can lift maybeCloneWithAllocator() as the first call in copyCellIntoMSLAB.
maybeCloneWithAllocator() should check whether clone is supported by 
this.memStoreLAB. If not, it just returns the Cell.
copyCellIntoMSLAB() would determine the forceCloneOfBigCell flag based on 
whether cloning happened or not.

[~anastas] [~galish]:
What do you think ?

> Transfer big cells or upserted/appended cells into MSLAB upon flattening to 
> CellChunkMap
> 
>
> Key: HBASE-19133
> URL: https://issues.apache.org/jira/browse/HBASE-19133
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Gali Sheffi
>Priority: Major
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19133-V01.patch, HBASE-19133-V02.patch, 
> HBASE-19133-V03.patch, HBASE-19133.01.patch, HBASE-19133.02.patch, 
> HBASE-19133.03.patch, HBASE-19133.04.patch, HBASE-19133.05.patch, 
> HBASE-19133.06.patch, HBASE-19133.07.patch, HBASE-19133.08.patch, 
> HBASE-19133.09.patch, HBASE-19133.10.patch, HBASE-19133.11.patch
>
>
> CellChunkMap Segment index requires all cell data to be written in the MSLAB 
> Chunks. Even though MSLAB is enabled, cells bigger than the chunk size or 
> upserted/incremented/appended cells are still allocated on the JVM heap. If 
> such cells are found in the process of flattening into CellChunkMap 
> (in-memory flush), they need to be copied into MSLAB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350569#comment-16350569
 ] 

Mike Drob commented on HBASE-19920:
---

I don't see any directory creation code there, can you be more specific?

> TokenUtil.obtainToken unnecessarily creates a local directory
> -
>
> Key: HBASE-19920
> URL: https://issues.apache.org/jira/browse/HBASE-19920
> Project: HBase
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Priority: Major
>
> In client code, when one calls TokenUtil.obtainToken, it loads ProtobufUtil, 
> which in its static block initializes DynamicClassLoader, and that creates the 
> directory ${hbase.rootdir}/lib
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127
> Since this is region-server-specific code, this is not expected to happen when 
> one accesses HBase as a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-19920) TokenUtil.obtainToken unnecessarily creates a local directory

2018-02-02 Thread Rohini Palaniswamy (JIRA)
Rohini Palaniswamy created HBASE-19920:
--

 Summary: TokenUtil.obtainToken unnecessarily creates a local 
directory
 Key: HBASE-19920
 URL: https://issues.apache.org/jira/browse/HBASE-19920
 Project: HBase
  Issue Type: Bug
Reporter: Rohini Palaniswamy


In client code, when one calls TokenUtil.obtainToken, it loads ProtobufUtil, 
which in its static block initializes DynamicClassLoader, and that creates the 
directory ${hbase.rootdir}/lib

https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java#L115-L127

Since this is region-server-specific code, this is not expected to happen when 
one accesses HBase as a client.
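
To make the mechanism concrete, here is a generic sketch (not HBase's actual 
DynamicClassLoader code; all names are invented for illustration) of why a static 
initializer with a filesystem side effect fires on mere class loading, and the 
lazy-holder pattern that defers the side effect until it is really needed:

{code:java}
import java.io.File;

public class LazyLoaderSketch {

  private static File createLibDir() {
    File dir = new File(System.getProperty("java.io.tmpdir"), "demo-lib");
    dir.mkdirs();                        // filesystem side effect
    return dir;
  }

  // Eager style: the directory is created the moment this class is initialized,
  // even if the caller only needed an unrelated static method.
  static class EagerStyle {
    static final File LIB_DIR = createLibDir();
    static int unrelatedHelper() { return 42; }
  }

  // Lazy-holder style: the directory is created only when getLibDir() is
  // actually called, so clients that never need it never trigger the mkdir.
  static class LazyStyle {
    private static class Holder {
      static final File LIB_DIR = createLibDir();
    }
    static File getLibDir() { return Holder.LIB_DIR; }
    static int unrelatedHelper() { return 42; }
  }

  public static void main(String[] args) {
    System.out.println(EagerStyle.unrelatedHelper()); // initializes EagerStyle: mkdir happens
    System.out.println(LazyStyle.unrelatedHelper());  // initializes LazyStyle, not Holder: no mkdir
    System.out.println(LazyStyle.getLibDir());        // Holder initialized now: mkdir happens here
  }
}
{code}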



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries

2018-02-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350493#comment-16350493
 ] 

Ankit Singhal commented on HBASE-19826:
---

{quote}What is an index scrutiny? When do you need to do this?
{quote}
It's a MapReduce tool which does a time-range scan on the data table and a SKIP 
SCAN on the index table to verify whether the index table is in sync with the 
data table.
 
 
[1]https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexScrutinyTool.java

> Provide a option to see rows behind a delete in a time range queries
> 
>
> Key: HBASE-19826
> URL: https://issues.apache.org/jira/browse/HBASE-19826
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 2.0.0
>
>
> We can provide an option (something like seePastDeleteMarkers) in a scan to 
> let the user see the versions behind the delete marker even if 
> keepDeletedCells is set to false in the descriptor.
> With the previous version, we worked around this in the preStoreScannerOpen 
> hook. For reference, see PHOENIX-4277.
> {code}
>   @Override
>   public KeyValueScanner preStoreScannerOpen(final 
> ObserverContext<RegionCoprocessorEnvironment> c,
>   final Store store, final Scan scan, final NavigableSet<byte[]> 
> targetCols,
>   final KeyValueScanner s) throws IOException {
>   
> if (scan.isRaw() || 
> ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || 
> scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || 
> TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) {
>   return s;
> }
>   
> ScanInfo scanInfo = 
> ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo());
> return new StoreScanner(store, scanInfo, scan, targetCols,
> 
> c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel()));
>   }
> {code}
> Another way is to provide a way to set KEEP_DELETED_CELLS to true in 
> ScanOptions of preStoreScannerOpen.
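
A rough sketch of what that second approach could look like in a 2.0-style 
coprocessor, assuming a hypothetical ScanOptions#setKeepDeletedCells setter (the 
setter is the capability being proposed here, not an existing API, and the class name 
is made up):

{code:java}
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.ScanOptions;
import org.apache.hadoop.hbase.regionserver.Store;

public class SeeDeletedCellsObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Store store, ScanOptions options) throws IOException {
    // Hypothetical setter: let time-range scans see puts behind delete markers
    // even when the column family has KEEP_DELETED_CELLS=false.
    options.setKeepDeletedCells(KeepDeletedCells.TRUE);
  }
}
{code}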



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries

2018-02-02 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350441#comment-16350441
 ] 

Duo Zhang commented on HBASE-19826:
---

What is an index scrutiny? When do you need to do this?

> Provide a option to see rows behind a delete in a time range queries
> 
>
> Key: HBASE-19826
> URL: https://issues.apache.org/jira/browse/HBASE-19826
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 2.0.0
>
>
> We can provide an option (something like seePastDeleteMarkers) in a scan to 
> let the user see the versions behind the delete marker even if 
> keepDeletedCells is set to false in the descriptor.
> With the previous version, we worked around this in the preStoreScannerOpen 
> hook. For reference, see PHOENIX-4277.
> {code}
>   @Override
>   public KeyValueScanner preStoreScannerOpen(final 
> ObserverContext<RegionCoprocessorEnvironment> c,
>   final Store store, final Scan scan, final NavigableSet<byte[]> 
> targetCols,
>   final KeyValueScanner s) throws IOException {
>   
> if (scan.isRaw() || 
> ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || 
> scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || 
> TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) {
>   return s;
> }
>   
> ScanInfo scanInfo = 
> ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo());
> return new StoreScanner(store, scanInfo, scan, targetCols,
> 
> c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel()));
>   }
> {code}
> Another way is to provide a way to set KEEP_DELETED_CELLS to true in 
> ScanOptions of preStoreScannerOpen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries

2018-02-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350402#comment-16350402
 ] 

Ankit Singhal edited comment on HBASE-19826 at 2/2/18 2:34 PM:
---

Sure (second use case mentioned in my earlier comment; for reference, 
PHOENIX-4277):

"While doing Index scrutiny on a live table, a time-range scan wants to see PUTs 
not eclipsed by newer DELETE markers. (A raw scan cannot be utilized here as it 
will give all cells even if we have delete markers within the time range.)"

To achieve this, we were earlier updating the store scanner by setting 
KeepDeletedCells to true in the preStoreScannerOpen hook so that our time-range 
queries will see puts which are deleted at a newer timestamp.

Let me know if you need more details. Thanks. 


was (Author: an...@apache.org):
sure, (Second use-case mentioned in my earlier 
[comment|https://issues.apache.org/jira/browse/HBASE-19826?focusedCommentId=16344850=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16344850]):-

"While doing Index scrutiny on a live table, time range scan wants to see PUTs 
not eclipsed by newer DELETE markers.(raw scan cannot be utilized here as it 
will give all cells even if we have delete markers within the time range)"

To achieve this, we were earlier updating the store scanner by setting 
KeepDeletedCells to true in preStoreScannerOpen hook so that our time range 
queries will see puts which are deleted at the newer timestamp.

Let me know if you need more details. Thanks. 

> Provide a option to see rows behind a delete in a time range queries
> 
>
> Key: HBASE-19826
> URL: https://issues.apache.org/jira/browse/HBASE-19826
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 2.0.0
>
>
> We can provide an option (something like seePastDeleteMarkers) in a scan to 
> let the user see the versions behind the delete marker even if 
> keepDeletedCells is set to false in the descriptor.
> With the previous version, we worked around this in the preStoreScannerOpen 
> hook. For reference, see PHOENIX-4277.
> {code}
>   @Override
>   public KeyValueScanner preStoreScannerOpen(final 
> ObserverContext<RegionCoprocessorEnvironment> c,
>   final Store store, final Scan scan, final NavigableSet<byte[]> 
> targetCols,
>   final KeyValueScanner s) throws IOException {
>   
> if (scan.isRaw() || 
> ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || 
> scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || 
> TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) {
>   return s;
> }
>   
> ScanInfo scanInfo = 
> ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo());
> return new StoreScanner(store, scanInfo, scan, targetCols,
> 
> c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel()));
>   }
> {code}
> Another way is to provide a way to set KEEP_DELETED_CELLS to true in 
> ScanOptions of preStoreScannerOpen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19826) Provide a option to see rows behind a delete in a time range queries

2018-02-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350402#comment-16350402
 ] 

Ankit Singhal commented on HBASE-19826:
---

sure, (Second use-case mentioned in my earlier 
[comment|https://issues.apache.org/jira/browse/HBASE-19826?focusedCommentId=16344850=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16344850]):-

"While doing Index scrutiny on a live table, time range scan wants to see PUTs 
not eclipsed by newer DELETE markers.(raw scan cannot be utilized here as it 
will give all cells even if we have delete markers within the time range)"

To achieve this, we were earlier updating the store scanner by setting 
KeepDeletedCells to true in preStoreScannerOpen hook so that our time range 
queries will see puts which are deleted at the newer timestamp.

Let me know if you need more details. Thanks. 

> Provide a option to see rows behind a delete in a time range queries
> 
>
> Key: HBASE-19826
> URL: https://issues.apache.org/jira/browse/HBASE-19826
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 2.0.0
>
>
> We can provide an option (something like seePastDeleteMarkers) in a scan to 
> let the user see the versions behind the delete marker even if 
> keepDeletedCells is set to false in the descriptor.
> With the previous version, we worked around this in the preStoreScannerOpen 
> hook. For reference, see PHOENIX-4277.
> {code}
>   @Override
>   public KeyValueScanner preStoreScannerOpen(final 
> ObserverContext<RegionCoprocessorEnvironment> c,
>   final Store store, final Scan scan, final NavigableSet<byte[]> 
> targetCols,
>   final KeyValueScanner s) throws IOException {
>   
> if (scan.isRaw() || 
> ScanInfoUtil.isKeepDeletedCells(store.getScanInfo()) || 
> scan.getTimeRange().getMax() == HConstants.LATEST_TIMESTAMP || 
> TransactionUtil.isTransactionalTimestamp(scan.getTimeRange().getMax())) {
>   return s;
> }
>   
> ScanInfo scanInfo = 
> ScanInfoUtil.cloneScanInfoWithKeepDeletedCells(store.getScanInfo());
> return new StoreScanner(store, scanInfo, scan, targetCols,
> 
> c.getEnvironment().getRegion().getReadpoint(scan.getIsolationLevel()));
>   }
> {code}
> Another way is to provide a way to set KEEP_DELETED_CELLS to true in 
> ScanOptions of preStoreScannerOpen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

