[jira] [Issue Comment Deleted] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17852:
--
Comment: was deleted

(was: There is nothing wrong with the snapshot approach until someone proves it is 
wrong. I am waiting for your arguments, Stack. )

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17852:
--
Comment: was deleted

(was: deleted)

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17852:
--
Comment: was deleted

(was: {quote}
Vlad, you seem to be doing your utmost to sabotage the delivery of this 
feature. 
{quote}

Do you really believe that? Josh is just too polite, in my opinion. He is 
trying to be nice to you. I am just who I am. I am too straightforward, 
Stack. The only person who is sabotaging this feature here is you.)

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257974#comment-16257974
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 11/18/17 7:49 AM:
-

deleted


was (Author: vrodionov):
{quote}
This isn't helpful and, likely, directly harmful :\
{quote}

What about "what the fuck Vlad", Josh? Is it harmful?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-11386) Replication#table,CF config will be wrong if the table name includes namespace

2017-11-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-11386.

Resolution: Duplicate

Resolved by HBASE-11393 and HBASE-16653.

> Replication#table,CF config will be wrong if the table name includes namespace
> --
>
> Key: HBASE-11386
> URL: https://issues.apache.org/jira/browse/HBASE-11386
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Qianxi Zhang
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 1.5.0
>
> Attachments: HBASE_11386_trunk_v1.patch, HBASE_11386_trunk_v2.patch
>
>
> Now we can configure the table and CF in Replication, but I think the parsing 
> will be wrong if the table name includes a namespace.
> ReplicationPeer#parseTableCFsFromConfig(line 125)
> {code}
> Map<String, List<String>> tableCFsMap = null;
> // parse out (table, cf-list) pairs from tableCFsConfig
> // format: "table1:cf1,cf2;table2:cfA,cfB"
> String[] tables = tableCFsConfig.split(";");
> for (String tab : tables) {
>   // 1 ignore empty table config
>   tab = tab.trim();
>   if (tab.length() == 0) {
> continue;
>   }
>   // 2 split to "table" and "cf1,cf2"
>   //   for each table: "table:cf1,cf2" or "table"
>   String[] pair = tab.split(":");
>   String tabName = pair[0].trim();
>   if (pair.length > 2 || tabName.length() == 0) {
> LOG.error("ignore invalid tableCFs setting: " + tab);
> continue;
>   }
> {code}
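> A minimal sketch of the failure mode (hypothetical values, not from the patch): 
> with a namespace-qualified name, split(":") yields three tokens, so the entry 
> is rejected as an invalid tableCFs setting.
> {code}
> // Hypothetical illustration: a namespace-qualified name yields three tokens.
> String tab = "ns1:table1:cf1,cf2";  // table "table1" in namespace "ns1"
> String[] pair = tab.split(":");     // {"ns1", "table1", "cf1,cf2"}
> // pair.length == 3 > 2, so the entry is logged as invalid and ignored
> {code}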



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18626) Handle the incompatible change about the replication TableCFs' config

2017-11-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18626:
---
Attachment: HBASE-18626.branch-1.001.patch

Attaching a patch for branch-1.

> Handle the incompatible change about the replication TableCFs' config
> -
>
> Key: HBASE-18626
> URL: https://issues.apache.org/jira/browse/HBASE-18626
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18626.branch-1.001.patch
>
>
> About compatibility, there is one incompatible change to the replication 
> TableCFs' config. The old config is a string that concatenates the list of 
> tables and column families in the format "table1:cf1,cf2;table2:cfA,cfB" in 
> zookeeper for the table-cf to replication peer mapping. When parsing the 
> config, it uses ":" to split the string. If the table name includes a 
> namespace, the result will be wrong (see HBASE-11386). This has been a problem 
> since we started supporting namespaces (0.98). So HBASE-11393 (and 
> HBASE-16653) changed it to a PB object. When rolling-upgrading a cluster, you 
> need to roll the master first, and the master will try to translate the 
> string config to a PB object. But there are two problems.
> 1. Permission problem. The replication client can write to zookeeper 
> directly, so the znode may have a different owner, and the master may not 
> have write permission for the znode. It may then fail to translate the old 
> table-cfs string to the new PB object. See HBASE-16938.
> 2. We usually keep compatibility between old clients and new servers. But an 
> old replication client may write a string config to the znode directly, which 
> the new server can't parse.
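> A minimal compatibility sketch (assuming the existing ProtobufUtil helper; an 
> illustration, not the attached patch): a server can accept both formats by 
> checking for the PB magic prefix before parsing.
> {code}
> import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
>
> public class TableCFsFormatSketch {
>   /** data: bytes read from the peer's table-cfs znode. */
>   static boolean isNewTableCFsFormat(byte[] data) {
>     // New-format payloads carry the HBase PB magic prefix ("PBUF").
>     return ProtobufUtil.isPBMagicPrefix(data);
>   }
> }
> {code}
> An old-format payload (no magic prefix) would then fall back to the legacy 
> "table1:cf1,cf2;table2:cfA,cfB" string parser.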



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18626) Handle the incompatible change about the replication TableCFs' config

2017-11-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18626:
---
Status: Patch Available  (was: Open)

> Handle the incompatible change about the replication TableCFs' config
> -
>
> Key: HBASE-18626
> URL: https://issues.apache.org/jira/browse/HBASE-18626
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3, 3.0.0, 1.4.0, 1.5.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18626.branch-1.001.patch
>
>
> About compatibility, there is one incompatible change to the replication 
> TableCFs' config. The old config is a string that concatenates the list of 
> tables and column families in the format "table1:cf1,cf2;table2:cfA,cfB" in 
> zookeeper for the table-cf to replication peer mapping. When parsing the 
> config, it uses ":" to split the string. If the table name includes a 
> namespace, the result will be wrong (see HBASE-11386). This has been a problem 
> since we started supporting namespaces (0.98). So HBASE-11393 (and 
> HBASE-16653) changed it to a PB object. When rolling-upgrading a cluster, you 
> need to roll the master first, and the master will try to translate the 
> string config to a PB object. But there are two problems.
> 1. Permission problem. The replication client can write to zookeeper 
> directly, so the znode may have a different owner, and the master may not 
> have write permission for the znode. It may then fail to translate the old 
> table-cfs string to the new PB object. See HBASE-16938.
> 2. We usually keep compatibility between old clients and new servers. But an 
> old replication client may write a string config to the znode directly, which 
> the new server can't parse.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257974#comment-16257974
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
This isn't helpful and, likely, directly harmful :\
{quote}

What about "what the fuck Vlad", Josh? Is it harmful?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257972#comment-16257972
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

There is nothing wrong with the snapshot approach until someone proves it is wrong. 
I am waiting for your arguments, Stack. 

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257968#comment-16257968
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
Vlad, you seem to be doing your utmost to sabotage the delivery of this 
feature. 
{quote}

Do you really believe that? Josh is just too polite, in my opinion. He is 
trying to be nice to you. I am just who I am. I am too straightforward, 
Stack. The only person who is sabotaging this feature here is you.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18309) Support multi threads in CleanerChore

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257959#comment-16257959
 ] 

Hadoop QA commented on HBASE-18309:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} hbase-server: The patch generated 0 new + 4 
unchanged - 4 fixed = 4 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
29s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
63m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m  
3s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-18309 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898321/HBASE-18309.master.009.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux df4443ddaeb5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 777b653b45 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9907/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9907/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Support multi threads in CleanerChore
> -
>
> Key: 

[jira] [Commented] (HBASE-19293) Support add a disabled state replication peer directly

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257955#comment-16257955
 ] 

Hadoop QA commented on HBASE-19293:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} hbase-client: The patch generated 1 new + 441 
unchanged - 14 fixed = 442 total (was 455) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
49s{color} | {color:red} root: The patch generated 1 new + 698 unchanged - 14 
fixed = 699 total (was 712) {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 6 new + 288 unchanged - 7 fixed = 
294 total (was 295) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 23 new + 408 unchanged - 1 fixed = 
431 total (was 409) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
5m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}172m 
10s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}292m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19293 |
| JIRA 

[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257951#comment-16257951
 ] 

Allan Yang commented on HBASE-19163:


Yes, this issue showed up in our production env, too. It started when we changed 
the exclusive row lock to a read-write lock. But I think option 1 is a better 
choice. If the user scenario indeed results in putting to the same row a lot of 
times, the fix of option 2 will make the batch operation very inefficient. 
But comparing the row each time may not be good practice. Maybe we can store 
the lock we acquired each time; if it is the same read lock (which can easily 
be checked by hash), then we don't need to lock it again. 
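
A minimal sketch of option 1 (a hypothetical helper, not the committed patch; 
it assumes the Region.getRowLock(row, readLock) method of the HBase Region API): 
take the shared row lock once per distinct row in the batch instead of once per 
mutation, so N puts to one row cost one lock instead of N.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.regionserver.Region;

public class BatchRowLockSketch {
  /** Take the shared row lock once per distinct row in the batch. */
  static List<Region.RowLock> acquireSharedLocksOnce(Region region,
      List<? extends Mutation> batch) throws IOException {
    List<Region.RowLock> locks = new ArrayList<>();
    Set<ByteBuffer> seen = new HashSet<>(); // rows already locked in this batch
    for (Mutation m : batch) {
      // ByteBuffer wrappers compare by content, so repeats of a row dedupe here
      if (seen.add(ByteBuffer.wrap(m.getRow()))) {
        locks.add(region.getRowLock(m.getRow(), true)); // shared (read) lock, once
      }
    }
    return locks;
  }
}
{code}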

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of our use cases, we found the following exception, and replication got 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum 
> 64k shared-lock count, which throws an error and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the lock once for that row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Creating the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-11-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257946#comment-16257946
 ] 

Duo Zhang commented on HBASE-16890:
---

So what can we do to close this issue? More tests with different workloads? Or 
just confirm the result from [~ram_krish]? Thanks.

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: AsyncWAL_disruptor.patch, AsyncWAL_disruptor_1 
> (2).patch, AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention-v1.patch, HBASE-16890-remove-contention.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png, async.svg, 
> classic.svg, contention.png, contention_defaultWAL.png
>
>
> Tests reveal that AsyncWAL under load in single node cluster performs slower 
> than the Default WAL. This task is to analyze and see if we could fix it.
> See some discussions in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19291) Use common header and footer for JSP pages

2017-11-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257945#comment-16257945
 ] 

Duo Zhang commented on HBASE-19291:
---

Looks good. Have you tried to set up a cluster and access these pages in a browser?

Thanks.

> Use common header and footer for JSP pages
> --
>
> Key: HBASE-19291
> URL: https://issues.apache.org/jira/browse/HBASE-19291
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19291.master.001.patch
>
>
> Use header and footer in our *.jsp pages to avoid unnecessary redundancy 
> (copy-paste of code)
> (Been sitting in my local repo for long, best to get following pesky 
> user-facing things fixed before the next major release)
> Misc edits:
> - Due to redundancy, new additions make it to some places but not others. For 
> example, there are missing links to "/logLevel" and "/processRS.jsp" in a few places.
> - Fix processMaster.jsp wrongly pointing to rs-status instead of 
> master-status (probably due to copy paste from processRS.jsp)
> - Deleted a bunch of extraneous "" in processMaster.jsp & processRS.jsp
> - Added missing  tag in snapshot.jsp
> - Deleted fossils of html5shiv.js. Its uses and the js itself were deleted 
> in the commit "819aed4ccd073d818bfef5931ec8d248bfae5f1f"
> - Fixed wrongly matched heading tags
> - Deleted some unused variables
> Tested:
> Ran standalone cluster and opened each page to make sure it looked right.
> Sidenote:
> Looks like HBASE-3835 started the work of converting from jsp to jamon, but 
> the work didn't finish. Now we have a mix of jsp and jamon. Needs 
> reconciling, but later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19271) Update ref guide about the async client to reflect the change in HBASE-19251

2017-11-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257943#comment-16257943
 ] 

Duo Zhang commented on HBASE-19271:
---

Thanks [~appy] and [~stack] !

> Update ref guide about the async client to reflect the change in HBASE-19251
> 
>
> Key: HBASE-19271
> URL: https://issues.apache.org/jira/browse/HBASE-19271
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-19271.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257941#comment-16257941
 ] 

stack commented on HBASE-19163:
---

Patch looks great [~huaxiang]. The failure is probably related. Small fix. Good 
stuff.

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of our use cases, we found the following exception, and replication got 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum 
> 64k shared-lock count, which throws an error and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, 
> acquire the lock once for that row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Creating the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19247) [hbase-thirdparty] upgrade to Netty 4.1.17

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257938#comment-16257938
 ] 

stack commented on HBASE-19247:
---

So, we are able to do w/o setting the system property because you give the .so an 
hbasey name? That's great. +1.

> [hbase-thirdparty] upgrade to Netty 4.1.17
> --
>
> Key: HBASE-19247
> URL: https://issues.apache.org/jira/browse/HBASE-19247
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, thirdparty
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: thirdparty-1.0.2
>
> Attachments: HBASE-19247.addendum.patch, 
> HBASE-19247.addendum.v2.patch, HBASE-19247.patch
>
>
> Netty has a few newer versions than what we're on. Specifically, there have 
> been some changes to the native library loading that I think might make our 
> current relocated usage less terrible.
> https://github.com/netty/netty/pull/6884
> https://github.com/netty/netty/pull/7102



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257930#comment-16257930
 ] 

stack commented on HBASE-17852:
---

bq. Stack, are you essentially asking why this isn't implemented on top of 
ProcV2?
bq.  I think at this point, it would be more productive if we can say more 
"there is something implicitly broken with this approach" instead of "there is 
a more elegant implementation to be had".

I am not asking for any particular implementation, to be clear. I'm just trying 
to understand, and am having trouble digesting a full restore of a meta table, 
whatever its size or traffic, on error. It strikes me as whack (you seem to at 
least agree it is 'overkill'). There seems to be no write-up on the approach here 
ahead of piecemeal code drops (w/o an overarching description of what all is 
entailed), so the only way to figure it out, as best as I can ascertain, is via 
this really pleasant back and forth w/ the author.

Vlad, you seem to be doing your utmost to sabotage the delivery of this 
feature. The sort of answers you give us reviewers is one thing. Will operators 
who run into issues w/ this feature get the same treatment?



> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19247) [hbase-thirdparty] upgrade to Netty 4.1.17

2017-11-17 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257926#comment-16257926
 ] 

Mike Drob commented on HBASE-19247:
---

bump for review?

> [hbase-thirdparty] upgrade to Netty 4.1.17
> --
>
> Key: HBASE-19247
> URL: https://issues.apache.org/jira/browse/HBASE-19247
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, thirdparty
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: thirdparty-1.0.2
>
> Attachments: HBASE-19247.addendum.patch, 
> HBASE-19247.addendum.v2.patch, HBASE-19247.patch
>
>
> Netty has a few newer versions than what we're on. Specifically, there have 
> been some changes to the native library loading that I think might make our 
> current relocated usage less terrible.
> https://github.com/netty/netty/pull/6884
> https://github.com/netty/netty/pull/7102



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17165) Add retry to LoadIncrementalHFiles tool

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257921#comment-16257921
 ] 

Hadoop QA commented on HBASE-17165:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 11m 
15s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  9m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | 
{color:green}116m 48s{color} | {color:green} Patch does not cause any errors 
with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 
3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 26s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}266m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-17165 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898305/HBASE-17165.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cd41b62d5a7d 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 777b653b45 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9903/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9903/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9903/console |
| Powered by | Apache Yetus 0.6.0   

[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257920#comment-16257920
 ] 

Josh Elser commented on HBASE-17852:


{code}
In hbase2, we have builders for the below instead...

1381 HTableDescriptor tableDesc = new 
HTableDescriptor(getTableNameForBulkLoadedData(conf));
{code}

I had left a similar comment on RB. This was fixed in v6 (patchset 5 on RB). I 
think the majority of other changes were suggestions I had left on RB -- have 
not explicitly checked, just going off of the "issues" being resolved.
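
For reference, a minimal sketch of the hbase2 builder style being referenced 
(the table name is illustrative only; the real name comes from 
getTableNameForBulkLoadedData(conf) in the patch):

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class BuilderStyleSketch {
  // Illustrative table name; not the patch's actual descriptor setup.
  static TableDescriptor bulkLoadTableDesc() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("backup_bulk_loaded_data"))
        .build();
  }
}
{code}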

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257919#comment-16257919
 ] 

Josh Elser commented on HBASE-17852:


bq. This is going to confuse. 'system' tables have a particular meaning in 
hbase.

Should be easy enough to rename with your IDE of choice, right Vlad? Avoiding 
overloading terminology is always a good idea. "BackupMetadata" and 
"BackupBulkLoadFiles"? (just pitching ideas)

bq. The snapshot/restore of a whole system table strikes me as a bunch of 
moving parts. I have to ask how we got to such an extreme. 2PC is tough enough 
w/o offlining/restoring the whole meta table. During restore, all clients are 
frozen out or something so they can't pollute the restored version? Restore is 
not atomic, right? Couldn't we have something like a row-per-backup with a 
success tag if all went well (I've not been following closely -- pardon all the 
questions).

Stack, are you essentially asking why this isn't implemented on top of ProcV2? 
I'm trying to read between the lines but am not sure if I'm inventing something 
that isn't there. There are definitely areas of the code in which the 
acknowledgement has already been made that a better implementation could be 
done. For example, clients _are_ "frozen out" right now from concurrent 
operations (a nod that backups, merges, and restores could be done 
concurrently). I think at this point, it would be more productive if we could 
say "there is something implicitly broken with this approach" instead of 
"there is a more elegant implementation to be had". I don't think anyone is 
arguing against that.

Yes, rolling back the entire backup "system" table is overkill (for what may 
sometimes be deleting a single row/column -- the ACTIVE_SNAPSHOT as mentioned 
in the parent) and would take much longer than it necessarily needs to.

bq. You suggest I review code. I have been reviewing code. That's how we got 
here.

And thank you for that. I know your intentions are good. We're all ultimately 
working towards a common goal here.

bq. Sure, you can start from the very beginning, Stack. Go ahead.

This isn't helpful and, likely, directly harmful :\

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257913#comment-16257913
 ] 

Josh Elser commented on HBASE-19285:


bq. If there is not a significant impact, defaulting to true should be fine. 
They can be disabled if there is a problem. A default of false is going to 
surprise people more, I think. Earlier branch RMs may have a different opinion. 

Sounds good. Will be sure to ask.

bq. Will do that after getting back from the Thanksgiving holiday. If you'd like 
to see these go out in 1.4, get a review and commit them before then, and I'll 
also do a perf assessment. 

Thanks for the info! I certainly hope to get this in before Thanksgiving, and 
would be very happy if that would also result in you putting it through its 
paces. Definitely don't want to hold you up if your (or my) schedule changes -- 
I appreciate your insight and input thus far.

> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like these per-table 
> aggregations weren't intentionally omitted, just never re-implemented after 
> the per-region removal. They're some really nice out-of-the-box metrics we 
> can provide to our users/admins as long as it's not detrimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257908#comment-16257908
 ] 

Hadoop QA commented on HBASE-19163:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 11m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  9m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | 
{color:green}116m 33s{color} | {color:green} Patch does not cause any errors 
with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 
3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m  6s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}205m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.io.TestHeapSize |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898290/HBASE-19163.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 784ed23c2c34 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 777b653b45 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9904/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9904/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9904/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257905#comment-16257905
 ] 

Hudson commented on HBASE-19239:


FAILURE: Integrated in Jenkins build HBase-1.4 #1023 (See 
[https://builds.apache.org/job/HBase-1.4/1023/])
HBASE-19239 Fix findbugs and error-prone issues (apurtell: rev 
2d579a41409e7df7693b58442c7285e56b9d6b53)
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/trace/SpanReceiverHost.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/JVM.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Striped64.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/JitterScheduledThreadPoolExecutorImpl.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestConcatenatedLists.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassLoaderBase.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/AbstractByteRange.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/types/CopyOnWriteArrayMap.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Triple.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/ClassLoaderTestHelper.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/LongAdder.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/AsyncConsoleAppender.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
* (edit) 

[jira] [Updated] (HBASE-18309) Support multi threads in CleanerChore

2017-11-17 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18309:
--
Status: Patch Available  (was: Open)

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: binlijin
>Assignee: Reid Chan
> Attachments: HBASE-18309.master.001.patch, 
> HBASE-18309.master.002.patch, HBASE-18309.master.004.patch, 
> HBASE-18309.master.005.patch, HBASE-18309.master.006.patch, 
> HBASE-18309.master.007.patch, HBASE-18309.master.008.patch, 
> HBASE-18309.master.009.patch, space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this is not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we use multiple threads for LogCleaner and the crash no longer 
> happens.
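
A minimal sketch of the multi-threaded cleaning idea described above; the class 
name, pool handling, and failure behavior are illustrative assumptions, not the 
actual HBASE-18309 patch:

{code}
// Parallelize oldWALs cleanup: one delete task per file on a fixed pool,
// instead of a single-threaded sweep.
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OldWalsCleaner {
  private final FileSystem fs;
  private final ExecutorService pool;

  public OldWalsCleaner(FileSystem fs, int threads) {
    this.fs = fs;
    this.pool = Executors.newFixedThreadPool(threads);
  }

  /** Submits one delete per file instead of deleting serially. */
  public void clean(Path oldWalsDir) throws IOException {
    for (FileStatus file : fs.listStatus(oldWalsDir)) {
      pool.submit(() -> {
        try {
          fs.delete(file.getPath(), false);
        } catch (IOException e) {
          // Leave the file for the next chore run.
        }
      });
    }
  }

  public void shutdown() throws InterruptedException {
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{code}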



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18309) Support multi threads in CleanerChore

2017-11-17 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18309:
--
Attachment: HBASE-18309.master.009.patch

Trigger QA again.

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: binlijin
>Assignee: Reid Chan
> Attachments: HBASE-18309.master.001.patch, 
> HBASE-18309.master.002.patch, HBASE-18309.master.004.patch, 
> HBASE-18309.master.005.patch, HBASE-18309.master.006.patch, 
> HBASE-18309.master.007.patch, HBASE-18309.master.008.patch, 
> HBASE-18309.master.009.patch, space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this is not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we use multiple threads for LogCleaner and the crash no longer 
> happens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18309) Support multi threads in CleanerChore

2017-11-17 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18309:
--
Attachment: (was: HBASE-18309.master.009.patch)

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: binlijin
>Assignee: Reid Chan
> Attachments: HBASE-18309.master.001.patch, 
> HBASE-18309.master.002.patch, HBASE-18309.master.004.patch, 
> HBASE-18309.master.005.patch, HBASE-18309.master.006.patch, 
> HBASE-18309.master.007.patch, HBASE-18309.master.008.patch, 
> space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this is not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we use multiple threads for LogCleaner and the crash no longer 
> happens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18309) Support multi threads in CleanerChore

2017-11-17 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18309:
--
Status: Open  (was: Patch Available)

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: binlijin
>Assignee: Reid Chan
> Attachments: HBASE-18309.master.001.patch, 
> HBASE-18309.master.002.patch, HBASE-18309.master.004.patch, 
> HBASE-18309.master.005.patch, HBASE-18309.master.006.patch, 
> HBASE-18309.master.007.patch, HBASE-18309.master.008.patch, 
> space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this is not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we use multiple threads for LogCleaner and the crash no longer 
> happens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19290) Reduce zk request when doing split log

2017-11-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257863#comment-16257863
 ] 

Ted Yu commented on HBASE-19290:


{code}
+   * @param numTasks current total number of available tasks
+   */
+  private int calculateAvailableSplitters(int availableRSs, int numTasks) {
{code}
Add javadoc for availableRSs.
{code}
+  taskGrabed += 
grabTask(ZNodePaths.joinZNode(watcher.znodePaths.splitLogZNode, 
paths.get(idx)));
{code}
Grabed -> Grabbed
{code}
+  while (!shouldStop) {
+Thread.sleep(1000);
{code}
Consider using a shorter sleep time.
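
To make the pattern under review concrete, a rough sketch in which a single 
children listing feeds several grab attempts and the idle sleep is short; the 
class and method names here are stand-ins, not the patch itself:

{code}
// List the splitlog znode once per round, grab up to 'available' tasks from
// that one listing, and poll with a short sleep when nothing was grabbed.
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class SplitTaskGrabber {
  private final ZooKeeper zk;
  private final String splitLogZNode;
  private volatile boolean shouldStop = false;

  public SplitTaskGrabber(ZooKeeper zk, String splitLogZNode) {
    this.zk = zk;
    this.splitLogZNode = splitLogZNode;
  }

  public void run(int available) throws KeeperException, InterruptedException {
    while (!shouldStop) {
      // One getChildren call serves many grab attempts, cutting zk traffic.
      List<String> paths = zk.getChildren(splitLogZNode, false);
      int grabbed = 0;
      for (String path : paths) {
        if (grabbed >= available) {
          break;
        }
        grabbed += grabTask(splitLogZNode + "/" + path); // returns 0 or 1
      }
      if (grabbed == 0) {
        Thread.sleep(100); // shorter than the 1000ms flagged above
      }
    }
  }

  private int grabTask(String taskZNode) {
    // Placeholder: the real worker would try to take ownership of the znode.
    return 0;
  }
}
{code}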


> Reduce zk request when doing split log
> --
>
> Key: HBASE-19290
> URL: https://issues.apache.org/jira/browse/HBASE-19290
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-19290.master.001.patch
>
>
> We observed that once the cluster has 1000+ nodes, when hundreds of nodes 
> abort and split-log starts, the split is very, very slow, and we found the 
> region servers and master waiting on zookeeper responses, so we need to 
> reduce zookeeper requests and pressure for big clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257852#comment-16257852
 ] 

Hudson commented on HBASE-19239:


FAILURE: Integrated in Jenkins build HBase-1.5 #165 (See 
[https://builds.apache.org/job/HBase-1.5/165/])
HBASE-19239 Fix findbugs and error-prone issues (apurtell: rev 
c179d5144f9caead37dddcdd6c92cd52dd4b50bd)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/ClassLoaderTestHelper.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/JitterScheduledThreadPoolExecutorImpl.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferArray.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/AbstractByteRange.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/LongAdder.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassLoaderBase.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorClassLoader.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/JVM.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Striped64.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/Triple.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/trace/SpanReceiverHost.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/AsyncConsoleAppender.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java
* (edit) hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/types/CopyOnWriteArrayMap.java
* (edit) 

[jira] [Commented] (HBASE-19297) Nightly job for master timing out in unit tests

2017-11-17 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257850#comment-16257850
 ] 

Sean Busbey commented on HBASE-19297:
-

Pushed a feature branch based on master with the timeout bumped to 18 hours.

> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> The nightly job is now timing out at 6 hours on master.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Work started] (HBASE-19297) Nightly job for master timing out in unit tests

2017-11-17 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19297 started by Sean Busbey.
---
> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> The nightly job is now timing out at 6 hours on master.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19297) Nightly job for master timing out in unit tests

2017-11-17 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-19297:
---

 Summary: Nightly job for master timing out in unit tests
 Key: HBASE-19297
 URL: https://issues.apache.org/jira/browse/HBASE-19297
 Project: HBase
  Issue Type: Task
  Components: test
Affects Versions: 3.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey


The nightly job is now timing out at 6 hours on master.

Looks like it was still making progress, just in the midst of the hbase-rest 
module



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19296) Fix findbugs and error-prone warnings (branch-2)

2017-11-17 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-19296:
--

 Summary: Fix findbugs and error-prone warnings (branch-2)
 Key: HBASE-19296
 URL: https://issues.apache.org/jira/browse/HBASE-19296
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 2.0.0-beta-1


Fix important findbugs and error-prone warnings on branch-2 / master. Start 
with a forward port pass from HBASE-19239. Assume rejected hunks need a new 
analysis. Do that analysis.  




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19239:
---
Attachment: HBASE-19239-branch-1.patch

> Fix findbugs and error-prone warnings (branch-1)
> 
>
> Key: HBASE-19239
> URL: https://issues.apache.org/jira/browse/HBASE-19239
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19239-branch-1.patch, HBASE-19239-branch-1.patch, 
> HBASE-19239-branch-1.patch
>
>
> Fix important findbugs and error-prone warnings on branch-1.4 / branch-1. 
> Forward port as appropriate. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-19239.

  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to branch-1 and branch-1.4

> Fix findbugs and error-prone warnings (branch-1)
> 
>
> Key: HBASE-19239
> URL: https://issues.apache.org/jira/browse/HBASE-19239
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19239-branch-1.patch, HBASE-19239-branch-1.patch
>
>
> Fix important findbugs and error-prone warnings on branch-1.4 / branch-1. 
> Forward port as appropriate. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19239:
---
Summary: Fix findbugs and error-prone warnings (branch-1)  (was: Fix 
findbugs and error-prone warnings)

> Fix findbugs and error-prone warnings (branch-1)
> 
>
> Key: HBASE-19239
> URL: https://issues.apache.org/jira/browse/HBASE-19239
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19239-branch-1.patch, HBASE-19239-branch-1.patch
>
>
> Fix important findbugs and error-prone warnings on branch-1.4 / branch-1. 
> Forward port as appropriate. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19239) Fix findbugs and error-prone warnings (branch-1)

2017-11-17 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19239:
---
Fix Version/s: (was: 2.0.0-beta-1)
   (was: 3.0.0)

> Fix findbugs and error-prone warnings (branch-1)
> 
>
> Key: HBASE-19239
> URL: https://issues.apache.org/jira/browse/HBASE-19239
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19239-branch-1.patch, HBASE-19239-branch-1.patch
>
>
> Fix important findbugs and error-prone warnings on branch-1.4 / branch-1. 
> Forward port as appropriate. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257841#comment-16257841
 ] 

Andrew Purtell commented on HBASE-19285:


bq. Well, color me surprised. This seems to be "OK".

Like you say, maybe it's the aggregation up to tables that keeps both the heap 
utilization and the perf impact greatly reduced from before. I didn't do any of 
the prior measurement, so unfortunately I have no insight there. 

bq. I think I could even be swayed to have them be default=false to remove all 
possibility for unexpected perf impact :)

If there is not a significant impact, defaulting to true should be fine. They 
can be disabled if there is a problem. A default of false is going to surprise 
people more, I think. Earlier branch RMs may have a different opinion. 

The 1.4 RC keeps getting pushed back, but I am about to commit HBASE-19239 and 
then have nothing else to do but spin binaries and exercise them. Will do that 
after getting back from the Thanksgiving holiday. If you'd like to see these go 
out in 1.4, get a review and commit them before then, and I'll also do a perf 
assessment. 

> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like these per-table 
> aggregations weren't intentionally omitted, just never re-implemented after 
> the per-region removal. They're some really nice out-of-the-box metrics we 
> can provide to our users/admins as long as it's not detrimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257830#comment-16257830
 ] 

Josh Elser commented on HBASE-19285:


Well, color me surprised. This seems to be "OK".

* As we'd expect, aggregating up to the table level (instead of regions) helps 
keep memory use in check, regardless of the size of the table. Confirmed this 
by JFR object count: 95% is in the write path (bytes, 
ConcurrentSkipListMap/Nodes, KeyValue, etc).
* With a sampling rate of 10ms, didn't get a single hit in a FastLongHistogram 
or the corresponding metrics2 Histogram wrappers. Nice confirmation.
* Using {{jmap -histo:live}} to compare the current branch-1.3 with my working 
patch, the amount of extra heap with these new per-table histograms (again, for 
a single table) appears to be a few KB -- nominal.

I need to clean up how the metrics are being shown via JMX (each table is 
getting its own "sub", which is gross), but I think I'm very close to a first 
patch. [~apurtell] do you (and/or [~enis] given the prior art) have any other 
testing/data you'd like me to collect while I have my harness set up?

I'm also wondering whether there's value in adding a configuration property 
that can disable these histograms. I think I could even be swayed to have them 
be default=false to remove all possibility of unexpected perf impact :)
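
For a concrete shape of that idea, a minimal sketch of per-table latency 
aggregation behind a kill-switch flag; the class, the bucket scheme, and the 
flag are illustrative assumptions, not the working patch:

{code}
// One small fixed-bucket histogram per table keeps the footprint to a few KB
// per table, and a single boolean guards all update work when disabled.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

public class PerTableLatencies {
  /** Power-of-two latency buckets in ms: <1, <2, <4, ... */
  static final class Histo {
    private final LongAdder[] buckets = new LongAdder[32];
    Histo() {
      for (int i = 0; i < buckets.length; i++) {
        buckets[i] = new LongAdder();
      }
    }
    void update(long latencyMs) {
      int idx = 64 - Long.numberOfLeadingZeros(Math.max(latencyMs, 1));
      buckets[Math.min(idx, buckets.length - 1)].increment();
    }
  }

  private final boolean enabled; // hypothetical config property
  private final ConcurrentMap<String, Histo> getLatencies =
      new ConcurrentHashMap<>();

  public PerTableLatencies(boolean enabled) {
    this.enabled = enabled;
  }

  public void updateGet(String table, long latencyMs) {
    if (!enabled) {
      return; // nothing beyond the flag check when disabled
    }
    getLatencies.computeIfAbsent(table, t -> new Histo()).update(latencyMs);
  }
}
{code}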

> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like these per-table 
> aggregations weren't intentionally omitted, just never re-implemented after 
> the per-region removal. They're some really nice out-of-the-box metrics we 
> can provide to our users/admins as long as it's not detrimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19035) Miss metrics when coprocessor use region scanner to read data

2017-11-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19035:
---
Attachment: HBASE-19035.branch-1.patch

> Miss metrics when coprocessor use region scanner to read data
> -
>
> Key: HBASE-19035
> URL: https://issues.apache.org/jira/browse/HBASE-19035
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19035.branch-1.001.patch, 
> HBASE-19035.branch-1.patch, HBASE-19035.branch-1.patch, 
> HBASE-19035.branch-1.patch, HBASE-19035.master.001.patch, 
> HBASE-19035.master.002.patch, HBASE-19035.master.003.patch, 
> HBASE-19035.master.003.patch
>
>
> The Region interface is exposed to coprocessors, so a coprocessor can use 
> getScanner to get a region scanner to read data. But the scan metrics were 
> only updated at the region server level, so we will miss some scan metrics 
> for reads issued from a coprocessor.
> || Region Operation || When to update requests metric ||
> | get | update read metric in nextRaw() |
> | put | update write metric in batchMutate() |
> | delete | update write metric in batchMutate() |
> | increment | update read metric by get() and  update write metric in 
> doDelta() |
> | append | update read metric by get() and  update write metric in doDelta() |
> | mutateRow | update write metric in processRowsWithLocks() |
> | mutateRowsWithLocks | update write metric in processRowsWithLocks() |
> | batchMutate | update write metric in batchMutate() |
> | checkAndMutate | update read metric by get() and  update write metric by 
> mutateRow() |
> | checkAndRowMutate | update read metric by get() and  update write metric by 
> doBatchMutate() |
> | processRowsWithLocks | update write metric in processRowsWithLocks() |
> 1. Move read requests to the region level, because RegionScanner is exposed 
> to CPs.
> 2. Update the write requests count in processRowsWithLocks. This was missed 
> in the previous implementation, too.
> 3. Remove requestRowActionCount in RSRpcServices. This metric can be computed 
> from the region's readRequestsCount and writeRequestsCount.
> Upload to review board: https://reviews.apache.org/r/63579/
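
The shape of the fix can be sketched as a delegating scanner that counts at the 
region level inside nextRaw(), so reads through a coprocessor-obtained scanner 
are counted too; the trimmed Scanner interface below is a stand-in for HBase's 
RegionScanner, not the actual patch:

{code}
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

interface Scanner<C> {
  boolean nextRaw(List<C> out) throws IOException;
}

public class MeteredScanner<C> implements Scanner<C> {
  private final Scanner<C> delegate;
  private final LongAdder regionReadRequests; // region-level, not RPC-level

  public MeteredScanner(Scanner<C> delegate, LongAdder regionReadRequests) {
    this.delegate = delegate;
    this.regionReadRequests = regionReadRequests;
  }

  @Override
  public boolean nextRaw(List<C> out) throws IOException {
    regionReadRequests.increment(); // counted here, so CP reads are included
    return delegate.nextRaw(out);
  }
}
{code}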



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19293) Support add a disabled state replication peer directly

2017-11-17 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257828#comment-16257828
 ] 

Guanghao Zhang commented on HBASE-19293:


bq. 1. It will be good to capture this replicate state in the audit log
Added a log line in the HMaster.addReplicationPeer method.
bq. Need to update the document section
Added it in the 002 patch.

> Support add a disabled state replication peer directly
> --
>
> Key: HBASE-19293
> URL: https://issues.apache.org/jira/browse/HBASE-19293
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19293.master.001.patch, 
> HBASE-19293.master.002.patch
>
>
> Now when adding a replication peer, the default state is enabled. If you 
> want to add a disabled replication peer, you need to add the peer first, then 
> disable it. It takes two steps to finish now.
> Use case for adding a disabled replication peer: a user wants to sync data 
> from a cluster A to a new peer cluster.
> 1. Add a disabled replication peer, and add the table to the peer config.
> 2. Take a snapshot of the table and export the snapshot to the peer cluster.
> 3. Restore the snapshot in the peer cluster.
> 4. Enable the peer and wait for all pending replication logs to be replicated 
> to the peer cluster.
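
A sketch of the proposed one-step flow, assuming the Admin overload this issue 
proposes (addReplicationPeer taking an enabled flag) lands as described; the 
peer id and cluster key are illustrative:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddDisabledPeer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
      peerConfig.setClusterKey("zk1,zk2,zk3:2181:/hbase"); // peer cluster
      // Step 1 of the use case: add the peer already disabled.
      admin.addReplicationPeer("peerA", peerConfig, false /* enabled */);
      // Steps 2-3: snapshot the table, export and restore it on the peer.
      // Step 4: enable the peer once the restore is done.
      admin.enableReplicationPeer("peerA");
    }
  }
}
{code}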



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19293) Support add a disabled state replication peer directly

2017-11-17 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19293:
---
Attachment: HBASE-19293.master.002.patch

> Support add a disabled state replication peer directly
> --
>
> Key: HBASE-19293
> URL: https://issues.apache.org/jira/browse/HBASE-19293
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19293.master.001.patch, 
> HBASE-19293.master.002.patch
>
>
> Now when adding a replication peer, the default state is enabled. If you 
> want to add a disabled replication peer, you need to add the peer first, then 
> disable it. It takes two steps to finish now.
> Use case for adding a disabled replication peer: a user wants to sync data 
> from a cluster A to a new peer cluster.
> 1. Add a disabled replication peer, and add the table to the peer config.
> 2. Take a snapshot of the table and export the snapshot to the peer cluster.
> 3. Restore the snapshot in the peer cluster.
> 4. Enable the peer and wait for all pending replication logs to be replicated 
> to the peer cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257822#comment-16257822
 ] 

Hadoop QA commented on HBASE-19163:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
53m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 50s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.io.TestHeapSize |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898290/HBASE-19163.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6ebb595cdab8 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 777b653b45 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9902/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9902/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9902/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HBASE-5761) [Thrift2] TDelete.deleteType defaults to TDeleteType.DELETE_COLUMNS, but the docs suggest otherwise

2017-11-17 Thread Arovit (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257818#comment-16257818
 ] 

Arovit commented on HBASE-5761:
---

[~enis] [~andrew.purt...@gmail.com] I can't assign this JIRA to myself, can 
you guys please help?

> [Thrift2] TDelete.deleteType defaults to TDeleteType.DELETE_COLUMNS, but the 
> docs suggest otherwise
> ---
>
> Key: HBASE-5761
> URL: https://issues.apache.org/jira/browse/HBASE-5761
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Wouter Bolsterlee
>Priority: Trivial
>  Labels: beginner
>
> It seems to me there is an inconsistency (or error) in the Thrift2 
> {{TDelete}} struct and its documentation. The docs for the {{TDelete}} struct 
> state:
> {quote}
> If no timestamp is specified the most recent version will be deleted.  To 
> delete all previous versions, specify the DELETE_COLUMNS TDeleteType.
> {quote}
> ...which implies that the default is {{TDeleteType.DELETE_COLUMN}} 
> (singular), not {{TDeleteType.DELETE_COLUMNS}} (plural).
> However, the {{deleteType}} field in the {{TDelete}} struct defaults to the 
> value {{1}}, which is {{TDeleteType.DELETE_COLUMNS}} (plural) in 
> {{/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift}}. The 
> field is currently (r1239241) defined as follows:
> {{4: optional TDeleteType deleteType = 1,}}
> I'd suggest that the default for this optional field is changed to 
> {{TDeleteType.DELETE_COLUMN}} (singular). The line above from the {{TDelete}} 
> struct would then become:
> {{4: optional TDeleteType deleteType = 0,}}
> Since this change just involves changing a {{1}} into a {{0}}, I'll leave the 
> trivial patch to someone who can also commit it in one go. Thanks in advance. 
> :)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257812#comment-16257812
 ] 

Ted Yu commented on HBASE-15320:


On reviewboard, specify 'hbase' in Groups.

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, 
> HBASE-15320.master.7.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.
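
As a rough illustration of the sink half described above, a hedged sketch of a 
Connect SinkTask persisting SinkRecords into an HBase table; the config key and 
the key/value mapping are naive placeholders, not a proposed design:

{code}
import java.util.Collection;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class HBaseSinkTask extends SinkTask {
  private Connection connection;
  private Table table;

  @Override
  public void start(Map<String, String> props) {
    try {
      connection = ConnectionFactory.createConnection();
      // "hbase.table" is a hypothetical connector config key.
      table = connection.getTable(TableName.valueOf(props.get("hbase.table")));
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      for (SinkRecord record : records) {
        // Naive mapping: record key -> row key, record value -> one cell.
        Put p = new Put(Bytes.toBytes(String.valueOf(record.key())));
        p.addColumn(Bytes.toBytes("d"), Bytes.toBytes("v"),
            Bytes.toBytes(String.valueOf(record.value())));
        table.put(p);
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
    // HBase writes above are synchronous; nothing extra to flush here.
  }

  @Override
  public void stop() {
    // Close table and connection; error handling omitted for brevity.
  }

  @Override
  public String version() {
    return "0.1-sketch";
  }
}
{code}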



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16574) Add backup / restore feature to refguide

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257780#comment-16257780
 ] 

Vladimir Rodionov commented on HBASE-16574:
---

lgtm, [~elserj]

+1

> Add backup / restore feature to refguide
> 
>
> Key: HBASE-16574
> URL: https://issues.apache.org/jira/browse/HBASE-16574
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Frank Welsch
>  Labels: backup
> Fix For: 2.0.0-beta-1
>
> Attachments: B command-line tools and configuration (updated).pdf, 
> Backup-and-Restore-Apache_19Sep2016.pdf, HBASE-16574.001.patch, 
> HBASE-16574.002.patch, HBASE-16574.003.branch-2.patch, 
> HBASE-16574.004.branch-2.patch, HBASE-16574.005.branch-2.patch, 
> HBASE-16574.006.branch-2.patch, HBASE-16574.007.branch-2.patch, 
> HBASE-16574.008.branch-2.patch, HBASE-16574.009.branch-2.patch, 
> apache_hbase_reference_guide_004.pdf, apache_hbase_reference_guide_007.pdf, 
> apache_hbase_reference_guide_008.pdf, apache_hbase_reference_guide_009.pdf, 
> hbase-book-16574.003.pdf, hbase_reference_guide.v1.pdf
>
>
> This issue is to add a backup / restore feature description to the hbase 
> refguide. The description should cover:
> * scenarios where backup / restore is used
> * backup / restore commands and sample usage
> * considerations in setup



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257772#comment-16257772
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #371 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/371/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev c9f6aa3b15f2edc892da55854d19f3647bb4b7b8)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98
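
The generic shape of the pattern being restored is classic double-checked 
lookup, so only one thread hits meta on a cache miss while the rest reuse its 
answer; the sketch below is deliberately simplified, not the ConnectionManager 
code:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LocationCache {
  private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
  private final Object userRegionLock = new Object();

  public String locate(String rowKey) {
    String loc = cache.get(rowKey);
    if (loc != null) {
      return loc; // fast path, no lock taken
    }
    synchronized (userRegionLock) {
      // Re-check: another thread may have loaded it while we waited.
      loc = cache.get(rowKey);
      if (loc == null) {
        loc = scanMetaForLocation(rowKey); // the expensive, serialized step
        cache.put(rowKey, loc);
      }
      return loc;
    }
  }

  private String scanMetaForLocation(String rowKey) {
    // Placeholder for the actual meta scan.
    return "regionserver-1:16020";
  }
}
{code}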



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257771#comment-16257771
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #281 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/281/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev ea9d1713d272a2a062a6fe3054fbcce37dbb10d6)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257770#comment-16257770
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #351 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/351/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev c9f6aa3b15f2edc892da55854d19f3647bb4b7b8)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257767#comment-16257767
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #278 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/278/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev ea9d1713d272a2a062a6fe3054fbcce37dbb10d6)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257766#comment-16257766
 ] 

Andrew Purtell commented on HBASE-19285:


Yep, you're going to want to use JFR or another profiler to check. 

> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like these per-table 
> aggregations weren't intentionally omitted, just never re-implemented after 
> the per-region removal. They're some really nice out-of-the-box metrics we 
> can provide to our users/admins as long as it's not detrimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17165) Add retry to LoadIncrementalHFiles tool

2017-11-17 Thread Mike Grimes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Grimes updated HBASE-17165:

Attachment: HBASE-17165.master.003.patch

> Add retry to LoadIncrementalHFiles tool
> ---
>
> Key: HBASE-17165
> URL: https://issues.apache.org/jira/browse/HBASE-17165
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase, HFile, tooling
>Affects Versions: 2.0.0, 1.2.3
>Reporter: Mike Grimes
>Assignee: Mike Grimes
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17165.branch-1.001.patch, 
> HBASE-17165.branch-1.001.patch, HBASE-17165.branch-1.2.001.patch, 
> HBASE-17165.branch-1.2.002.patch, HBASE-17165.branch-1.2.003.patch, 
> HBASE-17165.branch-1.2.004.patch, HBASE-17165.master.001.patch, 
> HBASE-17165.master.002.patch, HBASE-17165.master.003.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Because using the LoadIncrementalHFiles tool with S3 as the filesystem is 
> prone to failing with FileNotFoundExceptions caused by S3's eventual 
> consistency, simple, configurable retry logic was added.
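
The retry idea reduces to a bounded loop around the flaky filesystem call; the 
helper below is a sketch with illustrative retry count and delay handling, not 
the patch's actual configuration:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

public class BulkLoadRetry {
  public interface Call<T> {
    T run() throws IOException;
  }

  public static <T> T withRetries(int maxRetries, long sleepMs, Call<T> call)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return call.run();
      } catch (FileNotFoundException e) {
        last = e; // S3 listings may lag behind writes; back off and retry
        Thread.sleep(sleepMs);
      }
    }
    throw last;
  }
}
{code}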



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257758#comment-16257758
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.5 #163 (See 
[https://builds.apache.org/job/HBase-1.5/163/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev bda31bbf69b31bf0833afc4272e96e8f75caf9d4)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257759#comment-16257759
 ] 

Hudson commented on HBASE-19260:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1021 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1021/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev ea9d1713d272a2a062a6fe3054fbcce37dbb10d6)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have the code below to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed that logic along with 
> region-location-prefetching.
> We regard this as an unexpected behavior change and observed the phenomena 
> below in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257737#comment-16257737
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
So, the idea to offline a system table and then restore from a snapshot on 
error with clients 'advised' to stop writing as some sort of 2PC got buy-in 
from others? This is 'fault-tolerance'? Is there a write-up somewhere that 
explains why we have to offline and then restore a whole table (whatever its 
size) just because a particular op failed, and how it is simpler and more 
elegant than other solutions (what others?)? I'd like to read it. Otherwise, I 
just don't get it (neither will the operator whose cron job failed because the 
backup table was gone when it ran).
{quote}

Stack, you are just out of context right now, but I appreciate that you want to 
spend so much time digging into my code once again. Thanks.
You are the only one objecting to the snapshot-based approach, but I am still 
waiting for a single argument for why it is bad.
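
For readers trying to follow the dispute: the mechanism in question is checkpointing the backup system table with a snapshot and rolling the whole table back on failure. A minimal sketch against the public Admin API (the table and snapshot names are hypothetical; this illustrates the idea under debate, not the patch's actual code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotGuard {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName backupTable = TableName.valueOf("backup:system"); // hypothetical name
    String snapshotName = "backup_system_pre_session";          // hypothetical name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.snapshot(snapshotName, backupTable); // checkpoint before the op
      try {
        runBackupSession();                      // placeholder for the real work
      } catch (Exception failure) {
        // Roll the whole table back to the checkpoint; restore requires
        // the table to be offline.
        admin.disableTable(backupTable);
        admin.restoreSnapshot(snapshotName);
        admin.enableTable(backupTable);
        throw failure;
      } finally {
        admin.deleteSnapshot(snapshotName);      // drop the checkpoint
      }
    }
  }

  private static void runBackupSession() { /* backup steps elided */ }
}
{code}

The operational cost Stack objects to is visible in the rollback branch: restoreSnapshot requires the table to be disabled first, so the backup table is offline for the duration of the restore, however large the table is.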


> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257732#comment-16257732
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-1.3-IT #291 (See 
[https://builds.apache.org/job/HBase-1.3-IT/291/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev c9f6aa3b15f2edc892da55854d19f3647bb4b7b8)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have below codes to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed this logic along with 
> region-location prefetching. 
> We regard this as an unexpected behavior change, and we observed the following 
> phenomena in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257729#comment-16257729
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
You suggest I review code. I have been reviewing code. That's how we got here.
{quote}

Sure, you can start from the very beginning, Stack. Go ahead.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: (was: HBASE-19163.master.005.patch)

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throwing an error and failing the whole batch.
> There are two approaches to solve this issue.
> 1). When a batch contains multiple mutations against the same row, acquire the 
> row lock once per unique row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the remainder.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.
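
Background for the error above: ReentrantReadWriteLock keeps the shared hold count in 16 bits, so at most 65535 read holds may be outstanding; taking the row's read lock once per mutation can exceed that on a hot row. A sketch of approach 1, with Mutation and the lock table as hypothetical stand-ins for the HRegion internals:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: take each row's lock once per batch instead of once per
// mutation. Mutation, rowKey and apply() are hypothetical stand-ins for
// the region internals, not the committed patch.
class BatchLocker {
  private final Map<String, ReentrantReadWriteLock> rowLocks = new ConcurrentHashMap<>();

  void applyBatch(List<Mutation> batch) {
    Map<String, Lock> acquired = new HashMap<>();
    try {
      for (Mutation m : batch) {
        // One shared hold per unique row: locking per mutation on a hot row
        // would blow past the 65535-hold limit and throw
        // java.lang.Error: Maximum lock count exceeded.
        acquired.computeIfAbsent(m.rowKey, row -> {
          Lock lock = rowLocks
              .computeIfAbsent(row, r -> new ReentrantReadWriteLock())
              .readLock();
          lock.lock();
          return lock;
        });
      }
      for (Mutation m : batch) {
        apply(m); // every row touched by the batch is already locked
      }
    } finally {
      acquired.values().forEach(Lock::unlock);
    }
  }

  private void apply(Mutation m) { /* write path elided */ }

  static class Mutation {
    final String rowKey;
    Mutation(String rowKey) { this.rowKey = rowKey; }
  }
}
{code}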



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257723#comment-16257723
 ] 

Josh Elser commented on HBASE-19285:


Well, color me surprised. Initial tests show no impact. This makes me think I 
did something wrong.

{noformat}
$ bin/hbase pe --latency --nomapred --presplit=1000 --valueSize=1000 
--rows=10 sequentialWrite 30
{noformat}

{noformat}
### With table metrics

2017-11-17 22:36:53,939 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [59063, 59046, 60074, 59116, 
59436, 59829, 58789, 59049, 60074, 59087, 58798, 59934, 59328, 59325, 59790, 
59317, 59961, 60116, 58530, 59394, 59515, 58430, 58510, 59290, 59376, 58812, 
59762, 58578, 59929, 59388]
2017-11-17 22:36:53,940 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 58430ms  Max: 60116ms  Avg: 59321ms

2017-11-17 22:40:24,731 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [56313, 56755, 56521, 52565, 
54266, 53052, 56263, 54206, 56329, 51822, 52421, 54228, 53171, 56686, 56707, 
53018, 50557, 56529, 56326, 56704, 56519, 54258, 56503, 54292, 56338, 53167, 
56508, 53116, 52534, 56193]
2017-11-17 22:40:24,733 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 50557ms  Max: 56755ms  Avg: 54795ms

2017-11-17 22:44:45,189 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [64965, 63050, 63186, 63057, 
63068, 63450, 63929, 63121, 63272, 62800, 64640, 64476, 63888, 62953, 64284, 
62958, 64466, 64607, 64266, 63359, 64628, 64374, 64948, 63883, 64322, 63837, 
63264, 64348, 64670, 63408]
2017-11-17 22:44:45,190 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 62800ms  Max: 64965ms  Avg: 63849ms

### Without table metrics (stock)

2017-11-17 22:52:01,694 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [68170, 68175, 61443, 69265, 
69226, 69447, 69371, 60942, 69250, 69088, 65739, 65723, 65756, 65764, 63096, 
68151, 65782, 63133, 68174, 68193, 63076, 61539, 65809, 68167, 68984, 69373, 
67716, 68154, 65756, 69024]
2017-11-17 22:52:01,694 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 60942ms  Max: 69447ms  Avg: 66716ms

2017-11-17 22:57:35,774 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [62112, 61847, 62267, 61856, 
62088, 61863, 59649, 61915, 62237, 60353, 59469, 60340, 59375, 60396, 59450, 
59625, 62170, 60338, 59000, 61362, 60308, 61921, 61300, 61834, 62142, 59622, 
62203, 61935, 59639, 59467]
2017-11-17 22:57:35,775 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 59000ms  Max: 62267ms  Avg: 60936ms

2017-11-17 23:00:28,712 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Summary of timings (ms): [58838, 64460, 63499, 65246, 
59610, 60985, 63585, 60948, 65323, 58409, 58478, 63565, 63601, 6, 64464, 
65149, 60909, 59680, 65169, 61050, 65371, 65197, 59647, 60922, 59605, 65232, 
63591, 65147, 65307, 64475]
2017-11-17 23:00:28,713 INFO  [main] hbase.PerformanceEvaluation: 
[SequentialWriteTest] Min: 58409ms  Max: 65371ms  Avg: 62619ms
{noformat}

Single RS, 8G heap, a couple of memstore flush config tweaks to try to smooth 
out the 1K regions on a single RS. My only guess is that I'm still getting 
blocked on flushes, which dominates execution time. Let me poke with JFR or 
YourKit.

> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like it these per-table 
> aggregations weren't intentionally omitted, just never re-implemented after 
> the per-region removal. They're some really nice out-of-the-box metrics we 
> can provide to our users/admins as long as it's not detrimental.
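
To make the feature concrete: a per-table latency histogram is just a map from table name to a histogram that each operation's latency is recorded into, queryable at arbitrary percentiles. A self-contained sketch using HdrHistogram for illustration; HBase's real implementation lives in the hadoop-metrics2 layer, so all names here are assumptions:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.HdrHistogram.ConcurrentHistogram;

// Illustration only of what per-table latency histograms buy you; this is
// not the HBase metrics implementation.
class PerTableLatencies {
  private final ConcurrentMap<String, ConcurrentHistogram> byTable =
      new ConcurrentHashMap<>();

  void recordPutNanos(String table, long nanos) {
    // One histogram per table, created lazily on first use.
    byTable.computeIfAbsent(table, t -> new ConcurrentHistogram(3))
           .recordValue(nanos / 1_000L); // record in microseconds
  }

  long p99Micros(String table) {
    ConcurrentHistogram h = byTable.get(table);
    return h == null ? 0L : h.getValueAtPercentile(99.0);
  }
}
{code}

The benchmark question above is whether the per-operation recordValue on the write path costs anything measurable.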



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.005.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, HBASE-19163.master.005.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throwing an error and failing the whole batch.
> There are two approaches to solve this issue.
> 1). When a batch contains multiple mutations against the same row, acquire the 
> row lock once per unique row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the remainder.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257706#comment-16257706
 ] 

Hadoop QA commented on HBASE-17852:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} hbase-backup: The patch generated 4 new + 69 unchanged 
- 6 fixed = 73 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
64m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
16s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-17852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898274/HBASE-17852-v6.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 5973dfc868e3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ca74ec7740 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9901/artifact/patchprocess/diff-checkstyle-hbase-backup.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9901/testReport/ |
| modules | C: hbase-backup U: hbase-backup |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9901/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257704#comment-16257704
 ] 

Umesh Agashe commented on HBASE-19163:
--

+1, nice fix! Added a few nits on the latest patch. Please take a look. Thanks!
(non-binding)

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throwing an error and failing the whole batch.
> There are two approaches to solve this issue.
> 1). When a batch contains multiple mutations against the same row, acquire the 
> row lock once per unique row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the remainder.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19260:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-beta-1
   1.2.7
   1.3.2
   1.4.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.2+. Thanks for the nice fix [~carp84]

> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have below codes to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed this logic along with 
> region-location prefetching. 
> We regard this as an unexpected behavior change, and we observed the following 
> phenomena in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.004.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations against the same row in the batch; this exceeds the maximum of 
> 64k shared lock holds, throwing an error and failing the whole batch.
> There are two approaches to solve this issue.
> 1). When a batch contains multiple mutations against the same row, acquire the 
> row lock once per unique row instead of once per mutation.
> 2). Catch the error, process whatever has been locked so far, and loop back 
> for the remainder.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19271) Update ref guide about the async client to reflect the change in HBASE-19251

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257654#comment-16257654
 ] 

Hudson commented on HBASE-19271:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4070 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4070/])
HBASE-19271 Update ref guide about the async client to reflect the (appy: rev 
907b268fd402f3e796f9fc7d2e4e45105aecd7dc)
* (edit) src/main/asciidoc/_chapters/architecture.adoc


> Update ref guide about the async client to reflect the change in HBASE-19251
> 
>
> Key: HBASE-19271
> URL: https://issues.apache.org/jira/browse/HBASE-19271
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0
>
> Attachments: HBASE-19271.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19260) Add lock back to avoid parallel accessing meta to locate region

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257657#comment-16257657
 ] 

Hudson commented on HBASE-19260:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4070 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4070/])
HBASE-19260 Add lock back to avoid parallel accessing meta to locate (stack: 
rev 777b653b45e54c89dd69e86eff2b261054465623)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java


> Add lock back to avoid parallel accessing meta to locate region
> ---
>
> Key: HBASE-19260
> URL: https://issues.apache.org/jira/browse/HBASE-19260
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-3, 1.1.12
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-19260.patch, HBASE-19260.v2.patch
>
>
> In branch-0.98 we have below codes to avoid accessing meta in parallel in 
> {{HConnectionManager}}:
> {code}
>   Result regionInfoRow;
>   // This block guards against two threads trying to load the meta
>   // region at the same time. The first will load the meta region and
>   // the second will use the value that the first one found.
>   if (useCache) {
> if (TableName.META_TABLE_NAME.equals(parentTable) && usePrefetch 
> &&
> getRegionCachePrefetch(tableName)) {
>   synchronized (regionLockObject) {
> // Check the cache again for a hit in case some other thread 
> made the
> // same query while we were waiting on the lock.
> ...
>   }
> }
>   ...
> {code}
> while in HBASE-10018 we removed this logic along with 
> region-location prefetching. 
> We regard this as an unexpected behavior change, and we observed the following 
> phenomena in our production env:
> 1. Unnecessary connection setup to meta when multiple threads locate regions 
> in a client process
> 2. Priority handlers of the RS holding the meta region are exhausted; 
> applications keep retrying and cause a vicious circle
> To resolve this problem, we propose to add the {{userRegionLock}} back and 
> keep the behavior in accordance with 0.98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257656#comment-16257656
 ] 

Hudson commented on HBASE-19114:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4070 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4070/])
HBASE-19114 Split out o.a.h.h.zookeeper from hbase-server and (appy: rev 
330b0d05b99981b4bdc92c81b22ebb5be5ece155)
* (delete) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZkAclReset.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/ReplicationZKNodeCleaner.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MasterAddressTracker.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (delete) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithRemove.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKLeaderManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/ZKPermissionWatcher.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKNodeTracker.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestRecoverableZooKeeper.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/util/MockServer.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
* (edit) hbase-assembly/src/main/assembly/hadoop-two-compat.xml
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableStateManager.java
* (add) 
hbase-zookeeper/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKWatcher.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationHFileCleaner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTableBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java
* (delete) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperListener.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKSplitLog.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelsCache.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/DeletionListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitOrMergeTracker.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/InstancePending.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKServerTool.java
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestMetaReplicas.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/RegionServerFlushTableProcedureManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java
* (add) hbase-zookeeper/pom.xml
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKMetricsListener.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateHBaseImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWALLockup.java
* (add) 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java
* (edit) hbase-replication/pom.xml
* (delete) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* (edit) 

[jira] [Commented] (HBASE-19269) Reenable TestShellRSGroups

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257655#comment-16257655
 ] 

Hudson commented on HBASE-19269:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4070 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4070/])
HBASE-19269 Reenable TestShellRSGroups (stack: rev 
ca74ec774040ae78ebe4cca37730144f498ebb14)
* (edit) 
hbase-shell/src/test/rsgroup/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java
* (edit) hbase-shell/src/test/ruby/shell/rsgroup_shell_test.rb


> Reenable TestShellRSGroups
> --
>
> Key: HBASE-19269
> URL: https://issues.apache.org/jira/browse/HBASE-19269
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: Guangxu Cheng
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19269.master.001.patch, 
> HBASE-19269.master.002.patch
>
>
> It was disabled by the parent issue because RSGroups was failing. RSGroups 
> now works, but this test is still failing. Need to dig in (signal from these 
> jruby tests is murky).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Mike Wingert (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257627#comment-16257627
 ] 

Mike Wingert commented on HBASE-15320:
--

I'm putting it in now.

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, 
> HBASE-15320.master.7.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster-wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.
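
As a rough sketch of the sink side, assuming nothing about the eventual patch: a Connect SinkTask that buffers SinkRecords into an HBase table through a BufferedMutator. The column family "d", the "hbase.table" config key, and the key/value-to-string mapping are illustrative assumptions, not the connector's schema handling:

{code}
import java.util.Collection;
import java.util.Map;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Sketch only: one Put per SinkRecord, buffered by BufferedMutator and
// flushed when Connect commits offsets.
public class HBaseSinkTask extends SinkTask {
  private Connection connection;
  private BufferedMutator mutator;

  @Override
  public void start(Map<String, String> props) {
    try {
      connection = ConnectionFactory.createConnection();
      mutator = connection.getBufferedMutator(
          TableName.valueOf(props.get("hbase.table"))); // hypothetical config key
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      for (SinkRecord record : records) {
        Put put = new Put(Bytes.toBytes(String.valueOf(record.key())));
        put.addColumn(Bytes.toBytes("d"), Bytes.toBytes(record.topic()),
            Bytes.toBytes(String.valueOf(record.value())));
        mutator.mutate(put); // buffered; durable after flush()
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
    try {
      mutator.flush(); // ensure writes land before offsets are committed
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void stop() {
    try {
      mutator.close();
      connection.close();
    } catch (Exception ignored) {
    }
  }

  @Override
  public String version() {
    return "0.1-sketch";
  }
}
{code}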



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257621#comment-16257621
 ] 

stack commented on HBASE-17852:
---

My comments above are not in RB. Were they addressed?

Patches should include a description. It helps reviewers and those trying to 
follow behind. Yours have none.

You don't use the suggested patch-making tool either, in spite of an earlier 
request.

So, the idea to offline a system table and then restore from a snapshot on 
error with clients 'advised' to stop writing as some sort of 2PC got buy-in 
from others? This is 'fault-tolerance'? Is there a write-up somewhere that 
explains why we have to offline and then restore a whole table (whatever its 
size) just because a particular op failed, and how it is simpler and more 
elegant than other solutions (what others?)? I'd like to read it. Otherwise, I 
just don't get it (neither will the operator whose cron job failed because the 
backup table was gone when it ran).

You suggest I review code. I have been reviewing code. That's how we got here.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-17 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19114:
-
  Resolution: Fixed
Release Note: 
Splits out most of the ZooKeeper-related code into a separate new module: 
hbase-zookeeper.
Also, renames some ZooKeeper-related classes to follow a common naming pattern 
- a "ZK" prefix - as compared to the many different styles used earlier.

  was:TODO

  Status: Resolved  (was: Patch Available)

> Split out o.a.h.h.zookeeper from hbase-server and hbase-client
> --
>
> Key: HBASE-19114
> URL: https://issues.apache.org/jira/browse/HBASE-19114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19114.master.001.patch, 
> HBASE-19114.master.002.patch, HBASE-19114.master.003.patch, 
> HBASE-19114.master.004.patch, HBASE-19114.master.005.patch, 
> HBASE-19114.master.006.patch, HBASE-19114.master.007.patch, 
> HBASE-19114.master.008.patch
>
>
> Changes so far:
> - Moved DrainingServerTracker and RegionServerTracker to 
> hbase-server:o.a.h.h.master.
> - Moved SplitOrMergeTracker to oahh.master (because it depends on a PB)
> - Moving hbase-client:oahh.zookeeper.*  to hbase-zookeeper module.  After 
> [~Apache9]'s cleanup (HBASE-19200), hbase-client doesn't need them anymore 
> (except 3 classes).
> - Renamed some classes to use a consistent naming for classes - ZK instead of 
> mix of ZK, Zk , ZooKeeper. Couldn't rename following public classes: 
> MiniZooKeeperCluster, ZooKeeperConnectionException. Left RecoverableZooKeeper 
> for lack of better name. (suggestions?)
> - Sadly, can't move tests out because they depend on HBaseTestingUtility 
> (which defeats part of the purpose - trimming down hbase-server tests. We 
> need to promote more use of mocks in our tests)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257606#comment-16257606
 ] 

Ted Yu commented on HBASE-15320:


Can you put patch on reviewboard ?

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, 
> HBASE-15320.master.7.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster-wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-17 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257588#comment-16257588
 ] 

Appy edited comment on HBASE-19114 at 11/17/17 9:28 PM:


+1 unit. Pushed to branch-2 and master.
As mentioned earlier, the checkstyle errors are pre-existing ones, flagged only 
because the files moved.



was (Author: appy):
+1 unit. Committing.
As mentioned earlier, checkstyle errors are flagged from new files created as a 
result of the move.


> Split out o.a.h.h.zookeeper from hbase-server and hbase-client
> --
>
> Key: HBASE-19114
> URL: https://issues.apache.org/jira/browse/HBASE-19114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19114.master.001.patch, 
> HBASE-19114.master.002.patch, HBASE-19114.master.003.patch, 
> HBASE-19114.master.004.patch, HBASE-19114.master.005.patch, 
> HBASE-19114.master.006.patch, HBASE-19114.master.007.patch, 
> HBASE-19114.master.008.patch
>
>
> Changes so far:
> - Moved DrainingServerTracker and RegionServerTracker to 
> hbase-server:o.a.h.h.master.
> - Moved SplitOrMergeTracker to oahh.master (because it depends on a PB)
> - Moving hbase-client:oahh.zookeeper.*  to hbase-zookeeper module.  After 
> [~Apache9]'s cleanup (HBASE-19200), hbase-client doesn't need them anymore 
> (except 3 classes).
> - Renamed some classes to use a consistent naming for classes - ZK instead of 
> mix of ZK, Zk , ZooKeeper. Couldn't rename following public classes: 
> MiniZooKeeperCluster, ZooKeeperConnectionException. Left RecoverableZooKeeper 
> for lack of better name. (suggestions?)
> - Sadly, can't move tests out because they depend on HBaseTestingUtility 
> (which defeats part of the purpose - trimming down hbase-server tests. We 
> need to promote more use of mocks in our tests)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257590#comment-16257590
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 11/17/17 9:21 PM:
-

{quote}
Which comments were addressed?
{quote}
You can find them as fixed on RB:
https://reviews.apache.org/r/63155/


was (Author: vrodionov):
You can find them marked as fixed on RB:
https://reviews.apache.org/r/63155/

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-17 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257588#comment-16257588
 ] 

Appy commented on HBASE-19114:
--

+1 unit. Committing.
As mentioned earlier, checkstyle errors are flagged from new files created as a 
result of the move.


> Split out o.a.h.h.zookeeper from hbase-server and hbase-client
> --
>
> Key: HBASE-19114
> URL: https://issues.apache.org/jira/browse/HBASE-19114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19114.master.001.patch, 
> HBASE-19114.master.002.patch, HBASE-19114.master.003.patch, 
> HBASE-19114.master.004.patch, HBASE-19114.master.005.patch, 
> HBASE-19114.master.006.patch, HBASE-19114.master.007.patch, 
> HBASE-19114.master.008.patch
>
>
> Changes so far:
> - Moved DrainingServerTracker and RegionServerTracker to 
> hbase-server:o.a.h.h.master.
> - Moved SplitOrMergeTracker to oahh.master (because it depends on a PB)
> - Moving hbase-client:oahh.zookeeper.*  to hbase-zookeeper module.  After 
> [~Apache9]'s cleanup (HBASE-19200), hbase-client doesn't need them anymore 
> (except 3 classes).
> - Renamed some classes to use a consistent naming for classes - ZK instead of 
> mix of ZK, Zk , ZooKeeper. Couldn't rename following public classes: 
> MiniZooKeeperCluster, ZooKeeperConnectionException. Left RecoverableZooKeeper 
> for lack of better name. (suggestions?)
> - Sadly, can't move tests out because they depend on HBaseTestingUtility 
> (which defeats part of the purpose - trimming down hbase-server tests. We 
> need to promote more use of mocks in our tests)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257590#comment-16257590
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

You can find them marked as fixed on RB:
https://reviews.apache.org/r/63155/

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257586#comment-16257586
 ] 

huaxiang sun commented on HBASE-19163:
--

The latest patch is v2. To avoid confusion, I deleted the previous patch, which 
was named v3 by the submit-review tool.

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations in the batch against the same row; this exceeds the maximum of 
> ~64k shared lock holds and throws an error that fails the whole batch.
> There are two approaches to solve this issue:
> 1). When a batch contains multiple mutations against the same row, acquire 
> the row lock once per distinct row instead of once per mutation.
> 2). Catch the error, process whatever mutations did acquire locks, and loop 
> back for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.
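
As an illustration of approach 1, here is a minimal, hypothetical sketch (not 
the patch's actual code) of acquiring each row's shared lock once per batch 
rather than once per mutation; the lock map, class, and method names are 
assumptions:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch of approach 1: deduplicate the rows in a batch and take
// one shared (read) lock per distinct row, so a batch with many mutations
// against one row consumes a single hold of the ~64k read-hold budget.
public class BatchRowLocks {
  // rowLocks is assumed to compare keys by content (e.g. a TreeMap built with
  // Bytes.BYTES_COMPARATOR), mapping each row key to its lock.
  public static List<ReentrantReadWriteLock.ReadLock> acquireOncePerRow(
      Map<byte[], ReentrantReadWriteLock> rowLocks, List<byte[]> mutationRows) {
    // byte[] compares by identity, so use a content comparator to deduplicate.
    TreeSet<byte[]> distinctRows = new TreeSet<>(Bytes.BYTES_COMPARATOR);
    distinctRows.addAll(mutationRows);
    List<ReentrantReadWriteLock.ReadLock> acquired = new ArrayList<>();
    for (byte[] row : distinctRows) {
      ReentrantReadWriteLock.ReadLock lock = rowLocks.get(row).readLock();
      lock.lock(); // one hold per distinct row, not per mutation
      acquired.add(lock);
    }
    return acquired;
  }
}
{code}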



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: (was: HBASE-19163.master.003.patch)

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations in the batch against the same row; this exceeds the maximum of 
> ~64k shared lock holds and throws an error that fails the whole batch.
> There are two approaches to solve this issue:
> 1). When a batch contains multiple mutations against the same row, acquire 
> the row lock once per distinct row instead of once per mutation.
> 2). Catch the error, process whatever mutations did acquire locks, and loop 
> back for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-17 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163.master.002.patch

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.003.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations in the batch against the same row; this exceeds the maximum of 
> ~64k shared lock holds and throws an error that fails the whole batch.
> There are two approaches to solve this issue:
> 1). When a batch contains multiple mutations against the same row, acquire 
> the row lock once per distinct row instead of once per mutation.
> 2). Catch the error, process whatever mutations did acquire locks, and loop 
> back for the rest.
> With HBASE-17924, approach 1 seems easy to implement now. 
> Created the jira; will post updates/patches as the investigation moves forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257580#comment-16257580
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
The snapshot/restore of a whole system table strikes me as a bunch of moving 
parts. 
{quote}

There is only one backup system table. 

{quote}
I have to ask why we went to such an extreme.
{quote}

What is so extreme here? A snapshot of a system table? I consider this approach 
much simpler and more elegant than the alternatives.

{quote}
During restore, all clients are frozen out or something so they can't pollute 
the restored version?
{quote}

Yes. During a table restore operation, all clients of this table must be 
stopped. In theory this is not a hard requirement, just advice. But we truncate 
the table before the restore, and that can definitely affect incoming writes in 
unexpected ways. Is there any database system that allows writes to a table 
while that table is being restored?

Stack, if you have doubts about the implementation, I suggest you go over the 
code and point out the places where you think it has issues.
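
For concreteness, the restore sequence described above could look roughly like 
this hedged sketch against the public Admin API (illustrative only, not the 
patch's code; the class and method names are assumptions):

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Hedged sketch of the restore flow described above: take the table offline
// so clients cannot pollute it, then re-create its contents from the
// previously taken snapshot.
public class RestoreSystemTable {
  public static void restoreFromSnapshot(Admin admin, TableName table,
      String snapshotName) throws Exception {
    if (admin.isTableEnabled(table)) {
      admin.disableTable(table); // clients of this table are frozen out here
    }
    admin.restoreSnapshot(snapshotName); // wholesale replace of table contents
    admin.enableTable(table);
  }
}
{code}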



> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-11-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257404#comment-16257404
 ] 

Ted Yu edited comment on HBASE-19289 at 11/17/17 9:12 PM:
--

bq. updating LocalFileSystem to support hflush/hsync

This would be done in Hadoop, right?
Consider logging a HADOOP issue.


was (Author: yuzhih...@gmail.com):
bq. updating LocalFileSystem to support hflush/hsync

This would be done in Hadoop, right?
Consider logging a HADOOP issue - possibly after finding out which commit(s) 
changed the behavior in hadoop3.
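
A hedged illustration of the capability probe involved here, assuming Hadoop's 
StreamCapabilities interface (present in Hadoop 2.9+/3.x); the class name and 
path are made up:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

// Minimal sketch: ask the output stream whether it can hflush before relying
// on it, which is what the CommonFSUtils check that fails here is doing.
public class HflushProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/hflush-probe"))) {
      if (out.hasCapability(StreamCapabilities.HFLUSH)) {
        out.hflush(); // safe: the stream advertises hflush support
      } else {
        System.out.println("stream lacks hflush; a WAL writer should fail fast");
      }
    }
  }
}
{code}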

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-11-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257578#comment-16257578
 ] 

Ted Yu commented on HBASE-19289:


Logged HADOOP-15051

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19290) Reduce zk request when doing split log

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257575#comment-16257575
 ] 

Hadoop QA commented on HBASE-19290:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
59s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
52s{color} | {color:red} hbase-server: The patch generated 1 new + 5 unchanged 
- 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
45m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m 
21s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898139/HBASE-19290.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c5ddf3132a60 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 907b268fd4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9897/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9897/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-19114) Split out o.a.h.h.zookeeper from hbase-server and hbase-client

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257577#comment-16257577
 ] 

Hadoop QA commented on HBASE-19114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 78 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  6m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hbase-client: The patch generated 0 new + 0 
unchanged - 199 fixed = 0 total (was 199) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hbase-zookeeper: The patch generated 240 new + 0 
unchanged - 0 fixed = 240 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hbase-replication: The patch generated 3 new + 24 
unchanged - 11 fixed = 27 total (was 35) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
26s{color} | {color:red} hbase-server: The patch generated 18 new + 1442 
unchanged - 86 fixed = 1460 total (was 1528) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-rsgroup passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-shell passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} The patch hbase-it passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-assembly passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-client-project passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-shaded-client-project passed 
checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
45s{color} | {color:red} root: The patch generated 261 new + 1498 unchanged - 
296 fixed = 1759 total (was 1794) {color} |
| {color:green}+1{color} | {color:green} rubocop 

[jira] [Commented] (HBASE-19123) Purge 'complete' support from Coprocesor Observers

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257573#comment-16257573
 ] 

Hadoop QA commented on HBASE-19123:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hbase-server: The patch generated 0 new + 50 
unchanged - 3 fixed = 50 total (was 53) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
45m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 78m  
8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898244/HBASE-19123.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux db5f794f0b19 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 907b268fd4 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9896/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9896/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Purge 'complete' support from Coprocesor Observers
> --
>
>  

[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257572#comment-16257572
 ] 

stack commented on HBASE-17852:
---

bq. v6 addresses some of the RB comments

Which comments were addressed?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17852:
--
Attachment: HBASE-17852-v6.patch

v6 addresses some of the RB comments

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257561#comment-16257561
 ] 

stack commented on HBASE-17852:
---

bq. That is why we take a snapshot of the backup system table and, in case of a 
command (create/delete/merge) failure, restore the table from the previously 
taken snapshot.

Was this written up somewhere previously, and was the design shopped before 
others with buy-in?

The snapshot/restore of a whole system table strikes me as a bunch of moving 
parts. I have to ask why we went to such an extreme. 2PC is tough enough w/o 
offlining/restoring a whole meta table. During restore, all clients are frozen 
out or something so they can't pollute the restored version? Restore is not 
atomic, right? Couldn't we have something like a row-per-backup with a success 
tag set if all went well? (I've not been following closely -- pardon all the 
questions.)



> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Mike Wingert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated HBASE-15320:
-
Attachment: HBASE-15320.master.7.patch

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, 
> HBASE-15320.master.7.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.
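
The source-side mapping could look roughly like this hedged sketch (topic 
selection and serialization are the configurable parts the description 
mentions; the class and method names are illustrative assumptions):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hedged sketch of the source side: fan one WAL entry's cells out to a Kafka
// topic, keyed by row so all changes to a row land in the same partition.
public class WalToKafka {
  public static void publish(KafkaProducer<byte[], byte[]> producer,
      WAL.Entry entry, String topic) {
    for (Cell cell : entry.getEdit().getCells()) {
      byte[] key = CellUtil.cloneRow(cell);     // partition by row key
      byte[] value = CellUtil.cloneValue(cell); // raw value; could be Avro/JSON
      producer.send(new ProducerRecord<>(topic, key, value));
    }
  }
}
{code}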



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16574) Add backup / restore feature to refguide

2017-11-17 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-16574:
---
Attachment: apache_hbase_reference_guide_009.pdf

PDF for 009. BTW, it is simple to generate: {{mvn site -Dmaven.javadoc.skip 
-Dcheckstyle.skip -Dfindbugs.skip}}; the PDF shows up in 
target/site/apache_hbase_reference_guide.pdf after a few minutes (you do not 
have to wait for the entire mvn site command to complete).

> Add backup / restore feature to refguide
> 
>
> Key: HBASE-16574
> URL: https://issues.apache.org/jira/browse/HBASE-16574
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Frank Welsch
>  Labels: backup
> Fix For: 2.0.0-beta-1
>
> Attachments: B command-line tools and configuration (updated).pdf, 
> Backup-and-Restore-Apache_19Sep2016.pdf, HBASE-16574.001.patch, 
> HBASE-16574.002.patch, HBASE-16574.003.branch-2.patch, 
> HBASE-16574.004.branch-2.patch, HBASE-16574.005.branch-2.patch, 
> HBASE-16574.006.branch-2.patch, HBASE-16574.007.branch-2.patch, 
> HBASE-16574.008.branch-2.patch, HBASE-16574.009.branch-2.patch, 
> apache_hbase_reference_guide_004.pdf, apache_hbase_reference_guide_007.pdf, 
> apache_hbase_reference_guide_008.pdf, apache_hbase_reference_guide_009.pdf, 
> hbase-book-16574.003.pdf, hbase_reference_guide.v1.pdf
>
>
> This issue is to add a backup / restore feature description to the hbase 
> refguide.
> The description should cover:
> * scenarios where backup / restore is used
> * backup / restore commands and sample usage
> * considerations in setup



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18359) CoprocessorHConnection#getConnectionForEnvironment should read config from CoprocessorEnvironment

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257519#comment-16257519
 ] 

stack commented on HBASE-18359:
---

Filed HBASE-19295 for throwing exception if CP tries to change the 
Configuration they get from the CpEnv.

> CoprocessorHConnection#getConnectionForEnvironment should read config from 
> CoprocessorEnvironment
> -
>
> Key: HBASE-18359
> URL: https://issues.apache.org/jira/browse/HBASE-18359
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
> Fix For: 2.0.0
>
>
> It seems like the method getConnectionForEnvironment isn't doing the right 
> thing when it is creating a CoprocessorHConnection by reading the config from 
> HRegionServer and not from the env passed in. 
> If coprocessors want to use a CoprocessorHConnection with some custom config 
> settings, then they have no option but to configure it in the hbase-site.xml 
> of the region servers. This isn't ideal as a lot of times these "global" 
> level configs can have side effects. See PHOENIX-3974 as an example where 
> configuring ServerRpcControllerFactory (a Phoenix implementation of 
> RpcControllerFactory) could result in deadlocks. Or PHOENIX-3983 where 
> presence of this global config causes our index rebuild code to incorrectly 
> use handlers it shouldn't.
> If the CoprocessorHConnection created through getConnectionForEnvironment API 
> used the CoprocessorEnvironment config, then it would allow co-processors to 
> pass in their own config without needing to configure them in hbase-site.xml. 
> The change would be simple. Basically change the below
> {code}
> if (services instanceof HRegionServer) {
> return new CoprocessorHConnection((HRegionServer) services);
> }
> {code}
> to
> {code}
> if (services instanceof HRegionServer) {
> return new CoprocessorHConnection(env.getConfiguration(), 
> (HRegionServer) services);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-11-17 Thread stack (JIRA)
stack created HBASE-19295:
-

 Summary: The Configuration returned by CPEnv should be read-only.
 Key: HBASE-19295
 URL: https://issues.apache.org/jira/browse/HBASE-19295
 Project: HBase
  Issue Type: Task
Reporter: stack


The Configuration a CP gets when it calls getConfiguration on the environment 
is that of the RegionServer. The CP should not be able to modify this config. 
We should throw an exception if it tries to write to it.

Ditto w/ the Connection they can get from the env. At a minimum, they should 
not be able to close it.
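
One hedged way to sketch the read-only Configuration idea (a full version 
would override every setter; this is illustrative, not the eventual 
implementation):

{code}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: hand coprocessors a Configuration wrapper that refuses
// writes, so a CP cannot mutate the RegionServer's live config.
public class ReadOnlyConfiguration extends Configuration {
  public ReadOnlyConfiguration(Configuration source) {
    super(source); // copy-construct so reads still see the RS values
  }

  @Override
  public void set(String name, String value) {
    throw new UnsupportedOperationException("Coprocessor config is read-only");
  }

  @Override
  public void unset(String name) {
    throw new UnsupportedOperationException("Coprocessor config is read-only");
  }
}
{code}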



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18359) CoprocessorHConnection#getConnectionForEnvironment should read config from CoprocessorEnvironment

2017-11-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257514#comment-16257514
 ] 

stack commented on HBASE-18359:
---

[~samarth.j...@gmail.com] do you even need a distinct Connection apart from the 
RS one when the RS one is doing short-circuit? Thanks for the input.

> CoprocessorHConnection#getConnectionForEnvironment should read config from 
> CoprocessorEnvironment
> -
>
> Key: HBASE-18359
> URL: https://issues.apache.org/jira/browse/HBASE-18359
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
> Fix For: 2.0.0
>
>
> It seems like the method getConnectionForEnvironment isn't doing the right 
> thing when it is creating a CoprocessorHConnection by reading the config from 
> HRegionServer and not from the env passed in. 
> If coprocessors want to use a CoprocessorHConnection with some custom config 
> settings, then they have no option but to configure it in the hbase-site.xml 
> of the region servers. This isn't ideal as a lot of times these "global" 
> level configs can have side effects. See PHOENIX-3974 as an example where 
> configuring ServerRpcControllerFactory (a Phoenix implementation of 
> RpcControllerFactory) could result in deadlocks. Or PHOENIX-3983 where 
> presence of this global config causes our index rebuild code to incorrectly 
> use handlers it shouldn't.
> If the CoprocessorHConnection created through getConnectionForEnvironment API 
> used the CoprocessorEnvironment config, then it would allow co-processors to 
> pass in their own config without needing to configure them in hbase-site.xml. 
> The change would be simple. Basically change the below
> {code}
> if (services instanceof HRegionServer) {
> return new CoprocessorHConnection((HRegionServer) services);
> }
> {code}
> to
> {code}
> if (services instanceof HRegionServer) {
> return new CoprocessorHConnection(env.getConfiguration(), 
> (HRegionServer) services);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-17 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257502#comment-16257502
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

Backup/delete/merge operations must be executed in a transactional manner. The 
backup system table keeps the metadata that allows backups and other commands 
to run. During a backup create, delete, or merge we update the backup system 
table multiple times, and we do not want these updates to be partial ones when 
the operation fails, because *partial updates will prevent further 
backups/deletes/merges after a failure*.

That is why we take a snapshot of the backup system table and, in case of a 
command (create/delete/merge) failure, restore the table from the previously 
taken snapshot.

By consistency of data I mean that no partial updates should be visible to a 
user after an operation completes (either successfully or not). Partial updates 
in the backup system table == corruption of the system table and MUST be 
avoided. When corruption happens, the only way to recover the backup system is 
to truncate the backup system table and re-run all backups in full mode.
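
Against the public Admin API, the guard described here could look like this 
hedged sketch (the class name and snapshot naming are assumptions, not the 
actual patch):

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Hedged sketch: snapshot the backup system table before a multi-update
// command, roll back to the snapshot if the command fails, and drop the
// snapshot once the command commits.
public class BackupTableGuard {
  public static void runGuarded(Admin admin, TableName backupTable,
      Runnable command) throws Exception {
    String snapshotName = "backup_system_" + System.currentTimeMillis();
    admin.snapshot(snapshotName, backupTable); // point-in-time rollback state
    try {
      command.run(); // create/delete/merge updates the table multiple times
      admin.deleteSnapshot(snapshotName); // success: keep the committed state
    } catch (RuntimeException e) {
      admin.disableTable(backupTable);
      admin.restoreSnapshot(snapshotName); // failure: erase partial updates
      admin.enableTable(backupTable);
      admin.deleteSnapshot(snapshotName);
      throw e;
    }
  }
}
{code}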




> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Mike Wingert (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257494#comment-16257494
 ] 

Mike Wingert commented on HBASE-15320:
--

I've attached some docs; let me know if they are not detailed enough.

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15320) HBase connector for Kafka Connect

2017-11-17 Thread Mike Wingert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated HBASE-15320:
-
Attachment: HBASE-15320.pdf

> HBase connector for Kafka Connect
> -
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Mike Wingert
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-15320.master.1.patch, HBASE-15320.master.2.patch, 
> HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, 
> HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, HBASE-15320.pdf
>
>
> Implement an HBase connector with source and sink tasks for the Connect 
> framework (http://docs.confluent.io/2.0.0/connect/index.html) available in 
> Kafka 0.9 and later.
> See also: 
> http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task)
>  could be implemented as a replication endpoint or WALObserver, publishing 
> cluster wide change streams from the WAL to one or more topics, with 
> configurable mapping and partitioning of table changes to topics.  
> An HBase sink task 
> (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would 
> persist, with optional transformation (JSON? Avro?, map fields to native 
> schema?), Kafka SinkRecords into HBase tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19269) Reenable TestShellRSGroups

2017-11-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19269:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.0.0)
   2.0.0-beta-1
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thank you for the fixup [~andrewcheng]

> Reenable TestShellRSGroups
> --
>
> Key: HBASE-19269
> URL: https://issues.apache.org/jira/browse/HBASE-19269
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: Guangxu Cheng
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19269.master.001.patch, 
> HBASE-19269.master.002.patch
>
>
> It was disabled by the parent issue because RSGroups was failing. RSGroups 
> now works, but this test is still failing. Need to dig in (the signal from 
> these jruby tests is murky).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

