[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may fail when using the ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774855#comment-16774855
 ] 

stack commented on HBASE-21922:
---

Good one, lads. Yes to removing it as not trustworthy, given your findings 
above. It is flawed.



> BloomContext#sanityCheck may fail when using the ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the keys written to the bloom 
> filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's key for the bloom filter is bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}
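As an illustration of the ordering inversion described above, here is a minimal, self-contained sketch (using plain strings for readability; this is not the actual ROWPREFIX_DELIMITED implementation) showing how two correctly ordered row keys can yield prefixes in the opposite order:

{code:java}
public class DelimitedPrefixOrderingDemo {
  // Prefix up to (but not including) the first delimiter; the whole key if the
  // delimiter is absent.
  static String prefixBeforeDelimiter(String row, char delimiter) {
    int idx = row.indexOf(delimiter);
    return idx < 0 ? row : row.substring(0, idx);
  }

  public static void main(String[] args) {
    String row1 = "12345xxx";
    String row2 = "1235";
    String p1 = prefixBeforeDelimiter(row1, '5');  // "1234"
    String p2 = prefixBeforeDelimiter(row2, '5');  // "123"
    // row1 sorts before row2, but its prefix sorts after row2's prefix --
    // exactly the inversion that trips BloomContext#sanityCheck.
    System.out.println(row1.compareTo(row2) < 0);  // true
    System.out.println(p1.compareTo(p2) > 0);      // true
  }
}
{code}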





[jira] [Commented] (HBASE-21487) Concurrent modify table ops can lead to unexpected results

2019-02-21 Thread Syeda Arshiya Tabreen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774856#comment-16774856
 ] 

Syeda Arshiya Tabreen commented on HBASE-21487:
---

Thanks for reviewing, [~allan163]. I will address the above comment and submit 
the patch.

> Concurrent modify table ops can lead to unexpected results
> --
>
> Key: HBASE-21487
> URL: https://issues.apache.org/jira/browse/HBASE-21487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Syeda Arshiya Tabreen
>Assignee: Syeda Arshiya Tabreen
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: HBASE-21487.branch-2.02.patch, 
> HBASE-21487.branch-2.03.patch, HBASE-21487.branch-2.04.patch, 
> HBASE-21487.branch-2.patch
>
>
> Concurrent modifyTable or add/delete/modify columnFamily operations can lead 
> to incorrect results. After HBASE-18893, the behavior of add/delete/modify 
> column family under concurrent operations changed compared to branch-1. When 
> one client is adding cf2 and another one cf3, the final result in branch-1 is 
> cf1,cf2,cf3, but now either cf1,cf2 or cf1,cf3 will be the outcome, depending 
> on which ModifyTableProcedure executes last. This is because the new table 
> descriptor is constructed before submitting the ModifyTableProcedure in the 
> HMaster class, and it is not guarded by any lock.
> *Steps to reproduce*
> 1. Create table 't' with column family 'f1'
> 2. Client-1 and Client-2 request to add column families 'f2' and 'f3' on table 
> 't' concurrently.
> *Expected Result*
> The table should have three column families (f1,f2,f3)
> *Actual Result*
> Table 't' will have either (f1,f2) or (f1,f3)





[jira] [Updated] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread xuqinya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqinya updated HBASE-21942:

Attachment: HBASE-21942.master.003.patch

> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, 
> HBASE-21942.master.001.patch, HBASE-21942.master.002.patch, 
> HBASE-21942.master.003.patch, new_rsgroup.png, rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond





[jira] [Commented] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774847#comment-16774847
 ] 

Hadoop QA commented on HBASE-21942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
39s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959725/HBASE-21942.master.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  |
| uname | Linux 69a4e406eec9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 482b505796 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16089/artifact/patchprocess/patch-mvninstall-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16089/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16089/testReport/ |
| Max. process+thread count | 97 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16089/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, 
> HBASE-21942.master.001.patch, HBASE-21942.master.002.patch, new_rsgroup.png, 
> rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond





[jira] [Commented] (HBASE-21487) Concurrent modify table ops can lead to unexpected results

2019-02-21 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774845#comment-16774845
 ] 

Allan Yang commented on HBASE-21487:


{code}
+  public ModifyTableProcedure(final MasterProcedureEnv env,
+      final TableDescriptor newTableDescriptor, final ProcedurePrepareLatch latch,
+      final TableDescriptor oldTableDescriptor, final boolean shouldCheckDescriptor)
+      throws HBaseIOException {
+    this(env, newTableDescriptor, latch);
+    this.unmodifiedTableDescriptor = oldTableDescriptor;
+    this.shouldCheckDescriptor = shouldCheckDescriptor;
+  }
+
{code}
A constructor with fewer arguments should call the constructor with more 
arguments, supplying default values for the extra arguments, not the other way 
around as in the patch. Apart from this, the patch looks great.
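For reference, a minimal sketch of that convention (a hypothetical class, not the actual ModifyTableProcedure code): the constructor with fewer arguments delegates to the one with more arguments and supplies the default values.

{code:java}
// Hypothetical example of the convention: the short constructor delegates to
// the long one, passing default values for the extra arguments.
public class ExampleProcedure {
  private final String newDescriptor;
  private final String oldDescriptor;
  private final boolean shouldCheckDescriptor;

  public ExampleProcedure(String newDescriptor) {
    // Fewer arguments: call the long form with defaults.
    this(newDescriptor, null, false);
  }

  public ExampleProcedure(String newDescriptor, String oldDescriptor,
      boolean shouldCheckDescriptor) {
    this.newDescriptor = newDescriptor;
    this.oldDescriptor = oldDescriptor;
    this.shouldCheckDescriptor = shouldCheckDescriptor;
  }
}
{code}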

> Concurrent modify table ops can lead to unexpected results
> --
>
> Key: HBASE-21487
> URL: https://issues.apache.org/jira/browse/HBASE-21487
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Syeda Arshiya Tabreen
>Assignee: Syeda Arshiya Tabreen
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: HBASE-21487.branch-2.02.patch, 
> HBASE-21487.branch-2.03.patch, HBASE-21487.branch-2.04.patch, 
> HBASE-21487.branch-2.patch
>
>
> Concurrent modifyTable or add/delete/modify columnFamily operations can lead 
> to incorrect results. After HBASE-18893, the behavior of add/delete/modify 
> column family under concurrent operations changed compared to branch-1. When 
> one client is adding cf2 and another one cf3, the final result in branch-1 is 
> cf1,cf2,cf3, but now either cf1,cf2 or cf1,cf3 will be the outcome, depending 
> on which ModifyTableProcedure executes last. This is because the new table 
> descriptor is constructed before submitting the ModifyTableProcedure in the 
> HMaster class, and it is not guarded by any lock.
> *Steps to reproduce*
> 1. Create table 't' with column family 'f1'
> 2. Client-1 and Client-2 request to add column families 'f2' and 'f3' on table 
> 't' concurrently.
> *Expected Result*
> The table should have three column families (f1,f2,f3)
> *Actual Result*
> Table 't' will have either (f1,f2) or (f1,f3)





[jira] [Commented] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774840#comment-16774840
 ] 

stack commented on HBASE-17094:
---

It is resolved.

Patch gets resolved after it is committed.

Your patch had some white space in it. I should have fixed it before committing 
but didn't.

You have been made a contributor. I assigned this issue to you.

You do not need to commit in github. You are not able to commit, not until you 
are made a committer by the project management committee (PMC) of hbase. You 
earn committership by contributing patches, helping on mailing lists, reviewing 
other folks' code -- by just being around and helping out generally.

Ask more questions if you are unclear about anything.

Thanks again for the patch [~pingsutw]

> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Assignee: kevin su
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman]. Let's add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.





[jira] [Updated] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread xuqinya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqinya updated HBASE-21942:

Attachment: HBASE-21942.master.002.patch

> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, 
> HBASE-21942.master.001.patch, HBASE-21942.master.002.patch, new_rsgroup.png, 
> rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond





[jira] [Updated] (HBASE-21810) bulkload: support setting hfile compression on the client

2019-02-21 Thread Yechao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-21810:

Affects Version/s: (was: 2.1.2)
   2.1.3

> bulkload: support setting hfile compression on the client
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 1.2.10, 2.0.4, 2.1.3
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-2.001.patch, 
> HBASE-21810.master.001.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles with the compression 
> taken from the table (column family) compression setting.
> It would sometimes be useful if the compression could be set on the client.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we 
> can set the compression of the bulkload HFiles without changing the table 
> compression.
> 2. Bulkload HFiles written without compression while the table compression is 
> gz/zstd/snappy...: this reduces HFile creation time, and compaction will 
> compress the HFiles eventually.
> 3. Sometimes the YARN nodes (where the HFiles are created by the reducers) or 
> the bulkload client have no compression library, but the HBase cluster does; 
> a client-side setting is useful in this case.
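A hedged sketch of what a client-side override could look like in a job driver that generates HFiles for bulk load. Only HBaseConfiguration, ConnectionFactory, and HFileOutputFormat2.configureIncrementalLoad are existing APIs here; the configuration key hfile.compression.override is purely hypothetical (the real key is whatever the attached patch defines).

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadCompressionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulkload-hfile-generation");
    TableName name = TableName.valueOf("my_table");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
      // HYPOTHETICAL key: ask the job to write SNAPPY HFiles regardless of the
      // column family's configured compression.
      job.getConfiguration().set("hfile.compression.override", "SNAPPY");
      // ... set mapper/input as usual, then job.waitForCompletion(true);
    }
  }
}
{code}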





[jira] [Commented] (HBASE-20587) Replace Jackson with shaded thirdparty gson

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774835#comment-16774835
 ] 

Hadoop QA commented on HBASE-20587:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} hbase-common generated 0 new + 41 unchanged - 1 
fixed = 41 total (was 42) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} hbase-metrics in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hbase-http generated 1 new + 10 unchanged - 0 fixed = 11 total 
(was 10) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
17s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hbase-common: The patch generated 0 new + 0 
unchanged - 18 fixed = 0 total (was 18) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch passed checkstyle in hbase-metrics {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hbase-client: The patch generated 0 new + 11 
unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hbase-http: The patch generated 0 new + 0 unchanged 
- 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} hbase-server: The patch generated 0 new + 110 
unchanged - 28 fixed = 110 total (was 138) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 113 
unchanged - 3 fixed = 113 total (was 116) {color} |
| {color:green}+1{color} | {color:green} checkstyle 

[jira] [Commented] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774828#comment-16774828
 ] 

kevin su commented on HBASE-17094:
--

Thanks, [~stack], for helping me find what was wrong in my patch.

Do I need to change anything again, or is it already resolved?

BTW, if it's resolved, does that mean I have become a contributor (on the hbase GitHub)?

Do I need to commit it on GitHub?

> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Assignee: kevin su
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman]. Let's add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.





[jira] [Commented] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774827#comment-16774827
 ] 

Hadoop QA commented on HBASE-21942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
54s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959718/HBASE-21942.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  |
| uname | Linux 679b5f4443fe 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 482b505796 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16088/artifact/patchprocess/patch-mvninstall-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16088/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16088/testReport/ |
| Max. process+thread count | 87 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16088/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, 
> HBASE-21942.master.001.patch, new_rsgroup.png, rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond





[jira] [Updated] (HBASE-21810) bulkload: support setting hfile compression on the client

2019-02-21 Thread Yechao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yechao Chen updated HBASE-21810:

Description: 
hbase bulkload (HFileOutputFormat2) generate hfile ,the compression from the 
table(cf) compression,

if the compression can be set on client ,sometimes,it's useful,

some case in our production:

1、hfile bulkload replication between the data center with bandwidth limit, we 
can set the compression of the bulkload hfile not changing the table compression

2、bulkload hfile not set  compression ,but the table compression is 
gz/zstd/snappy... ,can reduce the hfile created time and compaction will make 
the hfile to compression finally

3、somethings the yarn nodes (hfile created by reduce) /dobulkload client has no 
compression lib,but the hbase cluster has,it's useful for this case

  was:
hbase bulkload (HFileOutputFormat2) generate hfile ,the compression from the 
table(cf) compression,

if the compression can be set on client ,somethings it's useful,

some case in our production:

1、hfile bulkload replication between the data center with bandwidth limit, we 
can set the compression of the bulkload hfile not changing the table compression

2、bulkload hfile not set  compression ,but the table compression is 
gz/zstd/snappy... ,can reduce the hfile created time and compaction will make 
the hfile to compression finally

3、somethings the yarn nodes (hfile created by reduce) /dobulkload client has no 
compression lib,but the hbase cluster has,it's useful for this case


> bulkload: support setting hfile compression on the client
> --
>
> Key: HBASE-21810
> URL: https://issues.apache.org/jira/browse/HBASE-21810
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.3.3, 1.4.9, 2.1.2, 1.2.10, 2.0.4
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Major
> Attachments: HBASE-21810.branch-1.001.patch, 
> HBASE-21810.branch-1.2.001.patch, HBASE-21810.branch-2.001.patch, 
> HBASE-21810.master.001.patch
>
>
> HBase bulkload (HFileOutputFormat2) generates HFiles with the compression 
> taken from the table (column family) compression setting.
> It would sometimes be useful if the compression could be set on the client.
> Some cases from our production:
> 1. HFile bulkload replication between data centers with limited bandwidth: we 
> can set the compression of the bulkload HFiles without changing the table 
> compression.
> 2. Bulkload HFiles written without compression while the table compression is 
> gz/zstd/snappy...: this reduces HFile creation time, and compaction will 
> compress the HFiles eventually.
> 3. Sometimes the YARN nodes (where the HFiles are created by the reducers) or 
> the bulkload client have no compression library, but the HBase cluster does; 
> a client-side setting is useful in this case.





[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774802#comment-16774802
 ] 

Toshihiro Suzuki commented on HBASE-21867:
--

It looks like the failed tests in the last QA are not related to the patch. 
They were successful locally. We can ignore the failure in the last QA.

> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.
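A minimal sketch of the general idea (a plain ExecutorService over the list of region directories; this is not the attached patch, and archiveRegionDir below is a stand-in for the existing per-region archiving call):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public class ParallelArchiveSketch {
  // Archive each region directory on a small thread pool instead of one by one.
  // 'archiveRegionDir' stands in for the existing per-region archiving logic.
  static void archiveInParallel(List<String> regionDirs, int threads,
      Consumer<String> archiveRegionDir) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    List<Future<?>> futures = new ArrayList<>();
    try {
      for (String dir : regionDirs) {
        futures.add(pool.submit(() -> archiveRegionDir.accept(dir)));
      }
      for (Future<?> f : futures) {
        f.get();  // surface the first archiving failure, as the serial code would
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}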





[jira] [Updated] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread xuqinya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqinya updated HBASE-21942:

Attachment: HBASE-21942.master.001.patch

> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, 
> HBASE-21942.master.001.patch, new_rsgroup.png, rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond





[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774785#comment-16774785
 ] 

Toshihiro Suzuki commented on HBASE-21867:
--

Thank you for taking a look at this, [~stack].

{quote}
Is this a workaround for fact that we can't do bulk operations against hdfs; 
i.e. pass it a bunch of files at a time to delete?
{quote}
Yes. We can't do bulk operations against HDFS, so I think we need to do it with 
multiple threads.

> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.





[jira] [Commented] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774772#comment-16774772
 ] 

Hadoop QA commented on HBASE-21730:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
55s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 17m  
1s{color} | {color:red} root in master failed. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  6m 
29s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 18m 
25s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  6m 
25s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959712/HBASE-21730.master.002.patch
 |
| Optional Tests |  dupname  asflicense  refguide  mvnsite  |
| uname | Linux 402ee5d1a83a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / dfb95cfd83 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16086/artifact/patchprocess/branch-mvnsite-root.txt
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16086/artifact/patchprocess/branch-site/book.html
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16086/artifact/patchprocess/patch-mvnsite-root.txt
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16086/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 87 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16086/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch, 
> HBASE-21730.master.002.patch
>
>






[jira] [Updated] (HBASE-21938) Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region servers' names without metrics

2019-02-21 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21938:
---
   Resolution: Fixed
Fix Version/s: 2.3.0
   2.2.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.2+. Thanks [~Yi Mei] for contributing.

> Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region 
> servers' names without metrics
> --
>
> Key: HBASE-21938
> URL: https://issues.apache.org/jira/browse/HBASE-21938
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21938.master.001.patch, 
> HBASE-21938.master.002.patch
>
>
> One of our production clusters (which has 20 regions) hit a protobuf 
> exception when calling getClusterStatus.
>  
> {code:java}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
> {code}
> And there are some client methods that call getClusterStatus but only need 
> the server names. The plan is to add a new option that returns only the 
> server names, so we can reduce the impact even if we hit this problem.
>  
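For context, a hedged sketch of how the new option is meant to be used from the client, assuming the ClusterMetrics.Option.SERVERS_NAME enum value and the ClusterMetrics#getServersName() accessor introduced by this patch:

{code:java}
import java.util.EnumSet;
import java.util.List;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ServersNameExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Only the server names are serialized, so the response stays small even
      // on clusters with a very large number of regions.
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.SERVERS_NAME));
      List<ServerName> servers = metrics.getServersName();
      servers.forEach(sn -> System.out.println(sn.getServerName()));
    }
  }
}
{code}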





[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may fail when using the ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774762#comment-16774762
 ] 

Guanghao Zhang commented on HBASE-21922:


{quote}If the ASCII value of the delimiter is smaller than that of all 
characters in front of the delimiter, then there will be no such problem.
{quote}
Yes... But it is difficult to limit the user's rowkey...

> BloomContext#sanityCheck may fail when using the ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the keys written to the bloom 
> filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's key for the bloom filter is bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}





[jira] [Commented] (HBASE-21879) Read HFile's block into a ByteBuffer directly instead of a byte[] to reduce young GC

2019-02-21 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774760#comment-16774760
 ] 

ramkrishna.s.vasudevan commented on HBASE-21879:


bq.And you can get a ByteBuffer from a netty ByteBuf, by calling the nioBuffer 
method, no different from our ByteBuff. And we have CompositeByteBuf where we 
can have multiple ByteBuf combined.

Thanks [~Apache9]. Yes, having seen some Netty code in recent years, I was 
thinking while typing the above comment that your reply would point to nioBuffer 
or CompositeByteBuf. The ref counting and the resource-leak detection may be 
different, so I could be wrong there.

Ya, it will be a big project. The Cell, CellComparators and CellUtils all need 
to be changed, and that alone will be a big change. Doing it in a separate 
branch will be better.
Thanks for the useful discussions here.
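For readers following the thread, a small standalone sketch of the two Netty facilities mentioned above (ByteBuf#nioBuffer and CompositeByteBuf), independent of any HBase code:

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class NettyByteBufDemo {
  public static void main(String[] args) {
    ByteBuf first = Unpooled.wrappedBuffer("hello ".getBytes(StandardCharsets.UTF_8));
    ByteBuf second = Unpooled.wrappedBuffer("hbase".getBytes(StandardCharsets.UTF_8));

    // Combine several ByteBufs into one logical buffer without copying.
    CompositeByteBuf composite = Unpooled.compositeBuffer();
    composite.addComponents(true, first, second);

    // Get a java.nio.ByteBuffer view, comparable to what HBase's ByteBuff exposes.
    ByteBuffer nio = composite.nioBuffer();
    byte[] out = new byte[nio.remaining()];
    nio.get(out);
    System.out.println(new String(out, StandardCharsets.UTF_8));  // hello hbase

    // Netty buffers are reference counted; release when done.
    composite.release();
  }
}
{code}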


> Read HFile's block into a ByteBuffer directly instead of a byte[] to reduce 
> young GC
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy the on-heap byte[] to the off-heap bucket cache 
> asynchronously. In my 100% get performance test, I also observed some frequent 
> young GCs; the largest memory footprint in the young gen should be the on-heap 
> block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of a 
> byte[] to reduce young GC. We did not implement this before because there was 
> no ByteBuffer reading interface in the older HDFS client, but 2.7+ supports it 
> now, so we can fix this now, I think.
> Will provide a patch and some performance comparison for this.
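A hedged sketch of the HDFS-side primitive the description relies on: FSDataInputStream implements ByteBufferReadable, so a block can be read into a (possibly direct, off-heap) ByteBuffer instead of a temporary byte[]. This is illustrative only, not the HFileBlock patch itself.

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ByteBufferReadSketch {
  // Read up to 'len' bytes at 'offset' into a ByteBuffer (which could be a
  // direct buffer backing an off-heap cache) instead of a temporary byte[].
  static int readBlock(FileSystem fs, Path hfile, long offset, int len) throws Exception {
    ByteBuffer buf = ByteBuffer.allocateDirect(len);
    try (FSDataInputStream in = fs.open(hfile)) {
      in.seek(offset);
      int total = 0;
      while (buf.hasRemaining()) {
        int n = in.read(buf);  // ByteBufferReadable; the description notes 2.7+ support
        if (n < 0) {
          break;               // EOF before the block was fully read
        }
        total += n;
      }
      return total;
    }
  }
}
{code}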





[jira] [Updated] (HBASE-21505) Several inconsistencies on information reported for Replication Sources by hbase shell status 'replication' command.

2019-02-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21505:
-
Attachment: HBASE-21505-branch-2.002.patch

> Several inconsistencies on information reported for Replication Sources by 
> hbase shell status 'replication' command.
> 
>
> Key: HBASE-21505
> URL: https://issues.apache.org/jira/browse/HBASE-21505
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 3.0.0, 1.4.6, 2.2.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: 
> 0001-HBASE-21505-initial-version-for-more-detailed-report.patch, 
> HBASE-21505-branch-2.001.patch, HBASE-21505-branch-2.001.patch, 
> HBASE-21505-branch-2.002.patch, HBASE-21505-branch-2.002.patch, 
> HBASE-21505-master.001.patch, HBASE-21505-master.002.patch, 
> HBASE-21505-master.003.patch, HBASE-21505-master.004.patch, 
> HBASE-21505-master.005.patch, HBASE-21505-master.006.patch, 
> HBASE-21505-master.007.patch, HBASE-21505-master.008.patch, 
> HBASE-21505-master.009.patch, HBASE-21505-master.010.patch, 
> HBASE-21505-master.011.patch, HBASE-21505-master.011.patch
>
>
> While reviewing hbase shell status 'replication' command, noticed the 
> following issues related to replication source section:
> 1) TimeStampsOfLastShippedOp keeps getting updated and increasing even when 
> no new edits were added to source, so nothing was really shipped. Test steps 
> performed:
> 1.1) Source cluster with only one table targeted to replication;
> 1.2) Added a new row, confirmed the row appeared in Target cluster;
> 1.3) Issued status 'replication' command in source, TimeStampsOfLastShippedOp 
> shows current timestamp T1.
> 1.4) Waited 30 seconds, no new data added to source. Issued status 
> 'replication' command, now shows timestamp T2.
> 2) When replication is stuck due to some connectivity issues or target 
> unavailability, if new edits are added in source, reported AgeOfLastShippedOp 
> is wrongly showing same value as "Replication Lag". This is incorrect, 
> AgeOfLastShippedOp should not change until there's indeed another edit 
> shipped to target. Test steps performed:
> 2.1) Source cluster with only one table targeted to replication;
> 2.2) Stopped target cluster RS;
> 2.3) Put a new row on source. Running status 'replication' command does show 
> lag increasing. TimeStampsOfLastShippedOp seems correct also, no further 
> updates as described on bullet #1 above.
> 2.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3) AgeOfLastShippedOp gets set to 0 even when a given edit had taken some 
> time before it got finally shipped to target. Test steps performed:
> 3.1) Source cluster with only one table targeted to replication;
> 3.2) Stopped target cluster RS;
> 3.3) Put a new row on source. 
> 3.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> T1:
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> T2:
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3.5) Restart target cluster RS and verified the new row appeared there. No 
> new edit added, but status 'replication' command reports AgeOfLastShippedOp 
> as 0, while it should be the diff between the time it concluded shipping at 
> target and the time it was added in source:
> {noformat}
> SOURCE: PeerID=1, AgeOfLastShippedOp=0, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=0
> {noformat}
> 4) When replication is stuck due to some connectivity issues or target 
> unavailability, if RS is restarted, once recovered queue source is started, 
> TimeStampsOfLastShippedOp is set to initial java date (Thu Jan 01 01:00:00 
> GMT 1970, for example), thus "Replication Lag" also gives a complete 
> inaccurate value. 
> Tests performed:
> 4.1) Source cluster with only one table targeted to replication;
> 4.2) Stopped target cluster RS;
> 4.3) Put a new row on source, restart RS on source, waited a few seconds for 
> recovery queue source to startup, then 

[jira] [Commented] (HBASE-21934) SplitWALProcedure gets stuck during ITBLL

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774744#comment-16774744
 ] 

Hadoop QA commented on HBASE-21934:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
46s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} hbase-server: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  3m 
17s{color} | {color:red} patch has 11 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  2m  
3s{color} | {color:red} The patch causes 11 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m 
15s{color} | {color:red} The patch causes 11 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 53s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21934 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959709/HBASE-21934.master.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6bffd2c539f9 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 

[jira] [Commented] (HBASE-21934) SplitWALProcedure gets stuck during ITBLL

2019-02-21 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774741#comment-16774741
 ] 

Jingyun Tian commented on HBASE-21934:
--

[~Apache9] I got your point. Maybe we should make these methods synchronized? 
Like remoteCallFailed and remoteOperationCompleted. Then when we get the lock, 
we can check if the procedure is finished.
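A rough sketch of that idea (hypothetical class and member names, not the actual procedure code): take the same lock in both remote callbacks and re-check whether the procedure has already finished before acting on the callback.

{code:java}
// Hypothetical illustration of the synchronization discussed above; the names
// are made up and do not match the real SplitWALProcedure internals.
public class RemoteCallbackSketch {
  private boolean finished = false;

  public synchronized void remoteCallFailed(Throwable error) {
    if (finished) {
      return;  // the procedure already completed; ignore the late failure
    }
    // ... otherwise mark the remote call as failed so the procedure can retry ...
  }

  public synchronized void remoteOperationCompleted() {
    if (finished) {
      return;  // duplicate completion; nothing to do
    }
    finished = true;
    // ... wake up the procedure so it can move to its next state ...
  }
}
{code}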

> SplitWALProcedure gets stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.
>
> Attachments: HBASE-21934.master.001.patch, 
> HBASE-21934.master.002.patch
>
>
> I encountered a problem where the master assigns a SplitWALRemoteProcedure to 
> a region server, and the log of this region server says it failed to recover 
> the lease of the file. Then this region server is killed by chaosMonkey. As a 
> result, this procedure does not time out and hangs there forever.





[jira] [Created] (HBASE-21945) Maintain the original order when sending batched requests

2019-02-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-21945:
-

 Summary: Maintain the original order when sending batched requests
 Key: HBASE-21945
 URL: https://issues.apache.org/jira/browse/HBASE-21945
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


Found this when implementing HBASE-21717. In some UTs we put several good 
requests and bad requests together and expect only the bad ones to fail. This 
usually depends on the grouping at the RS side: if we group the good ones and 
the bad ones together as one batch, it will fail them all. So usually in a test 
we insert an increment or append in the middle to break them into two groups 
when executing at the RS side.

So if we do not maintain the order, the increment or append may come first or 
last at the RS side; then the good ones and bad ones will be grouped together, 
causing all of them to fail.
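As an illustration of the test pattern described above, a hedged sketch of a batch that mixes good and bad mutations with an Increment in the middle so the region server splits them into separate groups (table, family and row names here are made up):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchOrderSketch {
  static void runBatch(Table table) throws Exception {
    byte[] cf = Bytes.toBytes("cf");
    List<Row> actions = new ArrayList<>();
    // A good put.
    actions.add(new Put(Bytes.toBytes("row-ok"))
        .addColumn(cf, Bytes.toBytes("q"), Bytes.toBytes("v")));
    // An increment in the middle: with the original order preserved, the RS
    // groups the puts before and after it separately.
    actions.add(new Increment(Bytes.toBytes("row-inc")).addColumn(cf, Bytes.toBytes("q"), 1L));
    // A "bad" put: it references a column family that does not exist.
    actions.add(new Put(Bytes.toBytes("row-bad"))
        .addColumn(Bytes.toBytes("no_such_cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));

    Object[] results = new Object[actions.size()];
    // The expectation in such tests is that only the bad action fails; the call
    // may throw a retries-exhausted exception carrying the per-action errors.
    table.batch(actions, results);
  }
}
{code}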





[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may fail when using the ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774737#comment-16774737
 ] 

Guangxu Cheng commented on HBASE-21922:
---

This problem does exist in certain scenarios :(, and it is related to the 
choice of delimiter. If the ASCII value of the delimiter is smaller than that of 
all characters in front of the delimiter, then there will be no such problem.

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the rowkeys written to the 
> bloom filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's bloom filter key is lexicographically bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774735#comment-16774735
 ] 

Jingyun Tian commented on HBASE-21730:
--

[~stack], Thanks for your comment. I removed the old description in the new 
patch.

> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch, 
> HBASE-21730.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21943) The usage of RegionLocations.mergeRegionLocations is wrong for async client

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774732#comment-16774732
 ] 

Hadoop QA commented on HBASE-21943:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}139m 
16s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959689/HBASE-21943-UT.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  xml  |
| uname | Linux 008f7c3efa3d 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21730:
-
Attachment: HBASE-21730.master.002.patch

> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch, 
> HBASE-21730.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774720#comment-16774720
 ] 

Duo Zhang commented on HBASE-21934:
---

I think we should be careful here. There could be a race where we have already 
sent the request to the RS, the RS has even already reported back, and then it 
crashes...

> SplitWALProcedure get stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.
>
> Attachments: HBASE-21934.master.001.patch, 
> HBASE-21934.master.002.patch
>
>
> I encountered a problem where the master assigns a splitWALRemoteProcedure to 
> a region server, and the region server's log says it failed to recover the 
> lease of the file. Then this region server is killed by chaosMonkey. As a 
> result, the procedure never times out and hangs there forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21934:
-
Attachment: HBASE-21934.master.002.patch

> SplitWALProcedure get stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.
>
> Attachments: HBASE-21934.master.001.patch, 
> HBASE-21934.master.002.patch
>
>
> I encountered a problem where the master assigns a splitWALRemoteProcedure to 
> a region server, and the region server's log says it failed to recover the 
> lease of the file. Then this region server is killed by chaosMonkey. As a 
> result, the procedure never times out and hangs there forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21783) Support exceed user/table/ns throttle quota if region server has available quota

2019-02-21 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-21783:
---
Attachment: HBASE-21783.branch-2.001.patch

> Support exceed user/table/ns throttle quota if region server has available 
> quota
> 
>
> Key: HBASE-21783
> URL: https://issues.apache.org/jira/browse/HBASE-21783
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-21783.branch-2.001.patch, 
> HBASE-21783.master.001.patch, HBASE-21783.master.002.patch, 
> HBASE-21783.master.003.patch, HBASE-21783.master.004.patch, 
> HBASE-21783.master.005.patch, HBASE-21783.master.006.patch
>
>
> Currently, all types of RPC throttle quota (including region server, namespace, 
> table and user quota) are hard limits, which means that once requests exceed the 
> configured amount they are throttled.
>  In some situations a user uses up all of their own quota while the region server 
> still has available quota because other users are not consuming at the same time; 
> in this case we can allow the user to consume additional quota. So add a switch to 
> enable or disable exceeding the other quotas (except the region server quota).
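
A generic sketch of the "soft limit" idea described above (no real HBase quota APIs; all names are illustrative): a request that has exhausted its own quota may still proceed when the switch is on and the region server has headroom.
{code:java}
// Illustrative only; not the actual quota implementation.
public class SoftQuotaCheck {
  /**
   * @param userQuotaAvailable remaining quota for the user/table/namespace
   * @param rsQuotaAvailable   remaining quota at the region server level
   * @param exceedEnabled      the proposed switch
   */
  public static boolean allowRequest(long userQuotaAvailable, long rsQuotaAvailable,
      boolean exceedEnabled) {
    if (userQuotaAvailable > 0) {
      return true; // within the caller's own hard limit
    }
    // Own quota exhausted: only borrow from the region server quota when the
    // switch is on and the server still has available quota.
    return exceedEnabled && rsQuotaAvailable > 0;
  }
}
{code}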



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21944) Validate put for batch operation

2019-02-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-21944:
-

 Summary: Validate put for batch operation
 Key: HBASE-21944
 URL: https://issues.apache.org/jira/browse/HBASE-21944
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774700#comment-16774700
 ] 

Hudson commented on HBASE-17094:


Results for branch master
[build #814 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/814/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/814//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/814//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/814//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Assignee: kevin su
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman].  Lets add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774697#comment-16774697
 ] 

Zheng Hu commented on HBASE-21879:
--

Discussed with [~Apache9]. In general, replacing hbase.nio.ByteBuff with 
netty.ByteBuf is possible and would remove a lot of code in HBase, but it is a 
lot of work. We think we can move HBASE-21879 forward first by using a simple 
refcnt so that HBase 2.x works; maybe in HBase 3.x the netty.ByteBuf will come in.
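
A hedged sketch of the general technique (not the actual HFileBlock patch): read a block of known size straight into a (possibly direct) ByteBuffer via FSDataInputStream.read(ByteBuffer), which HDFS's DFSInputStream supports since 2.7, instead of going through an intermediate on-heap byte[].
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferBlockReader {
  private ByteBufferBlockReader() {}

  // Reads 'size' bytes at 'offset' into a direct buffer; relies on the wrapped
  // stream implementing ByteBufferReadable, otherwise read(ByteBuffer) throws
  // UnsupportedOperationException.
  public static ByteBuffer readBlock(FSDataInputStream in, long offset, int size)
      throws IOException {
    ByteBuffer block = ByteBuffer.allocateDirect(size);
    in.seek(offset);
    while (block.hasRemaining()) {
      if (in.read(block) < 0) {
        throw new IOException("Premature EOF while reading block at offset " + offset);
      }
    }
    block.flip();
    return block;
  }
}
{code}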

> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path we still read the block from the HFile into an on-heap byte[], 
> then copy that on-heap byte[] to the off-heap bucket cache asynchronously. In my 
> 100% get performance test I also observed some frequent young GCs; the largest 
> memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young GC pressure. We did not implement this before 
> because the older HDFS client had no ByteBuffer reading interface, but 2.7+ 
> supports it now, so we can fix this. I will provide a patch and some 
> performance comparison for this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20587) Replace Jackson with shaded thirdparty gson

2019-02-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20587:
--
Attachment: HBASE-20587-v3.patch

> Replace Jackson with shaded thirdparty gson
> ---
>
> Key: HBASE-20587
> URL: https://issues.apache.org/jira/browse/HBASE-20587
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Reporter: Josh Elser
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-20587-v1.patch, HBASE-20587-v2.patch, 
> HBASE-20587-v3.patch, HBASE-20587.001.patch
>
>
> HBASE-20582 got me looking at how we use Jackson. It appears that we moved 
> some JSON code from hbase-server into hbase-common via HBASE-19053. But, 
> there seems to be no good reason why this code should live there and not in 
> hbase-http instead. Keeping Jackson off the user's classpath is a nice goal.
> FYI [~appy], [~mdrob]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774690#comment-16774690
 ] 

Guanghao Zhang commented on HBASE-21922:


{quote}Will KeyDelimitedRegionSplitPolicy have the same problem?
{quote}
Which problem do you mean?

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the rowkeys written to the 
> bloom filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's bloom filter key is lexicographically bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774691#comment-16774691
 ] 

Hadoop QA commented on HBASE-21942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-21942 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.8.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21942 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16082/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, new_rsgroup.png, 
> rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20587) Replace Jackson with shaded thirdparty gson

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774687#comment-16774687
 ] 

Duo Zhang commented on HBASE-20587:
---

Review board link:

https://reviews.apache.org/r/70029/

> Replace Jackson with shaded thirdparty gson
> ---
>
> Key: HBASE-20587
> URL: https://issues.apache.org/jira/browse/HBASE-20587
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Reporter: Josh Elser
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-20587-v1.patch, HBASE-20587-v2.patch, 
> HBASE-20587.001.patch
>
>
> HBASE-20582 got me looking at how we use Jackson. It appears that we moved 
> some JSON code from hbase-server into hbase-common via HBASE-19053. But, 
> there seems to be no good reason why this code should live there and not in 
> hbase-http instead. Keeping Jackson off the user's classpath is a nice goal.
> FYI [~appy], [~mdrob]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20587) Replace Jackson with shaded thirdparty gson

2019-02-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20587:
--
Fix Version/s: 2.2.0

> Replace Jackson with shaded thirdparty gson
> ---
>
> Key: HBASE-20587
> URL: https://issues.apache.org/jira/browse/HBASE-20587
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies
>Reporter: Josh Elser
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-20587-v1.patch, HBASE-20587-v2.patch, 
> HBASE-20587.001.patch
>
>
> HBASE-20582 got me looking at how we use Jackson. It appears that we moved 
> some JSON code from hbase-server into hbase-common via HBASE-19053. But, 
> there seems to be no good reason why this code should live there and not in 
> hbase-http instead. Keeping Jackson off the user's classpath is a nice goal.
> FYI [~appy], [~mdrob]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774682#comment-16774682
 ] 

Duo Zhang commented on HBASE-21922:
---

Ping [~stack]. What do you think sir? Thanks.

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the rowkeys written to the 
> bloom filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's bloom filter key is lexicographically bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread xuqinya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqinya updated HBASE-21942:

Attachment: rsgroup.png
new_rsgroup.png

> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch, new_rsgroup.png, 
> rsgroup.png
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21942) [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)

2019-02-21 Thread xuqinya (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqinya updated HBASE-21942:

Attachment: HBASE-21942.master.0001.patch
Status: Patch Available  (was: Open)

> [UI] requests per second is incorrect in rsgroup page(rsgroup.jsp)
> --
>
> Key: HBASE-21942
> URL: https://issues.apache.org/jira/browse/HBASE-21942
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21942.master.0001.patch
>
>
> http://example:16010/rsgroup.jsp?name=default
> The rsgroup page (rsgroup.jsp) in UI does not show correct information about 
> Total *_Requests Per Second_*.
> Change totalRequests into totalRequestsPerSecond



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21938) Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region servers's name without metrics

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774685#comment-16774685
 ] 

Guanghao Zhang commented on HBASE-21938:


+1

> Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region 
> servers's name without metrics
> --
>
> Key: HBASE-21938
> URL: https://issues.apache.org/jira/browse/HBASE-21938
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-21938.master.001.patch, 
> HBASE-21938.master.002.patch
>
>
> One of our production clusters (which has 20 regions) hit a protobuf 
> exception when calling getClusterStatus.
>  
> {code:java}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
> {code}
> And there are some client methods which call getClusterStatus but only need 
> the server names. Plan to add a new option that returns only the server names, 
> so we can reduce the impact even if we hit this problem.
>  
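
A usage sketch of the proposed option; SERVERS_NAME and getServersName() come from the patch under review here, so treat those names as provisional until it lands.
{code:java}
import java.io.IOException;
import java.util.EnumSet;
import java.util.List;

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public final class LiveServersExample {
  private LiveServersExample() {}

  // Ask only for the live server names, avoiding the per-server metrics payload.
  public static List<ServerName> liveServers(Admin admin) throws IOException {
    ClusterMetrics metrics =
        admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.SERVERS_NAME));
    return metrics.getServersName();
  }
}
{code}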



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774680#comment-16774680
 ] 

Duo Zhang commented on HBASE-21922:
---

Maybe not, since there we just use the algorithm to get a key and then use it to 
split... In the case here the result may not be what the user expects, but there 
is no logic error.

But for the bloom filter this will be a big problem.

So I'm +1 on removing the ROWPREFIX_DELIMITED bloom filter type. It requires the 
row key to follow a specific pattern, but we have no way to confirm that pattern 
before actually writing the keys and then scanning all the row keys, which means 
we cannot reject invalid row keys up front. This is dangerous, I think...

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the rowkeys written to the 
> bloom filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's bloom filter key is lexicographically bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774674#comment-16774674
 ] 

Duo Zhang commented on HBASE-21922:
---

Will KeyDelimitedRegionSplitPolicy have the same problem?

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter and there are two rowkeys, where row1 is 
> smaller than row2:
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When using the ROWPREFIX_DELIMITED bloom filter, the rowkeys written to the 
> bloom filter are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom filter: 123{code}
> Row1's bloom filter key is lexicographically bigger than row2's, so 
> BloomContext#sanityCheck will fail.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774679#comment-16774679
 ] 

Jingyun Tian commented on HBASE-21934:
--

[~stack] Thanks for your comments. Yes, I'm considering the same problem too. 
I'm wondering if there is a way to call this method in the style of a try-finally block. 

> SplitWALProcedure get stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.
>
> Attachments: HBASE-21934.master.001.patch
>
>
> I encountered a problem where the master assigns a splitWALRemoteProcedure to 
> a region server, and the region server's log says it failed to recover the 
> lease of the file. Then this region server is killed by chaosMonkey. As a 
> result, the procedure never times out and hangs there forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21818) a document write misstack

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774643#comment-16774643
 ] 

Hadoop QA commented on HBASE-21818:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 19m  
2s{color} | {color:red} Docker failed to build yetus/hbase:a527708. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959694/HBASE-21818.branch-1.1.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16081/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> a document write misstack
> -
>
> Key: HBASE-21818
> URL: https://issues.apache.org/jira/browse/HBASE-21818
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, master
>Affects Versions: 1.1.0, 2.0.1
> Environment: has nothing to do with env
>Reporter: qiang Liu
>Priority: Trivial
>  Labels: easyfix, javadoc
> Fix For: 2.0.2
>
> Attachments: HBASE-21818.branch-1.001.patch, 
> HBASE-21818.branch-1.1.001.patch, HBASE-21818.branch-1.1.002.patch, 
> blankLineOfJavaDoc.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> the javadoc of this function has a malformed tag, so it displays a blank line
> {code:java}
> //org.apache.hadoop.hbase.master.HMaster#finishActiveMasterInitialization
> {code}
> I pasted the function and its doc here; please look at the line "Ensure 
> assignment of meta/namespace regions"
>  
>  
> {code:java}
> /**
>  * Finish initialization of HMaster after becoming the primary master.
>  *
>  * 
>  * Initialize master components - file system manager, server manager,
>  * assignment manager, region server tracker, etc
>  * Start necessary service threads - balancer, catalog janior,
>  * executor services, etc
>  * Set cluster as UP in ZooKeeper
>  * Wait for RegionServers to check-in
>  * Split logs and perform data recovery, if necessary
>  * Ensure assignment of meta/namespace regions
>  * Handle either fresh cluster start or master failover
>  * 
>  *
>  * @throws IOException
>  * @throws InterruptedException
>  * @throws KeeperException
>  * @throws CoordinatedStateException
>  */
> private void finishActiveMasterInitialization(MonitoredTask status)
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21818) a document write misstack

2019-02-21 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774632#comment-16774632
 ] 

qiang Liu commented on HBASE-21818:
---

patch number added

> a document write misstack
> -
>
> Key: HBASE-21818
> URL: https://issues.apache.org/jira/browse/HBASE-21818
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, master
>Affects Versions: 1.1.0, 2.0.1
> Environment: has nothing to do with env
>Reporter: qiang Liu
>Priority: Trivial
>  Labels: easyfix, javadoc
> Fix For: 2.0.2
>
> Attachments: HBASE-21818.branch-1.001.patch, 
> HBASE-21818.branch-1.1.001.patch, HBASE-21818.branch-1.1.002.patch, 
> blankLineOfJavaDoc.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> the javadoc of this function has a malformed tag, so it displays a blank line
> {code:java}
> //org.apache.hadoop.hbase.master.HMaster#finishActiveMasterInitialization
> {code}
> I pasted the function and its doc here; please look at the line "Ensure 
> assignment of meta/namespace regions"
>  
>  
> {code:java}
> /**
>  * Finish initialization of HMaster after becoming the primary master.
>  *
>  * 
>  * Initialize master components - file system manager, server manager,
>  * assignment manager, region server tracker, etc
>  * Start necessary service threads - balancer, catalog janior,
>  * executor services, etc
>  * Set cluster as UP in ZooKeeper
>  * Wait for RegionServers to check-in
>  * Split logs and perform data recovery, if necessary
>  * Ensure assignment of meta/namespace regions
>  * Handle either fresh cluster start or master failover
>  * 
>  *
>  * @throws IOException
>  * @throws InterruptedException
>  * @throws KeeperException
>  * @throws CoordinatedStateException
>  */
> private void finishActiveMasterInitialization(MonitoredTask status)
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21818) a document write misstack

2019-02-21 Thread qiang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HBASE-21818:
--
Attachment: HBASE-21818.branch-1.1.002.patch

> a document write misstack
> -
>
> Key: HBASE-21818
> URL: https://issues.apache.org/jira/browse/HBASE-21818
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, master
>Affects Versions: 1.1.0, 2.0.1
> Environment: has nothing to do with env
>Reporter: qiang Liu
>Priority: Trivial
>  Labels: easyfix, javadoc
> Fix For: 2.0.2
>
> Attachments: HBASE-21818.branch-1.001.patch, 
> HBASE-21818.branch-1.1.001.patch, HBASE-21818.branch-1.1.002.patch, 
> blankLineOfJavaDoc.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> the javadoc of this function has a malformed tag, so it displays a blank line
> {code:java}
> //org.apache.hadoop.hbase.master.HMaster#finishActiveMasterInitialization
> {code}
> I pasted the function and its doc here; please look at the line "Ensure 
> assignment of meta/namespace regions"
>  
>  
> {code:java}
> /**
>  * Finish initialization of HMaster after becoming the primary master.
>  *
>  * 
>  * Initialize master components - file system manager, server manager,
>  * assignment manager, region server tracker, etc
>  * Start necessary service threads - balancer, catalog janior,
>  * executor services, etc
>  * Set cluster as UP in ZooKeeper
>  * Wait for RegionServers to check-in
>  * Split logs and perform data recovery, if necessary
>  * Ensure assignment of meta/namespace regions
>  * Handle either fresh cluster start or master failover
>  * 
>  *
>  * @throws IOException
>  * @throws InterruptedException
>  * @throws KeeperException
>  * @throws CoordinatedStateException
>  */
> private void finishActiveMasterInitialization(MonitoredTask status)
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21796) RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED

2019-02-21 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774622#comment-16774622
 ] 

Ankit Singhal edited comment on HBASE-21796 at 2/22/19 12:34 AM:
-

 
{quote}need to look more closely at when AUTH_FAILED would be thrown by 
the ZK server.
{quote}
 

probably in our case, AUTH_FAILED is thrown when zk was not able to communicate 
with KDC.
{noformat}
client.ZooKeeperSaslClient: An error: (java.security
.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Connection refused (Connection refused))]) occur
red when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper 
Client will go to AUTH_FAILED state.{noformat}
{quote}.003 adds some extra configuration properties 
({{hbase.zookeeper.authfailed.retries.number}} default: 15, 
{{hbase.zookeeper.authfailed.pause}} default: 100)
{quote}
Can we use RetryCounter instead of using 
{noformat}
+ observedAuthFailed++;
+ if (observedAuthFailed > allowedAuthFailedRetries) {
+ throw new RuntimeException("Exceeded the configured retries for handling 
ZooKeeper"
+ + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
+ }
+ // Avoid a fast retry loop.
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Sleeping " + authFailedPause + "ms before re-creating ZooKeeper 
object after"
+ + " AUTH_FAILED state");
+ }
+ TimeUnit.MILLISECONDS.sleep(authFailedPause);{noformat}


was (Author: an...@apache.org):
 

  need to look more closely at when AUTH_FAILED would be thrown by the ZK 
server.

 

probably in our case, AUTH_FAILED is thrown when zk was not able to communicate 
with KDC.
{noformat}
client.ZooKeeperSaslClient: An error: (java.security
.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Connection refused (Connection refused))]) occur
red when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper 
Client will go to AUTH_FAILED state.{noformat}



{quote}.003 adds some extra configuration properties 
({{hbase.zookeeper.authfailed.retries.number}} default: 15, 
{{hbase.zookeeper.authfailed.pause}} default: 100)
{quote}
Can we use RetryCounter instead of using 
{noformat}
+ observedAuthFailed++;
+ if (observedAuthFailed > allowedAuthFailedRetries) {
+ throw new RuntimeException("Exceeded the configured retries for handling 
ZooKeeper"
+ + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
+ }
+ // Avoid a fast retry loop.
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Sleeping " + authFailedPause + "ms before re-creating ZooKeeper 
object after"
+ + " AUTH_FAILED state");
+ }
+ TimeUnit.MILLISECONDS.sleep(authFailedPause);{noformat}

> RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED
> ---
>
> Key: HBASE-21796
> URL: https://issues.apache.org/jira/browse/HBASE-21796
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.1
>
> Attachments: HBASE-21796.001.branch-1.patch, 
> HBASE-21796.002.branch-1.patch, HBASE-21796.003.branch-1.patch
>
>
> We've observed the following situation inside of a RegionServer which leaves 
> an HConnection in a broken state as a result of the ZooKeeper client having 
> received an AUTH_FAILED case in the Phoenix secondary indexing code-path. The 
> result was that the HConnection used to write the secondary index updates 
> failed every time the client re-attempted the write but we had no outward 
> signs from the HConnection that there was a problem with that HConnection 
> instance.
> ZooKeeper programmer docs tell us that if a ZooKeeper instance goes to the 
> {{AUTH_FAILED}} state that we must open a new ZooKeeper instance: 
> [https://zookeeper.apache.org/doc/r3.4.13/zookeeperProgrammers.html#ch_zkSessions]
> When a new HConnection (or one without a cached meta location) tries to 
> access ZooKeeper to find meta's location or the cluster ID, it spins 
> indefinitely, because we can never access ZooKeeper while our client is 
> broken from the AUTH_FAILED state. For the Phoenix use-case (where we're trying to 
> use this HConnection within the RS), this breaks things pretty fast.
> The circumstances that caused us to observe this are not an HBase (or Phoenix 
> or ZooKeeper) problem. The AUTH_FAILED exception we see is a result of 
> networking issues on a user's system. Despite this, we can make our handling 
> of this situation better.
> We already have logic inside of RecoverableZooKeeper to re-create a ZooKeeper 
> object when we need one (e.g. session 

[jira] [Comment Edited] (HBASE-21796) RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED

2019-02-21 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774622#comment-16774622
 ] 

Ankit Singhal edited comment on HBASE-21796 at 2/22/19 12:33 AM:
-

 

  need to look more closely at when AUTH_FAILED would be thrown by the ZK 
server.

 

probably in our case, AUTH_FAILED is thrown when zk was not able to communicate 
with KDC.
{noformat}
client.ZooKeeperSaslClient: An error: (java.security
.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Connection refused (Connection refused))]) occur
red when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper 
Client will go to AUTH_FAILED state.{noformat}



{quote}.003 adds some extra configuration properties 
({{hbase.zookeeper.authfailed.retries.number}} default: 15, 
{{hbase.zookeeper.authfailed.pause}} default: 100)
{quote}
Can we use RetryCounter instead of using 
{noformat}
+ observedAuthFailed++;
+ if (observedAuthFailed > allowedAuthFailedRetries) {
+ throw new RuntimeException("Exceeded the configured retries for handling 
ZooKeeper"
+ + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
+ }
+ // Avoid a fast retry loop.
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Sleeping " + authFailedPause + "ms before re-creating ZooKeeper 
object after"
+ + " AUTH_FAILED state");
+ }
+ TimeUnit.MILLISECONDS.sleep(authFailedPause);{noformat}


was (Author: an...@apache.org):
{quote}{quote}  need to look more closely at when AUTH_FAILED would be thrown 
by the ZK server.
{quote}{quote}
probably in our case, AUTH_FAILED is thrown when zk was not able to communicate 
with KDC.
client.ZooKeeperSaslClient: An error: (java.security
.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Connection refused (Connection refused))]) occur
red when evaluating Zookeeper Quorum Member's  received SASL token. Zookeeper 
Client will go to AUTH_FAILED state.
{quote}.003 adds some extra configuration properties 
({{hbase.zookeeper.authfailed.retries.number}} default: 15, 
{{hbase.zookeeper.authfailed.pause}} default: 100)
{quote}
Can we use RetryCounter instead of using 
{noformat}
+ observedAuthFailed++;
+ if (observedAuthFailed > allowedAuthFailedRetries) {
+ throw new RuntimeException("Exceeded the configured retries for handling 
ZooKeeper"
+ + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
+ }
+ // Avoid a fast retry loop.
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Sleeping " + authFailedPause + "ms before re-creating ZooKeeper 
object after"
+ + " AUTH_FAILED state");
+ }
+ TimeUnit.MILLISECONDS.sleep(authFailedPause);{noformat}

> RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED
> ---
>
> Key: HBASE-21796
> URL: https://issues.apache.org/jira/browse/HBASE-21796
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.1
>
> Attachments: HBASE-21796.001.branch-1.patch, 
> HBASE-21796.002.branch-1.patch, HBASE-21796.003.branch-1.patch
>
>
> We've observed the following situation inside of a RegionServer which leaves 
> an HConnection in a broken state as a result of the ZooKeeper client having 
> received an AUTH_FAILED case in the Phoenix secondary indexing code-path. The 
> result was that the HConnection used to write the secondary index updates 
> failed every time the client re-attempted the write but we had no outward 
> signs from the HConnection that there was a problem with that HConnection 
> instance.
> ZooKeeper programmer docs tell us that if a ZooKeeper instance goes to the 
> {{AUTH_FAILED}} state that we must open a new ZooKeeper instance: 
> [https://zookeeper.apache.org/doc/r3.4.13/zookeeperProgrammers.html#ch_zkSessions]
> When a new HConnection (or one without a cached meta location) tries to 
> access ZooKeeper to find meta's location or the cluster ID, it spins 
> indefinitely, because we can never access ZooKeeper while our client is 
> broken from the AUTH_FAILED state. For the Phoenix use-case (where we're trying to 
> use this HConnection within the RS), this breaks things pretty fast.
> The circumstances that caused us to observe this are not an HBase (or Phoenix 
> or ZooKeeper) problem. The AUTH_FAILED exception we see is a result of 
> networking issues on a user's system. Despite this, we can make our handling 
> of this situation better.
> We already have logic inside of RecoverableZooKeeper to re-create a ZooKeeper 
> object when we need one (e.g. session expired/closed). We can 

[jira] [Commented] (HBASE-14850) C++ client implementation

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774620#comment-16774620
 ] 

Hudson commented on HBASE-14850:


Results for branch HBASE-14850
[build #2 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-14850/2/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-14850/2//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-14850/2//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-14850/2//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> C++ client implementation
> -
>
> Key: HBASE-14850
> URL: https://issues.apache.org/jira/browse/HBASE-14850
> Project: HBase
>  Issue Type: Task
>Reporter: Elliott Clark
>Priority: Major
>
> It's happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21796) RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED

2019-02-21 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774622#comment-16774622
 ] 

Ankit Singhal commented on HBASE-21796:
---

{quote}{quote}  need to look more closely at when AUTH_FAILED would be thrown 
by the ZK server.
{quote}{quote}
probably in our case, AUTH_FAILED is thrown when zk was not able to communicate 
with KDC.
client.ZooKeeperSaslClient: An error: (java.security
.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Connection refused (Connection refused))]) occur
red when evaluating Zookeeper Quorum Member's  received SASL token. Zookeeper 
Client will go to AUTH_FAILED state.
{quote}.003 adds some extra configuration properties 
({{hbase.zookeeper.authfailed.retries.number}} default: 15, 
{{hbase.zookeeper.authfailed.pause}} default: 100)
{quote}
Can we use RetryCounter instead of using 
{noformat}
+ observedAuthFailed++;
+ if (observedAuthFailed > allowedAuthFailedRetries) {
+ throw new RuntimeException("Exceeded the configured retries for handling 
ZooKeeper"
+ + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
+ }
+ // Avoid a fast retry loop.
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Sleeping " + authFailedPause + "ms before re-creating ZooKeeper 
object after"
+ + " AUTH_FAILED state");
+ }
+ TimeUnit.MILLISECONDS.sleep(authFailedPause);{noformat}
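
A sketch of the RetryCounter-based variant being suggested, assuming RetryCounter's (maxAttempts, sleepInterval, TimeUnit) constructor; the AUTH_FAILED detection and the ZooKeeper re-creation hook are placeholders, not the actual RecoverableZooKeeper patch.
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.util.RetryCounter;

public class AuthFailedRetrySketch {
  public void handleAuthFailed(int allowedAuthFailedRetries, long authFailedPause)
      throws InterruptedException {
    RetryCounter retryCounter =
        new RetryCounter(allowedAuthFailedRetries, authFailedPause, TimeUnit.MILLISECONDS);
    while (isAuthFailed()) {
      if (!retryCounter.shouldRetry()) {
        throw new RuntimeException("Exceeded the configured retries for handling ZooKeeper"
            + " AUTH_FAILED exceptions (" + allowedAuthFailedRetries + ")");
      }
      retryCounter.sleepUntilNextRetry(); // replaces the hand-rolled counter and sleep
      recreateZooKeeper();
    }
  }

  // Placeholders so the sketch compiles on its own.
  private boolean isAuthFailed() { return false; }
  private void recreateZooKeeper() {}
}
{code}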

> RecoverableZooKeeper indefinitely retries a client stuck in AUTH_FAILED
> ---
>
> Key: HBASE-21796
> URL: https://issues.apache.org/jira/browse/HBASE-21796
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.1
>
> Attachments: HBASE-21796.001.branch-1.patch, 
> HBASE-21796.002.branch-1.patch, HBASE-21796.003.branch-1.patch
>
>
> We've observed the following situation inside of a RegionServer which leaves 
> an HConnection in a broken state as a result of the ZooKeeper client having 
> received an AUTH_FAILED case in the Phoenix secondary indexing code-path. The 
> result was that the HConnection used to write the secondary index updates 
> failed every time the client re-attempted the write but we had no outward 
> signs from the HConnection that there was a problem with that HConnection 
> instance.
> ZooKeeper programmer docs tell us that if a ZooKeeper instance goes to the 
> {{AUTH_FAILED}} state that we must open a new ZooKeeper instance: 
> [https://zookeeper.apache.org/doc/r3.4.13/zookeeperProgrammers.html#ch_zkSessions]
> When a new HConnection (or one without a cached meta location) tries to 
> access ZooKeeper to find meta's location or the cluster ID, it spins 
> indefinitely, because we can never access ZooKeeper while our client is 
> broken from the AUTH_FAILED state. For the Phoenix use-case (where we're trying to 
> use this HConnection within the RS), this breaks things pretty fast.
> The circumstances that caused us to observe this are not an HBase (or Phoenix 
> or ZooKeeper) problem. The AUTH_FAILED exception we see is a result of 
> networking issues on a user's system. Despite this, we can make our handling 
> of this situation better.
> We already have logic inside of RecoverableZooKeeper to re-create a ZooKeeper 
> object when we need one (e.g. session expired/closed). We can extend this 
> same logic to also re-create the ZK client object if we observe an 
> AUTH_FAILED state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21943) The usage of RegionLocations.mergeRegionLocations is wrong for async client

2019-02-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21943:
--
Attachment: HBASE-21943-UT.patch

> The usage of RegionLocations.mergeRegionLocations is wrong for async client
> ---
>
> Key: HBASE-21943
> URL: https://issues.apache.org/jira/browse/HBASE-21943
> Project: HBase
>  Issue Type: Bug
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21943-UT.patch, HBASE-21943-UT.patch, 
> HBASE-21943.patch
>
>
> In AsyncRegionLocatorHelper.mergeRegionLocations we create a new 
> RegionLocations and call mergeRegionLocations on it, expecting the object to 
> be changed by this method, but the method does not modify the object itself; 
> it returns a new one...
> We are lucky that we create the RegionLocations with the new locations, so 
> usually we still get an updated result. But when testing HBASE-21717 we hit 
> another bug in AsyncNonMetaRegionLocator.isEqual, where we missed a '!' when 
> checking that the server names are equal...
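
A generic illustration of the bug pattern described above, with toy types rather than the real RegionLocations API: the merge method returns a new object, so ignoring its return value silently drops the merge.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ImmutableMergeExample {
  private final List<String> locations;

  public ImmutableMergeExample(List<String> locations) {
    this.locations = List.copyOf(locations);
  }

  /** Returns a NEW merged instance; the receiver is never modified. */
  public ImmutableMergeExample merge(ImmutableMergeExample other) {
    List<String> merged = new ArrayList<>(this.locations);
    merged.addAll(other.locations);
    return new ImmutableMergeExample(merged);
  }

  public static void main(String[] args) {
    ImmutableMergeExample cached = new ImmutableMergeExample(List.of("rs1"));
    ImmutableMergeExample fresh = new ImmutableMergeExample(List.of("rs2"));
    cached.merge(fresh);                                  // BUG: result discarded
    ImmutableMergeExample updated = cached.merge(fresh);  // correct: keep the returned copy
    System.out.println(updated.locations);                // [rs1, rs2]
  }
}
{code}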



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21915) FileLink$FileLinkInputStream doesn't implement CanUnbuffer

2019-02-21 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21915:
---
Fix Version/s: (was: 1.5.1)

> FileLink$FileLinkInputStream doesn't implement CanUnbuffer
> --
>
> Key: HBASE-21915
> URL: https://issues.apache.org/jira/browse/HBASE-21915
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 1.4.10, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21915.001.patch, HBASE-21915.002.patch
>
>
> FileLinkInputStream is an InputStream which handles the indirection of where 
> the real HFile lives. This implementation is wrapped via 
> FSDataInputStreamWrapper and is transparent when it's being used by a caller. 
> Often, we have an FSDataInputStreamWrapper wrapping a FileLinkInputStream 
> which wraps an FSDataInputStream.
> The problem is that FileLinkInputStream does not implement the 
> {{CanUnbuffer}} interface, which means that the underlying 
> {{FSDataInputStream}} for the HFile the link refers to doesn't get 
> {{unbuffer()}} called on it. This can cause an open Socket to hang around, as 
> described in HBASE-9393.
> Both [~wchevreuil] and myself have run into this, each for different users. 
> We think the commonality as to why these users saw this (but we haven't run 
> into it on our own) is that it requires a very large snapshot to be brought 
> into a new system. Big kudos to [~esteban] for his help in diagnosing this as 
> well!
> If this analysis is accurate, it would affect all branches.
>  
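For illustration, a minimal sketch of the missing piece (the wrapper class below
is an assumption, not the actual FileLink patch): a delegating stream that
implements CanUnbuffer and forwards unbuffer() to the FSDataInputStream it wraps.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;
import org.apache.hadoop.fs.FSDataInputStream;

// Hypothetical wrapper, not the real FileLinkInputStream; the important part is
// implementing CanUnbuffer and delegating unbuffer() to the underlying stream.
class DelegatingLinkStream extends InputStream implements CanUnbuffer {
  private final FSDataInputStream delegate;

  DelegatingLinkStream(FSDataInputStream delegate) {
    this.delegate = delegate;
  }

  @Override
  public int read() throws IOException {
    return delegate.read();
  }

  @Override
  public void unbuffer() {
    // Lets the underlying stream drop its buffers and release the idle socket,
    // instead of keeping it open as described in HBASE-9393.
    delegate.unbuffer();
  }
}
{code}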



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21866) Do not move the table to null rsgroup when creating an existing table

2019-02-21 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21866:
---
Fix Version/s: (was: 1.5.1)
   1.5.0

> Do not move the table to null rsgroup when creating an existing table
> -
>
> Key: HBASE-21866
> URL: https://issues.apache.org/jira/browse/HBASE-21866
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: HBASE-21866.branch-1.000.patch, 
> HBASE-21866.branch-2.000.patch, HBASE-21866.master.000.patch, 
> HBASE-21866.master.001.patch, HBASE-21866.master.002.patch, 
> HBASE-21866.master.003.patch, HBASE-21866.master.003.patch, 
> HBASE-21866.master.003.patch
>
>
> By using the latest HBase master branch, the bug could be re-produced as:
>  # create 't1', 'cf1'
>  # create 't1', 'cf1'
> The following message is logged into HMaster's log:
> {code}
> INFO  [PEWorker-12] rsgroup.RSGroupAdminServer: Moving table t1 to RSGroup 
> null
> {code}
> This is a wrong action; instead, we should keep t1 where it originally is.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21740) NPE happens while shutdown the RS

2019-02-21 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21740:
---
Fix Version/s: (was: 1.5.1)
   1.5.0

> NPE happens while shutdown the RS
> -
>
> Key: HBASE-21740
> URL: https://issues.apache.org/jira/browse/HBASE-21740
> Project: HBase
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 0001-fix-HBASE-21740.patch, HBASE-21740_1.patch, 
> HBASE-21740_2.patch, HBASE-21740_branch_1.4_1.patch, 
> HBASE-21740_branch_2.1_2.patch
>
>
> While shutting down a NM, we met an NPE:
> {code:java}
> 2019-01-18 16:52:05,500 INFO [Thread-4] regionserver.HRegionServer: STOPPED: 
> Shutdown hook
> 2019-01-18 16:52:05,896 INFO [regionserver/hadoop15:16020] 
> regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics 
> every 5000 milliseconds
> 2019-01-18 16:52:05,978 INFO [regionserver/hadoop15:16020.Chore.1] 
> hbase.ScheduledChore: Chore: CompactedHFilesCleaner was stopped
> 2019-01-18 16:52:05,996 ERROR [regionserver/hadoop15:16020] 
> regionserver.HRegionServer: Failed init
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.startServices(HRegionServer.java:1978)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:975)
> at java.lang.Thread.run(Thread.java:745)
> 2019-01-18 16:52:06,011 ERROR [regionserver/hadoop15:16020] 
> regionserver.HRegionServer: * ABORTING region server 
> hadoop15,16020,1547801516426: Unhandled: Region server startup failed *
> java.io.IOException: Region server startup failed
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:3392)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1591)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:975)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.startServices(HRegionServer.java:1978)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1572)
> ... 2 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21915) FileLink$FileLinkInputStream doesn't implement CanUnbuffer

2019-02-21 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21915:
---
Fix Version/s: 1.5.0

> FileLink$FileLinkInputStream doesn't implement CanUnbuffer
> --
>
> Key: HBASE-21915
> URL: https://issues.apache.org/jira/browse/HBASE-21915
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21915.001.patch, HBASE-21915.002.patch
>
>
> FileLinkInputStream is an InputStream which handles the indirection of where 
> the real HFile lives. This implementation is wrapped via 
> FSDataInputStreamWrapper and is transparent when it's being used by a caller. 
> Often, we have an FSDataInputStreamWrapper wrapping a FileLinkInputStream 
> which wraps an FSDataInputStream.
> The problem is that FileLinkInputStream does not implement the 
> {{CanUnbuffer}} interface, which means that the underlying 
> {{FSDataInputStream}} for the HFile the link refers to doesn't get 
> {{unbuffer()}} called on it. This can cause an open Socket to hang around, as 
> described in HBASE-9393.
> Both [~wchevreuil] and myself have run into this, each for different users. 
> We think the commonality as to why these users saw this (but we haven't run 
> into it on our own) is that it requires a very large snapshot to be brought 
> into a new system. Big kudos to [~esteban] for his help in diagnosing this as 
> well!
> If this analysis is accurate, it would affect all branches.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21932) Use Runtime.getRuntime().halt to terminate regionserver when abort timeout

2019-02-21 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21932:
---
Fix Version/s: (was: 1.5.1)
   1.5.0

> Use Runtime.getRuntime().halt to terminate regionserver when abort timeout 
> ---
>
> Key: HBASE-21932
> URL: https://issues.apache.org/jira/browse/HBASE-21932
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.5.0, 2.2.0, 2.0.4, 2.1.3, 2.3.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21932.branch-2.1.001.patch, 
> HBASE-21932.master.001.patch, HBASE-21932.master.002.patch
>
>
> Found one case when running ITBLL: the regionserver hangs when aborting as it 
> needs to wait for all regions to close. But System.exit needs to run the 
> shutdown hooks, and the hooks can't finish as the RS thread is still alive. So 
> use Runtime.getRuntime().halt to terminate the regionserver; this method will 
> not run any shutdown hook (a sketch follows the stack trace below).
>  
> Stack trace of RegionServer
> {code:java}
> "regionserver/c4-hadoop-tst-st27:29100" #29 daemon prio=5 os_prio=0 
> tid=0x7f03d6066670 nid=0x1afe7 waiting on condition [0x7eff7ce4e000]
> java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1453)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1133)
> at java.lang.Thread.run(Thread.java:745)
> "Thread-10" #68 prio=5 os_prio=0 tid=0x7f02bc7bbeb0 nid=0x141d3 in 
> Object.wait() [0x7eff486e4000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Thread.join(Thread.java:1249)
> - locked <0x0005c02da008> (a java.lang.Thread)
> at org.apache.hadoop.hbase.util.Threads.shutdown(Threads.java:113)
> at org.apache.hadoop.hbase.util.Threads.shutdown(Threads.java:101)
> at 
> org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread.run(ShutdownHook.java:116)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> "Abort regionserver monitor" #26361 daemon prio=5 os_prio=0 
> tid=0x7f00605a43d0 nid=0xe in Object.wait() [0x7eff48a6a000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Thread.join(Thread.java:1249)
> - locked <0x0005c0b55c80> (a org.apache.hadoop.util.ShutdownHookManager$1)
> at java.lang.Thread.join(Thread.java:1323)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0x0005c00451b0> (a java.lang.Class for java.lang.Shutdown)
> at java.lang.Runtime.exit(Runtime.java:109)
> at java.lang.System.exit(System.java:971)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer$SystemExitWhenAbortTimeout.run(HRegionServer.java:3792)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> "main" #1 prio=5 os_prio=0 tid=0x7f03d40110f0 nid=0x1ac78 in 
> Object.wait() [0x7f03d813a000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0005c02da008> (a java.lang.Thread)
> at java.lang.Thread.join(Thread.java:1249)
> - locked <0x0005c02da008> (a java.lang.Thread)
> at java.lang.Thread.join(Thread.java:1323)
> at org.apache.hadoop.hbase.util.HasThread.join(HasThread.java:92)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:65)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3044)
> {code}
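
As a minimal sketch of the change discussed above (the watchdog class and timer
names are assumptions, not the actual SystemExitWhenAbortTimeout code), the fix
boils down to calling Runtime.getRuntime().halt instead of System.exit once the
abort timeout fires, since halt skips the shutdown hooks that the stuck RS
thread would otherwise block:

{code:java}
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical abort watchdog sketch.
class AbortTimeoutWatchdog {
  void scheduleForcedTermination(long timeoutMs) {
    Timer timer = new Timer("Abort regionserver monitor", true);
    timer.schedule(new TimerTask() {
      @Override
      public void run() {
        // System.exit(1) would run shutdown hooks and join the stuck RS thread,
        // hanging forever; halt() terminates the JVM without running any hooks.
        Runtime.getRuntime().halt(1);
      }
    }, timeoutMs);
  }
}
{code}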



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774573#comment-16774573
 ] 

Duo Zhang commented on HBASE-21879:
---

The ref counting and resource leak detection are two different things in netty; 
they are not tied together. You can use ByteBuf and disable the resource leak 
detection, I believe.

And you can get a ByteBuffer from a netty ByteBuf by calling the nioBuffer 
method, no different from our ByteBuff. And we have CompositeByteBuf, where 
multiple ByteBufs can be combined.
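
For illustration, a small self-contained sketch of those two points (the values
and class name below are arbitrary, not HBase code): several ByteBufs wrapped
into one composite buffer, and an NIO ByteBuffer view obtained from it via
nioBuffer().

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.ByteBuffer;

public class NettyBufDemo {
  public static void main(String[] args) {
    ByteBuf part1 = Unpooled.wrappedBuffer(new byte[] {1, 2, 3});
    ByteBuf part2 = Unpooled.wrappedBuffer(new byte[] {4, 5});

    // Several ByteBufs exposed as one logical buffer (a CompositeByteBuf under the hood).
    ByteBuf combined = Unpooled.wrappedBuffer(part1, part2);

    // A plain NIO ByteBuffer view for APIs (e.g. HDFS reads) that need one.
    ByteBuffer nio = combined.nioBuffer();
    System.out.println(nio.remaining()); // 5

    combined.release(); // ref counting releases the wrapped components
  }
}
{code}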

> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap 
> byte[], then copy the on-heap byte[] to the offheap bucket cache 
> asynchronously, and in my 100% get performance test I also observed some 
> frequent young gc. The largest memory footprint in the young gen should be 
> the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young gc. We did not implement this before because 
> there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports this now, so I think we can fix this now.
> Will provide a patch and some perf-comparison for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17094:
--
   Resolution: Fixed
 Assignee: kevin su
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master branch. Let's see how it does. Thanks for the patch [~pingsutw]

> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Assignee: kevin su
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman].  Lets add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-9888) HBase replicates edits written before the replication peer is created

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774531#comment-16774531
 ] 

Hadoop QA commented on HBASE-9888:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 54s{color} 
| {color:red} hbase-server generated 8 new + 180 unchanged - 8 fixed = 188 
total (was 188) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}271m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}313m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | hadoop.hbase.master.TestAssignmentManagerMetrics |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.replication.TestReplicationEndpoint |
|   | hadoop.hbase.replication.TestMultiSlaveReplication |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleAsyncWAL |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedureWithReplicas |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Commented] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774475#comment-16774475
 ] 

Hadoop QA commented on HBASE-17094:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}203m  
9s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}252m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-17094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959635/HBASE-17094.v0.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 3b7b91d2dfa5 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9a55cbb2c1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16077/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16077/testReport/ |
| Max. process+thread count | 5108 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16077/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Priority: Major
>  Labels: beginner
> Attachments: 

[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774437#comment-16774437
 ] 

Anoop Sam John commented on HBASE-21879:


bq.Anoop Sam John - Is there anything I had missed out here.
The netty ByteBuf is like the NIO ByteBuffer. We can replace NIO with netty's. 
Yes, all the perf reasons were considered.
One main thing to note is that we need a way to club together N buffers. We have 
MultiByteBuff, so our own extension at least is needed. Need to see how the 
ref counting within ByteBuff comes out.

So will we be creating a branch now?


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap 
> byte[], then copy the on-heap byte[] to the offheap bucket cache 
> asynchronously, and in my 100% get performance test I also observed some 
> frequent young gc. The largest memory footprint in the young gen should be 
> the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young gc. We did not implement this before because 
> there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports this now, so I think we can fix this now.
> Will provide a patch and some perf-comparison for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable (and RS may run after losing its znode)

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774429#comment-16774429
 ] 

Hadoop QA commented on HBASE-14498:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hbase-zookeeper in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}139m 
51s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-14498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959638/HBASE-14498.009.patch 
|
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c9f669f771e8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9a55cbb2c1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| 

[jira] [Commented] (HBASE-21938) Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region servers's name without metrics

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774423#comment-16774423
 ] 

Hadoop QA commented on HBASE-21938:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch passed checkstyle in hbase-protocol-shaded 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} The patch passed checkstyle in hbase-client {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} hbase-server: The patch generated 0 new + 141 
unchanged - 1 fixed = 141 total (was 142) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}139m 
59s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 58s{color} | 
{color:black} {color} 

[jira] [Commented] (HBASE-21768) list_quota_table_sizes/list_quota_snapshots should print human readable values for size

2019-02-21 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774398#comment-16774398
 ] 

Xu Cang commented on HBASE-21768:
-

In the past, there was also discussion about loosening the rubocop checks a 
bit, because some rules are too strict to follow and some don't make much 
sense to follow, such as the one you guys mentioned above.

> list_quota_table_sizes/list_quota_snapshots should print human readable 
> values for size
> ---
>
> Key: HBASE-21768
> URL: https://issues.apache.org/jira/browse/HBASE-21768
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Attachments: HBASE-21768.master.0001.patch, 
> HBASE-21768.master.0002.patch, HBASE-21768.master.0003.patch, 
> HBASE-21768.master.0004.patch
>
>
> Using space quota, list_quota_table_sizes/list_quota_snapshots should print 
> human readable values for size. Keep the old implementation by default and 
> add a *HUMANREADABLE* constant.
> {code:java}
> hbase(main):001:0> list_quota_table_sizes
> TABLE SIZE 
> TestTable 110399 
> t1 5211 
> hbase(main):002:0> list_quota_snapshots
> TABLE USAGE LIMIT IN_VIOLATION POLICY
> t1 5211 1073741824 false None
> {code}
> Using HUMANREADABLE :
> {code:java}
> hbase(main):001:0> list_quota_snapshots({HUMANREADABLE=>'true'})
>  TABLE USAGE LIMIT IN_VIOLATION POLICY
>  TestTable 20G 2T false None
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774386#comment-16774386
 ] 

ramkrishna.s.vasudevan commented on HBASE-21879:


However, if at all we need netty's ref counting mechanism, I believe the 
ResourceLeakDetector cannot be DISABLED.

> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap 
> byte[], then copy the on-heap byte[] to the offheap bucket cache 
> asynchronously, and in my 100% get performance test I also observed some 
> frequent young gc. The largest memory footprint in the young gen should be 
> the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young gc. We did not implement this before because 
> there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports this now, so I think we can fix this now.
> Will provide a patch and some perf-comparison for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774380#comment-16774380
 ] 

ramkrishna.s.vasudevan commented on HBASE-21879:


Thanks for the ping here folks. From the docs that we prepared when we did the 
offheaping work we have the following points that were discussed 

Netty's ByteBuf and NIO ByteBuffers - the comparison using JMH showed that NIO 
BBs are 17% better. Ideally we should have seen similar performance, but netty 
version 4.0.23 had the reference counting and memory leak detection mechanism, 
which was not allowing the C2 compiler to do proper inlining of the code. 
However, netty 4.0.4 had the feature to disable the ResourceLeakDetector, which 
brought the performance closer to the NIO case.

Still, the reason we went ahead with NIO - which is indirectly the reason this 
JIRA was created - is that HDFS already had an API to pass an NIO BB and read 
into it; going with Netty ByteBuf would not allow that to happen easily because 
of the HDFS API. The other advantage is that if we are able to pass an offheap 
NIO BB we can avoid a copy to onheap once we read from the DFS.
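
For illustration, a small sketch of the HDFS API referred to above (the class and
method names here are assumptions): FSDataInputStream can read directly into a
possibly off-heap NIO ByteBuffer, avoiding the extra on-heap copy.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

public final class DirectReadSketch {
  // Reads up to blockSize bytes into a direct buffer and returns the byte count.
  static int readBlock(FSDataInputStream in, int blockSize) throws IOException {
    // The buffer could come from an off-heap pool, so nothing lands in the young gen.
    ByteBuffer buf = ByteBuffer.allocateDirect(blockSize);
    int total = 0;
    while (buf.hasRemaining()) {
      int n = in.read(buf); // ByteBufferReadable: reads directly into the buffer
      if (n < 0) {
        break;              // EOF before the buffer was filled
      }
      total += n;
    }
    return total;
  }
}
{code}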

[~anoopsamjohn] - Is there anything I had missed out here. 

But I think the idea of Netty doing the ref counting helps us avoid doing the 
ref counting ourselves, which adds some complexity. Maybe we missed out some 
options - if so it would be great to know about them. Good one.

> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap 
> byte[], then copy the on-heap byte[] to the offheap bucket cache 
> asynchronously, and in my 100% get performance test I also observed some 
> frequent young gc. The largest memory footprint in the young gen should be 
> the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young gc. We did not implement this before because 
> there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports this now, so I think we can fix this now.
> Will provide a patch and some perf-comparison for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21874) Bucket cache on Persistent memory

2019-02-21 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774358#comment-16774358
 ] 

Wellington Chevreuil commented on HBASE-21874:
--

Thanks guys, I guess it's worth having this in our notes for documentation 
purposes once we get to a final version for the patch.

> Bucket cache on Persistent memory
> -
>
> Key: HBASE-21874
> URL: https://issues.apache.org/jira/browse/HBASE-21874
> Project: HBase
>  Issue Type: New Feature
>  Components: BucketCache
>Affects Versions: 3.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21874.patch, HBASE-21874.patch, 
> HBASE-21874_V2.patch, Pmem_BC.png
>
>
> Non volatile persistent memory devices are byte addressable like DRAM (for 
> eg. Intel DCPMM). Bucket cache implementation can take advantage of this new 
> memory type and can make use of the existing offheap data structures to serve 
> data directly from this memory area without having to bring the data to 
> onheap.
> The patch is a new IOEngine implementation that works with the persistent 
> memory.
> Note : Here we don't make use of the persistence nature of the device and 
> just make use of the big memory it provides.
> Performance numbers to follow. 
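
For illustration only, a configuration sketch: "hbase.bucketcache.ioengine" and
"hbase.bucketcache.size" are real configuration keys, but the "pmem:" engine
name and the path/size values below are assumptions about the patch under
review, not a released option.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PmemBucketCacheConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed engine syntax for the persistent-memory IOEngine in this patch.
    conf.set("hbase.bucketcache.ioengine", "pmem:/mnt/pmem0/bucketcache");
    conf.setInt("hbase.bucketcache.size", 16384); // MB, illustrative
    System.out.println(conf.get("hbase.bucketcache.ioengine"));
  }
}
{code}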



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21943) The usage of RegionLocations.mergeRegionLocations is wrong for async client

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774325#comment-16774325
 ] 

Hadoop QA commented on HBASE-21943:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 11s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
48s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}298m 20s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}361m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.quotas.TestSpaceQuotas |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | 

[jira] [Commented] (HBASE-21874) Bucket cache on Persistent memory

2019-02-21 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774312#comment-16774312
 ] 

Anoop Sam John commented on HBASE-21874:


Thanks [~wchevreuil]. I also checked with this config today.

> Bucket cache on Persistent memory
> -
>
> Key: HBASE-21874
> URL: https://issues.apache.org/jira/browse/HBASE-21874
> Project: HBase
>  Issue Type: New Feature
>  Components: BucketCache
>Affects Versions: 3.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21874.patch, HBASE-21874.patch, 
> HBASE-21874_V2.patch, Pmem_BC.png
>
>
> Non volatile persistent memory devices are byte addressable like DRAM (for 
> eg. Intel DCPMM). Bucket cache implementation can take advantage of this new 
> memory type and can make use of the existing offheap data structures to serve 
> data directly from this memory area without having to bring the data to 
> onheap.
> The patch is a new IOEngine implementation that works with the persistent 
> memory.
> Note : Here we don't make use of the persistence nature of the device and 
> just make use of the big memory it provides.
> Performance numbers to follow. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774304#comment-16774304
 ] 

stack commented on HBASE-21730:
---

Very nice. I like the diagram. I'd suggest you remove the old description 
rather than appending this at the end; your description reflects how it works 
now. Thanks.

> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-9888) HBase replicates edits written before the replication peer is created

2019-02-21 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774289#comment-16774289
 ] 

Pankaj Kumar commented on HBASE-9888:
-

The WALKeyWriteTimeBasedFilter is set as an internal filter in the 003 patch.

> HBase replicates edits written before the replication peer is created
> -
>
> Key: HBASE-9888
> URL: https://issues.apache.org/jira/browse/HBASE-9888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Dave Latham
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-9888.002.patch, HBASE-9888.003.patch, 
> HBASE-9888.branch-1.001.patch, HBASE-9888.branch-1.002.patch, 
> HBASE-9888.branch-2.002.patch, HBASE-9888.branch-2.patch, 
> HBASE-9888.branch-2.patch, HBASE-9888.patch
>
>
> When creating a new replication peer the ReplicationSourceManager enqueues 
> the currently open HLog to the ReplicationSource to ship to the destination 
> cluster.  The ReplicationSource starts at the beginning of the HLog and ships 
> over any pre-existing writes.
> A workaround is to roll all the HLogs before enabling replication.
> A little background for how it affected us - we were migrating one cluster in 
> a master-master pair, i.e. transitioning from A <-> B to B <-> C. After 
> shutting down writes from A -> B we enabled writes from C -> B. However, 
> this replicated some earlier writes that were in C's HLogs that had 
> originated in A. Since we were running a version of HBase before HBASE-7709, 
> those writes then got caught in an infinite replication cycle, bringing 
> down region servers with OOMs because of HBASE-9865.
> However, in general, if one wants to manage what data gets replicated, one 
> wouldn't expect that potentially very old writes would be included when 
> setting up a new replication link.
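
For illustration, a sketch of the workaround mentioned above using the 2.x Admin
API (the peer id and cluster key are illustrative, and the description predates
this API): roll every live region server's WAL before creating the peer so that
pre-existing edits stay out of the newly shipped logs.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class RollWalsBeforePeerSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Close out the current WALs before the peer exists, so earlier edits
      // are not picked up by the new replication source.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(sn);
      }
      ReplicationPeerConfig peer = ReplicationPeerConfig.newBuilder()
          .setClusterKey("zk1,zk2,zk3:2181:/hbase") // illustrative cluster key
          .build();
      admin.addReplicationPeer("peerC", peer);      // illustrative peer id
    }
  }
}
{code}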



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774298#comment-16774298
 ] 

stack commented on HBASE-21934:
---

It looks like you've identified a 'hole' in our accounting. Good one 
[~tianjingyun]. Keeping account of ongoing dispatched calls makes sense. Pity we 
have to change all the Procedures to do it though. Wonder if there is a cleaner 
way of keeping account (probably not, but asking anyway). Thanks.

> SplitWALProcedure get stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.
>
> Attachments: HBASE-21934.master.001.patch
>
>
> I encountered a problem where, when the master assigns a splitWALRemoteProcedure 
> to a region server, the log of that region server says it failed to recover 
> the lease of the file. Then this region server is killed by chaosMonkey. As a 
> result, the procedure never times out and hangs there forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21941) Increment the default scanner timeout

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774287#comment-16774287
 ] 

stack commented on HBASE-21941:
---

[~psomogyi] You're good at this stuff... Any opinion in here sir?

> Increment the default scanner timeout
> -
>
> Key: HBASE-21941
> URL: https://issues.apache.org/jira/browse/HBASE-21941
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
>
> There are hbase.rpc.timeout and hbase.client.operation.timeout for client 
> operations except scan, and there is a special config 
> hbase.client.scanner.timeout.period for scan. If I am not wrong, this should be 
> the rpc timeout of a scan call, but now we use it as the operation timeout of a 
> scan call. The scan callable is complicated as we need to handle the replica case. 
> The real call with retry is made in 
> [https://github.com/apache/hbase/blob/9a55cbb2c1dfe5a13a6ceb323ac7edd23532f4b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java#L80|https://github.com/apache/hbase/blob/9a55cbb2c1dfe5a13a6ceb323ac7edd23532f4b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java#L80.]
>  . And the callTimeout there is configured by hbase.client.scanner.timeout.period. So 
> I think this is not right.
>  
> I met this problem when running ITBLL for branch-2.2. The verify map task failed 
> while scanning.
> {code:java}
> 2019-02-21 03:47:20,287 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: recovered from 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: 
> 2019-02-21 03:47:20,287 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Closing the 
> previously opened scanner object.
> 2019-02-21 03:47:20,331 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Current 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"\\xE1\\x9B\\xB4\\xF0\\xB3(JT\\xDC\\x86pf|y\\xF3\\xE9","stopRow":"","batch":-1,"cacheBlocks":false,"totalColumns":3,"maxResultSize":4194304,"families":{"big":["big"],"meta":["prev"],"tiny":["tiny"]},"caching":1,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  2019-02-21 03:47:20,335 INFO 
> [hconnection-0x7b44b63d-metaLookup-shared--pool4-t36] 
> org.apache.hadoop.hbase.client.ScannerCallable: Open 
> scanner=-4916858472898750097 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"IntegrationTestBigLinkedList,\\xE1\\x9B\\xB4\\xF0\\xB3(JT\\xDC\\x86pf|y\\xF3\\xE9,99","stopRow":"IntegrationTestBigLinkedList,,","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"info":["ALL"]},"caching":5,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region region=hbase:meta,,1.1588230740, 
> hostname=c4-hadoop-tst-st26.bj,29100,1550660298519, seqNum=-1
> 2019-02-21 03:48:20,354 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Mapper took 60023ms 
> to process 0 rows
> 2019-02-21 03:48:20,355 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: Thu Feb 21 03:48:20 CST 2019, null, 
> java.net.SocketTimeoutException: callTimeout=6, callDuration=60215: Call 
> to c4-hadoop-tst-st30.bj/10.132.2.41:29100 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=7102, 
> waitTime=60006, rpcTimeout=6 row 'ᛴ�(JT܆pf|y��' on table 
> 'IntegrationTestBigLinkedList' at 
> region=IntegrationTestBigLinkedList,\xDD\xDD\xDD\xDD\xDD\xDD\xDD\xDD,1550661322522.d5d29d2f1e8fee42d666c117709c3a46.,
>  hostname=c4-hadoop-tst-st30.bj,29100,1550652984371, seqNum=1007960 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: Thu Feb 21 03:48:20 CST 2019, null, 
> java.net.SocketTimeoutException: callTimeout=6, callDuration=60215: Call 
> to c4-hadoop-tst-st30.bj/10.132.2.41:29100 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=7102, 
> waitTime=60006, rpcTimeout=6 row 'ᛴ�(JT܆pf|y��' on table 
> 'IntegrationTestBigLinkedList' at 
> region=IntegrationTestBigLinkedList,\xDD\xDD\xDD\xDD\xDD\xDD\xDD\xDD,1550661322522.d5d29d2f1e8fee42d666c117709c3a46.,
>  hostname=c4-hadoop-tst-st30.bj,29100,1550652984371, seqNum=1007960 at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:299)
>  at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
>  at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>  at 
> 

[jira] [Commented] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774293#comment-16774293
 ] 

Hadoop QA commented on HBASE-21934:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hbase-procedure: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
1s{color} | {color:red} hbase-server: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}256m 51s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}302m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedureWithReplicas |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | hadoop.hbase.TestCheckTestClasses |
|   | hadoop.hbase.client.TestAdmin1 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | 

[jira] [Updated] (HBASE-9888) HBase replicates edits written before the replication peer is created

2019-02-21 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-9888:

Attachment: HBASE-9888.003.patch

> HBase replicates edits written before the replication peer is created
> -
>
> Key: HBASE-9888
> URL: https://issues.apache.org/jira/browse/HBASE-9888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Dave Latham
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-9888.002.patch, HBASE-9888.003.patch, 
> HBASE-9888.branch-1.001.patch, HBASE-9888.branch-1.002.patch, 
> HBASE-9888.branch-2.002.patch, HBASE-9888.branch-2.patch, 
> HBASE-9888.branch-2.patch, HBASE-9888.patch
>
>
> When creating a new replication peer the ReplicationSourceManager enqueues 
> the currently open HLog to the ReplicationSource to ship to the destination 
> cluster.  The ReplicationSource starts at the beginning of the HLog and ships 
> over any pre-existing writes.
> A workaround is to roll all the HLogs before enabling replication.
> A little background for how it affected us - we were migrating one cluster in 
> a master-master pair.  I.e. transitioning from A <\-> B to B <-> C.  After 
> shutting down writes from A -> B we enabled writes from C -> B.  However, 
> this replicated some earlier writes that were in C's HLogs that had 
> originated in A.  Since we were running a version of HBase before HBASE-7709 
> those writes then got caught in an infinite replication cycle, bringing 
> down region servers with OOMs because of HBASE-9865.
> However, in general, if one wants to manage what data gets replicated, one 
> wouldn't expect that potentially very old writes would be included when 
> setting up a new replication link.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774285#comment-16774285
 ] 

stack commented on HBASE-21867:
---

I took a quick look. Seems good [~brfrn169]. Is this a workaround for the fact that 
we can't do bulk operations against hdfs, i.e. pass it a bunch of files at a 
time to delete? Thanks.

> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.
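A minimal sketch of the multi-threaded archiving idea, where a plain rename into the archive directory stands in for the real per-region HFileArchiver work; the class name, method name, and directory arguments are all invented for illustration:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ParallelRegionArchiver {
  private ParallelRegionArchiver() {}

  /** Moves every region directory under tableTmpDir into archiveDir using nThreads workers. */
  public static void archiveRegions(FileSystem fs, Path tableTmpDir, Path archiveDir, int nThreads)
      throws IOException, InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(nThreads);
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (FileStatus region : fs.listStatus(tableTmpDir)) {
        if (!region.isDirectory()) {
          continue;
        }
        futures.add(pool.submit(() -> {
          // Stand-in for HFileArchiver's per-region work: move the directory under the archive.
          Path target = new Path(archiveDir, region.getPath().getName());
          if (!fs.rename(region.getPath(), target)) {
            throw new IOException("Failed to archive " + region.getPath());
          }
          return null;
        }));
      }
      for (Future<?> f : futures) {
        f.get(); // surface the first failure, if any
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}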



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21057) upgrade to latest spotbugs

2019-02-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774230#comment-16774230
 ] 

Sean Busbey commented on HBASE-21057:
-

nope. I must have gotten derailed by something. I should have another block of 
community review time tomorrow.

> upgrade to latest spotbugs
> --
>
> Key: HBASE-21057
> URL: https://issues.apache.org/jira/browse/HBASE-21057
> Project: HBase
>  Issue Type: Task
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: kevin su
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 1.6.0, 2.2.0
>
> Attachments: HBASE-21057.master.001.patch, HBASE-21057.v0.patch, 
> HBASE-21057.v1.patch, HBASE-21057.v1.patch
>
>
> we currently rely on [spotbugs definitions from 
> 3.1.0-RC3|https://github.com/spotbugs/spotbugs/releases/tag/3.1.0_RC3], which 
> was a pre-release candidate from Jun 2017.
> [spotbugs version 
> 3.1.6|https://github.com/spotbugs/spotbugs/releases/tag/3.1.6] came out about 
> a month ago. We should update to the latest.
> they also have their own maven plugin now. as a stretch goal we could switch 
> over to that, if it works with yetus.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21820) Implement CLUSTER quota scope

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774237#comment-16774237
 ] 

Hadoop QA commented on HBASE-21820:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} The patch passed checkstyle in hbase-client {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} hbase-server: The patch generated 0 new + 11 
unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} The patch passed checkstyle in hbase-shell {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
7s{color} | {color:red} The patch generated 13 new + 123 unchanged - 8 fixed = 
136 total (was 131) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  8s{color} | {color:orange} The patch generated 44 new + 413 unchanged - 0 
fixed = 457 total (was 413) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}314m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
51s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774225#comment-16774225
 ] 

Hudson commented on HBASE-21867:


Results for branch branch-2
[build #1700 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1700/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1700//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1700//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1700//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable (and RS may run after losing its znode)

2019-02-21 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774233#comment-16774233
 ] 

Pankaj Kumar commented on HBASE-14498:
--

TestServerCrashProcedureCarryingMetaStuck & TestClientOperationTimeout are 
passing locally, retry QA.

> Master stuck in infinite loop when all Zookeeper servers are unreachable (and 
> RS may run after losing its znode)
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0, 2.2.0
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498-V6.patch, 
> HBASE-14498-V6.patch, HBASE-14498-addendum.patch, 
> HBASE-14498-branch-1.2.patch, HBASE-14498-branch-1.3-V2.patch, 
> HBASE-14498-branch-1.3.patch, HBASE-14498-branch-1.4.patch, 
> HBASE-14498-branch-1.patch, HBASE-14498.007.patch, HBASE-14498.008.patch, 
> HBASE-14498.009.patch, HBASE-14498.009.patch, HBASE-14498.master.001.patch, 
> HBASE-14498.master.002.patch, HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In a HA cluster,
> > Active Master (HM1) is not able to connect to any Zookeeper server (due to 
> > N/w breakdown on master machine network with Zookeeper servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen at the Zookeeper server side and HM1 didn't abort.
> > On Zookeeper session timeout the standby master (HM2) registered itself as the 
> > active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 

[jira] [Updated] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable (and RS may run after losing its znode)

2019-02-21 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14498:
-
Attachment: HBASE-14498.009.patch

> Master stuck in infinite loop when all Zookeeper servers are unreachable (and 
> RS may run after losing its znode)
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0, 2.2.0
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498-V6.patch, 
> HBASE-14498-V6.patch, HBASE-14498-addendum.patch, 
> HBASE-14498-branch-1.2.patch, HBASE-14498-branch-1.3-V2.patch, 
> HBASE-14498-branch-1.3.patch, HBASE-14498-branch-1.4.patch, 
> HBASE-14498-branch-1.patch, HBASE-14498.007.patch, HBASE-14498.008.patch, 
> HBASE-14498.009.patch, HBASE-14498.009.patch, HBASE-14498.master.001.patch, 
> HBASE-14498.master.002.patch, HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In a HA cluster,
> > Active Master (HM1) is not able to connect to any Zookeeper server (due to 
> > N/w breakdown on master machine network with Zookeeper servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen at the Zookeeper server side and HM1 didn't abort.
> > On Zookeeper session timeout the standby master (HM2) registered itself as the 
> > active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
> interval of 1500 ms. | 
> 

[jira] [Updated] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17094:
--
Attachment: HBASE-17094.v0.patch

> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman]. Let's add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.
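A rough sketch of what such tooling could look like in Java, walking the rendered site and emitting a sitemap.xml; the site directory, output path, and base URL are assumptions, and the real dev-support tooling may well end up being a script instead:

{code:java}
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

/** Illustration: generate target/site/sitemap.xml from the rendered .html pages. */
public class SitemapGenerator {
  public static void main(String[] args) throws IOException {
    Path siteDir = Paths.get(args.length > 0 ? args[0] : "target/site");
    String baseUrl = "https://hbase.apache.org/";
    try (PrintWriter out =
             new PrintWriter(Files.newBufferedWriter(siteDir.resolve("sitemap.xml")));
         Stream<Path> pages = Files.walk(siteDir)) {
      out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
      out.println("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">");
      pages.filter(p -> p.toString().endsWith(".html"))
           .forEach(p -> out.println("  <url><loc>" + baseUrl
               + siteDir.relativize(p).toString().replace('\\', '/') + "</loc></url>"));
      out.println("</urlset>");
    }
  }
}
{code}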



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17094) Add a sitemap for hbase.apache.org

2019-02-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774218#comment-16774218
 ] 

stack commented on HBASE-17094:
---

Patch looks good [~pingsutw] Retrying build.

> Add a sitemap for hbase.apache.org
> --
>
> Key: HBASE-17094
> URL: https://issues.apache.org/jira/browse/HBASE-17094
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: stack
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-17094.v0.patch, HBASE-17094.v0.patch
>
>
> We don't have a sitemap. It was pointed out by [~mbrukman].  Lets add one. 
> Add tooling under dev-support so it gets autogenerated as part of site build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-02-21 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774211#comment-16774211
 ] 

Zheng Hu commented on HBASE-21879:
--

There is also a simple way to fix the refCnt now without replacing 
hbase.nio.ByteBuff with netty.ByteBuf: just let hbase.nio.ByteBuff extend 
io.netty.util.AbstractReferenceCounted and implement our own deallocate() 
method, and the refCnt will work. That seems not too hard, so we can move this 
issue forward first. 

After that, I can start to replace hbase.nio.ByteBuff with netty.ByteBuf in 
a new branch. I read the netty code today; it seems like a lot of work if we start the 
replacement, since ByteBuff is widely used in our project and lots of the BB 
interfaces are not compatible. What do you guys think?
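A rough sketch of the AbstractReferenceCounted direction described above, using a simplified stand-alone wrapper rather than the real hbase.nio.ByteBuff; the pooled buffer and the recycler callback are assumptions standing in for whatever allocator the block came from.

{code:java}
import java.nio.ByteBuffer;
import java.util.function.Consumer;

import io.netty.util.AbstractReferenceCounted;
import io.netty.util.ReferenceCounted;

/** Simplified sketch: a ref-counted holder that returns its buffer to a pool on release. */
public class RefCountedBlock extends AbstractReferenceCounted {
  private final ByteBuffer buf;
  private final Consumer<ByteBuffer> recycler; // hands the buffer back to its pool

  public RefCountedBlock(ByteBuffer buf, Consumer<ByteBuffer> recycler) {
    this.buf = buf;
    this.recycler = recycler;
  }

  public ByteBuffer buffer() {
    return buf;
  }

  @Override
  protected void deallocate() {
    // Called exactly once, when refCnt() drops to 0: recycle the backing buffer.
    recycler.accept(buf);
  }

  @Override
  public ReferenceCounted touch(Object hint) {
    return this; // no leak-tracking in this sketch
  }
}
{code}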


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: QPS-latencies-before-HBASE-21879.png, 
> gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap byte[], then 
> copy the on-heap byte[] to the offheap bucket cache asynchronously, and in my 
> 100% get performance test I also observed some frequent young gc. The 
> largest memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of into a 
> byte[] to reduce young gc. We did not implement this before 
> because there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports this now, so we can fix this now, I think. 
> Will provide a patch and some perf comparison for this.
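A hedged sketch of reading a block range straight into a ByteBuffer via the HDFS client's ByteBufferReadable path (available since 2.7); the helper name is made up, and real block-reading code would also handle checksums, the pread case, and streams that do not support ByteBuffer reads.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferBlockReader {
  private ByteBufferBlockReader() {}

  /**
   * Reads len bytes starting at offset directly into dest (which must have at least
   * len bytes remaining), avoiding an intermediate on-heap byte[].
   */
  public static void readFully(FSDataInputStream in, long offset, ByteBuffer dest, int len)
      throws IOException {
    dest.limit(dest.position() + len);
    in.seek(offset);
    while (dest.hasRemaining()) {
      // ByteBufferReadable path; throws UnsupportedOperationException if the
      // underlying stream cannot read into a ByteBuffer.
      int n = in.read(dest);
      if (n < 0) {
        throw new IOException("Premature EOF at position " + in.getPos());
      }
    }
  }
}
{code}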



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21938) Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region servers's name without metrics

2019-02-21 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-21938:
---
Status: Patch Available  (was: Open)

> Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region 
> servers's name without metrics
> --
>
> Key: HBASE-21938
> URL: https://issues.apache.org/jira/browse/HBASE-21938
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-21938.master.001.patch, 
> HBASE-21938.master.002.patch
>
>
> One of our production clusters (which has 20 regions) hit a protobuf 
> exception when calling getClusterStatus.
>  
> {code:java}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
> {code}
> And there are some client methods which call getClusterStatus but only need 
> server names. Plan to add a new option which only returns server names, so 
> we can reduce the scope of impact even when we hit this problem.
>  
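Assuming the patch adds ClusterMetrics.Option.SERVERS_NAME together with a ClusterMetrics#getServersName() accessor, as the title suggests, client code could then fetch only the names; a hedged sketch:

{code:java}
import java.util.EnumSet;
import java.util.List;

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public final class ListServerNames {
  private ListServerNames() {}

  /** Returns only the live region server names, without per-server metrics. */
  public static List<ServerName> serverNames(Connection connection) throws Exception {
    try (Admin admin = connection.getAdmin()) {
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.SERVERS_NAME));
      return metrics.getServersName(); // accessor assumed to be added by this patch
    }
  }
}
{code}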



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21938) Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region servers's name without metrics

2019-02-21 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-21938:
---
Attachment: HBASE-21938.master.002.patch

> Add a new ClusterMetrics.Option SERVERS_NAME to only return the live region 
> servers's name without metrics
> --
>
> Key: HBASE-21938
> URL: https://issues.apache.org/jira/browse/HBASE-21938
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-21938.master.001.patch, 
> HBASE-21938.master.002.patch
>
>
> One of our production clusters (which has 20 regions) hit a protobuf 
> exception when calling getClusterStatus.
>  
> {code:java}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
> {code}
> And there are some client methods which call getClusterStatus but only need 
> server names. Plan to add a new option which only returns server names, so 
> we can reduce the scope of impact even when we hit this problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774190#comment-16774190
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #110 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/110/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/110//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/110//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/110//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774166#comment-16774166
 ] 

Hadoop QA commented on HBASE-21867:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
33s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hbase-server: The patch generated 0 new + 40 
unchanged - 6 fixed = 40 total (was 46) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}263m 18s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}300m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestAssignmentManagerMetrics |
|   | hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs |
|   | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.replication.TestReplicationKillSlaveRS |
|   | hadoop.hbase.client.TestAdmin1 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21867 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959584/HBASE-21867.branch-2.1.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 637b6b9fa018 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.1 / ba02226302 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | 

[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774162#comment-16774162
 ] 

Hudson commented on HBASE-21867:


Results for branch branch-2.2
[build #57 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/57/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/57//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/57//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/57//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774054#comment-16774054
 ] 

Guanghao Zhang edited comment on HBASE-21922 at 2/21/19 1:04 PM:
-

The ROWPREFIX_DELIMITED bloom filter was introduced by HBASE-20636 but has not 
been released yet. So another solution is to remove this feature, as it breaks the 
assumption of row order. Ping [~andrewcheng] [~Apache9]  [~stack]


was (Author: zghaobac):
The ROWPREFIX_DELIMITED bloom filter was introduced by HBASE-20636. But not 
released now. So another soluation is remove this feature as this break the 
assumption of row order. 

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter, there are rowkeys: row1 is smaller than 
> row2
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When use ROWPREFIX_DELIMITED bloom filter, the rowkey write to bloom filter 
> are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom fitler: 123{code}
> The row1's key for bloom filter is bigger than row2. Then 
> BloomContext#sanityCheck will failed.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21922) BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774054#comment-16774054
 ] 

Guanghao Zhang commented on HBASE-21922:


The ROWPREFIX_DELIMITED bloom filter was introduced by HBASE-20636 but has not 
been released yet. So another solution is to remove this feature, as it breaks the 
assumption of row order. 

> BloomContext#sanityCheck may failed when use ROWPREFIX_DELIMITED bloom filter
> -
>
> Key: HBASE-21922
> URL: https://issues.apache.org/jira/browse/HBASE-21922
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21922.master.001.patch
>
>
> Assume we use '5' as the delimiter, there are rowkeys: row1 is smaller than 
> row2
> {code:java}
> row1: 12345xxx
> row2: 1235{code}
> When use ROWPREFIX_DELIMITED bloom filter, the rowkey write to bloom filter 
> are
> {code:java}
> row1's key for bloom filter: 1234
> row2's key for bloom fitler: 123{code}
> The row1's key for bloom filter is bigger than row2. Then 
> BloomContext#sanityCheck will failed.
> {code:java}
> private void sanityCheck(Cell cell) throws IOException {
>   if (this.getLastCell() != null) {
> LOG.debug("Current cell " + cell + ", prevCell = " + this.getLastCell());
> if (comparator.compare(cell, this.getLastCell()) <= 0) {
>   throw new IOException("Added a key not lexically larger than" + " 
> previous. Current cell = "
>   + cell + ", prevCell = " + this.getLastCell());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20724) Sometimes some compacted storefiles are still opened after region failover

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774052#comment-16774052
 ] 

Hadoop QA commented on HBASE-20724:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 18s{color} 
| {color:red} hbase-server generated 1 new + 187 unchanged - 1 fixed = 188 
total (was 188) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
28s{color} | {color:red} hbase-server: The patch generated 2 new + 105 
unchanged - 1 fixed = 107 total (was 106) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 22s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.quotas.TestSpaceQuotasWithSnapshots |
|   | hadoop.hbase.quotas.TestSnapshotQuotaObserverChore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | 

[jira] [Commented] (HBASE-21783) Support exceed user/table/ns throttle quota if region server has available quota

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774049#comment-16774049
 ] 

Hudson commented on HBASE-21783:


Results for branch master
[build #812 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/812/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support exceed user/table/ns throttle quota if region server has available 
> quota
> 
>
> Key: HBASE-21783
> URL: https://issues.apache.org/jira/browse/HBASE-21783
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: HBASE-21783.master.001.patch, 
> HBASE-21783.master.002.patch, HBASE-21783.master.003.patch, 
> HBASE-21783.master.004.patch, HBASE-21783.master.005.patch, 
> HBASE-21783.master.006.patch
>
>
> Currently, all types of rpc throttle quota (including region server, namespace, 
> table and user quota) are hard limits, which means that once requests exceed the 
> configured amount, they are throttled.
>  In some situations a user has used up all of their own quota while the region 
> server still has available quota because other users are not consuming at the 
> same time; in this case we can allow the user to consume additional quota. So 
> add a switch to enable or disable exceeding the other quotas (all except the 
> region server quota).
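
A minimal sketch of the proposed switch, under assumed names (SoftLimitQuotaChecker, exceedThrottleQuotaEnabled and SimpleRateLimiter are illustrative, not the actual org.apache.hadoop.hbase.quotas classes): the region server quota stays a hard limit, while a user/table/namespace quota may be exceeded when the switch is on and the region server still has spare capacity.
{code:java}
// Illustrative sketch only; not the real HBase quota code.
public class SoftLimitQuotaChecker {

  private final boolean exceedThrottleQuotaEnabled;    // the proposed switch
  private final SimpleRateLimiter userLimiter;         // user/table/namespace quota
  private final SimpleRateLimiter regionServerLimiter; // always a hard limit

  public SoftLimitQuotaChecker(boolean exceedThrottleQuotaEnabled,
      SimpleRateLimiter userLimiter, SimpleRateLimiter regionServerLimiter) {
    this.exceedThrottleQuotaEnabled = exceedThrottleQuotaEnabled;
    this.userLimiter = userLimiter;
    this.regionServerLimiter = regionServerLimiter;
  }

  /** Returns true if the request may proceed. */
  public boolean checkQuota(long requestUnits) {
    // The region server quota is never exceeded, so charge it first.
    if (!regionServerLimiter.tryAcquire(requestUnits)) {
      return false;
    }
    if (userLimiter.tryAcquire(requestUnits)) {
      return true; // within the user's own quota
    }
    // Over the user quota: allowed only when the exceed switch is on,
    // i.e. the user borrows the region server's spare capacity.
    return exceedThrottleQuotaEnabled;
  }

  /** Trivial bucket used only for this sketch. */
  public static class SimpleRateLimiter {
    private long available;
    public SimpleRateLimiter(long available) { this.available = available; }
    public synchronized boolean tryAcquire(long units) {
      if (units > available) {
        return false;
      }
      available -= units;
      return true;
    }
  }
}
{code}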



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21867) Support multi-threads in HFileArchiver

2019-02-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774048#comment-16774048
 ] 

Hudson commented on HBASE-21867:


Results for branch master
[build #812 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/812/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/812//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Support multi-threads in HFileArchiver
> --
>
> Key: HBASE-21867
> URL: https://issues.apache.org/jira/browse/HBASE-21867
> Project: HBase
>  Issue Type: Improvement
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-21867.branch-2.001.patch, 
> HBASE-21867.branch-2.1.001.patch, HBASE-21867.master.001.patch, 
> HBASE-21867.master.002.patch, HBASE-21867.master.002.patch, 
> HBASE-21867.master.003.patch, HBASE-21867.master.004.patch
>
>
> As of now, when deleting a table, we do the following regarding the 
> filesystem layout:
> 1. Move the table data to the temp directory (hbase/.tmp)
> 2. Archive the region directories of the table in the temp directory one by 
> one:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java#L319-L323
> However, step 2 will take a long time when the table has a huge number of 
> regions. So I propose doing step 2 with multiple threads in this Jira. 
> Also, during master startup, we do the same process as step 2:
> https://github.com/apache/hbase/blob/b322d0a3e552dc228893408161fd3fb20f6b8bf1/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java#L313-L319
> We should make it multi-threaded, similarly.
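
A minimal sketch of what a multi-threaded step 2 could look like, assuming a hypothetical per-region helper archiveRegionDir; the class name, pool size and method names are illustrative, not the actual HFileArchiver API.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ParallelRegionArchiver {

  /**
   * Archive every region directory of a table with a fixed-size thread pool
   * instead of the current sequential loop. archiveRegionDir stands in for
   * the real per-region archiving call.
   */
  public static void archiveRegions(List<String> regionDirs, int threads)
      throws IOException, InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (String dir : regionDirs) {
        futures.add(pool.submit(() -> archiveRegionDir(dir)));
      }
      for (Future<?> f : futures) {
        try {
          f.get(); // surface the first archiving failure, if any
        } catch (ExecutionException e) {
          throw new IOException("Failed to archive a region directory", e.getCause());
        }
      }
    } finally {
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
    }
  }

  // Placeholder for moving one region's HFiles under the archive directory.
  private static void archiveRegionDir(String regionDir) {
    System.out.println("archiving " + regionDir);
  }
}
{code}
In the real code the per-region step would move HFiles on the FileSystem; the sketch only shows the fan-out/join structure that would replace the sequential loop.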



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774035#comment-16774035
 ] 

Hadoop QA commented on HBASE-21730:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
56s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m  
0s{color} | {color:blue} patch has no errors when building the reference guide. 
See footer for rendered docs, which you should manually inspect. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959606/HBASE-21730.master.001.patch
 |
| Optional Tests |  dupname  asflicense  refguide  mvnsite  |
| uname | Linux 476f419ad57e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9a55cbb2c1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16075/artifact/patchprocess/branch-site/book.html
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16075/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 97 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16075/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21934) SplitWALProcedure get stuck during ITBLL

2019-02-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21934:
-
Affects Version/s: 2.x

> SplitWALProcedure get stuck during ITBLL
> 
>
> Key: HBASE-21934
> URL: https://issues.apache.org/jira/browse/HBASE-21934
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.x
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-21934.master.001.patch
>
>
> I encountered a problem when the master assigned a splitWALRemoteProcedure to 
> a region server: the log of that region server says it failed to recover the 
> lease of the WAL file, and then the region server was killed by chaosMonkey. As a 
> result, the procedure never times out and hangs there forever.
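
The kind of timeout guard the report says is missing can be illustrated with a small, self-contained sketch (splitWithDeadline and the other names below are hypothetical, not the actual SplitWALProcedure or procedure-v2 code): if the remote work does not report back within a deadline, the caller gives up and can reschedule instead of hanging forever.
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RemoteCallTimeoutSketch {

  /**
   * If the remote split task does not finish within the deadline, stop
   * waiting so the caller can reschedule it on another worker.
   */
  static boolean splitWithDeadline(Callable<Boolean> remoteSplitTask, long timeoutMs)
      throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<Boolean> result = pool.submit(remoteSplitTask);
    try {
      return result.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      result.cancel(true); // the worker is unresponsive (e.g. killed)
      return false;        // caller retries elsewhere instead of hanging
    } finally {
      pool.shutdownNow();
    }
  }

  public static void main(String[] args) throws Exception {
    // Simulate a region server that never answers after being killed.
    boolean ok = splitWithDeadline(() -> {
      Thread.sleep(Long.MAX_VALUE);
      return true;
    }, 500);
    System.out.println("split finished = " + ok);
  }
}
{code}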



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21941) Increment the default scanner timeout

2019-02-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774000#comment-16774000
 ] 

Guanghao Zhang commented on HBASE-21941:


Checked 
[https://github.com/apache/hbase/blob/9a55cbb2c1dfe5a13a6ceb323ac7edd23532f4b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncScanSingleRegionRpcRetryingCaller.java#L546]
 . It uses the remaining hbase.client.scanner.timeout.period as the rpc timeout 
for the scan rpc call.
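
A minimal sketch of the deadline arithmetic described above, with illustrative names only (nextCallTimeoutMs, scanStartNanos); it is not the actual AsyncScanSingleRegionRpcRetryingCaller code, just the idea that each scan rpc gets whatever remains of hbase.client.scanner.timeout.period.
{code:java}
import java.util.concurrent.TimeUnit;

public class ScanTimeoutSketch {

  /**
   * Compute the timeout of the next scan rpc as whatever is left of
   * hbase.client.scanner.timeout.period, optionally capped by an rpc timeout.
   */
  static long nextCallTimeoutMs(long scannerTimeoutPeriodMs, long rpcTimeoutMs,
      long scanStartNanos) {
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - scanStartNanos);
    long remainingMs = Math.max(1, scannerTimeoutPeriodMs - elapsedMs);
    return rpcTimeoutMs > 0 ? Math.min(rpcTimeoutMs, remainingMs) : remainingMs;
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.nanoTime();
    Thread.sleep(100); // pretend earlier scan rpcs already consumed part of the budget
    // With a 60s scanner timeout period and no separate rpc timeout, the
    // remaining ~59.9s becomes the timeout of the next scan rpc.
    System.out.println(nextCallTimeoutMs(60_000, -1, start));
  }
}
{code}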

> Increment the default scanner timeout
> -
>
> Key: HBASE-21941
> URL: https://issues.apache.org/jira/browse/HBASE-21941
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
>
> There are hbase.rpc.timeout and hbase.client.operation.timeout for every client 
> operation except scan. And there is a special config, 
> hbase.client.scanner.timeout.period, for scan. If I am not wrong, this should be 
> the rpc timeout of the scan call. But now we use it as the operation timeout of 
> the scan call. The scan callable is complicated, as we need to handle the replica 
> case. The real call with retry happens in 
> [https://github.com/apache/hbase/blob/9a55cbb2c1dfe5a13a6ceb323ac7edd23532f4b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java#L80|https://github.com/apache/hbase/blob/9a55cbb2c1dfe5a13a6ceb323ac7edd23532f4b5/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java#L80.]
>  . And the callTimeout is configured from hbase.client.scanner.timeout.period. So 
> I think this is not right.
>  
> I met this problem when running ITBLL for branch-2.2. The verify map task failed 
> while scanning.
> {code:java}
> 2019-02-21 03:47:20,287 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: recovered from 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: 
> 2019-02-21 03:47:20,287 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Closing the 
> previously opened scanner object.
> 2019-02-21 03:47:20,331 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Current 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"\\xE1\\x9B\\xB4\\xF0\\xB3(JT\\xDC\\x86pf|y\\xF3\\xE9","stopRow":"","batch":-1,"cacheBlocks":false,"totalColumns":3,"maxResultSize":4194304,"families":{"big":["big"],"meta":["prev"],"tiny":["tiny"]},"caching":1,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  2019-02-21 03:47:20,335 INFO 
> [hconnection-0x7b44b63d-metaLookup-shared--pool4-t36] 
> org.apache.hadoop.hbase.client.ScannerCallable: Open 
> scanner=-4916858472898750097 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"IntegrationTestBigLinkedList,\\xE1\\x9B\\xB4\\xF0\\xB3(JT\\xDC\\x86pf|y\\xF3\\xE9,99","stopRow":"IntegrationTestBigLinkedList,,","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"info":["ALL"]},"caching":5,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region region=hbase:meta,,1.1588230740, 
> hostname=c4-hadoop-tst-st26.bj,29100,1550660298519, seqNum=-1
> 2019-02-21 03:48:20,354 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: Mapper took 60023ms 
> to process 0 rows
> 2019-02-21 03:48:20,355 INFO [main] 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: Thu Feb 21 03:48:20 CST 2019, null, 
> java.net.SocketTimeoutException: callTimeout=6, callDuration=60215: Call 
> to c4-hadoop-tst-st30.bj/10.132.2.41:29100 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=7102, 
> waitTime=60006, rpcTimeout=6 row 'ᛴ�(JT܆pf|y��' on table 
> 'IntegrationTestBigLinkedList' at 
> region=IntegrationTestBigLinkedList,\xDD\xDD\xDD\xDD\xDD\xDD\xDD\xDD,1550661322522.d5d29d2f1e8fee42d666c117709c3a46.,
>  hostname=c4-hadoop-tst-st30.bj,29100,1550652984371, seqNum=1007960 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=16, exceptions: Thu Feb 21 03:48:20 CST 2019, null, 
> java.net.SocketTimeoutException: callTimeout=6, callDuration=60215: Call 
> to c4-hadoop-tst-st30.bj/10.132.2.41:29100 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=7102, 
> waitTime=60006, rpcTimeout=6 row 'ᛴ�(JT܆pf|y��' on table 
> 'IntegrationTestBigLinkedList' at 
> region=IntegrationTestBigLinkedList,\xDD\xDD\xDD\xDD\xDD\xDD\xDD\xDD,1550661322522.d5d29d2f1e8fee42d666c117709c3a46.,
>  hostname=c4-hadoop-tst-st30.bj,29100,1550652984371, seqNum=1007960 at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:299)
>  at 
> 

[jira] [Updated] (HBASE-21730) Update HBase-book with the procedure based WAL splitting

2019-02-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21730:
-
Status: Patch Available  (was: Open)

> Update HBase-book with the procedure based WAL splitting
> 
>
> Key: HBASE-21730
> URL: https://issues.apache.org/jira/browse/HBASE-21730
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Attachments: HBASE-21730.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

