[jira] [Commented] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312642#comment-16312642
 ] 

Duo Zhang commented on HBASE-19711:
---

I think peer-related operations are rare, so let's name it tryRemovePeerQueue, 
and always call it when a PeerProcedureInterface is finished, not only for 
procedures whose operation type is REMOVE?
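
A minimal sketch of that idea, assuming the scheduler keeps one queue per peer id; completionCleanup, peerQueues and isPeerLocked are illustrative names, not the actual MasterProcedureScheduler code:

{code}
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: invoke tryRemovePeerQueue after *every* finished
// peer procedure, not just REMOVE; it is a no-op while the queue is still in use.
class PeerQueueCleanup {
  private final Map<String, Queue<Runnable>> peerQueues = new ConcurrentHashMap<>();

  // Called by the scheduler when any PeerProcedureInterface finishes.
  void completionCleanup(String peerId) {
    tryRemovePeerQueue(peerId);
  }

  private void tryRemovePeerQueue(String peerId) {
    Queue<Runnable> queue = peerQueues.get(peerId);
    // Drop the per-peer queue only when it is empty and nobody holds its lock.
    if (queue != null && queue.isEmpty() && !isPeerLocked(peerId)) {
      peerQueues.remove(peerId);
    }
  }

  private boolean isPeerLocked(String peerId) {
    return false; // placeholder for the real exclusive-lock check
  }
}
{code}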

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
> Fix For: HBASE-19397
>
> Attachments: HBASE-19711.HBASE-19397.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-19711:
--

Assignee: Guanghao Zhang

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
> Fix For: HBASE-19397
>
> Attachments: HBASE-19711.HBASE-19397.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19711:
---
Fix Version/s: HBASE-19397

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
> Fix For: HBASE-19397
>
> Attachments: HBASE-19711.HBASE-19397.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19711:
---
Status: Patch Available  (was: Open)

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19711.HBASE-19397.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19711:
---
Attachment: HBASE-19711.HBASE-19397.001.patch

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
> Attachments: HBASE-19711.HBASE-19397.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19707) Race in start and terminate of a replication source after we async start replication endpoint

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19707:
--
Status: Patch Available  (was: Open)

> Race in start and terminate of a replication source after we async start 
> replication endpoint
> --
>
> Key: HBASE-19707
> URL: https://issues.apache.org/jira/browse/HBASE-19707
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-19707-HBASE-19397.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19707) Race in start and terminate of a replication source after we async start replication endpoint

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19707:
--
Attachment: HBASE-19707-HBASE-19397.patch

> Race in start and terminate of a replication source after we async start 
> replication endpoint
> --
>
> Key: HBASE-19707
> URL: https://issues.apache.org/jira/browse/HBASE-19707
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-19707-HBASE-19397.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19707) Race in start and terminate of a replication source after we async start replication endpoint

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-19707:
-

Assignee: Duo Zhang

> Race in start and terminate of a replication source after we async start 
> replication endpoint
> --
>
> Key: HBASE-19707
> URL: https://issues.apache.org/jira/browse/HBASE-19707
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19711:
--
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-19397

> TestReplicationAdmin.testConcurrentPeerOperations hangs
> ---
>
> Key: HBASE-19711
> URL: https://issues.apache.org/jira/browse/HBASE-19711
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312526#comment-16312526
 ] 

Duo Zhang edited comment on HBASE-19636 at 1/5/18 6:07 AM:
---

Pushed to branch HBASE-19397. Thanks [~zghaobac] for contributing.


was (Author: apache9):
Pushed to branch HBASE-19397. Thanks [~zghaobac] for reviewing.

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: HBASE-19397
>
> Attachments: HBASE-19636-HBASE-19397-v5.patch, 
> HBASE-19636.HBASE-19397.001.patch, HBASE-19636.HBASE-19397.002.patch, 
> HBASE-19636.HBASE-19397.003.patch, HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronously track the zk event to start working with the new 
> peer change. When replication peer operations use a procedure, we need to make 
> sure this process is synchronous: all rs should already be working with the 
> new peer change when the procedure is finished.
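
A minimal sketch of that synchronous contract, under the assumption of a simple fan-out-and-wait; SyncPeerChange and refreshPeerOn are illustrative names, not the actual RefreshPeerProcedure code:

{code}
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only: the peer update does not return until every
// region server has acknowledged the change, unlike the zk-watcher path
// where each rs picks it up asynchronously.
class SyncPeerChange {
  void updatePeer(String peerId, List<String> regionServers) {
    persistPeerConfig(peerId); // update the peer storage first
    CompletableFuture<?>[] acks = regionServers.stream()
        .map(rs -> refreshPeerOn(rs, peerId))
        .toArray(CompletableFuture<?>[]::new);
    CompletableFuture.allOf(acks).join(); // finish only after all rs applied it
  }

  private void persistPeerConfig(String peerId) { /* placeholder */ }

  private CompletableFuture<Void> refreshPeerOn(String rs, String peerId) {
    return CompletableFuture.completedFuture(null); // placeholder RPC to the rs
  }
}
{code}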



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19636:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-19397
   Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-19397. Thanks [~zghaobac] for reviewing.

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: HBASE-19397
>
> Attachments: HBASE-19636-HBASE-19397-v5.patch, 
> HBASE-19636.HBASE-19397.001.patch, HBASE-19636.HBASE-19397.002.patch, 
> HBASE-19636.HBASE-19397.003.patch, HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronously track the zk event to start working with the new 
> peer change. When replication peer operations use a procedure, we need to make 
> sure this process is synchronous: all rs should already be working with the 
> new peer change when the procedure is finished.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312521#comment-16312521
 ] 

Duo Zhang commented on HBASE-19636:
---

Will commit after fixing the checkstyle issue.

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19636-HBASE-19397-v5.patch, 
> HBASE-19636.HBASE-19397.001.patch, HBASE-19636.HBASE-19397.002.patch, 
> HBASE-19636.HBASE-19397.003.patch, HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronously track the zk event to start working with the new 
> peer change. When replication peer operations use a procedure, we need to make 
> sure this process is synchronous: all rs should already be working with the 
> new peer change when the procedure is finished.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19711) TestReplicationAdmin.testConcurrentPeerOperations hangs

2018-01-04 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19711:
-

 Summary: TestReplicationAdmin.testConcurrentPeerOperations hangs
 Key: HBASE-19711
 URL: https://issues.apache.org/jira/browse/HBASE-19711
 Project: HBase
  Issue Type: Bug
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312492#comment-16312492
 ] 

Hadoop QA commented on HBASE-19708:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.1 2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}110m  
9s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-19708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904717/HBASE-19708-branch-1.patch
 |
| Optional Tests |  asflicense  javac  javadoc  

[jira] [Commented] (HBASE-19506) Support variable sized chunks from ChunkCreator

2018-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312475#comment-16312475
 ] 

ramkrishna.s.vasudevan commented on HBASE-19506:


bq. As for solution, we suggest to create another pool for "small" chunks in 
ChunkCreator. Let's say chunks of 256KB size.
Yes, the idea is right. There will be an index chunk pool and a data chunk pool. How do 
we ensure we don't run out of space in this index chunk pool?
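
A minimal sketch of the two-pool idea, with an on-demand fallback as one possible answer to the "run out of space" question; the sizes, names and fallback policy are assumptions, not the actual ChunkCreator code:

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch only: separate pools for data chunks and the smaller
// index chunks, falling back to an on-demand allocation when a pool is empty.
class TwoPoolChunkCreator {
  static final int DATA_CHUNK_SIZE = 2 * 1024 * 1024; // default 2MB data chunk
  static final int INDEX_CHUNK_SIZE = 256 * 1024;     // "small" index chunk

  private final BlockingQueue<ByteBuffer> dataPool = new ArrayBlockingQueue<>(128);
  private final BlockingQueue<ByteBuffer> indexPool = new ArrayBlockingQueue<>(512);

  ByteBuffer getChunk(boolean forIndex) {
    BlockingQueue<ByteBuffer> pool = forIndex ? indexPool : dataPool;
    ByteBuffer chunk = pool.poll();
    // If the pool is exhausted, fall back to an on-demand allocation
    // (costlier, especially off-heap) instead of blocking the caller.
    return chunk != null ? chunk
        : ByteBuffer.allocateDirect(forIndex ? INDEX_CHUNK_SIZE : DATA_CHUNK_SIZE);
  }

  void returnChunk(ByteBuffer chunk) {
    BlockingQueue<ByteBuffer> pool =
        chunk.capacity() == INDEX_CHUNK_SIZE ? indexPool : dataPool;
    chunk.clear();
    pool.offer(chunk); // silently dropped if the pool is already full
  }
}
{code}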

> Support variable sized chunks from ChunkCreator
> ---
>
> Key: HBASE-19506
> URL: https://issues.apache.org/jira/browse/HBASE-19506
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>
> When CellChunkMap is created, it allocates a special index chunk (or chunks) 
> where the array of cell-representations is stored. When the number of 
> cell-representations is small, it is preferable to allocate a chunk smaller 
> than the default value, which is 2MB.
> On the other hand, those "non-standard size" chunks cannot be used in the pool, 
> and on-demand allocations off-heap are costly. So this JIRA is to 
> investigate the trade-off between memory usage and the final performance. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19568) Restore of HBase table using incremental backup doesn't restore rows from an earlier incremental backup

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312450#comment-16312450
 ] 

Hadoop QA commented on HBASE-19568:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  3s{color} 
| {color:red} HBASE-19568 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-19568 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904480/HBASE-19568-v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10892/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Restore of HBase table using incremental backup doesn't restore rows from an 
> earlier incremental backup
> ---
>
> Key: HBASE-19568
> URL: https://issues.apache.org/jira/browse/HBASE-19568
> Project: HBase
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Vladimir Rodionov
> Attachments: HBASE-19568-v1.patch, HBASE-19568-v2.patch
>
>
> Credits to [~romil.choksi]
> Restore of bulk-loaded HBase table doesn't restore deleted rows
> Steps:
> Create usertable and insert a few rows in it
> Create full backup
> Bulk load into usertable, and create first incremental backup
> Bulk load into usertable again, and create second incremental backup
> Delete one row each from the initial insert, the first bulk load and the second bulk load
> Restore usertable using second incremental backup
> Verify if each of the deleted rows has been restored
> On restore using the second incremental backup id, the test failed: not all of the 
> rows from the first bulk load were available. Only data from the initial insertion 
> (full backup) and the second bulk load was available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19568) Restore of HBase table using incremental backup doesn't restore rows from an earlier incremental backup

2018-01-04 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-19568:
--
Status: Patch Available  (was: Open)

> Restore of HBase table using incremental backup doesn't restore rows from an 
> earlier incremental backup
> ---
>
> Key: HBASE-19568
> URL: https://issues.apache.org/jira/browse/HBASE-19568
> Project: HBase
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Vladimir Rodionov
> Attachments: HBASE-19568-v1.patch, HBASE-19568-v2.patch
>
>
> Credits to [~romil.choksi]
> Restore of bulk-loaded HBase table doesn't restore deleted rows
> Steps:
> Create usertable and insert a few rows in it
> Create full backup
> Bulk load into usertable, and create first incremental backup
> Bulk load into usertable again, and create second incremental backup
> Delete one row each from the initial insert, the first bulk load and the second bulk load
> Restore usertable using second incremental backup
> Verify if each of the deleted rows has been restored
> On restore using the second incremental backup id, the test failed: not all of the 
> rows from the first bulk load were available. Only data from the initial insertion 
> (full backup) and the second bulk load was available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312437#comment-16312437
 ] 

Hadoop QA commented on HBASE-19163:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} hbase-server: The patch generated 3 new + 336 
unchanged - 3 fixed = 339 total (was 339) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
51s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.1 2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestCatalogJanitorInMemoryStates |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-19163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904709/HBASE-19163-branch-1-v001.patch
 |
| 

[jira] [Commented] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312424#comment-16312424
 ] 

Chia-Ping Tsai commented on HBASE-19708:


LGTM

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch, HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312415#comment-16312415
 ] 

Chia-Ping Tsai commented on HBASE-19709:


bq. Or only 1 processor, so (int) (1 * 0.5) becomes zero.
It pays to add the log for the wrong config.

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.
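
A minimal sketch of the guarded calculation described above; the method and variable names are illustrative, not the actual CleanerChore code:

{code}
final class CleanerPoolSize {
  // Illustrative sketch only: whatever the configured value works out to,
  // never create a pool smaller than 1 thread.
  static int calculatePoolSize(String configured) {
    int size;
    if (configured.matches("[1-9][0-9]*")) {
      size = Integer.parseInt(configured);           // integer >= 1: use directly
    } else {
      double ratio = Double.parseDouble(configured); // 0.0 < ratio <= 1.0
      size = (int) (Runtime.getRuntime().availableProcessors() * ratio);
    }
    // Guard against 0: (int) (1 * 0.5) == 0, and ForkJoinPool rejects
    // a parallelism of 0 with IllegalArgumentException.
    return Math.max(1, size);
  }
}
{code}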



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312414#comment-16312414
 ] 

Andrew Purtell edited comment on HBASE-19708 at 1/5/18 3:05 AM:


Updated patch. Thanks for the review [~chia7712]


was (Author: apurtell):
Updated patch

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch, HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Attachment: HBASE-19708-branch-1.patch

Updated patch

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch, HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312411#comment-16312411
 ] 

Andrew Purtell commented on HBASE-19708:


bq. Does this change make another NPE when stopping RpcServer?

Maybe. I can remove it, actually; it's a leftover from an earlier change. 
Saving the address lookup to the field {{address}} is enough to fix the problem 
apparent in the stack trace. 
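
A minimal sketch of that fix, assuming the usual bind-time caching pattern; the field and method names are illustrative, not the actual RpcServer$Listener code:

{code}
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

// Illustrative sketch only: capture the bound address into a field once, so
// later callers never dereference an acceptChannel that a concurrent stop()
// may have closed or nulled.
class ListenerAddressCache {
  private ServerSocketChannel acceptChannel;
  private InetSocketAddress address; // cached at bind time, never nulled

  void bind(InetSocketAddress bindAddress) throws Exception {
    acceptChannel = ServerSocketChannel.open();
    acceptChannel.socket().bind(bindAddress);
    // Cache the address while the channel is guaranteed to be open.
    address = (InetSocketAddress) acceptChannel.socket().getLocalSocketAddress();
  }

  InetSocketAddress getAddress() {
    return address; // safe even after the accept channel has been closed
  }
}
{code}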

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312409#comment-16312409
 ] 

Chia-Ping Tsai commented on HBASE-19708:


Does this change make another NPE when stopping RpcServer? 
{code}
@@ -968,6 +971,7 @@ public class RpcServer implements RpcServerInterface, ConfigurationObserver {
       } catch (IOException e) {
         LOG.info(getName() + ": exception in closing listener socket. " + e);
       }
+      acceptChannel = null;
     }
     readPool.shutdownNow();
   }
{code}

Listener#run() also tries to close the {{Listener}} after the loop. 
{code:title=Listener.java}
public void run() {
  LOG.info(getName() + ": starting");
  while (running) {
    // bababa
  }

  LOG.info(getName() + ": stopping");

  synchronized (this) {
    try {
      acceptChannel.close();  // here
      selector.close();
    } catch (IOException ignored) {
      if (LOG.isTraceEnabled()) LOG.trace("ignored", ignored);
    }

    selector = null;
    acceptChannel = null;

    // clean up all connections
    while (!connectionList.isEmpty()) {
      closeConnection(connectionList.remove(0));
    }
  }
}
{code}

No wait exists between listener#interrupt and listener#doStop. It causes the NPE 
if acceptChannel#close is executed after listener#doStop.
{code:title=RpcServer.java}
@Override
public synchronized void stop() {
  LOG.info("Stopping server on " + port);
  running = false;
  if (authTokenSecretMgr != null) {
    authTokenSecretMgr.stop();
    authTokenSecretMgr = null;
  }
  listener.interrupt();
  listener.doStop();
  responder.interrupt();
  scheduler.stop();
  notifyAll();
}
{code} 
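
For illustration only (this is a generic defensive idiom, not the HBASE-19708 patch itself): one way to make such an ordering problem harmless is to take a local copy of the field and null-check it before closing, so the close is safe regardless of which thread gets there first:

{code}
import java.io.IOException;
import java.nio.channels.ServerSocketChannel;

// Illustrative sketch only: read the field once into a local, null it, then
// close the local copy, so a concurrent thread nulling the field cannot
// trigger an NPE between the check and the close.
class SafeClose {
  private volatile ServerSocketChannel acceptChannel;

  void closeAcceptChannel() {
    ServerSocketChannel channel = acceptChannel; // local copy, read once
    acceptChannel = null;
    if (channel != null) {
      try {
        channel.close();
      } catch (IOException ignored) {
        // best-effort close during shutdown
      }
    }
  }
}
{code}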


> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312408#comment-16312408
 ] 

Hadoop QA commented on HBASE-19708:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
59s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.1 2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 30s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.TestFullLogReconstruction |
|   | 
hadoop.hbase.security.visibility.TestVisibilityLabelsWithCustomVisLabService |
|   | hadoop.hbase.regionserver.TestEncryptionKeyRotation |
|   

[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312400#comment-16312400
 ] 

Reid Chan commented on HBASE-19709:
---

Or only 1 processor, so (int) (1 * 0.5) becomes zero.

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312396#comment-16312396
 ] 

Chia-Ping Tsai commented on HBASE-19709:


Is it a common case that {{Runtime.getRuntime().availableProcessors()}} returns 
0? If not, consider adding a debug/warn log there.

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19673) Backport " Periodically ensure records are not buffered too long by BufferedMutator" to branch-1

2018-01-04 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312394#comment-16312394
 ] 

Chia-Ping Tsai commented on HBASE-19673:


bq. What if we simply say: in 1.x you can only set it during construction. In 
2.x you can modify it afterwards.
ya, that is an acceptable workaround.

bq. We make BufferedMutatorPeriodicFlush IA.private.
BufferedMutatorPeriodicFlush is unnecessary I'd say.


> Backport " Periodically ensure records are not buffered too long by 
> BufferedMutator" to branch-1
> 
>
> Key: HBASE-19673
> URL: https://issues.apache.org/jira/browse/HBASE-19673
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: HBASE-19673.20171230-130631.branch-1.patch, 
> HBASE-19673.20171230-131955.branch-1.patch, 
> HBASE-19673.20171231-112539.branch-1.patch, 
> HBASE-19673.20180102-082937.branch-1.patch, 
> HBASE-19673.20180102-155006.branch-1.patch, 
> HBASE-19673.20180103-084857.branch-1.patch, 
> HBASE-19673.branch-1.20180103-170905.patch
>
>
> In HBASE-19486 the feature was built to periodically flush the 
> BufferedMutator.
> Because backwards compatibility is important in the 1.x branch, some 
> refactoring is needed to make this work.
> As requested by [~chia7712], this separate issue was created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18693) adding an option to restore_snapshot to move mob files from archive dir to working dir

2018-01-04 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312387#comment-16312387
 ] 

Jingcheng Du commented on HBASE-18693:
--

Thanks [~huaxiang]!
I am +1 to V3.

> adding an option to restore_snapshot to move mob files from archive dir to 
> working dir
> --
>
> Key: HBASE-18693
> URL: https://issues.apache.org/jira/browse/HBASE-18693
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0-alpha-2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18693.master.001.patch, 
> HBASE-18693.master.002.patch, HBASE-18693.master.003.patch
>
>
> Today, there is a single mob region where mob files for all user regions are 
> saved. There could be many files (one million) in a single mob directory. 
> When one mob table is restored or cloned from snapshot, links are created for 
> these mob files. This creates a scaling issue for mob compaction. In mob 
> compaction's select() logic, for each hFileLink, it needs to call NN's 
> getFileStatus() to get the size of the linked hfile. Assume that one such 
> call takes 20ms; 20ms * 1,000,000 files is roughly 6 hours. 
> To avoid this overhead, we want to add an option so that restore_snapshot can 
> move mob files from archive dir to working dir. clone_snapshot is more 
> complicated as it can clone a snapshot to a different table so moving that 
> can destroy the snapshot. No option will be added for clone_snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312353#comment-16312353
 ] 

Hadoop QA commented on HBASE-19709:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 28s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-19709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904691/HBASE-19709.001.branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 9508156df449 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / e35fec284d |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10888/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10888/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10888/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Guard against a ThreadPool size of 0 in CleanerChore
> 

[jira] [Updated] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2018-01-04 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19163:
-
Attachment: HBASE-19163-branch-1-v001.patch

Patch for branch-1; I removed the config knob for minibatchSize in the branch-1 version.

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-branch-1-v001.patch, 
> HBASE-19163-master-v001.patch, HBASE-19163.master.001.patch, 
> HBASE-19163.master.002.patch, HBASE-19163.master.004.patch, 
> HBASE-19163.master.005.patch, HBASE-19163.master.006.patch, 
> HBASE-19163.master.007.patch, HBASE-19163.master.008.patch, 
> HBASE-19163.master.009.patch, HBASE-19163.master.009.patch, 
> HBASE-19163.master.010.patch, unittest-case.diff
>
>
> In one of use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that the batch 
> contains too many mutations against the same row; this exceeds the maximum of 
> 64k shared lock holds, so an Error is thrown and the whole batch fails.
> There are two approaches to solve this issue:
> 1). When a batch contains multiple mutations against the same row, acquire the 
> row lock once per row instead of once per mutation.
> 2). Catch the error, process whatever has been acquired so far, and loop back 
> for the rest.
> With HBASE-17924, approach 1 seems easy to implement now.
> Creating the jira; will post updates/patches as the investigation moves forward.
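For illustration only, here is a minimal, self-contained sketch of what approach 1 above could look like: acquiring the shared row lock once per distinct row in a mini-batch rather than once per mutation. The Region/Mutation/RowLock interfaces below are stand-ins for the real HBase types; this is a sketch, not the attached patch.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class RowLockDedupSketch {
  // Stand-ins for the HBase types involved; illustrative only.
  interface RowLock { void release(); }
  interface Region { RowLock getRowLock(byte[] row, boolean readLock) throws IOException; }
  interface Mutation { byte[] getRow(); }

  // Acquire the shared row lock once per distinct row in the mini-batch,
  // instead of once per mutation, so a batch with many mutations on one row
  // holds a single lock rather than tens of thousands.
  static void applyMiniBatch(Region region, List<Mutation> miniBatch) throws IOException {
    Map<ByteBuffer, RowLock> acquired = new HashMap<>();
    try {
      for (Mutation m : miniBatch) {
        ByteBuffer rowKey = ByteBuffer.wrap(m.getRow());
        if (!acquired.containsKey(rowKey)) {
          acquired.put(rowKey, region.getRowLock(m.getRow(), true)); // shared (read) lock
        }
        // ... apply the mutation under the already-held shared lock ...
      }
    } finally {
      for (RowLock lock : acquired.values()) {
        lock.release(); // release each distinct row lock exactly once
      }
    }
  }
}
{code}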



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312288#comment-16312288
 ] 

Reid Chan commented on HBASE-19709:
---

Looks like {{private static final int AVAIL_PROCESSORS = 
Runtime.getRuntime().availableProcessors();}} somehow returns zero here.
And thank you for the UT; it is a good addition.

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312269#comment-16312269
 ] 

Reid Chan commented on HBASE-19709:
---

+1

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19662) hbase-metrics-api fails checkstyle due to wrong import order

2018-01-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19662:
---
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

The trunk build has been running without this error.

> hbase-metrics-api fails checkstyle due to wrong import order
> 
>
> Key: HBASE-19662
> URL: https://issues.apache.org/jira/browse/HBASE-19662
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 19662.v1.txt
>
>
> In recent trunk builds, there were the following errors:
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/hbase/metrics/MetricRegistriesLoader.java:[31]
>  (imports) ImportOrder: Wrong order for 
> 'org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting' 
> import.
> [ERROR] 
> src/test/java/org/apache/hadoop/hbase/metrics/TestMetricRegistriesLoader.java:[28]
>  (imports) ImportOrder: Wrong order for 
> 'org.apache.hbase.thirdparty.com.google.common.collect.Lists' import.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312257#comment-16312257
 ] 

Hudson commented on HBASE-19688:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4344 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4344/])
HBASE-19688 TimeToLiveProcedureWALCleaner should extends (tedyu: rev 
bff937a767ecb851a4ba312ece52b50b84df4976)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveProcedureWALCleaner.java


> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extend {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}.
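For readers skimming this, a tiny, self-contained sketch of the superclass swap described above; the two abstract classes are stubs standing in for the real HBase delegates, and this is not the actual diff.

{code}
// Stubs standing in for the real HBase delegate classes, for illustration only.
abstract class BaseFileCleanerDelegate { }
abstract class BaseLogCleanerDelegate extends BaseFileCleanerDelegate { }

// Before the change the cleaner extended BaseFileCleanerDelegate directly;
// after the change it goes through the log-specific delegate instead:
class TimeToLiveProcedureWALCleaner extends BaseLogCleanerDelegate {
  // The TTL check against procedure WAL files is unchanged by the superclass swap.
}
{code}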



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Status: Patch Available  (was: Open)

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Fix Version/s: 1.5.0
   1.4.1
   1.3.2

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Attachment: HBASE-19708-branch-1.patch

Looks like this is just a problem for branch-1.
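For illustration, a minimal, self-contained sketch of the kind of guard the description calls for: serializing access to the accept channel so getAddress() returns null instead of throwing an NPE once the channel has been closed. Field and method names here are assumptions, not the attached branch-1 patch.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

final class ListenerAddressSketch {
  // Closed and nulled out when the listener shuts down.
  private ServerSocketChannel acceptChannel;
  private final Object channelLock = new Object();

  ListenerAddressSketch() throws IOException {
    acceptChannel = ServerSocketChannel.open();
    acceptChannel.bind(new InetSocketAddress(0));
  }

  // Serialize access so a handler asking for the address after close
  // gets null instead of dereferencing a nulled-out channel.
  InetSocketAddress getAddress() {
    synchronized (channelLock) {
      if (acceptChannel == null || !acceptChannel.isOpen()) {
        return null;
      }
      return (InetSocketAddress) acceptChannel.socket().getLocalSocketAddress();
    }
  }

  void close() throws IOException {
    synchronized (channelLock) {
      if (acceptChannel != null) {
        acceptChannel.close();
        acceptChannel = null;
      }
    }
  }
}
{code}

Callers such as the code that logs the listener address would then need to tolerate a null return.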

> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-19708-branch-1.patch
>
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19710) hbase:namespace table was stuck in transition

2018-01-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19710:
---
Attachment: master-006.tar.gz
rs-009.log.tar.gz
master-005-log.tar.gz

009 is the region server log from the node where the namespace table was last open.
006 is the log of the master which first saw the namespace table get stuck.
005 is the log of the master which became active next, with the namespace table 
still stuck.

> hbase:namespace table was stuck in transition
> -
>
> Key: HBASE-19710
> URL: https://issues.apache.org/jira/browse/HBASE-19710
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Attachments: master-005-log.tar.gz, master-006.tar.gz, 
> rs-009.log.tar.gz
>
>
> ITBLL with chaos monkey failed due to namespace table getting stuck in 
> transition.
> From hbase-hbase-master-ctr-e137-1514896590304-3629-01-06.hwx.site.log , 
> we can see that master closed namespace table on 09:
> {code}
> 2018-01-04 17:24:35,067 DEBUG [main-EventThread] zookeeper.ZKWatcher: 
> master:2-0x160c222710c0028, 
> quorum=ctr-e137-1514896590304-3629-01-11.hwx.site:2181,ctr-e137-  
> 1514896590304-3629-01-14.hwx.site:2181,ctr-e137-1514896590304-3629-01-09.hwx.site:2181,ctr-e137-1514896590304-3629-01-06.hwx.site:2181,ctr-e137-1514896590304-3629-
>  
> 01-03.hwx.site:2181,ctr-e137-1514896590304-3629-01-07.hwx.site:2181,ctr-e137-1514896590304-3629-01-13.hwx.site:2181,ctr-e137-1514896590304-3629-01-02.hwx.site:
>  
> 2181,ctr-e137-1514896590304-3629-01-12.hwx.site:2181,ctr-e137-1514896590304-3629-01-08.hwx.site:2181,ctr-e137-1514896590304-3629-01-10.hwx.site:2181,
>  baseZNode=/   hbase-unsecure Received ZooKeeper Event, 
> type=NodeChildrenChanged, state=SyncConnected, path=/hbase-unsecure/rs
> 2018-01-04 17:24:35,067 INFO  [ProcExecWrkr-5] assignment.RegionStateStore: 
> pid=643 updating hbase:meta 
> row=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.,   
> regionState=CLOSING, 
> regionLocation=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872
> ...
> 2018-01-04 17:24:35,246 INFO  [ProcExecWrkr-12] 
> procedure.MasterProcedureScheduler: pid=647, ppid=642, 
> state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase: 
> namespace, region=a95ed2d7434a43390fbec73abeeb9fd9 hbase:namespace 
> hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.
> 2018-01-04 17:25:17,041 DEBUG 
> [ctr-e137-1514896590304-3629-01-06:2.masterManager] 
> procedure2.ProcedureExecutor: Loading pid=641, 
> state=WAITING:MOVE_REGION_ASSIGN;  MoveRegionProcedure 
> hri=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9., 
> source=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872,
> destination=
> {code}
> For the move operation, from ctr-e137-1514896590304-3629-01-09.hwx.site 
> log:
> {code}
> 2018-01-04 17:24:34,855 DEBUG 
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> coprocessor.CoprocessorHost: Stop coprocessor 
> org.apache.hadoop.hbase.security.   access.SecureBulkLoadEndpoint
> 2018-01-04 17:24:34,855 INFO  
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> regionserver.HRegion: Closed hbase:namespace,,1515085217343.  
> a95ed2d7434a43390fbec73abeeb9fd9.
> 2018-01-04 17:24:34,856 DEBUG 
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> handler.CloseRegionHandler: Closed hbase:namespace,,1515085217343.
> a95ed2d7434a43390fbec73abeeb9fd9.
> ...
> 2018-01-04 17:25:47,607 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=18,queue=0,port=16020] ipc.RpcServer: 
> callId: 16 service: ClientService methodName: Get size: 103   
> connection: 172.27.13.80:36738 deadline: 1515086837568
> org.apache.hadoop.hbase.NotServingRegionException: 
> hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9. is not 
> online on ctr-e137-1514896590304-3629-01-09.hwx. site,16020,1515086719163
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3312)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3289)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41544)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
> {code}
> We can see that the region server was not serving the region.
> After that, the masters kept thinking namespace table was on 0009, leading to 
> prolonged downtime.

[jira] [Commented] (HBASE-19710) hbase:namespace table was stuck in transition

2018-01-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312248#comment-16312248
 ] 

Ted Yu commented on HBASE-19710:


The build I used corresponded to this commit:

HBASE-19667 Get rid of MasterEnvironment#supportGroupCPs

The cluster has 13 nodes, running Hadoop 3.

> hbase:namespace table was stuck in transition
> -
>
> Key: HBASE-19710
> URL: https://issues.apache.org/jira/browse/HBASE-19710
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
>
> ITBLL with chaos monkey failed due to namespace table getting stuck in 
> transition.
> From hbase-hbase-master-ctr-e137-1514896590304-3629-01-06.hwx.site.log , 
> we can see that master closed namespace table on 09:
> {code}
> 2018-01-04 17:24:35,067 DEBUG [main-EventThread] zookeeper.ZKWatcher: 
> master:2-0x160c222710c0028, 
> quorum=ctr-e137-1514896590304-3629-01-11.hwx.site:2181,ctr-e137-  
> 1514896590304-3629-01-14.hwx.site:2181,ctr-e137-1514896590304-3629-01-09.hwx.site:2181,ctr-e137-1514896590304-3629-01-06.hwx.site:2181,ctr-e137-1514896590304-3629-
>  
> 01-03.hwx.site:2181,ctr-e137-1514896590304-3629-01-07.hwx.site:2181,ctr-e137-1514896590304-3629-01-13.hwx.site:2181,ctr-e137-1514896590304-3629-01-02.hwx.site:
>  
> 2181,ctr-e137-1514896590304-3629-01-12.hwx.site:2181,ctr-e137-1514896590304-3629-01-08.hwx.site:2181,ctr-e137-1514896590304-3629-01-10.hwx.site:2181,
>  baseZNode=/   hbase-unsecure Received ZooKeeper Event, 
> type=NodeChildrenChanged, state=SyncConnected, path=/hbase-unsecure/rs
> 2018-01-04 17:24:35,067 INFO  [ProcExecWrkr-5] assignment.RegionStateStore: 
> pid=643 updating hbase:meta 
> row=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.,   
> regionState=CLOSING, 
> regionLocation=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872
> ...
> 2018-01-04 17:24:35,246 INFO  [ProcExecWrkr-12] 
> procedure.MasterProcedureScheduler: pid=647, ppid=642, 
> state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase: 
> namespace, region=a95ed2d7434a43390fbec73abeeb9fd9 hbase:namespace 
> hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.
> 2018-01-04 17:25:17,041 DEBUG 
> [ctr-e137-1514896590304-3629-01-06:2.masterManager] 
> procedure2.ProcedureExecutor: Loading pid=641, 
> state=WAITING:MOVE_REGION_ASSIGN;  MoveRegionProcedure 
> hri=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9., 
> source=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872,
> destination=
> {code}
> For the move operation, from ctr-e137-1514896590304-3629-01-09.hwx.site 
> log:
> {code}
> 2018-01-04 17:24:34,855 DEBUG 
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> coprocessor.CoprocessorHost: Stop coprocessor 
> org.apache.hadoop.hbase.security.   access.SecureBulkLoadEndpoint
> 2018-01-04 17:24:34,855 INFO  
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> regionserver.HRegion: Closed hbase:namespace,,1515085217343.  
> a95ed2d7434a43390fbec73abeeb9fd9.
> 2018-01-04 17:24:34,856 DEBUG 
> [RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
> handler.CloseRegionHandler: Closed hbase:namespace,,1515085217343.
> a95ed2d7434a43390fbec73abeeb9fd9.
> ...
> 2018-01-04 17:25:47,607 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=18,queue=0,port=16020] ipc.RpcServer: 
> callId: 16 service: ClientService methodName: Get size: 103   
> connection: 172.27.13.80:36738 deadline: 1515086837568
> org.apache.hadoop.hbase.NotServingRegionException: 
> hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9. is not 
> online on ctr-e137-1514896590304-3629-01-09.hwx. site,16020,1515086719163
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3312)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3289)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1354)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41544)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
> {code}
> We can see that the region server was not serving the region.
> After that, the masters kept thinking namespace table was on 0009, leading to 
> prolonged downtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19710) hbase:namespace table was stuck in transition

2018-01-04 Thread Ted Yu (JIRA)
Ted Yu created HBASE-19710:
--

 Summary: hbase:namespace table was stuck in transition
 Key: HBASE-19710
 URL: https://issues.apache.org/jira/browse/HBASE-19710
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Critical


ITBLL with chaos monkey failed due to namespace table getting stuck in 
transition.

>From hbase-hbase-master-ctr-e137-1514896590304-3629-01-06.hwx.site.log , 
>we can see that master closed namespace table on 09:
{code}
2018-01-04 17:24:35,067 DEBUG [main-EventThread] zookeeper.ZKWatcher: 
master:2-0x160c222710c0028, 
quorum=ctr-e137-1514896590304-3629-01-11.hwx.site:2181,ctr-e137-  
1514896590304-3629-01-14.hwx.site:2181,ctr-e137-1514896590304-3629-01-09.hwx.site:2181,ctr-e137-1514896590304-3629-01-06.hwx.site:2181,ctr-e137-1514896590304-3629-
 
01-03.hwx.site:2181,ctr-e137-1514896590304-3629-01-07.hwx.site:2181,ctr-e137-1514896590304-3629-01-13.hwx.site:2181,ctr-e137-1514896590304-3629-01-02.hwx.site:
 
2181,ctr-e137-1514896590304-3629-01-12.hwx.site:2181,ctr-e137-1514896590304-3629-01-08.hwx.site:2181,ctr-e137-1514896590304-3629-01-10.hwx.site:2181,
 baseZNode=/   hbase-unsecure Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase-unsecure/rs
2018-01-04 17:24:35,067 INFO  [ProcExecWrkr-5] assignment.RegionStateStore: 
pid=643 updating hbase:meta 
row=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.,   
regionState=CLOSING, 
regionLocation=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872
...
2018-01-04 17:24:35,246 INFO  [ProcExecWrkr-12] 
procedure.MasterProcedureScheduler: pid=647, ppid=642, 
state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase: 
namespace, region=a95ed2d7434a43390fbec73abeeb9fd9 hbase:namespace 
hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9.

2018-01-04 17:25:17,041 DEBUG 
[ctr-e137-1514896590304-3629-01-06:2.masterManager] 
procedure2.ProcedureExecutor: Loading pid=641, 
state=WAITING:MOVE_REGION_ASSIGN;  MoveRegionProcedure 
hri=hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9., 
source=ctr-e137-1514896590304-3629-01-09.hwx.site,16020,1515086643872,  
  destination=
{code}

For the move operation, from ctr-e137-1514896590304-3629-01-09.hwx.site log:
{code}
2018-01-04 17:24:34,855 DEBUG 
[RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
coprocessor.CoprocessorHost: Stop coprocessor org.apache.hadoop.hbase.security. 
  access.SecureBulkLoadEndpoint
2018-01-04 17:24:34,855 INFO  
[RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
regionserver.HRegion: Closed hbase:namespace,,1515085217343.
  a95ed2d7434a43390fbec73abeeb9fd9.
2018-01-04 17:24:34,856 DEBUG 
[RS_CLOSE_REGION-ctr-e137-1514896590304-3629-01-09:16020-0] 
handler.CloseRegionHandler: Closed hbase:namespace,,1515085217343.  
  a95ed2d7434a43390fbec73abeeb9fd9.
...
2018-01-04 17:25:47,607 DEBUG 
[RpcServer.priority.FPBQ.Fifo.handler=18,queue=0,port=16020] ipc.RpcServer: 
callId: 16 service: ClientService methodName: Get size: 103   
connection: 172.27.13.80:36738 deadline: 1515086837568
org.apache.hadoop.hbase.NotServingRegionException: 
hbase:namespace,,1515085217343.a95ed2d7434a43390fbec73abeeb9fd9. is not online 
on ctr-e137-1514896590304-3629-01-09.hwx. site,16020,1515086719163
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3312)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3289)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1354)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2360)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41544)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
{code}
We can see that the region server was not serving the region.

After that, the masters kept thinking namespace table was on 0009, leading to 
prolonged downtime.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19488) Remove Unused Code from CollectionUtils

2018-01-04 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312218#comment-16312218
 ] 

Appy commented on HBASE-19488:
--

There are like 22 imports of org.apache.hadoop.hbase.util.CollectionUtils. Why 
not change them to org.apache.commons.collections.CollectionUtils and remove 
this class altogether?
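Purely to illustrate the suggestion, here is a hypothetical call site after swapping the import over to commons-collections; whether every current usage maps this cleanly onto the commons API would need checking import by import.

{code}
// import org.apache.hadoop.hbase.util.CollectionUtils;   // before (hbase-internal)
import org.apache.commons.collections.CollectionUtils;    // after (commons-collections)

import java.util.Collections;
import java.util.List;

final class CollectionUtilsSwapSketch {
  public static void main(String[] args) {
    List<String> servers = Collections.emptyList();
    // commons-collections provides isEmpty(Collection), one of the common overlaps.
    System.out.println(CollectionUtils.isEmpty(servers));
  }
}
{code}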

> Remove Unused Code from CollectionUtils
> ---
>
> Key: HBASE-19488
> URL: https://issues.apache.org/jira/browse/HBASE-19488
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HBASE-19488.1.patch, HBASE-19488.2.patch
>
>
> A bunch of unused code in CollectionUtils or code that can be found in Apache 
> Commons libraries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312204#comment-16312204
 ] 

Ted Yu commented on HBASE-19709:


lgtm

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-19709:
---
Attachment: HBASE-19709.001.branch-2.patch

.001 Pretty simple change to just make sure that we don't return 0.

FYI [~reidchan].
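For context, a minimal sketch of the kind of guard the change implies, assuming a calculatePoolSize-style helper (the name and signature here are illustrative, not the actual patch): the computed value is clamped so ForkJoinPool never sees a parallelism of 0.

{code}
import java.util.concurrent.ForkJoinPool;

final class CleanerPoolSizeSketch {
  private static final int AVAIL_PROCESSORS = Runtime.getRuntime().availableProcessors();

  // Hypothetical helper: an explicit integer >= 1 is used as-is; a fraction in
  // (0.0, 1.0] is multiplied by the available processors. Either way the result
  // is clamped to at least 1, so availableProcessors() == 0 cannot break the pool.
  static int calculatePoolSize(String poolSize) {
    if (poolSize.matches("[1-9][0-9]*")) {
      return Integer.parseInt(poolSize);
    }
    double fraction = Double.parseDouble(poolSize);
    int computed = (int) (AVAIL_PROCESSORS * fraction);
    return Math.max(1, computed);
  }

  public static void main(String[] args) {
    // Even on a VM that reports 0 processors this yields a parallelism of 1.
    ForkJoinPool pool = new ForkJoinPool(calculatePoolSize("0.25"));
    System.out.println("parallelism = " + pool.getParallelism());
    pool.shutdown();
  }
}
{code}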

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-19709:
---
Status: Patch Available  (was: Open)

> Guard against a ThreadPool size of 0 in CleanerChore
> 
>
> Key: HBASE-19709
> URL: https://issues.apache.org/jira/browse/HBASE-19709
> Project: HBase
>  Issue Type: Bug
>Reporter: Siddharth Wagle
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-2
>
> Attachments: HBASE-19709.001.branch-2.patch
>
>
> Post HBASE-18309, we choose the number of threads by the following logic:
> {code}
> +  /**
> +   * If it is an integer and >= 1, it would be the size;
> +   * if 0.0 < size <= 1.0, size would be available processors * size.
> +   * Pay attention that 1.0 is different from 1, former indicates it will 
> use 100% of cores,
> +   * while latter will use only 1 thread for chore to scan dir.
> +   */
> {code}
> [~swagle] has found on his VM that despite having two virtual processors, 
> {{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
> threads for the pool which throws an exception.
> {noformat}
> java.lang.IllegalArgumentException
> at 
> java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
> at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
> at 
> org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
> at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
> at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make sure that we take the max of {{1}} and the computed number of 
> threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19709) Guard against a ThreadPool size of 0 in CleanerChore

2018-01-04 Thread Josh Elser (JIRA)
Josh Elser created HBASE-19709:
--

 Summary: Guard against a ThreadPool size of 0 in CleanerChore
 Key: HBASE-19709
 URL: https://issues.apache.org/jira/browse/HBASE-19709
 Project: HBase
  Issue Type: Bug
Reporter: Siddharth Wagle
Assignee: Josh Elser
Priority: Critical
 Fix For: 3.0.0, 2.0.0-beta-2


Post HBASE-18309, we choose the number of threads by the following logic:

{code}
+  /**
+   * If it is an integer and >= 1, it would be the size;
+   * if 0.0 < size <= 1.0, size would be available processors * size.
+   * Pay attention that 1.0 is different from 1, former indicates it will use 
100% of cores,
+   * while latter will use only 1 thread for chore to scan dir.
+   */
{code}

[~swagle] has found on his VM that despite having two virtual processors, 
{{Runtime.getRuntime().availableProcessors()}} returns 0, which results in 0 
threads for the pool which throws an exception.

{noformat}
java.lang.IllegalArgumentException
at 
java.util.concurrent.ForkJoinPool.checkParallelism(ForkJoinPool.java:2546)
at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2536)
at java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2505)
at 
org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:112)
at 
org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:83)
at 
org.apache.hadoop.hbase.master.cleaner.LogCleaner.<init>(LogCleaner.java:65)
at 
org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1130)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:813)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:223)
at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2016)
at java.lang.Thread.run(Thread.java:745)
{noformat}

We should make sure that we take the max of {{1}} and the computed number of 
threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19483) Add proper privilege check for rsgroup commands

2018-01-04 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312107#comment-16312107
 ] 

Appy commented on HBASE-19483:
--

Posted last few comments. Minor stuff. Looks good to go in after them.

> Add proper privilege check for rsgroup commands
> ---
>
> Key: HBASE-19483
> URL: https://issues.apache.org/jira/browse/HBASE-19483
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
> Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2
>
> Attachments: 19483.master.011.patch, 19483.v11.patch, 
> 19483.v11.patch, HBASE-19483.master.001.patch, HBASE-19483.master.002.patch, 
> HBASE-19483.master.003.patch, HBASE-19483.master.004.patch, 
> HBASE-19483.master.005.patch, HBASE-19483.master.006.patch, 
> HBASE-19483.master.007.patch, HBASE-19483.master.008.patch, 
> HBASE-19483.master.009.patch, HBASE-19483.master.010.patch, 
> HBASE-19483.master.011.patch, HBASE-19483.master.011.patch
>
>
> Currently list_rsgroups command can be executed by any user.
> This is inconsistent with other list commands such as list_peers and 
> list_peer_configs.
> We should add proper privilege check for list_rsgroups command.
> privilege check should be added for get_table_rsgroup / get_server_rsgroup / 
> get_rsgroup commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Description: 
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


  was:
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)



> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Description: 
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


  was:
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


> Avoid NPE when the RPC listener's accept channel is closed
> --
>
> Key: HBASE-19708
> URL: https://issues.apache.org/jira/browse/HBASE-19708
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.24
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
>
> Rare NPE when the listener's accept channel is closed. We serialize access to 
> related state to avoid a previously fixed related NPE and need to do the same 
> for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
> code lines. Let me check.
> Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
>   at 
> org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
>   at 
> 

[jira] [Updated] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19708:
---
Description: 
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)

  was:
Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=171,queue=0,port=60020" 
Exception in thread "MetadataRpcServer.handler=6,queue=0,port=60020" Exception 
in thread "MetadataRpcServer.handler=157,queue=0,port=60020" Exception in 
thread "MetadataRpcServer.handler=43,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=115,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=70,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=2,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=18,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=105,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=11,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=27,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=187,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=64,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=90,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=76,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=111,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=71,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=109,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=39,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=46,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=66,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=106,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=126,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=99,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=94,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 

[jira] [Updated] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19688:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch, Reid.

The test failure was not related.

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extend {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19708) Avoid NPE when the RPC listener's accept channel is closed

2018-01-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-19708:
--

 Summary: Avoid NPE when the RPC listener's accept channel is closed
 Key: HBASE-19708
 URL: https://issues.apache.org/jira/browse/HBASE-19708
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.24
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor


Rare NPE when the listener's accept channel is closed. We serialize access to 
related state to avoid a previously fixed related NPE and need to do the same 
for {{acceptChannel}}. Seen in a 0.98 deploy but I think applicable to later 
code lines. Let me check.

Exception in thread "MetadataRpcServer.handler=171,queue=0,port=60020" 
Exception in thread "MetadataRpcServer.handler=6,queue=0,port=60020" Exception 
in thread "MetadataRpcServer.handler=157,queue=0,port=60020" Exception in 
thread "MetadataRpcServer.handler=43,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=115,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=70,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=2,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=18,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=105,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=11,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=27,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=187,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=64,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=90,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=76,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=111,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=71,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=109,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=39,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=46,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=66,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=106,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=126,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=99,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=94,queue=0,port=60020" Exception in thread 
"MetadataRpcServer.handler=191,queue=0,port=60020" 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener.getAddress(RpcServer.java:858)
at 
org.apache.hadoop.hbase.ipc.RpcServer.getListenerAddress(RpcServer.java:2338)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
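
For illustration, a minimal sketch of the kind of guard this issue asks for: serialize 
access to the accept channel and tolerate a closed/null channel. The class structure and 
field handling below are illustrative only, not the actual RpcServer code.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

class ListenerSketch {
  // Set to null by doStop(); getAddress() may race with that in another thread.
  private ServerSocketChannel acceptChannel;

  // Serialize access so a concurrent close cannot leave us dereferencing null.
  synchronized InetSocketAddress getAddress() {
    if (acceptChannel == null || !acceptChannel.isOpen()) {
      return null; // callers must tolerate a null address after shutdown
    }
    return (InetSocketAddress) acceptChannel.socket().getLocalSocketAddress();
  }

  synchronized void doStop() throws IOException {
    if (acceptChannel != null) {
      acceptChannel.close();
      acceptChannel = null;
    }
  }
}
{code}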



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311768#comment-16311768
 ] 

Hadoop QA commented on HBASE-19688:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
43s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 48s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestMemstoreLABWithoutPool |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19688 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904632/HBASE-19688.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c50a99c357fb 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5195435941 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10887/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10887/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-18693) adding an option to restore_snapshot to move mob files from archive dir to working dir

2018-01-04 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311744#comment-16311744
 ] 

huaxiang sun commented on HBASE-18693:
--

Hi [~jingcheng.du], just want to follow up on the review status, thanks.

> adding an option to restore_snapshot to move mob files from archive dir to 
> working dir
> --
>
> Key: HBASE-18693
> URL: https://issues.apache.org/jira/browse/HBASE-18693
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0-alpha-2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18693.master.001.patch, 
> HBASE-18693.master.002.patch, HBASE-18693.master.003.patch
>
>
> Today, there is a single mob region where mob files for all user regions are 
> saved. There could be many files (one million) in a single mob directory. 
> When one mob table is restored or cloned from snapshot, links are created for 
> these mob files. This creates a scaling issue for mob compaction. In mob 
> compaction's select() logic, for each hFileLink, it needs to call NN's 
> getFileStatus() to get the size of the linked hfile. Assume that one such 
> call takes 20ms, 20ms * 1,000,000 = ~5.6 hours. 
> To avoid this overhead, we want to add an option so that restore_snapshot can 
> move mob files from archive dir to working dir. clone_snapshot is more 
> complicated as it can clone a snapshot to a different table so moving that 
> can destroy the snapshot. No option will be added for clone_snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19364) Truncate_preserve fails with table when replica region > 1

2018-01-04 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311742#comment-16311742
 ] 

huaxiang sun commented on HBASE-19364:
--

Hi [~pankaj2461], did you check whether the issue exists in the master branch as well? 
If so, can you also post a patch for master? I followed up on HBASE-17319 and the 
issue does exist in the master branch; I will upload a patch for master there, 
thanks.

> Truncate_preserve fails with table when replica region > 1
> --
>
> Key: HBASE-19364
> URL: https://issues.apache.org/jira/browse/HBASE-19364
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
> Fix For: 1.5.0
>
> Attachments: HBASE-19364-branch-1.patch
>
>
> The root cause is the same as in HBASE-17319; here we need to exclude secondary 
> regions while reading meta.
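
As an illustration of what "exclude secondary regions" would look like, a minimal sketch 
that keeps only default (primary) replicas from what is read out of meta. Only 
RegionReplicaUtil is the real HBase utility; the surrounding helper is hypothetical, and 
on branch-1 the region type would be HRegionInfo rather than RegionInfo.

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

final class TruncatePreserveSketch {
  // Secondary replicas own no files of their own and must not be re-created
  // as regions during truncate_preserve; keep only the default replicas.
  static List<RegionInfo> excludeSecondaryReplicas(List<RegionInfo> fromMeta) {
    List<RegionInfo> result = new ArrayList<>();
    for (RegionInfo region : fromMeta) {
      if (RegionReplicaUtil.isDefaultReplica(region)) {
        result.add(region);
      }
    }
    return result;
  }
}
{code}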



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-01-04 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311738#comment-16311738
 ] 

huaxiang sun commented on HBASE-19391:
--

+1. With HBASE-18625, I think this case will happen less frequently. The fix is 
logically correct, as replica regions do not own any fs resources. Thanks.

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There stills a chance that attempting 
> to open a region that is not the default region replica can still create a 
> GC'd region directory by the CatalogJanitor causing inconsistencies with hbck.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-14252) RegionServers fail to start when setting hbase.ipc.server.callqueue.scan.ratio to 0

2018-01-04 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311734#comment-16311734
 ] 

Appy commented on HBASE-14252:
--

Backport to 1.2 please?

> RegionServers fail to start when setting 
> hbase.ipc.server.callqueue.scan.ratio to 0
> ---
>
> Key: HBASE-14252
> URL: https://issues.apache.org/jira/browse/HBASE-14252
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
> Environment: hbase-0.98.6-cdh5.3.1
>Reporter: Toshihiro Suzuki
>Assignee: Yubao Liu
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 14252.v2.patch, 14252.v3.patch
>
>
> I set the following configuration in hbase-site.xml.
> {code}
> 
>   hbase.ipc.server.callqueue.read.ratio
>   0.5
> 
> 
>   hbase.ipc.server.callqueue.scan.ratio
>   0
> 
> {code}
> Then, the RegionServer failed to start and I saw the following log:
> {code}
> 2015-08-19 14:30:19,561 ERROR 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine: Region server 
> exiting
> java.lang.RuntimeException: Failed construction of Regionserver: class 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2457)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2472)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2455)
> ... 5 more
> Caused by: java.lang.IllegalArgumentException: Queue size is <= 0, must be at 
> least 1
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.getBalancer(RpcExecutor.java:139)
> at 
> org.apache.hadoop.hbase.ipc.RWQueueRpcExecutor.(RWQueueRpcExecutor.java:121)
> at 
> org.apache.hadoop.hbase.ipc.RWQueueRpcExecutor.(RWQueueRpcExecutor.java:83)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.(SimpleRpcScheduler.java:129)
> at 
> org.apache.hadoop.hbase.regionserver.SimpleRpcSchedulerFactory.create(SimpleRpcSchedulerFactory.java:36)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:610)
> ... 10 more
> {code}
> The doc of "hbase.ipc.server.callqueue.scan.ratio" says "A value of 0 or 1 
> indicate to use the same set of queues for gets and scans.".
> I think there is a bug in the validation.
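
To make the documented behavior concrete, here is a sketch of how the queue split could be 
validated so that a scan ratio of 0 (or 1) shares the get queues instead of producing a 
zero-sized scan executor. The helper and its name are hypothetical; the committed patch may 
differ.

{code}
final class CallQueueSketch {
  // How many of the read queues should be dedicated to scans.
  // A ratio of 0 or 1 means gets and scans share the same queues, so no
  // separate (and possibly zero-sized) scan executor is constructed.
  static int numScanQueues(int numReadQueues, float scanRatio) {
    if (scanRatio <= 0f || scanRatio >= 1f) {
      return 0; // share queues for gets and scans
    }
    return Math.max(1, (int) Math.floor(numReadQueues * scanRatio));
  }
}
{code}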



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19651) Remove LimitInputStream

2018-01-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311728#comment-16311728
 ] 

stack commented on HBASE-19651:
---

Thanks for the update.

+1 on commit

> Remove LimitInputStream
> ---
>
> Key: HBASE-19651
> URL: https://issues.apache.org/jira/browse/HBASE-19651
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 3.0.0, 2.0.0-beta-2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-19651.1.patch, HBASE-19651.2.patch, 
> HBASE-19651.3.patch, HBASE-19651.4.patch, HBASE-19651.5.patch, 
> HBASE-19651.6.patch
>
>
> Let us "drink our own champagne" and use the existing Apache Commons 
> BoundedInputStream instead.
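
For reference, a minimal, self-contained sketch of the replacement: Apache Commons IO's 
BoundedInputStream caps how many bytes can be read, which is what the Guava-derived 
LimitInputStream was used for. The sample data and class name below are illustrative.

{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.input.BoundedInputStream;

public class BoundedReadSketch {
  public static void main(String[] args) throws IOException {
    InputStream raw = new ByteArrayInputStream(new byte[] { 1, 2, 3, 4, 5 });
    // Only the first 3 bytes are visible through the bounded stream.
    try (InputStream limited = new BoundedInputStream(raw, 3)) {
      int count = 0;
      while (limited.read() != -1) {
        count++;
      }
      System.out.println("read " + count + " bytes"); // prints: read 3 bytes
    }
  }
}
{code}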



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19358) Improve the stability of splitting log when do fail over

2018-01-04 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311715#comment-16311715
 ] 

Appy commented on HBASE-19358:
--

I was wondering about that when I got merge conflicts with this one while backporting 
something else to branch-2. But the fix version said 2.0.0, so I thought that 
maybe the patches here were different for the two branches. Your comment solves my 
mystery :)

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Fix For: 2.0.0, 3.0.0, 1.4.1, 1.5.0
>
> Attachments: HBASE-18619-branch-2-v2.patch, 
> HBASE-18619-branch-2-v2.patch, HBASE-18619-branch-2.patch, 
> HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, 
> HBASE-19358-branch-1.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch, 
> HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, 
> HBASE-19358-v8.patch, HBASE-19358.patch, split-1-log.png, 
> split-logic-new.jpg, split-logic-old.jpg, split-table.png, 
> split_test_result.png
>
>
> The way we split the log now is like the following figure:
> !https://issues.apache.org/jira/secure/attachment/12904506/split-logic-old.jpg!
> The problem is that the OutputSink will write the recovered edits during log 
> splitting, which means it will create one WriterAndPath for each region and retain 
> it until the end. If the cluster is small and the number of regions per rs is 
> large, it will create too many HDFS streams at the same time. Then it is 
> prone to failure, since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log.  
> !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we will pick the largest EntryBuffer and write it to a file (closing the writer 
> after it finishes). Then, after we have read all entries into memory, we will start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write all 
> buffers to files. Thus it will not create more HDFS streams than the 
> *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
> The biggest benefit is that we can control the number of streams we create during 
> log splitting: 
> it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.
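
A rough, illustrative sketch of the buffering scheme described above. All names 
(EntryBuffer, writeToFileAndCloseWriter, the thread-pool wiring) are hypothetical; the 
attached patches are the real implementation.

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class SplitSketch {
  interface EntryBuffer {
    long heapSize();
    void writeToFileAndCloseWriter();
  }

  // While reading, if the cached edits exceed the heap bound, flush the largest
  // buffer to its own file and close that writer immediately.
  // (A real implementation would also drop it from the in-memory list.)
  static void maybeFlushLargest(List<EntryBuffer> buffers, long heapUsed, long maxHeapUsage) {
    if (buffers.isEmpty() || heapUsed <= maxHeapUsage) {
      return;
    }
    EntryBuffer largest = buffers.get(0);
    for (EntryBuffer b : buffers) {
      if (b.heapSize() > largest.heapSize()) {
        largest = b;
      }
    }
    largest.writeToFileAndCloseWriter();
  }

  // After all entries are read, a bounded pool writes the remaining buffers, so
  // at most writerThreads HDFS streams are open per splitter at any time.
  static void writeAndClose(List<EntryBuffer> buffers, int writerThreads)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
    for (EntryBuffer buf : buffers) {
      pool.execute(buf::writeToFileAndCloseWriter);
    }
    pool.shutdown();
    pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
  }
}
{code}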



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-19358) Improve the stability of splitting log when do fail over

2018-01-04 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reopened HBASE-19358:
---

Reopening since this is not committed to branch-2 yet and the branch-2 v2 patch 
cannot be applied cleanly on the latest code base, so it needs to be reworked. 
[~tianjingyun] please take a look, thanks.

And please correct the branch-2 patch name, since it is currently called 
"HBASE-18619-branch-2-v2"...

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Fix For: 2.0.0, 3.0.0, 1.4.1, 1.5.0
>
> Attachments: HBASE-18619-branch-2-v2.patch, 
> HBASE-18619-branch-2-v2.patch, HBASE-18619-branch-2.patch, 
> HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, 
> HBASE-19358-branch-1.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch, 
> HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, 
> HBASE-19358-v8.patch, HBASE-19358.patch, split-1-log.png, 
> split-logic-new.jpg, split-logic-old.jpg, split-table.png, 
> split_test_result.png
>
>
> The way we split the log now is like the following figure:
> !https://issues.apache.org/jira/secure/attachment/12904506/split-logic-old.jpg!
> The problem is that the OutputSink will write the recovered edits during log 
> splitting, which means it will create one WriterAndPath for each region and retain 
> it until the end. If the cluster is small and the number of regions per rs is 
> large, it will create too many HDFS streams at the same time. Then it is 
> prone to failure, since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log.  
> !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
> we will pick the largest EntryBuffer and write it to a file (closing the writer 
> after it finishes). Then, after we have read all entries into memory, we will start a 
> writeAndCloseThreadPool, which starts a certain number of threads to write all 
> buffers to files. Thus it will not create more HDFS streams than the 
> *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
> The biggest benefit is that we can control the number of streams we create during 
> log splitting: 
> it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-19688:
--
Status: Patch Available  (was: Open)

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extends {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-19688:
--
Attachment: HBASE-19688.master.001.patch

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extends {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-19688:
--
Attachment: (was: HBASE-19688.master.001.patch)

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extends {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-19688:
--
Status: Open  (was: Patch Available)

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extends {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19688) TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate

2018-01-04 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311624#comment-16311624
 ] 

Reid Chan commented on HBASE-19688:
---

QA... [Facepalm]
Trigger it again.

> TimeToLiveProcedureWALCleaner should extends BaseLogCleanerDelegate
> ---
>
> Key: HBASE-19688
> URL: https://issues.apache.org/jira/browse/HBASE-19688
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19688.master.001.patch
>
>
> According to {{LogCleaner extends CleanerChore}}, 
> {{TimeToLiveLogCleaner}} should extends {{BaseLogCleanerDelegate}} instead of 
> {{BaseFileCleanerDelegate}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311598#comment-16311598
 ] 

Hadoop QA commented on HBASE-19636:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19397 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} HBASE-19397 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hbase-replication: The patch generated 0 new + 20 
unchanged - 1 fixed = 20 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} hbase-server: The patch generated 1 new + 34 unchanged 
- 5 fixed = 35 total (was 39) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 52s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19636 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904600/HBASE-19636-HBASE-19397-v5.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  

[jira] [Commented] (HBASE-19207) Create Minimal HBase REST Client

2018-01-04 Thread Rick Kellogg (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311479#comment-16311479
 ] 

Rick Kellogg commented on HBASE-19207:
--

Just added support for using a Kerberos keytab and principal to my external 
project.

> Create Minimal HBase REST Client
> 
>
> Key: HBASE-19207
> URL: https://issues.apache.org/jira/browse/HBASE-19207
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, REST
>Reporter: Rick Kellogg
>
> Create a minimal REST client with only the contents of the 
> org.apache.hadoop.hbase.rest.client and 
> org.apache.hadoop.hbase.rest.client.models packages in the hbase-rest 
> project.  
> Attempt to reduce the number of third-party dependencies and allow users to 
> bring their own Apache HttpClient/Core.  The HttpClient is frequently updated 
> and therefore should not be shaded, to allow for upgrades.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311396#comment-16311396
 ] 

Ted Yu commented on HBASE-19696:


ExplicitColumnTracker is marked Private.

I feel adding doneWithColumn() to ColumnTracker (with a default implementation) 
is better than calling some method that may be modified in the future.
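
A minimal sketch of that suggestion, assuming ColumnTracker stays an interface. The method 
name doneWithColumn() comes from the comment above; the interface body is illustrative 
only, not the actual HBase ColumnTracker.

{code}
import org.apache.hadoop.hbase.Cell;

public interface ColumnTrackerSketch {
  // ... the existing ColumnTracker methods are elided here ...

  /**
   * Signals that the scanner is done with the current column, so the tracker
   * can advance its column hint. A no-op default keeps existing
   * implementations source-compatible; ExplicitColumnTracker would override it.
   */
  default void doneWithColumn(Cell cell) {
    // no-op by default
  }
}
{code}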

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from a filter doesn't skip the remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from the column tracker to prepare a cell 
> for seeking to the next column, but we are not updating the column tracker with the 
> next column when the filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311359#comment-16311359
 ] 

Hadoop QA commented on HBASE-19636:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19397 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
39s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} HBASE-19397 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
32s{color} | {color:red} hbase-client: The patch generated 82 new + 0 unchanged 
- 0 fixed = 82 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hbase-replication: The patch generated 0 new + 20 
unchanged - 1 fixed = 20 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
12s{color} | {color:red} hbase-server: The patch generated 4 new + 35 unchanged 
- 3 fixed = 39 total (was 38) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hbase-server generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}136m  4s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19636 |
| JIRA Patch URL | 

[jira] [Created] (HBASE-19707) Race in start and terminate of a replication source after we async start replication endpoint

2018-01-04 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19707:
-

 Summary: Race in start and terminate of a replication source after 
we async start replication endpoint
 Key: HBASE-19707
 URL: https://issues.apache.org/jira/browse/HBASE-19707
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19636:
--
Attachment: HBASE-19636-HBASE-19397-v5.patch

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19636-HBASE-19397-v5.patch, 
> HBASE-19636.HBASE-19397.001.patch, HBASE-19636.HBASE-19397.002.patch, 
> HBASE-19636.HBASE-19397.003.patch, HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronously track the zk event to start working with the new 
> peer change. When replication peer operations use a procedure, we need to make 
> sure this process is synchronous: all rs should already be working with the 
> new peer change when the procedure is finished.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19694) The initialization order for a fresh cluster is incorrect

2018-01-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311339#comment-16311339
 ] 

Duo Zhang commented on HBASE-19694:
---

Thanks sir.

> The initialization order for a fresh cluster is incorrect
> -
>
> Key: HBASE-19694
> URL: https://issues.apache.org/jira/browse/HBASE-19694
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
>
> The cluster id will be set once we become the active master in 
> finishActiveMasterInitialization, but blockUntilBecomingActiveMaster and 
> finishActiveMasterInitialization are both called in a thread so that the 
> constructor of HMaster can return without blocking. And since HMaster itself is 
> also an HRegionServer, it will create a Connection and then start calling 
> reportForDuty. When creating the ConnectionImplementation, we will read 
> the cluster id from zk, but the cluster id may not have been set yet since it 
> is set in another thread, so we will get an exception and use the default 
> cluster id instead.
> I always get this when running UTs which start a mini cluster
> {noformat}
> 2018-01-03 15:16:37,916 WARN  [M:0;zhangduo-ubuntu:32848] 
> client.ConnectionImplementation(528): Retrieve cluster id failed
> java.util.concurrent.ExecutionException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase/hbaseid
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:526)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.(ConnectionImplementation.java:286)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.(ConnectionUtils.java:141)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.(ConnectionUtils.java:137)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils.createShortCircuitConnection(ConnectionUtils.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createClusterConnection(HRegionServer.java:781)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.setupClusterConnection(HRegionServer.java:812)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:827)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:938)
>   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:550)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for /hbase/hbaseid
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at 
> org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:163)
>   at 
> org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:311)
>   ... 1 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311314#comment-16311314
 ] 

Anoop Sam John commented on HBASE-19706:


Another nice one Ankit..   Change looks good... Just add some comments around 
the new code and on skip vs seek to next col/row

> Cells are always eclipsed by Deleted cells even if in time range scan
> -
>
> Key: HBASE-19706
> URL: https://issues.apache.org/jira/browse/HBASE-19706
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19706.patch, HBASE-19706_v1.patch
>
>
> Deleted cells always hide the other cells, even if the scan runs with a 
> time range that contains no delete marker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18452) VerifyReplication by Snapshot should cache HDFS token before submit job for kerberos env.

2018-01-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311274#comment-16311274
 ] 

Hudson commented on HBASE-18452:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4341 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4341/])
HBASE-18452 VerifyReplication by Snapshot should cache HDFS token before 
(openinx: rev 51954359416b107ce5eda6cb710449edc98ab0e6)
* (edit) 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
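
For context, "cache HDFS token before submit job" typically means obtaining the namenode 
delegation tokens up front so map tasks on a Kerberos cluster can read the peer snapshot. 
A minimal sketch using the standard MapReduce TokenCache API; the method and path names 
are illustrative, not the committed change.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.security.TokenCache;

public class TokenPrefetchSketch {
  // Fetch delegation tokens for the peer filesystem paths into the job's
  // credentials before the job is submitted.
  public static void prefetchTokens(Job job, Configuration peerConf,
      Path peerSnapshotDir, Path peerRootDir) throws IOException {
    TokenCache.obtainTokensForNamenodes(job.getCredentials(),
        new Path[] { peerSnapshotDir, peerRootDir }, peerConf);
  }
}
{code}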


> VerifyReplication by Snapshot should cache HDFS token before submit job for 
> kerberos env. 
> --
>
> Key: HBASE-18452
> URL: https://issues.apache.org/jira/browse/HBASE-18452
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-18452.v1.patch, HBASE-18452.v2.patch, 
> HBASE-18452.v2.patch
>
>
> I've  ported HBASE-16466 to our internal hbase branch,  and tested the 
> feature under our kerberos cluster.   
> The problem we encountered is: 
> {code}
> 17/07/25 21:21:23 INFO mapreduce.Job: Task Id : 
> attempt_1500987232138_0004_m_03_2, Status : FAILED
> Error: java.io.IOException: Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: "hadoop-yarn-host"; 
> destination host is: "hadoop-namenode-host":15200; 
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:775)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1481)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1408)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:807)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2029)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1195)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1191)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1207)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.checkRegionInfoOnFilesystem(HRegionFileSystem.java:778)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:769)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:748)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5188)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5125)
>   at 
> org.apache.hadoop.hbase.client.ClientSideRegionScanner.(ClientSideRegionScanner.java:60)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:191)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:148)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1885)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.io.IOException: 
> 

[jira] [Commented] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311270#comment-16311270
 ] 

Duo Zhang commented on HBASE-19636:
---

Is the failed UT related?

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19636.HBASE-19397.001.patch, 
> HBASE-19636.HBASE-19397.002.patch, HBASE-19636.HBASE-19397.003.patch, 
> HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronously track the zk event to start working with the new 
> peer change. When replication peer operations use a procedure, we need to make 
> sure this process is synchronous: all rs should already be working with the 
> new peer change when the procedure is finished.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311249#comment-16311249
 ] 

Hadoop QA commented on HBASE-19636:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19397 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} HBASE-19397 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} HBASE-19397 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} hbase-client: The patch generated 82 new + 0 unchanged 
- 0 fixed = 82 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hbase-replication: The patch generated 0 new + 20 
unchanged - 1 fixed = 20 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
11s{color} | {color:red} hbase-server: The patch generated 3 new + 29 unchanged 
- 3 fixed = 32 total (was 32) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hbase-server generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m  0s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.regionserver.TestReplicationSourceManagerZkImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA 

[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311228#comment-16311228
 ] 

ramkrishna.s.vasudevan commented on HBASE-19703:


I commented before seeing the patch. Ya, I think this way should be fine.

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch
>
>
> As part of HBASE-12583 we pass the split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow creating reference 
> files even if the split key is out of the HFile key range. This is needed for the 
> Local Indexing implementation in Phoenix. But now, after moving the split code to 
> master, we just pass null for the split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311227#comment-16311227
 ] 

ramkrishna.s.vasudevan commented on HBASE-19703:


I think since the Split proc is in master now 
{code}
TableDescriptor hTableDescriptor =
env.getMasterServices().getTableDescriptors().get(tableName);
{code}
You will know the table name from the regionInfo, and we need to instantiate the 
split policy every time from conf. Yes, this may be a costly operation. Do we 
need a TableDescriptorCache for this type of case (in future)? Since we cannot 
have a region ref here, I think this is the best way to do it.
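A rough sketch of what such a master-side instantiation could look like (illustrative only: the helper class is made up, the descriptor accessor and the cluster-default fallback are assumptions, and whether skipping configureForRegion() is acceptable depends on the concrete policy):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;

public final class MasterSplitPolicyHelper {
  private MasterSplitPolicyHelper() {
  }

  /**
   * Build a split policy on the master from the table descriptor / configuration,
   * since no HRegion reference is available here. Sketch only, not the actual
   * SplitTableRegionProcedure code.
   */
  public static RegionSplitPolicy createWithoutRegion(TableDescriptor htd, Configuration conf)
      throws Exception {
    // Prefer the policy configured on the table, fall back to the cluster-wide conf.
    String className = htd.getRegionSplitPolicyClassName();
    if (className == null) {
      className = conf.get("hbase.regionserver.region.split.policy");
    }
    if (className == null) {
      // Keep today's behaviour of passing null down to splitStoreFile().
      return null;
    }
    // Note: configureForRegion(HRegion) is deliberately not called, so the policy
    // must not rely on region state for what the master asks of it.
    return Class.forName(className).asSubclass(RegionSplitPolicy.class)
        .getDeclaredConstructor().newInstance();
  }
}
{code}

The resulting instance (possibly null) could then be handed to regionFs.splitStoreFile(...) instead of the hard-coded null, which is the direction the WIP patch discussion here is heading in.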

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311197#comment-16311197
 ] 

Rajeshbabu Chintaguntla commented on HBASE-19703:
-

[~anoop.hbase] Here is the WIP patch where we initialize the split policy at the 
HM side. It's fine for our usage right now.

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-19703:

Attachment: HBASE-19703-WIP.patch

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19703-WIP.patch
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19664) MOB should compatible with other types of Compactor in addition to DefaultCompactor

2018-01-04 Thread chenxu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311193#comment-16311193
 ] 

chenxu edited comment on HBASE-19664 at 1/4/18 11:16 AM:
-

bq. Though am not fully convinced by the approach of removing the own classes 
for MOB and adding instance checks and both kinds of methods in one place..
yes, that's not a perfect approach; I moved performMobFlush and 
performMobCompaction up to the super class in order to reuse them when there is 
a new Compactor impl.


was (Author: javaman_chen):
bg.Though am not fully convinced by the approach of removing the own classes 
for MOB and adding instance checks and both kinds of methods in one place..
yes, that's not a perfect approach, move up performMobFlush and 
performMobCompaction to super class in order to reuse them when there is a new 
Compactor impl

> MOB should compatible with other types of Compactor in addition to 
> DefaultCompactor
> ---
>
> Key: HBASE-19664
> URL: https://issues.apache.org/jira/browse/HBASE-19664
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: chenxu
> Attachments: HBASE-19664-master-v1.patch
>
>
> Currently when the MOB feature is enabled, we will use MobStoreEngine to deal 
> with flush and compaction, but it extends DefaultStoreEngine, so the stripe 
> compaction feature can not be used.
> In some scenarios, Stripe Compaction is very useful: it can reduce the number 
> of regions and prevent a single storefile from growing too large. If it were 
> compatible with the MOB feature, that would be perfect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19664) MOB should compatible with other types of Compactor in addition to DefaultCompactor

2018-01-04 Thread chenxu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311193#comment-16311193
 ] 

chenxu commented on HBASE-19664:


bq. Though am not fully convinced by the approach of removing the own classes 
for MOB and adding instance checks and both kinds of methods in one place..
yes, that's not a perfect approach; I moved performMobFlush and 
performMobCompaction up to the super class in order to reuse them when there is 
a new Compactor impl.
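To make the idea concrete, a minimal structural sketch (all class names here are hypothetical, not the HBase classes, and this is not the attached patch): the MOB-specific steps live in the shared base class, so any compactor flavour can invoke them instead of only a MOB-only subclass of the default compactor.

{code}
// Hypothetical names, illustration only.
abstract class BaseCompactor {
  /** Shared MOB cell handling, hoisted up so every subclass can reuse it. */
  protected void performMobCompaction() {
    // MOB-aware rewrite of cells would live here.
  }

  abstract void compact();
}

class StripeLikeCompactor extends BaseCompactor {
  @Override
  void compact() {
    // Reusable even though this is not the default compactor.
    performMobCompaction();
  }
}
{code}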

> MOB should compatible with other types of Compactor in addition to 
> DefaultCompactor
> ---
>
> Key: HBASE-19664
> URL: https://issues.apache.org/jira/browse/HBASE-19664
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: chenxu
> Attachments: HBASE-19664-master-v1.patch
>
>
> Currently when the MOB feature is enabled, we will use MobStoreEngine to deal 
> with flush and compaction, but it extends DefaultStoreEngine, so the stripe 
> compaction feature can not be used.
> In some scenarios, Stripe Compaction is very useful: it can reduce the number 
> of regions and prevent a single storefile from growing too large. If it were 
> compatible with the MOB feature, that would be perfect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311191#comment-16311191
 ] 

ramkrishna.s.vasudevan commented on HBASE-19706:


Is this also a regression? Were older versions handling this correctly? Thanks 
for reporting this, [~ankit.singhal].
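For context, a minimal reproduction sketch of the reported behaviour (hedged: the table, family and timestamps are made up, and this snippet only illustrates the expectation, not the fix). Assume it runs in a method that may throw IOException, with 'connection' being an already-open Connection:

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

byte[] row = Bytes.toBytes("r");
byte[] fam = Bytes.toBytes("f");
byte[] qual = Bytes.toBytes("q");

try (Table table = connection.getTable(TableName.valueOf("t"))) {
  // Cell written at ts=100, delete marker added later at ts=200.
  table.put(new Put(row).addColumn(fam, qual, 100L, Bytes.toBytes("v")));
  table.delete(new Delete(row).addColumns(fam, qual, 200L));

  // Scan a time range that ends before the delete marker: the cell at ts=100
  // is expected to be visible, but with the reported bug it is eclipsed.
  Scan scan = new Scan().withStartRow(row).setTimeRange(0L, 150L);
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
      System.out.println(result);
    }
  }
}
{code}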

> Cells are always eclipsed by Deleted cells even if in time range scan
> -
>
> Key: HBASE-19706
> URL: https://issues.apache.org/jira/browse/HBASE-19706
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19706.patch, HBASE-19706_v1.patch
>
>
> Deleted cells are always hiding the other cells even if the scan ran with 
> time range having no delete marker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19636) All rs should already start work with the new peer change when replication peer procedure is finished

2018-01-04 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19636:
---
Attachment: HBASE-19636.HBASE-19397.004.patch

> All rs should already start work with the new peer change when replication 
> peer procedure is finished
> -
>
> Key: HBASE-19636
> URL: https://issues.apache.org/jira/browse/HBASE-19636
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-19636.HBASE-19397.001.patch, 
> HBASE-19636.HBASE-19397.002.patch, HBASE-19636.HBASE-19397.003.patch, 
> HBASE-19636.HBASE-19397.004.patch
>
>
> When replication peer operations use zk, the master will modify zk directly. 
> Then the rs will asynchronous track the zk event to start work with the new 
> peer change. When replication peer operations use procedure, need to make 
> sure this process is synchronous. All rs should already start work with the 
> new peer change when procedure is finished.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311136#comment-16311136
 ] 

Hadoop QA commented on HBASE-19696:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestMemstoreLABWithoutPool |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904552/HBASE-19696_v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ac626664d11f 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5195435941 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10884/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10884/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10884/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Filter 

[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311130#comment-16311130
 ] 

Anoop Sam John commented on HBASE-19703:


I see what you say now. We have the configureForRegion(HRegion region) setter; I 
saw the constructor only. I guess what you want to use from the SplitPolicy at 
the HM side does not need HRegion. Will it be possible to have the split policy 
instance without a region instance in it, at the HM side, for your usage?

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311123#comment-16311123
 ] 

Hadoop QA commented on HBASE-19706:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
6s{color} | {color:red} hbase-server: The patch generated 14 new + 28 unchanged 
- 1 fixed = 42 total (was 29) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | TEST-null |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904551/HBASE-19706_v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 5c3e9260c5f3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5195435941 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10883/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10883/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10883/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-19664) MOB should compatible with other types of Compactor in addition to DefaultCompactor

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1636#comment-1636
 ] 

Anoop Sam John commented on HBASE-19664:


This is a real concern.. Though am not fully convinced by the approach of 
removing the own classes for MOB and adding instance checks and both kinds of 
methods in one place.. The inclusion of MOB flush/compaction still needs a 
smoother approach, to make sure other features can be used together with it.  
[~dujin...@gmail.com] what do you think boss?

> MOB should compatible with other types of Compactor in addition to 
> DefaultCompactor
> ---
>
> Key: HBASE-19664
> URL: https://issues.apache.org/jira/browse/HBASE-19664
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: chenxu
> Attachments: HBASE-19664-master-v1.patch
>
>
> Currently when the MOB feature is enabled, we will use MobStoreEngine to deal 
> with flush and compaction, but it extends DefaultStoreEngine, so the stripe 
> compaction feature can not be used.
> In some scenarios, Stripe Compaction is very useful: it can reduce the number 
> of regions and prevent a single storefile from growing too large. If it were 
> compatible with the MOB feature, that would be perfect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1632#comment-1632
 ] 

Hadoop QA commented on HBASE-19706:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
44s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
6s{color} | {color:red} hbase-server: The patch generated 15 new + 28 unchanged 
- 1 fixed = 43 total (was 29) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 29s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.querymatcher.TestUserScanQueryMatcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19706 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12904546/HBASE-19706.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0e4d2e36ecf9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5195435941 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10882/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10882/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10882/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console 

[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311108#comment-16311108
 ] 

Rajeshbabu Chintaguntla commented on HBASE-19703:
-

[~anoop.hbase]
bq.We can set Split policy at HTD level or using conf 
'hbase.regionserver.region.split.policy'. At HM side also, when dealing with 
split of a region, we can get this info right? 
Right now we are doing the same, setting the split policy on the HTD so that it 
will be initialized during region initialization.
bq.  Ya u will have to create the instance newly. (In the old patch u were able 
to directly get the split policy instance from HRegion)
Yes, we need to create a new split policy instance, which requires HRegion, but 
we don't have that reference at the master. With my old patch we were getting 
the split policy from HRegion.
bq.  Does this issue look like a regression? IMO old way of split policy usage 
we should continue to have. Thanks for the finding.
Yes, it's a regression, and it would be better to keep the old way of getting 
the split policy, but that seems difficult without an HRegion object present. So 
what I am thinking is, instead of using the split policy, we can make use of a 
column family attribute flag, so that based on the flag we can decide whether to 
skip the check that the split key is in the range of the HFile boundaries.
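A rough sketch of what such a flag could look like (everything here is an assumption for illustration: the attribute key, the accessor used, and where the check would be consulted; it is not a patch):

{code}
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

final class SplitRangeCheckFlag {
  // Hypothetical family-level attribute marking "create reference files even if
  // the split key is outside the HFile key range" (the HBASE-12583 behaviour).
  static final byte[] SKIP_RANGE_CHECK_KEY = Bytes.toBytes("SKIP_SPLIT_KEY_RANGE_CHECK");

  static boolean skipRangeCheck(ColumnFamilyDescriptor family) {
    byte[] raw = family.getValue(SKIP_RANGE_CHECK_KEY);
    return raw != null && Boolean.parseBoolean(Bytes.toString(raw));
  }
}
{code}

HRegionFileSystem#splitStoreFile could then consult such a flag instead of a RegionSplitPolicy instance, which sidesteps the need for an HRegion reference on the master.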

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311086#comment-16311086
 ] 

Zheng Hu commented on HBASE-19696:
--

Fine, I'm OK  :-)

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19664) MOB should compatible with other types of Compactor in addition to DefaultCompactor

2018-01-04 Thread chenxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-19664:
---
Attachment: HBASE-19664-master-v1.patch

a simple patch to resolve this

> MOB should compatible with other types of Compactor in addition to 
> DefaultCompactor
> ---
>
> Key: HBASE-19664
> URL: https://issues.apache.org/jira/browse/HBASE-19664
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: chenxu
> Attachments: HBASE-19664-master-v1.patch
>
>
> Currently when the MOB feature is enabled, we will use MobStoreEngine to deal 
> with flush and compaction, but it extends DefaultStoreEngine, so the stripe 
> compaction feature can not be used.
> In some scenarios, Stripe Compaction is very useful: it can reduce the number 
> of regions and prevent a single storefile from growing too large. If it were 
> compatible with the MOB feature, that would be perfect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311085#comment-16311085
 ] 

Anoop Sam John commented on HBASE-19696:


Can we fix the other issue as part of another jira please?  That is kind of an 
optimization and this is a bug fix.  Hope you won't mind that, [~openinx] :-)

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311082#comment-16311082
 ] 

Anoop Sam John commented on HBASE-19696:


Nice digging in, Ankit..  Thanks for the great work.
On the call to getNextRowOrNextColumn() I have a concern though.  What we 
really want to do here is: if it is ExplicitCT, we would like to 
doneWithColumn(cell). And you say that
bq. Actually, we just need to call ExplicitColumnTracker#doneWithColumn but 
it's not availabe in ColumnTracker.
So we are relying on the side effect of another API call.  I would say it would 
be better if we do a type check and call the ExplicitCT method directly. Or can 
we put doneWithColumn() in ColumnTracker itself?  Don't think it is that wrong. 
Only the ExplicitCT is interested in it, but there is no harm in informing all 
CTs that we are done with this column, I mean from a design view.  wdyt guys?
This is great testing happening for 2.0 from the Phoenix team.. Good on you 
guys.
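As a rough sketch, the "type check and call the ExplicitCT method directly" option would be along these lines (the field and variable names are assumptions taken from the discussion, not the final patch; pulling doneWithColumn() up into ColumnTracker would remove the instanceof entirely):

{code}
// When the filter answers INCLUDE_AND_NEXT_COL on an explicit-columns scan,
// mark this column as done in the tracker instead of relying on the side
// effect of getNextRowOrNextColumn().
if (columns instanceof ExplicitColumnTracker) {
  ((ExplicitColumnTracker) columns).doneWithColumn(cell);
} else {
  columns.getNextRowOrNextColumn(cell);
}
{code}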

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311081#comment-16311081
 ] 

Zheng Hu commented on HBASE-19696:
--

{code}
   case INCLUDE_AND_SEEK_NEXT_ROW:
+    if (matchCode == MatchCode.INCLUDE || matchCode == MatchCode.INCLUDE_AND_SEEK_NEXT_COL) {
+      matchCode = MatchCode.INCLUDE_AND_SEEK_NEXT_ROW;
+    }
     break;
{code}

Can we just return INCLUDE_AND_SEEK_NEXT_ROW?
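In other words, a sketch of the simplification being suggested (not the committed change):

{code}
case INCLUDE_AND_SEEK_NEXT_ROW:
  // The filter already asked to seek to the next row, which subsumes both
  // INCLUDE and INCLUDE_AND_SEEK_NEXT_COL, so the merged code can be
  // returned directly.
  return MatchCode.INCLUDE_AND_SEEK_NEXT_ROW;
{code}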

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-19696:
--
Attachment: HBASE-19696_v1.patch

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns

2018-01-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311073#comment-16311073
 ] 

Ankit Singhal commented on HBASE-19696:
---

Thank you guys for the review. 

bq. BTW, I have another unrelated problem: when filterResponse is 
INCLUDE_AND_SEEK_NEXT_ROW and matchCode is INCLUDE_AND_NEXT_COL, we should 
return INCLUDE_AND_SEEK_NEXT_ROW. but the current code will return 
INCLUDE_AND_NEXT_COL, seems like it can be optimized ?
Yes [~openinx], you are right; I have taken care of this as well in the new 
patch.

> Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when 
> scan has explicit columns
> 
>
> Key: HBASE-19696
> URL: https://issues.apache.org/jira/browse/HBASE-19696
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19696.patch, HBASE-19696_v1.patch
>
>
> INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell 
> if the scan has explicit columns.
> This is because we use a column hint from a column tracker to prepare a cell 
> for seeking to next column but we are not updating column tracker with next 
> column when filter returns INCLUDE_AND_NEXT_COL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19703) Functionality added as part of HBASE-12583 is not working after moving the split code to master

2018-01-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16311067#comment-16311067
 ] 

Anoop Sam John commented on HBASE-19703:


We can set the split policy at the HTD level or using the conf 
'hbase.regionserver.region.split.policy'.  At the HM side also, when dealing 
with the split of a region, we can get this info, right? Ya, you will have to 
create the instance newly (in the old patch you were able to directly get the 
split policy instance from HRegion)... Does this issue look like a regression? 
IMO we should continue to have the old way of split policy usage.  Thanks for 
the finding.

> Functionality added as part of HBASE-12583 is not working after moving the 
> split code to master
> ---
>
> Key: HBASE-19703
> URL: https://issues.apache.org/jira/browse/HBASE-19703
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-2
>
>
> As part of HBASE-12583 we are passing split policy to 
> HRegionFileSystem#splitStoreFile so that we can allow to create reference 
> files even the split key is out of HFile key range. This is needed for Local 
> Indexing implementation in Phoenix. But now after moving the split code to 
> master just passing null for split policy.
> {noformat}
> final String familyName = Bytes.toString(family);
> final Path path_first =
> regionFs.splitStoreFile(this.daughter_1_RI, familyName, sf, splitRow, 
> false, null);
> final Path path_second =
> regionFs.splitStoreFile(this.daughter_2_RI, familyName, sf, splitRow, 
> true, null);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-19706:
--
Attachment: HBASE-19706_v1.patch

> Cells are always eclipsed by Deleted cells even if in time range scan
> -
>
> Key: HBASE-19706
> URL: https://issues.apache.org/jira/browse/HBASE-19706
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19706.patch, HBASE-19706_v1.patch
>
>
> Deleted cells are always hiding the other cells even if the scan ran with 
> time range having no delete marker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19706) Cells are always eclipsed by Deleted cells even if in time range scan

2018-01-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-19706:
--
Affects Version/s: 2.0.0-beta-1
   Status: Patch Available  (was: Open)

> Cells are always eclipsed by Deleted cells even if in time range scan
> -
>
> Key: HBASE-19706
> URL: https://issues.apache.org/jira/browse/HBASE-19706
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19706.patch
>
>
> Deleted cells are always hiding the other cells even if the scan ran with 
> time range having no delete marker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

