[jira] [Commented] (HBASE-19414) enable TestMasterOperationsForRegionReplicas#testIncompleteMetaTableReplicaInformation

2017-12-05 Thread Yung-An He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279803#comment-16279803
 ] 

Yung-An He commented on HBASE-19414:


Hi [~chia7712]

When I enable the test method and run all the methods of 
TestMasterOperationsForRegionReplicas, the suite fails on branch-2 but passes 
on branch-1.2 and branch-1.3.
Below is the error message:

{code:java}
[ERROR] Tests run: 3, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 36.693 
s <<< FAILURE! - in 
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas
[ERROR] 
testIncompleteMetaTableReplicaInformation(org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas)
  Time elapsed: 3.896 s  <<< ERROR!
org.apache.hadoop.hbase.TableNotEnabledException: 
testIncompleteMetaTableReplicaInformation state is ENABLING

[ERROR] 
testCreateTableWithMultipleReplicas(org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas)
  Time elapsed: 4.191 s  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas.validateFromSnapshotFromMeta(TestMasterOperationsForRegionReplicas.java:318)
at 
org.apache.hadoop.hbase.master.TestMasterOperationsForRegionReplicas.testCreateTableWithMultipleReplicas(TestMasterOperationsForRegionReplicas.java:158)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   
TestMasterOperationsForRegionReplicas.testCreateTableWithMultipleReplicas:158->validateFromSnapshotFromMeta:318
[ERROR] Errors:
[ERROR]   
TestMasterOperationsForRegionReplicas.testIncompleteMetaTableReplicaInformation 
» TableNotEnabled
[INFO]
[ERROR] Tests run: 3, Failures: 1, Errors: 1, Skipped: 0
{code}

Should we create another JIRA to fix the error in branch-2 first?
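For what it's worth, the TableNotEnabledException with "state is ENABLING" smells like a race: the test touches the table before the enable procedure completes. Tests usually guard against that with a bounded wait around a check such as admin.isTableEnabled(tableName). Below is a minimal stdlib sketch of such a wait loop; the `Waiter` name and signature are made up for illustration and are not HBase's test utility.

```java
import java.util.function.BooleanSupplier;

public class Waiter {
  /** Polls cond every intervalMs until it holds or timeoutMs elapses. */
  public static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier cond) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      if (cond.getAsBoolean()) {
        return true;
      }
      if (System.currentTimeMillis() >= deadline) {
        return false;
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // give up and preserve interrupt status
        return false;
      }
    }
  }
}
```

In the test this would wrap the enabled check, e.g. `Waiter.waitFor(60000, 100, () -> isEnabled())`, before the first access to the table.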

> enable 
> TestMasterOperationsForRegionReplicas#testIncompleteMetaTableReplicaInformation
> --
>
> Key: HBASE-19414
> URL: https://issues.apache.org/jira/browse/HBASE-19414
> Project: HBase
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Yung-An He
>Priority: Trivial
>  Labels: beginner
> Fix For: 2.0.0, 1.3.2, 1.2.7
>
>
> {quote}
> TODO: enable when we have support for alter_table- HBASE-10361
> {quote}
> HBASE-10361 was resolved two years ago, and the test has been enabled in 
> branch-1 and branch-1.4. Hence we should also enable it for all active 
> branches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-12-05 Thread zhaoyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279793#comment-16279793
 ] 

zhaoyuan commented on HBASE-19340:
--

 I have backported all of the missing options except PRIORITY. I did not find 
a constant property named PRIORITY in the branch-1.2 source code. 
The others pass in testing.
FYI [~chia7712]
  

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch, 
> HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb seems to be missing the handling for 
> this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?
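If it helps to see the shape of the issue: admin.rb applies each recognized option by consuming its key from the arg hash (`arg.delete(KEY)`), and any key still present afterwards produces the "Unknown argument ignored" warning; SPLIT_POLICY simply lacks such a branch in branch-1.2. A rough Java rendering of that consume-or-warn pattern (class and method names here are made up for illustration, and the `applied` map stands in for the htd setter calls):

```java
import java.util.Map;
import java.util.Set;

public class AlterArgs {
  /**
   * Consumes the options this sketch recognizes, mirroring admin.rb's
   * `htd.setX(arg.delete(KEY)) if arg[KEY]` pattern. Whatever remains in
   * the map afterwards is what the shell reports as "Unknown argument ignored".
   */
  public static Set<String> applyAndReturnUnknown(Map<String, String> arg,
      Map<String, String> applied) {
    for (String key : new String[] {"MAX_FILESIZE", "READONLY", "SPLIT_POLICY"}) {
      String value = arg.remove(key);  // consume the key if present
      if (value != null) {
        applied.put(key, value);       // stands in for the htd.setX(...) call
      }
    }
    return arg.keySet();               // unrecognized leftovers
  }
}
```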



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over

2017-12-05 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-19358:
-
Attachment: split-1-log.png
split_test_result.png

> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR
>Affects Versions: 0.98.24
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: newLogic.jpg, previousLogic.jpg, split-1-log.png, 
> split_test_result.png
>
>
> The current way of splitting the log is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg!
> The problem is that the OutputSink writes the recovered edits during log 
> splitting, which means it creates one WriterAndPath for each region. If the 
> cluster is small and the number of regions per region server is large, it 
> creates too many HDFS streams at the same time, which is prone to failure 
> since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log:
> !https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg!
> We cache the recovered edits until they exceed the memory limit we set or we 
> reach the end, then a thread pool does the rest: write them to files and 
> move them to the destination.
> The biggest benefit is that we can control the number of streams we create 
> during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * 
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
> contains_*.
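The bounding idea above can be illustrated with plain java.util.concurrent: draining buffered edits through a fixed writer pool means the number of simultaneously open "streams" is capped by the pool size, regardless of how many regions the WAL touches. This is only an illustrative sketch of the bound, not the patch itself:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedWriters {
  /**
   * Submits one "writer" task per region to a fixed pool and reports the
   * peak number of tasks running at once: never more than writerThreads.
   */
  public static int peakConcurrency(int regions, int writerThreads) {
    ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
    AtomicInteger open = new AtomicInteger();
    AtomicInteger peak = new AtomicInteger();
    for (int r = 0; r < regions; r++) {
      pool.submit(() -> {
        int now = open.incrementAndGet();       // "open a stream"
        peak.accumulateAndGet(now, Math::max);
        try {
          Thread.sleep(2);                      // pretend to write edits
        } catch (InterruptedException ignored) {
        }
        open.decrementAndGet();                 // "close the stream"
      });
    }
    pool.shutdown();
    try {
      pool.awaitTermination(30, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return peak.get();
  }
}
```

With the old scheme the analogous bound would be the region count (one WriterAndPath per region); with the pool it stays at writerThreads.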



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279782#comment-16279782
 ] 

Hadoop QA commented on HBASE-19340:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 37m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
9s{color} | {color:red} The patch generated 60 new + 887 unchanged - 26 fixed = 
947 total (was 913) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 87 new + 790 unchanged - 5 fixed = 
877 total (was 795) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
4s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:e77c578 |
| JIRA Issue | HBASE-19340 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900805/HBASE-19340-branch-1.2.batch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux 8e50a2bb35de 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-1.2 / 358e2d7 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_161 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_152 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_161 |
| rubocop | v0.51.0 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10253/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
| ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10253/artifact/patchprocess/diff-patch-ruby-lint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10253/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10253/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> 

[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279778#comment-16279778
 ] 

Chia-Ping Tsai commented on HBASE-19430:


{code:title=CellUtil.java}
  @Deprecated
  public static void setTimestamp(Cell cell, byte[] ts, int tsOffset) throws 
IOException {
PrivateCellUtil.setTimestamp(cell, ts, tsOffset);
  }
{code}
I didn't remove it as it is used in CellUtil. But it seems we can convert the 
byte[] to a long and then call setTimestamp(long) instead. I will remove it in 
the next patch.
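The conversion mentioned above amounts to decoding 8 big-endian bytes at the offset, which is what HBase's Bytes.toLong(byte[], int) does; CellUtil could decode once and delegate to the setTimestamp(long) overload. A standalone sketch of the decode step, assuming the ts bytes hold a big-endian serialized long (as HBase timestamps do):

```java
public class TsBytes {
  /** Decodes 8 big-endian bytes starting at tsOffset into a long. */
  public static long toLong(byte[] ts, int tsOffset) {
    long value = 0;
    for (int i = 0; i < 8; i++) {
      // Shift what we have and append the next unsigned byte.
      value = (value << 8) | (ts[tsOffset + i] & 0xFFL);
    }
    return value;
  }
}
```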

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They were introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and marked as deprecated. To my way of thinking, we should remove them 
> before the 2.0 release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18294) Reduce global heap pressure: flush based on heap occupancy

2017-12-05 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279774#comment-16279774
 ] 

Edward Bortnikov commented on HBASE-18294:
--

Chiming in ...

This question seems to be irrelevant to whether MSLAB use is a per-table or 
global flag. Agreed that we should avoid adding new configurations whenever 
possible. 

Let's try to remain factual in the decisions we make. The goal is to get the 
best possible performance from a machine with given RAM resources, on-heap or 
not. [~eshcar], could you please publish some numbers that validate the 
solution's value? [~anoop.hbase], mind sharing any data that proves the 
opposite? 

Thanks!


> Reduce global heap pressure: flush based on heap occupancy
> --
>
> Key: HBASE-18294
> URL: https://issues.apache.org/jira/browse/HBASE-18294
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-18294.01.patch, HBASE-18294.02.patch, 
> HBASE-18294.03.patch, HBASE-18294.04.patch, HBASE-18294.05.patch, 
> HBASE-18294.06.patch
>
>
> A region is flushed if its memory component exceeds a threshold (the default 
> size is 128MB).
> A flush policy decides whether to flush a store by comparing the size of the 
> store to another threshold (which can be configured with 
> hbase.hregion.percolumnfamilyflush.size.lower.bound).
> Currently the implementation (in both cases) compares the data size 
> (key-value only) to the threshold, whereas it should compare the heap size 
> (which includes index size and metadata).
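To make the gap concrete: with the default 128MB threshold, a memstore holding 100MB of key-value data plus 40MB of index/metadata overhead would not flush under the current data-size check even though it occupies 140MB of heap. A toy sketch of the two checks (the numbers and names are illustrative only, not the actual implementation):

```java
public class FlushCheck {
  static final long FLUSH_THRESHOLD = 128L * 1024 * 1024;  // default 128MB

  /** Current behavior: only the key-value data size is compared. */
  public static boolean shouldFlushByData(long dataSize) {
    return dataSize > FLUSH_THRESHOLD;
  }

  /** Proposed behavior: compare the full heap footprint (data + index + metadata). */
  public static boolean shouldFlushByHeap(long dataSize, long overheadSize) {
    return dataSize + overheadSize > FLUSH_THRESHOLD;
  }
}
```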



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279768#comment-16279768
 ] 

ramkrishna.s.vasudevan commented on HBASE-19430:


bq.void setTimestamp(byte[] ts, int tsOffset) throws IOException;
Did a grep for this. I think everywhere we use it, the offset is 0. So do we 
really need this version of the API? Rest LGTM.

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They were introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and marked as deprecated. To my way of thinking, we should remove them 
> before the 2.0 release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279766#comment-16279766
 ] 

Hadoop QA commented on HBASE-19430:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} hbase-common: The patch generated 2 new + 160 
unchanged - 8 fixed = 162 total (was 168) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
53s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 45s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.filter.TestFilterListOnMini |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19430 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900796/HBASE-19430.v0.test.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c120e87de4a8 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279756#comment-16279756
 ] 

Duo Zhang edited comment on HBASE-15536 at 12/6/17 7:02 AM:


The log level is DEBUG, sir; this is just a debug log telling you that you are 
on hadoop-2.7- and there is no PBHelperClient class, so we will use the PBHelper 
class instead. Maybe I should not print the stack trace so the message here 
will be less confusing?


was (Author: apache9):
The log level is DEBUG sir, this is just a debug log to tell you that you are 
on hadoop-2.7- and there is PBHelperClient class so we will use PBHelper class 
instead. Maybe I should not print the stack trace out so the message here will 
be less confusing?
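The version probe behind that debug message can be sketched with a plain reflection lookup: try the newer class name first and fall back when it is absent. The class names passed in below are stand-ins chosen so the sketch runs without Hadoop on the classpath; the real code probes for HDFS's PBHelperClient.

```java
public class ClassProbe {
  /**
   * Returns the preferred class name if it is loadable, otherwise the
   * fallback name. The ClassNotFoundException is the expected signal on
   * older releases, so it warrants a DEBUG line rather than a stack trace.
   */
  public static String resolve(String preferredName, String fallbackName) {
    try {
      return Class.forName(preferredName).getName();
    } catch (ClassNotFoundException e) {
      return fallbackName;  // expected on hadoop-2.7-
    }
  }
}
```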

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15536-v1.patch, HBASE-15536-v2.patch, 
> HBASE-15536-v3.patch, HBASE-15536-v4.patch, HBASE-15536-v5.patch, 
> HBASE-15536.patch, latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279765#comment-16279765
 ] 

Duo Zhang commented on HBASE-19410:
---

Yes. Will commit shortly. Thanks.

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279764#comment-16279764
 ] 

Hadoop QA commented on HBASE-19295:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
22s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hbase-server: The patch generated 0 new + 7 
unchanged - 1 fixed = 7 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
23s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
56s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
28s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
59s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 
30s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.5. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 35s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | 

[jira] [Commented] (HBASE-19437) Batch operation can't handle the null result for Append/Increment

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279762#comment-16279762
 ] 

Hadoop QA commented on HBASE-19437:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
16s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} hbase-server: The patch generated 1 new + 132 unchanged 
- 0 fixed = 133 total (was 132) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 34s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | hadoop.hbase.wal.TestWALFiltering |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.filter.TestFilterListOnMini |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19437 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900795/HBASE-19437.v0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 636091692a8a 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ed60e4518d |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10248/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 

[jira] [Commented] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279760#comment-16279760
 ] 

Hadoop QA commented on HBASE-19295:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} hbase-server: The patch generated 0 new + 7 
unchanged - 1 fixed = 7 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
20s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
51s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
26s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
59s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 
33s{color} | {color:red} The patch causes 17 errors with Hadoop v2.6.5. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | 

[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279756#comment-16279756
 ] 

Duo Zhang commented on HBASE-15536:
---

The log level is DEBUG, sir; this is just a debug log telling you that you are 
on hadoop-2.7- and there is no PBHelperClient class, so we will use the 
PBHelper class instead. Maybe I should not print the stack trace, so the 
message here would be less confusing?

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15536-v1.patch, HBASE-15536-v2.patch, 
> HBASE-15536-v3.patch, HBASE-15536-v4.patch, HBASE-15536-v5.patch, 
> HBASE-15536.patch, latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279746#comment-16279746
 ] 

stack commented on HBASE-19357:
---

bq. we at least have to cache the META table data within the LRU.

Did you find a config that does this?

How do you think it used to work? Were we doing victim eviction from L1 out to 
L2 for hbase:meta blocks as well?

Dunno. Hot blocks from a file will be in the OS cache. Will that be a problem?



> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When Bucket cache is used, by default we don't configure it as an L2 cache 
> alone. The default setting is combined mode ON, where data blocks go to the 
> Bucket cache and index/bloom blocks go to the LRU cache. But there is a way to 
> turn this off and make LRU the L1 and Bucket cache a victim handler for L1; 
> it will then be just an L2.
> After the off-heap read path optimization, Bucket cache is no longer slower 
> than L1. We have test results on data sizes from 12 GB. The Alibaba use case 
> was also with 12 GB, and they observed a ~30% QPS improvement over the LRU 
> cache.
> This issue is to remove the option for combined mode = false. So when Bucket 
> cache is in use, data blocks will go to it only, and LRU will get only 
> index/meta/bloom blocks. Bucket cache will no longer be configured as a 
> victim handler for LRU.
> Note: when an external cache is in use, only there does the L1/L2 distinction 
> apply. LRU will be L1 and the external cache acts as its L2. That makes full 
> sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279719#comment-16279719
 ] 

ramkrishna.s.vasudevan commented on HBASE-19357:


I did not go through the patch; just saw this now.
bq. But I fear we can not remove the cacheDataInL1 setting.. This is because we 
have file mode BC also.. When that is in place along with on heap LRU cache
Good point. One thing is that for the META and NS tables, we could probably 
always cache data in L1 internally?
So the cache-data-in-L1 option is effectively an override of the combined 
L1/L2 mode, ensuring that a table's data always stays in L1? However, given 
the intent of these JIRAs, if a user has a file-mode L2 then he is accepting 
that his tables' data will go there, so do we still need to provide the 
override option?

> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When Bucket cache is used, by default we don't configure it as an L2 cache 
> alone. The default setting is combined mode ON, where data blocks go to the 
> Bucket cache and index/bloom blocks go to the LRU cache. But there is a way to 
> turn this off and make LRU the L1 and Bucket cache a victim handler for L1; 
> it will then be just an L2.
> After the off-heap read path optimization, Bucket cache is no longer slower 
> than L1. We have test results on data sizes from 12 GB. The Alibaba use case 
> was also with 12 GB, and they observed a ~30% QPS improvement over the LRU 
> cache.
> This issue is to remove the option for combined mode = false. So when Bucket 
> cache is in use, data blocks will go to it only, and LRU will get only 
> index/meta/bloom blocks. Bucket cache will no longer be configured as a 
> victim handler for LRU.
> Note: when an external cache is in use, only there does the L1/L2 distinction 
> apply. LRU will be L1 and the external cache acts as its L2. That makes full 
> sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19432) Roll the specified writer in HFileOutputFormat2

2017-12-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19432:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-beta-1
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Guangxu

> Roll the specified writer in HFileOutputFormat2
> ---
>
> Key: HBASE-19432
> URL: https://issues.apache.org/jira/browse/HBASE-19432
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.0-beta-1
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19432.master.001.patch, 
> HBASE-19432.master.002.patch
>
>
> {code}
>   // If any of the HFiles for the column families has reached
>   // maxsize, we need to roll all the writers
>   if (wl != null && wl.written + length >= maxsize) {
> this.rollRequested = true;
>   }
> {code}
> If we always roll all the writers, a large number of small files will be 
> generated in multi-family or multi-table scenarios.
> So we should roll only the specific writer whose HFile has reached maxsize.
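The per-writer roll described above can be modeled standalone. In this sketch, a `rollRequested` flag lives on each `WriterLength` (that name follows the snippet quoted above) rather than on the whole task; the class name, map keying, and sizes are illustrative, not the actual HFileOutputFormat2 code.

```java
import java.util.HashMap;
import java.util.Map;

public class PerWriterRoll {
  static final long MAXSIZE = 100L; // illustrative maxsize

  static class WriterLength {
    long written = 0;
    boolean rollRequested = false; // per-writer flag, not table-wide
  }

  // One writer per column family, keyed by family name.
  static final Map<String, WriterLength> writers = new HashMap<>();

  // Request a roll only for the family whose HFile reached maxsize;
  // other families' writers are left alone.
  static void write(String family, long length) {
    WriterLength wl = writers.computeIfAbsent(family, f -> new WriterLength());
    if (wl.written + length >= MAXSIZE) {
      wl.rollRequested = true; // roll just this writer
    }
    wl.written += length;
  }

  public static void main(String[] args) {
    write("cf1", 120); // exceeds maxsize: roll requested for cf1 only
    write("cf2", 10);  // small write: cf2 keeps its current HFile
    System.out.println(writers.get("cf1").rollRequested); // true
    System.out.println(writers.get("cf2").rollRequested); // false
  }
}
```

With the original table-wide flag, the cf2 writer above would also have been rolled, producing the small files the description complains about.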



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-12-05 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: HBASE-19340-branch-1.2.batch

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch, 
> HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output and I was confused:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb seems to be missing the handling for 
> this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?
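The reported behavior can be modeled in plain Java: the shell consumes the keys it knows and reports everything else as ignored, and SPLIT_POLICY is simply absent from the known set. This is only an illustrative model of the arg handling; the real logic lives in JRuby in admin.rb.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class AlterArgs {
  // Keys admin.rb handles explicitly (subset shown in the quoted snippet);
  // SPLIT_POLICY is not among them, hence the warning the reporter saw.
  static final Set<String> KNOWN =
      new HashSet<>(Arrays.asList("MAX_FILESIZE", "READONLY"));

  // Returns the keys that would be dropped with "Unknown argument ignored".
  static List<String> apply(Map<String, String> args) {
    List<String> ignored = new ArrayList<>();
    for (String key : args.keySet()) {
      if (!KNOWN.contains(key)) {
        ignored.add(key); // shell prints: Unknown argument ignored: <key>
      }
    }
    return ignored;
  }

  public static void main(String[] args) {
    Map<String, String> alter = new LinkedHashMap<>();
    alter.put("SPLIT_POLICY",
        "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
    System.out.println(apply(alter)); // [SPLIT_POLICY]
  }
}
```

The fix, then, is to add SPLIT_POLICY (and FLUSH_POLICY) to the handled keys so the descriptor attribute actually gets set.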



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279683#comment-16279683
 ] 

Ted Yu edited comment on HBASE-19417 at 12/6/17 6:22 AM:
-

You can see the two test failures here:
https://builds.apache.org/job/HBase-TRUNK_matrix/4176/testReport/




was (Author: yuzhih...@gmail.com):
You can see the two test failures here:
https://builds.apache.org/job/HBase-TRUNK_matrix/4176/testReport/

Seems to be related to HBASE-19323 

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123 where Appy pointed out that the 
> override of loaded should be placed inside else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * Return value of boolean for postBulkLoadHFile() hook are changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.
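The review point can be sketched standalone: an unconditional `loaded = (map != null)` after both branches clobbers whatever the non-secure path (and its coprocessor hooks) decided, so the assignment belongs in the else branch. The method shape and names below are an illustrative model, not the real RSRpcServices code.

```java
import java.util.HashMap;
import java.util.Map;

public class BulkLoadFlag {
  // secure: whether the secure bulk load manager path was taken.
  // nonSecureResult: outcome already computed on the non-secure path
  //   (null when that path was not taken).
  // map: result of the secure bulk load (null on failure).
  static boolean bulkLoad(boolean secure, Boolean nonSecureResult,
                          Map<?, ?> map) {
    boolean loaded;
    if (!secure && nonSecureResult != null) {
      loaded = nonSecureResult;  // non-secure path already decided
    } else {
      loaded = (map != null);    // only the secure path derives from the map
    }
    return loaded;
  }

  public static void main(String[] args) {
    // Non-secure path returned false; a non-null map must not override it.
    System.out.println(bulkLoad(false, Boolean.FALSE, new HashMap<>())); // false
  }
}
```

In the pre-fix code, the equivalent of the last `if (map != null) loaded = true;` ran on both paths, which is exactly the override Appy flagged.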



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache

2017-12-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279709#comment-16279709
 ] 

ramkrishna.s.vasudevan commented on HBASE-19435:


bq. If there is an HBase thread that keeps interrupting that connection, we 
should fix the error there. Currently, I believe we are seeing this in some 
case where a compaction interrupts the connection, but haven't isolated the 
specific process.
I think an immediate refresh is fine here, but on the point that you see 
compaction interrupting the file connection: are you referring to the forceful 
eviction of cache blocks when a compaction happens?
For re-enabling the cache, yes, I think that needs to be done for cases like 
this, because even if the cache is invalidated, we need it back once the 
bucket cache's file connection is restored.


> Reopen Files for ClosedChannelException in BucketCache
> --
>
> Key: HBASE-19435
> URL: https://issues.apache.org/jira/browse/HBASE-19435
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.3.1
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-19435.master.001.patch
>
>
> When using the FileIOEngine for BucketCache, the cache will be disabled if 
> the connection is interrupted or closed. HBase will then get 
> ClosedChannelExceptions trying to access the file. After 60s, the RS will 
> disable the cache. This causes severe read performance degradation for 
> workloads that rely on this cache. FileIOEngine never tries to reopen the 
> connection. This JIRA is to reopen files when the BucketCache encounters a 
> ClosedChannelException.
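The reopen idea can be sketched with only the JDK: catch ClosedChannelException and retry the read once after reopening the backing file. This stands in for FileIOEngine; the real patch would also have to re-enable the cache, guard against concurrent reopens, and bound retries.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReopeningReader {
  final Path file;
  FileChannel channel;

  ReopeningReader(Path file) throws IOException {
    this.file = file;
    this.channel = FileChannel.open(file, StandardOpenOption.READ);
  }

  // Positional read with one reopen-and-retry if the channel was closed
  // underneath us (e.g. by an interrupt closing the channel).
  int read(ByteBuffer dst, long offset) throws IOException {
    try {
      return channel.read(dst, offset);
    } catch (ClosedChannelException e) {
      channel = FileChannel.open(file, StandardOpenOption.READ); // reopen
      return channel.read(dst, offset);                          // retry once
    }
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("bucket", ".cache");
    Files.write(tmp, new byte[] {1, 2, 3, 4});
    ReopeningReader r = new ReopeningReader(tmp);
    r.channel.close(); // simulate the interrupted connection
    ByteBuffer buf = ByteBuffer.allocate(4);
    System.out.println(r.read(buf, 0)); // succeeds after reopen: prints 4
  }
}
```

Without the catch block, the read above would fail with ClosedChannelException until the cache is disabled, which matches the degradation described in the issue.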



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19357) Bucket cache no longer L2 for LRU cache

2017-12-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279705#comment-16279705
 ] 

Anoop Sam John commented on HBASE-19357:


I pushed this and then reverted. Now I have a worry; sorry for not thinking of 
that upfront.
Making BC no longer L2 (as the subject says) is just fine; we need those 
changes. Removing combined mode = false is also fine; it will always be 
combined.
But I fear we cannot remove the cacheDataInL1 setting. This is because we also 
have file-mode BC. When that is in place along with the on-heap LRU cache, we 
at least have to cache the META table data within the LRU; putting it in 
file-mode BC would have a huge perf impact. Same with the NS table. And who 
knows, users may have some very important tables too.
So I tend to fix this issue by keeping the other 2 parts, apart from the 
removal of the setter in CD for caching data in L1. Any concerns? Again, sorry 
for creating the confusion.

> Bucket cache no longer L2 for LRU cache
> ---
>
> Key: HBASE-19357
> URL: https://issues.apache.org/jira/browse/HBASE-19357
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch, HBASE-19357_V3.patch, HBASE-19357_V3.patch
>
>
> When Bucket cache is used, by default we don't configure it as an L2 cache 
> alone. The default setting is combined mode ON, where data blocks go to the 
> Bucket cache and index/bloom blocks go to the LRU cache. But there is a way to 
> turn this off and make LRU the L1 and Bucket cache a victim handler for L1; 
> it will then be just an L2.
> After the off-heap read path optimization, Bucket cache is no longer slower 
> than L1. We have test results on data sizes from 12 GB. The Alibaba use case 
> was also with 12 GB, and they observed a ~30% QPS improvement over the LRU 
> cache.
> This issue is to remove the option for combined mode = false. So when Bucket 
> cache is in use, data blocks will go to it only, and LRU will get only 
> index/meta/bloom blocks. Bucket cache will no longer be configured as a 
> victim handler for LRU.
> Note: when an external cache is in use, only there does the L1/L2 distinction 
> apply. LRU will be L1 and the external cache acts as its L2. That makes full 
> sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-12-05 Thread Tak Lon (Stephen) Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279704#comment-16279704
 ] 

Tak Lon (Stephen) Wu commented on HBASE-19023:
--

Thanks, attached are my patches and review links.

> Usage for rowcounter in refguide is out of sync with code
> -
>
> Key: HBASE-19023
> URL: https://issues.apache.org/jira/browse/HBASE-19023
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Ted Yu
>Assignee: Tak Lon (Stephen) Wu
>  Labels: document, mapreduce
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-19023.branch-2.001.patch, 
> HBASE-19023.master.001.patch
>
>
> src/main/asciidoc/_chapters/troubleshooting.adoc:
> {code}
> HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
> $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
> {code}
> The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-12-05 Thread Tak Lon (Stephen) Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-19023:
-
Fix Version/s: 3.0.0
   2.0.0
   Status: Patch Available  (was: Open)

> Usage for rowcounter in refguide is out of sync with code
> -
>
> Key: HBASE-19023
> URL: https://issues.apache.org/jira/browse/HBASE-19023
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Ted Yu
>Assignee: Tak Lon (Stephen) Wu
>  Labels: document, mapreduce
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-19023.branch-2.001.patch, 
> HBASE-19023.master.001.patch
>
>
> src/main/asciidoc/_chapters/troubleshooting.adoc:
> {code}
> HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
> $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
> {code}
> The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19301) Provide way for CPs to create short circuited connection with custom configurations

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279700#comment-16279700
 ] 

stack commented on HBASE-19301:
---

Yeah, CoreCoprocessors may make heavy use of the RS Connection. I suppose 
they'd do this even after we integrate them into core.

bq. Ya with warn expose to CPs is fine. But better do not ?

Yes. It would be better not to, but I think we are out of time for messing 
around here. We've done a good cleanup in hbase2. This will be a rough edge, 
something to work on in hbase3. What do you think?

bq. But till 1.x we have only this way. 

Agreed. It is a problem though, right? Each CP instance with a Connection goes 
to hbase:meta, scanning to find locations, and each instance has to warm its 
own cache.

What do you think, [~anoop.hbase] (and [~zghaobac])? I can put up a patch that 
keeps getConnection with a warning that it is the RS's connection and is only 
for light loading (e.g. create a table if it does not exist on 
start... admin functions). If you need to do heavy-connection work, create your 
own. I'll doc how a CP can do it at CP start, though it means an ugly cast of 
the passed-in CoprocessorEnvironment so they can get at the createConnection 
method (will doc too how createConnection does short-circuit; if they don't 
need short-circuit, they should use ConnectionFactory). Ok for hbase2?

> Provide way for CPs to create short circuited connection with custom 
> configurations
> ---
>
> Key: HBASE-19301
> URL: https://issues.apache.org/jira/browse/HBASE-19301
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19301-addendum.patch, HBASE-19301.patch, 
> HBASE-19301_V2.patch, HBASE-19301_V2.patch
>
>
> Over in HBASE-18359 we have discussed this.
> Right now HBase provides getConnection() in RegionCPEnv, MasterCPEnv, etc., 
> but this returns a pre-created connection (one per server), which uses the 
> configs from hbase-site.xml on that server.
> Phoenix needs to create a connection in a CP with some custom configs. Putting 
> these custom changes in hbase-site.xml is harmful, as that would affect all 
> connections created on that server.
> This issue is for providing an overloaded getConnection(Configuration) API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-12-05 Thread Tak Lon (Stephen) Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-19023:
-
Attachment: HBASE-19023.branch-2.001.patch

> Usage for rowcounter in refguide is out of sync with code
> -
>
> Key: HBASE-19023
> URL: https://issues.apache.org/jira/browse/HBASE-19023
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Ted Yu
>Assignee: Tak Lon (Stephen) Wu
>  Labels: document, mapreduce
> Attachments: HBASE-19023.branch-2.001.patch, 
> HBASE-19023.master.001.patch
>
>
> src/main/asciidoc/_chapters/troubleshooting.adoc:
> {code}
> HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
> $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
> {code}
> The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-13819) Make RPC layer CellBlock buffer a DirectByteBuffer

2017-12-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279698#comment-16279698
 ] 

ramkrishna.s.vasudevan commented on HBASE-13819:


Just one question now: since NettyRpcServer is being used, do both requests 
and responses now go through the Netty pool? Or does at least one of the paths 
use the reservoir? I remember requests were not using it, but responses still 
do, I believe.

> Make RPC layer CellBlock buffer a DirectByteBuffer
> --
>
> Key: HBASE-13819
> URL: https://issues.apache.org/jira/browse/HBASE-13819
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13819.patch, HBASE-13819_branch-1.patch, 
> HBASE-13819_branch-1.patch, HBASE-13819_branch-1.patch, q.png
>
>
> In the RPC layer, when we make a cellBlock to use as the RPC payload, we make 
> an on-heap byte buffer (via BoundedByteBufferPool). The pool keeps up to a 
> certain number of buffers. This jira aims at testing the possibility of 
> making these buffers off-heap ones (DBB). The advantages:
> 1. Unsafe-based writes to off-heap are faster than to on-heap. Right now we 
> are not using unsafe-based writes at all; even if we add them, DBB will be 
> better.
> 2. When Cells are backed by off-heap memory (HBASE-11425), off-heap to 
> off-heap writes will be better.
> 3. Looking at the SocketChannel impl, if we pass a HeapByteBuffer to the 
> socket channel, it creates a temp DBB, copies the data there, and only DBBs 
> are moved to sockets. If we use a DBB in the first place, we can avoid this 
> extra level of copying.
> Will do various perf tests with the change and report back.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279693#comment-16279693
 ] 

stack commented on HBASE-15536:
---

[~Apache9] Here is why I changed the release note to add this:

commit 95f8e93691ad79b77ae9a4a13208c0d7ca405c96 
Author: Haohui Mai  
Date: Sat Aug 22 13:30:19 2015 -0700 

HDFS-8934. Move ShortCircuitShm to hdfs-client. Contributed by Mingliang 
Liu.


... it is because I get this when I try to run asyncfswal on 
hadoop-2.7.4-SNAPSHOT


2017-12-05 21:52:48,523 DEBUG [main] 
asyncfs.FanOutOneBlockAsyncDFSOutputHelper: No PBHelperClient class found, 
should be hadoop 2.7-
java.lang.ClassNotFoundException: 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createPBHelper(FanOutOneBlockAsyncDFSOutputHelper.java:413)
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:546)
at 
org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:62)
at 
org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:158)
at 
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:167)
at 
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:100)
at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:621)
at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:131)
at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:751)
at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.(AsyncFSWAL.java:257)
at 
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:70)
at 
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:45)
at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:139)
at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:55)
at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:244)
at 
org.apache.hadoop.hbase.wal.WALPerformanceEvaluation.openRegion(WALPerformanceEvaluation.java:502)
at 
org.apache.hadoop.hbase.wal.WALPerformanceEvaluation.run(WALPerformanceEvaluation.java:336)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.wal.WALPerformanceEvaluation.innerMain(WALPerformanceEvaluation.java:597)
at 
org.apache.hadoop.hbase.wal.WALPerformanceEvaluation.main(WALPerformanceEvaluation.java:601)

Do you not get the above?

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-15536-v1.patch, HBASE-15536-v2.patch, 
> HBASE-15536-v3.patch, HBASE-15536-v4.patch, HBASE-15536-v5.patch, 
> HBASE-15536.patch, latesttrunk_asyncWAL_50threads_10cols.jfr, 
> latesttrunk_defaultWAL_50threads_10cols.jfr
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-12-05 Thread Tak Lon (Stephen) Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-19023:
-
Attachment: HBASE-19023.master.001.patch

> Usage for rowcounter in refguide is out of sync with code
> -
>
> Key: HBASE-19023
> URL: https://issues.apache.org/jira/browse/HBASE-19023
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Ted Yu
>Assignee: Tak Lon (Stephen) Wu
>  Labels: document, mapreduce
> Attachments: HBASE-19023.master.001.patch
>
>
> src/main/asciidoc/_chapters/troubleshooting.adoc:
> {code}
> HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
> $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
> {code}
> The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.
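A corrected invocation would presumably point at the hbase-mapreduce jar instead. The jar name with its VERSION placeholder below follows the pattern of the old command and is illustrative only; check the actual jar path in your installation:

```shell
# Illustrative: the rowcounter tool now lives in the hbase-mapreduce jar,
# not hbase-server. VERSION is a placeholder as in the original docs.
HADOOP_CLASSPATH=$(hbase classpath) hadoop jar \
  "$HBASE_HOME"/hbase-mapreduce-VERSION.jar rowcounter usertable
```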





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279683#comment-16279683
 ] 

Ted Yu commented on HBASE-19417:


You can see the two test failures here:
https://builds.apache.org/job/HBase-TRUNK_matrix/4176/testReport/

Seems to be related to HBASE-19323 

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123 where Appy pointed out that the 
> override of loaded should be placed inside else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * Return value of boolean for postBulkLoadHFile() hook are changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.
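The review comment above can be reconstructed with plain-JDK stand-ins (the names and boolean plumbing are simplified, not the actual RSRpcServices code): deriving `loaded` from `map != null` after both branches clobbers the non-secure path's result, while moving the derivation into the else block keeps both paths correct.

```java
import java.util.List;
import java.util.Map;

public final class BulkLoadFlagDemo {
  /** Buggy shape: the map-derived override runs for BOTH paths, clobbering
   *  the result the non-secure branch already computed. */
  static boolean buggy(boolean secure, boolean nonSecureResult,
      Map<String, List<String>> map) {
    boolean loaded = false;
    if (!secure) {
      loaded = nonSecureResult;   // real result for the non-secure path
    }                             // secure path fills `map` instead
    if (map != null) {            // runs unconditionally: wrong
      loaded = true;
    }
    return loaded;
  }

  /** Fixed shape: `loaded` is derived from `map` only in the secure branch. */
  static boolean fixed(boolean secure, boolean nonSecureResult,
      Map<String, List<String>> map) {
    boolean loaded;
    if (!secure) {
      loaded = nonSecureResult;
    } else {
      loaded = map != null;       // secure bulk load result
    }
    return loaded;
  }
}
```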





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279680#comment-16279680
 ] 

Ted Yu commented on HBASE-19417:


There was no occurrence of LoadIncrementalHFiles test in the latest build (as 
of now):

https://builds.apache.org/job/HBASE-Flaky-Tests/23784/consoleFull

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123 where Appy pointed out that the 
> override of loaded should be placed inside else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * Return value of boolean for postBulkLoadHFile() hook are changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.





[jira] [Comment Edited] (HBASE-19323) Make netty engine default in hbase2

2017-12-05 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279678#comment-16279678
 ] 

binlijin edited comment on HBASE-19323 at 12/6/17 5:38 AM:
---

@stack Thank you sir, you are welcome.


was (Author: aoxiang):
@stack Thanks you sir, you are welcome.

> Make netty engine default in hbase2
> ---
>
> Key: HBASE-19323
> URL: https://issues.apache.org/jira/browse/HBASE-19323
> Project: HBase
>  Issue Type: Task
>  Components: rpc
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: 
> 0001-HBASE-19323-Make-netty-engine-default-in-hbase2.patch, 
> HBASE-19323.master.001.patch
>
>
> HBASE-17263 added netty rpc server. This issue is about making it default 
> given it has seen good service across two singles-days at scale. Netty 
> handles the scenario seen in HBASE-19320 (See tail of HBASE-19320 for 
> suggestion to netty the default)





[jira] [Commented] (HBASE-19323) Make netty engine default in hbase2

2017-12-05 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279678#comment-16279678
 ] 

binlijin commented on HBASE-19323:
--

@stack Thank you, sir, you are welcome.

> Make netty engine default in hbase2
> ---
>
> Key: HBASE-19323
> URL: https://issues.apache.org/jira/browse/HBASE-19323
> Project: HBase
>  Issue Type: Task
>  Components: rpc
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: 
> 0001-HBASE-19323-Make-netty-engine-default-in-hbase2.patch, 
> HBASE-19323.master.001.patch
>
>
> HBASE-17263 added netty rpc server. This issue is about making it default 
> given it has seen good service across two singles-days at scale. Netty 
> handles the scenario seen in HBASE-19320 (See tail of HBASE-19320 for 
> suggestion to netty the default)





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279676#comment-16279676
 ] 

Appy commented on HBASE-19417:
--

The *LoadIncrementalHFiles tests (related to this change) are not in the flaky 
list though - 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html.
So I'd request you to keep an eye on the flaky list over the next few days, in 
case they show up after commit.


> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123 where Appy pointed out that the 
> override of loaded should be placed inside else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * Return value of boolean for postBulkLoadHFile() hook are changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.





[jira] [Created] (HBASE-19438) Doc cleanup after removal of features across Cache/BucketCache

2017-12-05 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-19438:
--

 Summary: Doc cleanup after removal of features across 
Cache/BucketCache
 Key: HBASE-19438
 URL: https://issues.apache.org/jira/browse/HBASE-19438
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 2.0.0-beta-1








[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279674#comment-16279674
 ] 

stack commented on HBASE-19430:
---

Pardon me. Misread. Saw the deprecations. Thought they were yours. +1 from me 
but get [~anoop.hbase]/[~ram_krish] blessing. Go easy.

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They are introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and denoted with deprecated. To my way of thinking, we should remove them 
> before 2.0 release.





[jira] [Commented] (HBASE-19301) Provide way for CPs to create short circuited connection with custom configurations

2017-12-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279675#comment-16279675
 ] 

Anoop Sam John commented on HBASE-19301:


On removing getConnection(): the issue is that our CPs, like the AC, make heavy 
use of it. We have the CoreCoprocessor way of passing a special 
RegionEnvironment to them, which additionally has getRegionServerServices(). 
Would it work for that RegionServerServices to have a getConnection() which 
gives the HRS connection?
Ya, exposing it to CPs with a warning is fine. But better not to?
Ya, createConnection() always creates a new connection. The concern about too 
many connections is valid. But up to 1.x we had only this way; there was no 
getConnection() that returned a shared connection. The headache of 
caching/sharing the connection was with the CPs. Phoenix and the rest do that 
already, just like users handle it on the client side. The only difference of 
createConnection() here is that it gives a short-circuited connection; 
otherwise it is the same as a connection created via ConnectionFactory.
Ya, let's see how/what solution comes up for the other issue of the user and 
sharing the RPC context. I believe [~zghaobac] is working on that. (?)

> Provide way for CPs to create short circuited connection with custom 
> configurations
> ---
>
> Key: HBASE-19301
> URL: https://issues.apache.org/jira/browse/HBASE-19301
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19301-addendum.patch, HBASE-19301.patch, 
> HBASE-19301_V2.patch, HBASE-19301_V2.patch
>
>
> Over in HBASE-18359 we have discussions for this.
> Right now HBase provide getConnection() in RegionCPEnv, MasterCPEnv etc. But 
> this returns a pre created connection (per server).  This uses the configs at 
> hbase-site.xml at that server. 
> Phoenix needs creating connection in CP with some custom configs. Having this 
> custom changes in hbase-site.xml is harmful as that will affect all 
> connections been created at that server.
> This issue is for providing an overloaded getConnection(Configuration) API





[jira] [Commented] (HBASE-19433) ChangeSplitPolicyAction modifies an immutable HTableDescriptor

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279668#comment-16279668
 ] 

stack commented on HBASE-19433:
---

Patch looks fine. How do we know for sure it addresses the problem?

> ChangeSplitPolicyAction modifies an immutable HTableDescriptor
> --
>
> Key: HBASE-19433
> URL: https://issues.apache.org/jira/browse/HBASE-19433
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 19433.v1.txt
>
>
> {noformat}
> 2017-12-01 23:18:51,433 WARN  [ChaosMonkeyThread] policies.Policy: Exception 
> occurred during performing action: java.lang.UnsupportedOperationException: 
> HTableDescriptor is read-only
> at 
> org.apache.hadoop.hbase.client.ImmutableHTableDescriptor.getDelegateeForModification(ImmutableHTableDescriptor.java:59)
> at 
> org.apache.hadoop.hbase.HTableDescriptor.setRegionSplitPolicyClassName(HTableDescriptor.java:333)
> at 
> org.apache.hadoop.hbase.chaos.actions.ChangeSplitPolicyAction.perform(ChangeSplitPolicyAction.java:54)
> at 
> org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59)
> at 
> org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Found during some internal testing. Need to make sure this Action, in 
> addition to the other, don't fall into the trap of modifying the 
> TableDescriptor obtained from Admin.
> [~tedyu], want to take a stab at it?
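The same trap can be shown with plain JDK collections (an analogy, not the actual HTableDescriptor/TableDescriptorBuilder code): mutating a read-only view throws UnsupportedOperationException, and the fix is to copy into a mutable instance, mutate the copy, and hand that back.

```java
import java.util.ArrayList;
import java.util.List;

public final class ImmutableViewDemo {
  /** Copy-then-mutate: the safe pattern for descriptors obtained from Admin,
   *  which may be immutable views. Never call setters on the view itself. */
  static List<String> withSplitPolicy(List<String> readOnlyView,
      String policyClassName) {
    List<String> mutable = new ArrayList<>(readOnlyView); // defensive copy
    mutable.add(policyClassName);                         // mutate the copy only
    return mutable;
  }
}
```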





[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279662#comment-16279662
 ] 

stack commented on HBASE-19410:
---

Open new issue to talk testing revamp or just to dump ideas in (all above are 
good). Meantime, seems like this is good to go?

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19323) Make netty engine default in hbase2

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279654#comment-16279654
 ] 

Hudson commented on HBASE-19323:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4176 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4176/])
HBASE-19323 Make netty engine default in hbase2 (stack: rev 
ed60e4518dda1172de99db6ace098dd858a3ada8)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerFactory.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/AbstractTestIPC.java


> Make netty engine default in hbase2
> ---
>
> Key: HBASE-19323
> URL: https://issues.apache.org/jira/browse/HBASE-19323
> Project: HBase
>  Issue Type: Task
>  Components: rpc
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0-beta-1
>
> Attachments: 
> 0001-HBASE-19323-Make-netty-engine-default-in-hbase2.patch, 
> HBASE-19323.master.001.patch
>
>
> HBASE-17263 added netty rpc server. This issue is about making it default 
> given it has seen good service across two singles-days at scale. Netty 
> handles the scenario seen in HBASE-19320 (See tail of HBASE-19320 for 
> suggestion to netty the default)





[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279652#comment-16279652
 ] 

Chia-Ping Tsai commented on HBASE-19430:


bq. why remove the methods in CellUtil, the ones that do is isDelete*, etc.?
Pardon me. I don't catch your pointer. All changes in {{CellUtil}} are about 
checkstyle warnings (tabs and whitespace). No methods are removed from 
{{CellUtil}}.

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They are introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and denoted with deprecated. To my way of thinking, we should remove them 
> before 2.0 release.





[jira] [Commented] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279649#comment-16279649
 ] 

stack commented on HBASE-19436:
---

+1

> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19436.HBASE-14070.HLC.001.patch
>
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be build nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC : It got it as a part of forking from master where Sean's 
> work was in progress. Wasn't added intentionally. Removing Jenkinsfile from 
> it.
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Commented] (HBASE-13819) Make RPC layer CellBlock buffer a DirectByteBuffer

2017-12-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279648#comment-16279648
 ] 

Anoop Sam John commented on HBASE-13819:


We allocate direct memory (DBBs) only for pooling purposes. The pool has a 
fixed max size and, as you see, the BBs are not freed; put-back just returns 
the DBB to the pool for later use. When a request read cannot find a free BB in 
the pool, we don't create a new DBB; we create an on-heap buffer instead. We 
are NOT doing on-demand DBB creation anywhere; that would be more dangerous 
IMO. So ya, DBB exhaustion is possible. Because, as you know, if we pass an HBB 
to NIO, it creates a DBB on demand (it has its own pooling as well), reads into 
that first, and then copies into the passed HBB. ThreadLocal-based pooling in 
NIO is again more problematic. Seeing more on how Netty handles things: for 
reading requests, can Netty read into an HBB directly? In all, there are some 
possible cases of direct memory exhaustion, or more GC time, either of which 
can kill the RS.
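The pooling scheme described above can be sketched with plain Java NIO. The class below is illustrative (it is not the actual HBase BoundedByteBufferPool): a fixed number of direct buffers is allocated up front, and when the pool is drained, callers fall back to a throwaway on-heap buffer instead of allocating more direct memory on demand.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class BoundedDirectBufferPool {
  private final BlockingQueue<ByteBuffer> pool;
  private final int bufferSize;

  public BoundedDirectBufferPool(int maxBuffers, int bufferSize) {
    this.pool = new ArrayBlockingQueue<>(maxBuffers);
    this.bufferSize = bufferSize;
    for (int i = 0; i < maxBuffers; i++) {
      // Direct buffers are allocated once, up front, and reused forever.
      pool.offer(ByteBuffer.allocateDirect(bufferSize));
    }
  }

  /** Returns a pooled direct buffer, or a fresh heap buffer if drained. */
  public ByteBuffer take() {
    ByteBuffer b = pool.poll();
    return b != null ? b : ByteBuffer.allocate(bufferSize);
  }

  /** Returns a buffer to the pool; heap fallbacks are dropped for GC. */
  public void put(ByteBuffer b) {
    if (b.isDirect()) {
      b.clear();
      pool.offer(b); // bounded queue: extras discarded, pool stays fixed-size
    }
  }
}
```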

> Make RPC layer CellBlock buffer a DirectByteBuffer
> --
>
> Key: HBASE-13819
> URL: https://issues.apache.org/jira/browse/HBASE-13819
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13819.patch, HBASE-13819_branch-1.patch, 
> HBASE-13819_branch-1.patch, HBASE-13819_branch-1.patch, q.png
>
>
> In RPC layer, when we make a cellBlock to put as RPC payload, we will make an 
> on heap byte buffer (via BoundedByteBufferPool). The pool will keep upto 
> certain number of buffers. This jira aims at testing possibility for making 
> this buffers off heap ones. (DBB)  The advantages
> 1. Unsafe based writes to off heap is faster than that to on heap. Now we are 
> not using unsafe based writes at all. Even if we add, DBB will be better
> 2. When Cells are backed by off heap (HBASE-11425) off heap to off heap 
> writes will be better
> 3. When checked the code in SocketChannel impl, if we pass a HeapByteBuffer 
> to the socket channel, it will create a temp DBB and copy data to there and 
> only DBBs will be moved to Sockets. If we make DBB 1st hand itself, we can  
> avoid this one more level of copying.
> Will do different perf testing with changed and report back.





[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279645#comment-16279645
 ] 

stack commented on HBASE-19430:
---

Skimmed. Looks great (why remove the methods in CellUtil, the ones that do is 
isDelete*, etc.?). Get +1 from [~anoop.hbase] or [~ram_krish]

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They are introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and denoted with deprecated. To my way of thinking, we should remove them 
> before 2.0 release.





[jira] [Commented] (HBASE-19301) Provide way for CPs to create short circuited connection with custom configurations

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279640#comment-16279640
 ] 

stack commented on HBASE-19301:
---

To be clear, suggestion:

 * Leave getConnection in place. Add warnings: warn that it returns the RS 
Connection and is for light-duty, incidental usage ONLY. Pros: its cache is 
populated; it does short-circuit connection. Cons: no heavy-duty usage, or it 
will impinge on RS operation, and a close will kill the RS.
 * Add doc to createConnection on how to create one that does and one that does 
not do short-circuiting. Show how to create one on start. Warn about making too 
many, or it will drive up loading on hbase:meta and bloat threads on the local 
RS.

We'd do the above because we are out of time for hbase2. Let's do better in 
hbase3. Let's work on the User issues in a new JIRA.
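The lifecycle being recommended can be sketched with simplified stand-ins for the CP environment and Connection types (this is not the real Coprocessor API, just the shape of it): create a private connection once in start(), reuse it across calls, and close only that one in stop(); never close the shared server connection obtained from getConnection().

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.function.Supplier;

final class CoprocessorConnectionSketch {
  private Closeable ownConnection; // created by this CP, so safe to close

  /** start(): make one private connection for the CP's whole lifetime,
   *  rather than one per request (too many connections bloat threads
   *  and load on hbase:meta). */
  void start(Supplier<Closeable> createConnection) {
    ownConnection = createConnection.get();
  }

  /** stop(): close only the connection we created ourselves. */
  void stop() {
    if (ownConnection != null) {
      try {
        ownConnection.close();
      } catch (IOException e) {
        // real code would log; never close the shared RS connection here
      }
    }
  }
}
```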

> Provide way for CPs to create short circuited connection with custom 
> configurations
> ---
>
> Key: HBASE-19301
> URL: https://issues.apache.org/jira/browse/HBASE-19301
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19301-addendum.patch, HBASE-19301.patch, 
> HBASE-19301_V2.patch, HBASE-19301_V2.patch
>
>
> Over in HBASE-18359 we have discussions for this.
> Right now HBase provide getConnection() in RegionCPEnv, MasterCPEnv etc. But 
> this returns a pre created connection (per server).  This uses the configs at 
> hbase-site.xml at that server. 
> Phoenix needs creating connection in CP with some custom configs. Having this 
> custom changes in hbase-site.xml is harmful as that will affect all 
> connections been created at that server.
> This issue is for providing an overloaded getConnection(Configuration) API





[jira] [Commented] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279635#comment-16279635
 ] 

stack commented on HBASE-19295:
---

.004 Fix the old hadoop compile issues and checkstyle complaints.

> The Configuration returned by CPEnv should be read-only.
> 
>
> Key: HBASE-19295
> URL: https://issues.apache.org/jira/browse/HBASE-19295
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
>  Labels: incompatible
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19295.master.001.patch, 
> HBASE-19295.master.002.patch, HBASE-19295.master.003.patch, 
> HBASE-19295.master.004.patch
>
>
> The Configuration a CP gets when it does a getConfiguration on the 
> environment is that of the RegionServer. The CP should not be able to modify 
> this config.  We should throw exception if they try to write us.
> Ditto w/ the Connection they can get from the env. They should not be able to 
> close it at min.





[jira] [Updated] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-12-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19295:
--
Attachment: HBASE-19295.master.004.patch

> The Configuration returned by CPEnv should be read-only.
> 
>
> Key: HBASE-19295
> URL: https://issues.apache.org/jira/browse/HBASE-19295
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: stack
>Assignee: stack
>  Labels: incompatible
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19295.master.001.patch, 
> HBASE-19295.master.002.patch, HBASE-19295.master.003.patch, 
> HBASE-19295.master.004.patch
>
>
> The Configuration a CP gets when it does a getConfiguration on the 
> environment is that of the RegionServer. The CP should not be able to modify 
> this config.  We should throw exception if they try to write us.
> Ditto w/ the Connection they can get from the env. They should not be able to 
> close it at min.





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279624#comment-16279624
 ] 

Ted Yu commented on HBASE-19417:


Ran the failed tests locally; they passed.

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123 where Appy pointed out that the 
> override of loaded should be placed inside else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * Return value of boolean for postBulkLoadHFile() hook are changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.





[jira] [Commented] (HBASE-19289) CommonFSUtils$StreamLacksCapabilityException: hflush when running test against hadoop3 beta1

2017-12-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279621#comment-16279621
 ] 

Sean Busbey commented on HBASE-19289:
-

{quote}
bq. unless we want to expressly tell folks not to run HBase on top of 
LocalFileSystem, in which case why are we running tests against it in the first 
place?
bq. ps: Are people using HBase against file:// today? If so, they've not been 
getting the persistence/durability HBase needs. Tell them to stop it.

Is this solvable by a flag that says "yes I acknowledge that I may lose data"? 
I think we're well aware that we may experience "data loss" with XSumFileSystem 
and this is OK because we just don't care (because it's a short lived test). We 
don't want to wait the extra 5+secs for a full MiniDFSCluster.
{quote}

That doesn't work for our reliance on LocalFileSystem for standalone mode ([ref 
the quickstart 
guide|http://hbase.apache.org/book.html#_get_started_with_hbase]). We don't 
actually call sync / flush or anything like that for the local OS, so the WAL 
is essentially useless.
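The general shape of the probe behind StreamLacksCapabilityException can be sketched as follows. The interface here is a simplified stand-in for Hadoop's StreamCapabilities, and "hflush" is the capability the WAL needs; the point is to fail fast at writer creation rather than silently lose durability on filesystems like LocalFileSystem.

```java
interface CapabilityProbe {
  boolean hasCapability(String capability);
}

final class WalWriterCheck {
  /** Refuse to build a WAL writer on a stream that cannot hflush. */
  static void requireHflush(CapabilityProbe stream) {
    if (!stream.hasCapability("hflush")) {
      throw new IllegalStateException("stream lacks capability: hflush");
    }
  }
}
```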

> CommonFSUtils$StreamLacksCapabilityException: hflush when running test 
> against hadoop3 beta1
> 
>
> Key: HBASE-19289
> URL: https://issues.apache.org/jira/browse/HBASE-19289
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
> Attachments: 19289.v1.txt, 19289.v2.txt, HBASE-19289.patch
>
>
> As of commit d8fb10c8329b19223c91d3cda6ef149382ad4ea0 , I encountered the 
> following exception when running unit test against hadoop3 beta1:
> {code}
> testRefreshStoreFiles(org.apache.hadoop.hbase.regionserver.TestHStore)  Time 
> elapsed: 0.061 sec  <<< ERROR!
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.initHRegion(TestHStore.java:215)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:190)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:185)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.init(TestHStore.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHStore.testRefreshStoreFiles(TestHStore.java:962)
> {code}





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279617#comment-16279617
 ] 

Hadoop QA commented on HBASE-19417:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
19s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hbase-backup in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hbase-backup in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
27s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 15m 
16s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 20m 
13s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 25m  
7s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 30m  
1s{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 34m 
58s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 39m 
57s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 44m 
57s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 49m 
54s{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 55m  
2s{color} | {color:red} The patch causes 15 errors with Hadoop 

[jira] [Updated] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19430:
---
Attachment: HBASE-19430.v0.test.patch

Add a trivial change in the hbase-server module to run more tests.

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch, HBASE-19430.v0.test.patch
>
>
> They were introduced by HBASE-11777 and HBASE-12082. Both are IA.LP and 
> marked as deprecated. To my way of thinking, we should remove them before 
> the 2.0 release.





[jira] [Updated] (HBASE-19437) Batch operation can't handle the null result for Append/Increment

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19437:
---
Attachment: HBASE-19437.v0.patch

> Batch operation can't handle the null result for Append/Increment
> -
>
> Key: HBASE-19437
> URL: https://issues.apache.org/jira/browse/HBASE-19437
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-19437.v0.patch
>
>
> However, Table#append and #increment can handle a null result, which is 
> inconsistent behavior for the user.
> I have noticed two scenarios in which the server will return a null result 
> to the user:
> # postAppend/postIncrement returns null
> # mutation.isReturnResults() is false and 
> preIncrementAfterRowLock/preAppendAfterRowLock doesn't return null
> We should wrap the null into an empty result on the server side. CP users 
> should throw an Exception rather than return null if they intend to signal 
> that something is broken.
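The wrapping described in the quoted description can be sketched as follows. This is a self-contained illustration, not the actual patch: the stub `Result` class stands in for HBase's `org.apache.hadoop.hbase.client.Result` (which exposes an `EMPTY_RESULT` constant this sketch imitates), and the helper name is hypothetical.

```java
// Stub standing in for org.apache.hadoop.hbase.client.Result; the real class
// exposes an EMPTY_RESULT constant, which this sketch imitates.
class Result {
    static final Result EMPTY_RESULT = new Result();
}

public class WrapNullResult {
    // postAppend/postIncrement may hand back null; wrapping it server-side
    // keeps the batch path free of null checks.
    static Result nonNull(Result cpResult) {
        return cpResult == null ? Result.EMPTY_RESULT : cpResult;
    }

    public static void main(String[] args) {
        System.out.println(nonNull(null) == Result.EMPTY_RESULT); // prints "true"
    }
}
```

With this shape, a coprocessor that wants to report a failure throws an exception instead of returning null, and callers of the batch path never need a null check.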





[jira] [Updated] (HBASE-19437) Batch operation can't handle the null result for Append/Increment

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19437:
---
Status: Patch Available  (was: Open)

> Batch operation can't handle the null result for Append/Increment
> -
>
> Key: HBASE-19437
> URL: https://issues.apache.org/jira/browse/HBASE-19437
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-19437.v0.patch
>
>
> However, Table#append and #increment can handle a null result, which is 
> inconsistent behavior for the user.
> I have noticed two scenarios in which the server will return a null result 
> to the user:
> # postAppend/postIncrement returns null
> # mutation.isReturnResults() is false and 
> preIncrementAfterRowLock/preAppendAfterRowLock doesn't return null
> We should wrap the null into an empty result on the server side. CP users 
> should throw an Exception rather than return null if they intend to signal 
> that something is broken.





[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279606#comment-16279606
 ] 

Hadoop QA commented on HBASE-19417:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
46m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  7s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
32s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.regionserver.TestHRegionServerBulkLoadWithOldClient |
|   | hadoop.hbase.filter.TestFilterListOnMini |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19417 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900778/19417.v8.txt |
| Optional Tests |  asflicense  javac  javadoc  

[jira] [Commented] (HBASE-19432) Roll the specified writer in HFileOutputFormat2

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279594#comment-16279594
 ] 

Hadoop QA commented on HBASE-19432:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 5s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
51m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
42s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19432 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900789/HBASE-19432.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 5bc5c08fcd29 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ed60e4518d |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10247/testReport/ |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10247/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Roll the specified writer in 

[jira] [Updated] (HBASE-19437) Batch operation can't handle the null result for Append/Increment

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19437:
---
Description: 
However, Table#append and #increment can handle a null result, which is 
inconsistent behavior for the user.
I have noticed two scenarios in which the server will return a null result to 
the user:
# postAppend/postIncrement returns null
# mutation.isReturnResults() is false and 
preIncrementAfterRowLock/preAppendAfterRowLock doesn't return null

We should wrap the null into an empty result on the server side. CP users 
should throw an Exception rather than return null if they intend to signal 
that something is broken.



  was:
A null from postAppend/postIncrement causes inconsistent behavior: 
Table#append and #increment can handle a null result, but the batch operation 
throws IllegalStateException.

We should wrap the null from postAppend/postIncrement into an empty result, 
since CP users should throw IOException rather than return null if they 
intend to signal that something is broken.




> Batch operation can't handle the null result for Append/Increment
> -
>
> Key: HBASE-19437
> URL: https://issues.apache.org/jira/browse/HBASE-19437
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
>
> However, Table#append and #increment can handle a null result, which is 
> inconsistent behavior for the user.
> I have noticed two scenarios in which the server will return a null result 
> to the user:
> # postAppend/postIncrement returns null
> # mutation.isReturnResults() is false and 
> preIncrementAfterRowLock/preAppendAfterRowLock doesn't return null
> We should wrap the null into an empty result on the server side. CP users 
> should throw an Exception rather than return null if they intend to signal 
> that something is broken.





[jira] [Updated] (HBASE-19437) Batch operation can't handle the null result for Append/Increment

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19437:
---
Summary: Batch operation can't handle the null result for Append/Increment  
(was: Wrap the null result from postAppend/postIncrement )

> Batch operation can't handle the null result for Append/Increment
> -
>
> Key: HBASE-19437
> URL: https://issues.apache.org/jira/browse/HBASE-19437
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
>
> A null from postAppend/postIncrement causes inconsistent behavior: 
> Table#append and #increment can handle a null result, but the batch 
> operation throws IllegalStateException.
> We should wrap the null from postAppend/postIncrement into an empty result, 
> since CP users should throw IOException rather than return null if they 
> intend to signal that something is broken.





[jira] [Updated] (HBASE-19437) Wrap the null result from postAppend/postIncrement

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19437:
---
Fix Version/s: 2.0.0

> Wrap the null result from postAppend/postIncrement 
> ---
>
> Key: HBASE-19437
> URL: https://issues.apache.org/jira/browse/HBASE-19437
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
>
> A null from postAppend/postIncrement causes inconsistent behavior: 
> Table#append and #increment can handle a null result, but the batch 
> operation throws IllegalStateException.
> We should wrap the null from postAppend/postIncrement into an empty result, 
> since CP users should throw IOException rather than return null if they 
> intend to signal that something is broken.





[jira] [Created] (HBASE-19437) Wrap the null result from postAppend/postIncrement

2017-12-05 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-19437:
--

 Summary: Wrap the null result from postAppend/postIncrement 
 Key: HBASE-19437
 URL: https://issues.apache.org/jira/browse/HBASE-19437
 Project: HBase
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


A null from postAppend/postIncrement causes inconsistent behavior: 
Table#append and #increment can handle a null result, but the batch operation 
throws IllegalStateException.

We should wrap the null from postAppend/postIncrement into an empty result, 
since CP users should throw IOException rather than return null if they 
intend to signal that something is broken.







[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279563#comment-16279563
 ] 

Duo Zhang commented on HBASE-19410:
---

One more thing, MiniHBaseCluster is marked as IA.Public but its parent class 
HBaseCluster is marked as IA.Private...
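The mismatch can be sketched with stub annotations (the real markers live in HBase's hbase-annotations module; the class and method names below are illustrative, not the actual hierarchy):

```java
// Stub annotations standing in for HBase's InterfaceAudience markers.
@interface Public {}
@interface Private {}

// The problem in miniature: a Public subclass of a Private parent inherits
// the parent's methods into the user-visible API surface, even though the
// parent makes no compatibility promises.
@Private
class HBaseClusterSketch {
    public void restartMaster() {}  // illustrative method, not the real API
}

@Public
public class MiniHBaseClusterSketch extends HBaseClusterSketch {
    public static void main(String[] args) {
        // A user holding the Public type can reach the Private parent's method.
        new MiniHBaseClusterSketch().restartMaster();
        System.out.println("inherited method reachable");
    }
}
```

In other words, marking the subclass IA.Public effectively publishes the IA.Private parent's whole protected/public surface.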

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279562#comment-16279562
 ] 

Duo Zhang commented on HBASE-19410:
---

For us it is test code, but for users it is a library... That's why I say we 
should only expose an interface with a minimal set of functions.

I think most CRUD users may not even need to start a mini cluster; a mocked 
client API that holds all data in an in-memory map or in local storage would 
be enough...

Thanks.
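That idea can be sketched as a minimal in-memory stand-in (a hypothetical class, not an HBase API), backed by a sorted map to mirror HBase's lexicographic row ordering:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical mock of a client-side table for CRUD tests: rows are kept in
// a sorted in-memory map, so no mini cluster needs to be started.
public class InMemoryTable {
    private final ConcurrentSkipListMap<String, byte[]> rows =
        new ConcurrentSkipListMap<>();

    public void put(String rowKey, byte[] value) { rows.put(rowKey, value); }
    public byte[] get(String rowKey) { return rows.get(rowKey); }
    public void delete(String rowKey) { rows.remove(rowKey); }

    public static void main(String[] args) {
        InMemoryTable t = new InMemoryTable();
        t.put("r1", new byte[] {1});
        System.out.println(t.get("r1")[0]); // prints "1"
        t.delete("r1");
        System.out.println(t.get("r1") == null); // prints "true"
    }
}
```

A test suite written against such a mock runs in milliseconds, at the cost of not exercising real RPC, assignment, or flush/compaction paths.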

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279547#comment-16279547
 ] 

Hadoop QA commented on HBASE-19435:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m  
5s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
41s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 41s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
6s{color} | {color:red} hbase-server: The patch generated 3 new + 10 unchanged 
- 1 fixed = 13 total (was 11) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  3m 
38s{color} | {color:red} patch has 19 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  5m 
42s{color} | {color:red} The patch causes 19 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
31s{color} | {color:red} The patch causes 19 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
21s{color} | {color:red} The patch causes 19 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 
11s{color} | {color:red} The patch causes 19 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m  
3s{color} | {color:red} The patch causes 19 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 
53s{color} | {color:red} The patch causes 19 errors with Hadoop v2.7.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 16m 
38s{color} | {color:red} The patch causes 19 errors with Hadoop v2.7.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 18m 
25s{color} | {color:red} The patch causes 19 errors with Hadoop v2.7.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 20m 
11s{color} | {color:red} The patch causes 19 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 22m  
4s{color} | {color:red} The patch causes 19 errors with Hadoop v3.0.0-alpha4. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279545#comment-16279545
 ] 

Appy commented on HBASE-19410:
--

If that's the more popular choice, I'm fine with it.
Personally I'd prefer looser guarantees for test code since 1) it's test code, 
and 2) that'll make development easier. :)
Also, LimitedPrivate just feels like a pain every time I see it (extra 
hierarchies, gotchas, etc.). :)

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-12-05 Thread zhaoyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279542#comment-16279542
 ] 

zhaoyuan commented on HBASE-19340:
--

My pleasure.

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb may be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?





[jira] [Commented] (HBASE-19295) The Configuration returned by CPEnv should be read-only.

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279541#comment-16279541
 ] 

Hadoop QA commented on HBASE-19295:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
27s{color} | {color:red} hbase-client: The patch generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
5s{color} | {color:red} hbase-server: The patch generated 1 new + 7 unchanged - 
1 fixed = 8 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
42s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m 
21s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
59s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.3. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 
40s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m 
21s{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.5. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 97m 
40s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279535#comment-16279535
 ] 

Ted Yu commented on HBASE-19435:


bq. but haven't isolated the specific process

You can add debug logs in selected places so that it is easier to find the 
cause when the connection closes prematurely.

bq. I could implement some sort of max retries before disabling the cache again

I think the above is better than the current form. The expected number of 
retries would be low.

bq. what is the right number?

You can choose a small positive number.

Thanks
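The retry cap suggested above — disable the cache only after a small number of consecutive failures, resetting the count on success — could be sketched roughly like this (invented names; a simplified model, not HBase's actual BucketCache code):

```java
// Hypothetical sketch: cap consecutive reopen failures before
// disabling the cache, resetting the count on a successful access.
public class RetryGate {
    private final int maxRetries;
    private int consecutiveFailures = 0;

    public RetryGate(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    /** Record a failed access; returns true once the cache should be disabled. */
    public boolean onFailure() {
        consecutiveFailures++;
        return consecutiveFailures > maxRetries;
    }

    /** A successful access resets the failure count. */
    public void onSuccess() {
        consecutiveFailures = 0;
    }
}
```

With maxRetries = 3, the cache would only be disabled on the fourth consecutive failure; any successful access in between starts the count over, so a "small positive number" stays conservative without disabling the cache on a transient error.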

> Reopen Files for ClosedChannelException in BucketCache
> --
>
> Key: HBASE-19435
> URL: https://issues.apache.org/jira/browse/HBASE-19435
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.3.1
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-19435.master.001.patch
>
>
> When using the FileIOEngine for BucketCache, the cache will be disabled if 
> the connection is interrupted or closed. HBase will then get 
> ClosedChannelExceptions trying to access the file. After 60s, the RS will 
> disable the cache. This causes severe read performance degradation for 
> workloads that rely on this cache. FileIOEngine never tries to reopen the 
> connection. This JIRA is to reopen files when the BucketCache encounters a 
> ClosedChannelException.





[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279532#comment-16279532
 ] 

Duo Zhang commented on HBASE-19410:
---

I think most users just want to start a mini HBase cluster and do reads and 
writes, so I do not think it is worth introducing a new compatibility matrix. 
For Phoenix and other projects which may reach into the internals of HBase, I 
think we can reuse the IA.LimitedPrivate annotation?

Thanks.

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Comment Edited] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279524#comment-16279524
 ] 

Appy edited comment on HBASE-19410 at 12/6/17 1:54 AM:
---

What about this suggestion?
bq. Maybe we can add another line to compatibility matrix: Testing API compat - 
major No, minor No, patch Yes.

If we start digging into test code, it might spiral out into another CP-sized 
cleanup. :)
I'd rather prefer closing already-open ends in prod code.

Explicitly stating testing compat will make it possible to do testing-related 
cleanup in 2.1.





was (Author: appy):
What about this suggestion?
bq. Maybe we can add another line to compatibility matrix: Testing API compat - 
major No, minor No, patch Yes.

Testing related cleanup can be done in 2.1 then.




> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Updated] (HBASE-19432) Roll the specified writer in HFileOutputFormat2

2017-12-05 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-19432:
--
Attachment: HBASE-19432.master.002.patch

Attached the 002 patch to remove comments. Thanks.

> Roll the specified writer in HFileOutputFormat2
> ---
>
> Key: HBASE-19432
> URL: https://issues.apache.org/jira/browse/HBASE-19432
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.0-beta-1
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-19432.master.001.patch, 
> HBASE-19432.master.002.patch
>
>
> {code}
>   // If any of the HFiles for the column families has reached
>   // maxsize, we need to roll all the writers
>   if (wl != null && wl.written + length >= maxsize) {
> this.rollRequested = true;
>   }
> {code}
> If we always roll all the writers, a large number of small files will be 
> generated in multi-family or multi-table scenarios.
> So we should only roll the specific writer whose HFile has reached maxsize.
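The per-writer roll proposed here could be sketched as follows (a simplified model with invented names, not the actual HFileOutputFormat2 patch): track bytes written per family and roll only the writer whose running total reaches maxsize.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: roll only the writer whose HFile reached maxsize,
// instead of flagging a roll for every writer at once.
public class PerFamilyRoll {
    private final long maxSize;
    private final Map<String, Long> written = new HashMap<>();

    public PerFamilyRoll(long maxSize) {
        this.maxSize = maxSize;
    }

    /** Record a write; returns true if ONLY this family's writer must roll. */
    public boolean write(String family, long length) {
        long total = written.getOrDefault(family, 0L) + length;
        if (total >= maxSize) {
            written.put(family, 0L);  // roll: start a fresh file for this family
            return true;
        }
        written.put(family, total);
        return false;
    }
}
```

Rolling one family's writer leaves the running totals of the other families untouched, which avoids the flood of small files a global roll produces in the multi-family or multi-table case.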





[jira] [Comment Edited] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279524#comment-16279524
 ] 

Appy edited comment on HBASE-19410 at 12/6/17 1:50 AM:
---

What about this suggestion?
bq. Maybe we can add another line to compatibility matrix: Testing API compat - 
major No, minor No, patch Yes.

Testing related cleanup can be done in 2.1 then.





was (Author: appy):
What about this suggestion?
bq. Maybe we can add another line to compatibility matrix: Testing API compat - 
major No, minor No, patch Yes.

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279524#comment-16279524
 ] 

Appy commented on HBASE-19410:
--

What about this suggestion?
bq. Maybe we can add another line to compatibility matrix: Testing API compat - 
major No, minor No, patch Yes.

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>






[jira] [Commented] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279520#comment-16279520
 ] 

Hadoop QA commented on HBASE-19430:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} hbase-common: The patch generated 2 new + 160 
unchanged - 8 fixed = 162 total (was 168) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
46m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19430 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900777/HBASE-19430.v0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 12545cfcda48 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ed60e4518d |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10244/artifact/patchprocess/diff-checkstyle-hbase-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10244/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10244/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |



[jira] [Commented] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279521#comment-16279521
 ] 

Appy commented on HBASE-19436:
--

Attached a patch for removing it from HBASE-14070.HLC. No point running 
precommit since that branch is incomplete.

> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19436.HBASE-14070.HLC.001.patch
>
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be built nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC : It got the Jenkinsfile as part of forking from master while 
> Sean's work was in progress; it wasn't added intentionally. Removing the 
> Jenkinsfile from it.
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Updated] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19436:
-
Attachment: HBASE-19436.HBASE-14070.HLC.001.patch

> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19436.HBASE-14070.HLC.001.patch
>
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be built nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC : It got the Jenkinsfile as part of forking from master while 
> Sean's work was in progress; it wasn't added intentionally. Removing the 
> Jenkinsfile from it.
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Commented] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279517#comment-16279517
 ] 

Appy commented on HBASE-19436:
--

Ping [~stack] for validation since you worked with this piece of infra for 
branch-1.2.

> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be built nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC : It got the Jenkinsfile as part of forking from master while 
> Sean's work was in progress; it wasn't added intentionally. Removing the 
> Jenkinsfile from it.
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Updated] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19436:
-
Description: 
Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
need to be built nightly since they are not in active development. Let's remove 
them to reduce clutter and free up restricted resources. Suggested branches:
HBASE-14070.HLC : It got the Jenkinsfile as part of forking from master while 
Sean's work was in progress; it wasn't added intentionally. Removing the 
Jenkinsfile from it.
HBASE-19297 : deleted as part of closing the jira itself.
(todo: others?)


  was:
Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
need to be built nightly since they are not in active development. Let's remove 
them to reduce clutter and free up restricted resources. Suggested branches:
HBASE-14070.HLC
HBASE-19297 : deleted as part of closing the jira itself.
(todo: others?)



> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be built nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC : It got the Jenkinsfile as part of forking from master while 
> Sean's work was in progress; it wasn't added intentionally. Removing the 
> Jenkinsfile from it.
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Updated] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19436:
-
Description: 
Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
need to be built nightly since they are not in active development. Let's remove 
them to reduce clutter and free up restricted resources. Suggested branches:
HBASE-14070.HLC
HBASE-19297 : deleted as part of closing the jira itself.
(todo: others?)


  was:
Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
need to be built nightly since they are not in active development. Let's remove 
them to reduce clutter and free up restricted resources. Suggested branches:
HBASE-14070.HLC
(todo: others?)



> Remove Jenkinsfile from some branches
> -
>
> Key: HBASE-19436
> URL: https://issues.apache.org/jira/browse/HBASE-19436
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>
> Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
> need to be built nightly since they are not in active development. Let's 
> remove them to reduce clutter and free up restricted resources. Suggested 
> branches:
> HBASE-14070.HLC
> HBASE-19297 : deleted as part of closing the jira itself.
> (todo: others?)





[jira] [Updated] (HBASE-19297) Nightly job for master timing out in unit tests

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19297:
-
Attachment: HBASE-19297.master.001.patch

> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-19297.master.001.patch, results.png
>
>
> master is now timing out at 6 hours.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module.





[jira] [Commented] (HBASE-19297) Nightly job for master timing out in unit tests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279506#comment-16279506
 ] 

Appy commented on HBASE-19297:
--

Attached diff for posterity.

> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-19297.master.001.patch, results.png
>
>
> master is now timing out at 6 hours.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module.





[jira] [Resolved] (HBASE-19297) Nightly job for master timing out in unit tests

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy resolved HBASE-19297.
--
Resolution: Not A Problem

> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: results.png
>
>
> master is now timing out at 6 hours.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module.





[jira] [Commented] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache

2017-12-05 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279504#comment-16279504
 ] 

Zach York commented on HBASE-19435:
---

[~tedyu] I think that scenario would be a bad error... What would be causing 
these connections to get interrupted/closed frequently? If there is an HBase 
thread that keeps interrupting that connection, we should fix the error there. 
Currently, I believe we are seeing this in some cases where a compaction 
interrupts the connection, but we haven't isolated the specific process.

What do you propose? I guess I could implement some sort of max retries before 
disabling the cache again, which would be reset on successful access, but this 
would be fairly fragile (what is the right number?)

I think in addition to the proposed fix, we should look at trying to re-enable 
disabled caches after a period (if disabled due to an error). However, that 
wouldn't invalidate this change: even if the cache is re-enabled, it will just 
get disabled again unless the connections are refreshed.
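The periodic re-enable idea could be sketched as a simple cool-down check (hypothetical names; a sketch of the concept, not actual HBase code):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a cache disabled due to an error becomes
// eligible again once a cool-down period has elapsed.
public class CacheState {
    private final long cooldownNanos;
    private long disabledAtNanos = -1;  // -1 means the cache is enabled

    public CacheState(long cooldownMillis) {
        this.cooldownNanos = TimeUnit.MILLISECONDS.toNanos(cooldownMillis);
    }

    /** Mark the cache disabled at the given timestamp (nanoseconds). */
    public void disable(long nowNanos) {
        disabledAtNanos = nowNanos;
    }

    /** Enabled if never disabled, or if the cool-down has passed. */
    public boolean isEnabled(long nowNanos) {
        if (disabledAtNanos < 0) {
            return true;
        }
        if (nowNanos - disabledAtNanos >= cooldownNanos) {
            disabledAtNanos = -1;  // cool-down over: re-enable
            return true;
        }
        return false;
    }
}
```

As noted above, re-enabling alone is not enough: the re-enable path would also need to refresh the file channels, or the next access just hits another ClosedChannelException and the cache is disabled again.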

> Reopen Files for ClosedChannelException in BucketCache
> --
>
> Key: HBASE-19435
> URL: https://issues.apache.org/jira/browse/HBASE-19435
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.3.1
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-19435.master.001.patch
>
>
> When using the FileIOEngine for BucketCache, the cache will be disabled if 
> the connection is interrupted or closed. HBase will then get 
> ClosedChannelExceptions trying to access the file. After 60s, the RS will 
> disable the cache. This causes severe read performance degradation for 
> workloads that rely on this cache. FileIOEngine never tries to reopen the 
> connection. This JIRA is to reopen files when the BucketCache encounters a 
> ClosedChannelException.





[jira] [Commented] (HBASE-19297) Nightly job for master timing out in unit tests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279503#comment-16279503
 ] 

Appy commented on HBASE-19297:
--

Since its inception, all builds (19 total) in 
https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19297/ have completed in 
less than 6 hours (despite the 18-hour timeout). 
Attaching screenshot !results.png|width=400px!

Looks like we don't need to increase the timeout. Closing this one as not a 
problem and deleting the branch "HBASE-19297" from the repo.


> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: results.png
>
>
> master now timing out at 6 hours in master.
> looks like it was making progress still, just in the midst of hte hbase-rest 
> module





[jira] [Updated] (HBASE-19297) Nightly job for master timing out in unit tests

2017-12-05 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19297:
-
Attachment: results.png

> Nightly job for master timing out in unit tests
> ---
>
> Key: HBASE-19297
> URL: https://issues.apache.org/jira/browse/HBASE-19297
> Project: HBase
>  Issue Type: Task
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: results.png
>
>
> master is now timing out at 6 hours.
> Looks like it was still making progress, just in the midst of the hbase-rest 
> module.





[jira] [Created] (HBASE-19436) Remove Jenkinsfile from some branches

2017-12-05 Thread Appy (JIRA)
Appy created HBASE-19436:


 Summary: Remove Jenkinsfile from some branches
 Key: HBASE-19436
 URL: https://issues.apache.org/jira/browse/HBASE-19436
 Project: HBase
  Issue Type: Improvement
Reporter: Appy
Assignee: Appy


Some of the branches in https://builds.apache.org/job/HBase%20Nightly/ don't 
need to be built nightly since they are not in active development. Let's remove 
them to reduce clutter and free up restricted resources. Suggested branches:
HBASE-14070.HLC
(todo: others?)






[jira] [Updated] (HBASE-19435) Reopen Files for ClosedChannelException in BucketCache

2017-12-05 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-19435:
--
Status: Patch Available  (was: Open)

> Reopen Files for ClosedChannelException in BucketCache
> --
>
> Key: HBASE-19435
> URL: https://issues.apache.org/jira/browse/HBASE-19435
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 1.3.1, 2.0.0
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HBASE-19435.master.001.patch
>
>
> When using the FileIOEngine for BucketCache, the cache can end up disabled 
> when the file connection is interrupted or closed: HBase then gets 
> ClosedChannelExceptions when trying to access the file, and after 60s the RS 
> disables the cache. This causes severe read performance degradation for 
> workloads that rely on this cache, since FileIOEngine never tries to reopen 
> the connection. This JIRA is to reopen files when the BucketCache encounters 
> a ClosedChannelException.
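The reopen-on-ClosedChannelException idea above can be sketched with plain 
java.nio. This is only an illustration of the retry pattern, not the actual 
FileIOEngine code; `ReopeningReader` and its methods are hypothetical names.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch: on ClosedChannelException, reopen the channel once
// and retry the read, instead of letting the cache get disabled.
public class ReopeningReader {
  private final Path file;
  FileChannel channel; // non-private so the demo below can close it

  public ReopeningReader(Path file) throws IOException {
    this.file = file;
    this.channel = FileChannel.open(file, StandardOpenOption.READ);
  }

  public int read(ByteBuffer dst, long offset) throws IOException {
    try {
      return channel.read(dst, offset);
    } catch (ClosedChannelException e) {
      // Channel was closed (e.g. by a thread interrupt); reopen and retry once.
      channel = FileChannel.open(file, StandardOpenOption.READ);
      return channel.read(dst, offset);
    }
  }

  // Small self-check: write a file, close the channel to simulate the
  // failure, and verify the retry path still reads the data.
  static int demo() {
    try {
      Path p = java.nio.file.Files.createTempFile("reopen", ".bin");
      java.nio.file.Files.write(p, new byte[] {1, 2, 3, 4});
      ReopeningReader r = new ReopeningReader(p);
      r.channel.close(); // simulate the interrupted/closed channel
      ByteBuffer dst = ByteBuffer.allocate(4);
      return r.read(dst, 0);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(demo()); // number of bytes read after the retry
  }
}
```

In the real patch the retry would of course live inside the IOEngine's read 
path; the sketch only shows the catch-reopen-retry shape.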



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279499#comment-16279499
 ] 

Duo Zhang commented on HBASE-19410:
---

Oh just comment on RB, let me also put the comment here.

HTU is not well designed to be IA.Public. It should be an interface, or at 
least we should have an IA.Public base class which only has the stuff we 
want to expose to users, while inside HBase we use a subclass of it which has 
more functionality.

So let's file a new jira to track this? But to be honest, I do not think we 
will do it before the final 2.0.0 release...

Thanks.
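The interface-plus-internal-subclass split described above could look roughly 
like this; all names here are hypothetical, not part of the HBase API:

```java
// Hypothetical sketch of the "IA.Public surface + internal implementation"
// split; none of these names exist in HBase itself.
public class TestingUtilSketch {

  // Minimal surface committed to for users (the "IA.Public" part).
  interface PublicTestingUtil {
    void startMiniCluster();
    void shutdownMiniCluster();
  }

  // Internal implementation: free to grow extra helpers without
  // widening the public contract.
  static class InternalTestingUtil implements PublicTestingUtil {
    boolean running;

    public void startMiniCluster() { running = true; }
    public void shutdownMiniCluster() { running = false; }

    // Internal-only helper; deliberately not part of PublicTestingUtil.
    int internalOnlyHelper() { return 42; }
  }

  public static void main(String[] args) {
    // User code only sees the narrow interface.
    PublicTestingUtil util = new InternalTestingUtil();
    util.startMiniCluster();
    util.shutdownMiniCluster();
  }
}
```

The point of the design is that internal helpers can be added or changed 
freely while only the small interface carries compatibility guarantees.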

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279492#comment-16279492
 ] 

stack commented on HBASE-19410:
---

Refactor of test util is about due. HBTU was v2 as [~Apache9] says above but it 
has become a dumping ground.

> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279484#comment-16279484
 ] 

Appy edited comment on HBASE-19410 at 12/6/17 1:09 AM:
---

(probably better place for discussion because of wider visibility than RB)
bq. Appy Replied on RB, changing inheritance to composition can not be done in 
one cycle since HTU is IA.Public, and I do not think it is necessary to spend 
too much time polishing an existing testing util. Just introduce a new one, as 
the composition change on HTU will finally make you change lots of code, no 
different from a new testing util.
A new testing tool will be a lot of work, for which we might not have enough 
time (beta 1 is in <20 days). Also, creating a new one every time we want to 
improve our testing framework might be a lot.
Also, going over the ref guide, which bucket do testing classes fall into? 
(http://hbase.apache.org/book.html#hbase.versioning.post10) Are they client API 
or server API?
Maybe we can add another line to the matrix: Testing API compat - major No, 
minor No, patch Yes.

Alternate suggestion (less preferred; the one above is the more general 
approach): Mark HBTU with IS.Evolving so we can break compat between minor 
versions.

wdyt? [~stack] [~mdrob] [~elserj]

Either way, don't hold your patch up on this. If the other comments in RB are 
addressed, get it in. This can be done in a separate jira.



was (Author: appy):
(probably better place for discussion because of wider visibility than RB)
bq. Appy Replied on RB, changing inheritance to composition can not be done in 
one cycle since HTU is IA.Public, and I do not think it is necessary to spend 
too much time polishing an existing testing util. Just introduce a new one, as 
the composition change on HTU will finally make you change lots of code, no 
different from a new testing util.
A new testing tool will be a lot of work, for which we might not have enough 
time (beta 1 is in <20 days). Also, creating a new one every time we want to 
improve our testing framework might be a lot.
Also, going over the ref guide, which bucket do testing classes fall into? 
(http://hbase.apache.org/book.html#hbase.versioning.post10) Are they client API 
or server API?
Maybe we can add another line to the matrix: Testing API compat - major No, 
minor No, patch Yes.

Alternate suggestion (less preferred; the one above is the more general 
approach): Mark HBTU with IS.Evolving so we can break compat between minor 
versions.

Either way, don't hold your patch up on this. If the other comments in RB are 
addressed, get it in. This can be done in a separate jira.


> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19410) Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279484#comment-16279484
 ] 

Appy commented on HBASE-19410:
--

(probably better place for discussion because of wider visibility than RB)
bq. Appy Replied on RB, changing inheritance to composition can not be done in 
one cycle since HTU is IA.Public, and I do not think it is necessary to spend 
too much time polishing an existing testing util. Just introduce a new one, as 
the composition change on HTU will finally make you change lots of code, no 
different from a new testing util.
A new testing tool will be a lot of work, for which we might not have enough 
time (beta 1 is in <20 days). Also, creating a new one every time we want to 
improve our testing framework might be a lot.
Also, going over the ref guide, which bucket do testing classes fall into? 
(http://hbase.apache.org/book.html#hbase.versioning.post10) Are they client API 
or server API?
Maybe we can add another line to the matrix: Testing API compat - major No, 
minor No, patch Yes.

Alternate suggestion (less preferred; the one above is the more general 
approach): Mark HBTU with IS.Evolving so we can break compat between minor 
versions.

Either way, don't hold your patch up on this. If the other comments in RB are 
addressed, get it in. This can be done in a separate jira.


> Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests
> --
>
> Key: HBASE-19410
> URL: https://issues.apache.org/jira/browse/HBASE-19410
> Project: HBase
>  Issue Type: Task
>  Components: test, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19410-v1.patch, HBASE-19410-v2.patch, 
> HBASE-19410.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19416) Document dynamic configurations currently support

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279475#comment-16279475
 ] 

Hudson commented on HBASE-19416:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4175/])
HBASE-19416 (Addendum) Remove default value and jira link (tedyu: rev 
30ca85d21f50644b32c5aec3f41a4456a15f9801)
* (edit) src/main/asciidoc/_chapters/configuration.adoc


> Document dynamic configurations currently support
> -
>
> Key: HBASE-19416
> URL: https://issues.apache.org/jira/browse/HBASE-19416
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19416.master.001.patch, 
> HBASE-19416.master.002.patch, HBASE-19416.master.003.patch
>
>
> Be more informative about the dynamic configurations currently supported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19422) using hadoop-profile property leads to confusing failures

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279474#comment-16279474
 ] 

Hudson commented on HBASE-19422:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4175/])
HBASE-19422 Provide clear error message on use of wrong hadoop-profile (appy: 
rev 50c59889717746d3386ecff768d6dbfeb4e743ab)
* (edit) pom.xml


> using hadoop-profile property leads to confusing failures
> -
>
> Key: HBASE-19422
> URL: https://issues.apache.org/jira/browse/HBASE-19422
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Mike Drob
> Fix For: 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: 19422.v1.txt, HBASE-19422.patch
>
>
> When building master branch against hadoop 3 beta1,
> {code}
> mvn clean install -Dhadoop-profile=3.0 -Dhadoop-three.version=3.0.0-beta1 
> -Dhadoop-two.version=3.0.0-beta1 -DskipTests
> {code}
> I got:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> {code}
> Here is part of the dependency tree showing the dependency:
> {code}
> [INFO] org.apache.hbase:hbase-client:jar:3.0.0-SNAPSHOT
> ...
> [INFO] +- org.apache.hadoop:hadoop-auth:jar:3.0.0-beta1:compile
> ...
> [INFO] |  \- com.google.guava:guava:jar:11.0.2:compile
> [INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:compile
> {code}
> We need to exclude jsr305 so that the build succeeds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19426) Move has() and setTimestamp() to Mutation

2017-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279473#comment-16279473
 ] 

Hudson commented on HBASE-19426:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4175 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4175/])
HBASE-19426 Move has() and setTimestamp() to Mutation (Chia-Ping Tsai) (stack: 
rev 8e3714e772599cd1e41e469668e06a775fb2519a)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java


> Move has() and setTimestamp() to Mutation
> -
>
> Key: HBASE-19426
> URL: https://issues.apache.org/jira/browse/HBASE-19426
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19426.v0.patch, HBASE-19426.v0.test.patch
>
>
> The {{Put}} class has many helper methods to get the inner cell. These 
> methods should be moved to {{Mutation}} so that users can get the cell from 
> other subclasses of {{Mutation}}. Ditto for {{setTimestamp}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279468#comment-16279468
 ] 

Appy commented on HBASE-19417:
--

+1
Great stuff.

bq. It turns out that SecureBulkLoadManager.java has the calls to pre / post 
hooks. So in patch v8 I narrowed the calls in RSRpcServices.
oh! cool.

{quote}
bq. CPs in 2.0 are radically different than 1.x
Now I know
{quote}
Oh yeah, any old CP will very likely not work with 2.0; it will need a 
recompile. The Base*Observer(s) are gone, many methods have changed, and the 
IA of many classes has changed.

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123, where Appy pointed out that 
> the override of loaded should be placed inside the else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * The boolean return value of the postBulkLoadHFile() hook is changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.
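The placement bug discussed above can be reduced to a few lines. In this 
simplified sketch (the method and parameters are stand-ins, not the actual 
RSRpcServices code), deriving `loaded` from the map only inside the 
secure-bulk-load branch keeps the non-secure path's result intact:

```java
import java.util.Map;

// Simplified sketch of the control flow from the review comment. The
// stand-in method and parameters below are hypothetical; only the placement
// of the `loaded` assignment mirrors the fix.
public class BulkLoadFlagSketch {

  static boolean bulkLoad(boolean secure, boolean plainResult,
                          Map<?, ?> secureResult) {
    boolean loaded;
    if (!secure) {
      loaded = plainResult;            // non-secure path sets it directly
    } else {
      // Secure bulk load: success is signalled by a non-null map, so the
      // `map != null` check belongs inside this branch only.
      loaded = secureResult != null;
    }
    return loaded;
  }

  public static void main(String[] args) {
    // A failed non-secure load must not be reported as loaded.
    System.out.println(bulkLoad(false, false, null)); // false
  }
}
```

With the check outside the else block, a failed non-secure load with a null 
map would silently overwrite the flag, which is what the review flagged.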



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19433) ChangeSplitPolicyAction modifies an immutable HTableDescriptor

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279450#comment-16279450
 ] 

Hadoop QA commented on HBASE-19433:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
24s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} hbase-it: The patch generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19433 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900762/19433.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 90c6fabce676 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / ed60e4518d |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10242/artifact/patchprocess/diff-checkstyle-hbase-it.txt
 |
|  Test Results | 

[jira] [Updated] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19417:
---
Attachment: 19417.v8.txt

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123, where Appy pointed out that 
> the override of loaded should be placed inside the else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * The boolean return value of the postBulkLoadHFile() hook is changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19417:
---
Attachment: (was: 19417.v8.txt)

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt
>
>
> See the discussion at the tail of HBASE-17123, where Appy pointed out that 
> the override of loaded should be placed inside the else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * The boolean return value of the postBulkLoadHFile() hook is changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19430:
---
Status: Patch Available  (was: Open)

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch
>
>
> They were introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and marked deprecated. To my way of thinking, we should remove them 
> before the 2.0 release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19430) Remove the SettableTimestamp and SettableSequenceId

2017-12-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-19430:
---
Attachment: HBASE-19430.v0.patch

v0
# keep setTimestamp(byte[], int), as CellUtil needs this method. We can't 
remove it in 2.0
# add the default methods to {{ExtendedCell}} to minimize the changes
# add the heapSize() impl to all fake cells. Perhaps we can throw an exception 
directly instead, since we should not call heapSize() on a fake cell

> Remove the SettableTimestamp and SettableSequenceId
> ---
>
> Key: HBASE-19430
> URL: https://issues.apache.org/jira/browse/HBASE-19430
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19430.v0.patch
>
>
> They were introduced by HBASE-11777 and HBASE-12082. Both of them are IA.LP 
> and marked deprecated. To my way of thinking, we should remove them 
> before the 2.0 release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-12-05 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279439#comment-16279439
 ] 

huaxiang sun commented on HBASE-19163:
--

Will commit tomorrow morning PST if no objection, thanks.

> "Maximum lock count exceeded" from region server's batch processing
> ---
>
> Key: HBASE-19163
> URL: https://issues.apache.org/jira/browse/HBASE-19163
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 1.2.7, 2.0.0-alpha-3
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-19163-master-v001.patch, 
> HBASE-19163.master.001.patch, HBASE-19163.master.002.patch, 
> HBASE-19163.master.004.patch, HBASE-19163.master.005.patch, 
> HBASE-19163.master.006.patch, HBASE-19163.master.007.patch, 
> HBASE-19163.master.008.patch, HBASE-19163.master.009.patch, 
> HBASE-19163.master.009.patch, HBASE-19163.master.010.patch, unittest-case.diff
>
>
> In one of our use cases, we found the following exception and replication is 
> stuck.
> {code}
> 2017-10-25 19:41:17,199 WARN  [hconnection-0x28db294f-shared--pool4-t936] 
> client.AsyncProcess: #3, table=foo, attempt=5/5 failed=262836ops, last 
> exception: java.io.IOException: java.io.IOException: Maximum lock count 
> exceeded
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2215)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)
> Caused by: java.lang.Error: Maximum lock count exceeded
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:528)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:488)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1327)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:871)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:5163)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3018)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2819)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2148)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> ... 3 more
> {code}
> While we are still examining the data pattern, it is clear that there are too 
> many mutations in the batch against the same row; this exceeds the maximum of 
> 64k shared lock holds, which throws an error and fails the whole batch.
> There are two approaches to solve this issue.
> 1). When there are multiple mutations against the same row in the batch, we 
> just need to acquire the lock once for that row instead of acquiring the 
> lock for each mutation.
> 2). We catch the error, process whatever has been acquired so far, and loop 
> back.
> With HBASE-17924, approach 1 seems easy to implement now.
> Created the jira and will post an update/patch as the investigation moves 
> forward.
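Approach 1 (one lock acquisition per distinct row instead of per mutation) can 
be sketched as below. The real change lives in HRegion's batch path, so this 
is just an illustration of the dedup idea with hypothetical names:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration of approach 1: acquire each row's read lock once per batch,
// keyed by row, instead of once per mutation. A ReentrantReadWriteLock only
// supports 65535 shared holds, which is the limit the batch above overflowed.
public class RowLockBatch {

  // Returns the number of lock acquisitions performed for the batch.
  static int lockRowsOnce(List<String> mutationRows,
                          Map<String, ReentrantReadWriteLock> rowLocks) {
    Map<String, ReentrantReadWriteLock> acquired = new HashMap<>();
    for (String row : mutationRows) {
      if (!acquired.containsKey(row)) {            // dedup per distinct row
        ReentrantReadWriteLock lock =
            rowLocks.computeIfAbsent(row, r -> new ReentrantReadWriteLock());
        lock.readLock().lock();
        acquired.put(row, lock);
      }
    }
    int count = acquired.size();
    // Real code would release in a finally block; done inline for brevity.
    for (ReentrantReadWriteLock lock : acquired.values()) {
      lock.readLock().unlock();
    }
    return count;
  }

  public static void main(String[] args) {
    int n = lockRowsOnce(List.of("row1", "row1", "row1", "row2"),
                         new HashMap<>());
    System.out.println(n); // 2 acquisitions for 4 mutations
  }
}
```

A batch of 262k mutations against one row then holds the row's shared lock 
once rather than 262k times, staying well under the 64k limit.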



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19417:
---
Attachment: 19417.v8.txt

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt, 19417.v8.txt
>
>
> See the discussion at the tail of HBASE-17123, where Appy pointed out that 
> the override of loaded should be placed inside the else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * The boolean return value of the postBulkLoadHFile() hook is changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19417) Remove boolean return value from postBulkLoadHFile hook

2017-12-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279422#comment-16279422
 ] 

Ted Yu commented on HBASE-19417:


It turns out that SecureBulkLoadManager.java has the calls to pre / post hooks.
So in patch v8 I narrowed the calls in RSRpcServices.

bq. CPs in 2.0 are radically different than 1.x

Now I know :-)
Addressed the comments in previous round.

> Remove boolean return value from postBulkLoadHFile hook
> ---
>
> Key: HBASE-19417
> URL: https://issues.apache.org/jira/browse/HBASE-19417
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ted Yu
> Attachments: 19417.v1.txt, 19417.v2.txt, 19417.v3.txt, 19417.v4.txt, 
> 19417.v5.txt, 19417.v6.txt, 19417.v7.txt
>
>
> See the discussion at the tail of HBASE-17123, where Appy pointed out that 
> the override of loaded should be placed inside the else block:
> {code}
>   } else {
> // secure bulk load
> map = regionServer.secureBulkLoadManager.secureBulkLoadHFiles(region, 
> request);
>   }
>   BulkLoadHFileResponse.Builder builder = 
> BulkLoadHFileResponse.newBuilder();
>   if (map != null) {
> loaded = true;
>   }
> {code}
> This issue is to address the review comment.
> After several review iterations, here are the changes:
> * The boolean return value of the postBulkLoadHFile() hook is changed to void.
> * Coprocessor hooks (pre and post) are added for the scenario where bulk load 
> manager is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

