[jira] [Commented] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Jurriaan Mous (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325732#comment-15325732
 ] 

Jurriaan Mous commented on HBASE-16004:
---

Yes. See 
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java
and search for "direct" to see where we create our direct BBs. 
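
A minimal sketch, assuming Netty 4.x's pooled allocator, of the kind of 
direct-buffer allocation that search turns up (this is an illustration, not the 
actual AsyncRpcChannelImpl code):

{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class DirectBufferExample {
  public static void main(String[] args) {
    // Ask Netty's pooled allocator for an off-heap (direct) buffer.
    ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
    try {
      buf.writeLong(42L); // write through the direct buffer
    } finally {
      buf.release(); // pooled direct buffers must be released explicitly
    }
  }
}
{code}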

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Fix For: 2.0.0
>
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for HBase to migrate.
> It seems to bring great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers (now on by default).
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325720#comment-15325720
 ] 

Hadoop QA commented on HBASE-15112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 4s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 2 new + 4 
unchanged - 2 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 24s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 

[jira] [Updated] (HBASE-14644) Region in transition metric is broken

2016-06-10 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-14644:

       Resolution: Fixed
    Fix Version/s: 1.2.2
                   1.3.0
                   2.0.0
           Status: Resolved  (was: Patch Available)

> Region in transition metric is broken
> -
>
> Key: HBASE-14644
> URL: https://issues.apache.org/jira/browse/HBASE-14644
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.3.0, 1.2.2
>
> Attachments: HBASE-14644-v001.patch, HBASE-14644-v002.patch, 
> HBASE-14644-v002.patch, branch-1.diff
>
>
> ritCount stays 0 no matter what



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16005) Implement HFile ref's tracking (bulk loading) in ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325704#comment-15325704
 ] 

Anoop Sam John commented on HBASE-16005:


[~ashish singhi]

> Implement HFile ref's tracking (bulk loading) in ReplicationQueuesHBaseImpl 
> and ReplicationQueuesClientHBaseImpl
> 
>
> Key: HBASE-16005
> URL: https://issues.apache.org/jira/browse/HBASE-16005
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>
> Currently ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl 
> have not implemented any of the HFile ref methods. They currently throw 
> NotImplementedExceptions. We should implement them eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325703#comment-15325703
 ] 

Anoop Sam John commented on HBASE-16004:


In the async client we added, which uses Netty, are we using direct BBs? Sorry, 
I did not check that code.

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Fix For: 2.0.0
>
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for HBase to migrate.
> It seems to bring great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers (now on by default).
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325679#comment-15325679
 ] 

Anoop Sam John commented on HBASE-15971:


bq. the block seek takes more CPU when passed the 1.0 Cell
Could the mvcc decoding have an impact, Stack? In the past (in 0.98 too) Cells 
would mostly have an mvcc value of 0, which means just one byte. But I believe 
some JIRAs changed this, and resetting the mvcc to 0 became rare (am I right?). 
Is this happening in 1.x as well?
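
A sketch of why an mvcc of 0 is cheap: variable-length long encoding spends one 
byte on 0 and more on larger sequence ids. Hadoop's WritableUtils is used here 
as an analogue only; HBase's HFile mvcc encoding is its own vlong variant, so 
exact byte counts may differ:

{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.WritableUtils;

public class VLongSizeDemo {
  public static void main(String[] args) throws IOException {
    for (long mvcc : new long[] { 0L, 1000L, 1000000L }) {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      WritableUtils.writeVLong(new DataOutputStream(bytes), mvcc);
      // mvcc=0 encodes to a single byte; larger sequence ids take more,
      // which is extra decode work on every Cell during a block seek.
      System.out.println("mvcc=" + mvcc + " -> " + bytes.size() + " byte(s)");
    }
  }
}
{code}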

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> Screen Shot 2016-06-10 at 5.08.24 PM.png, Screen Shot 2016-06-10 at 5.08.26 
> PM.png, branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, occupancy is about the same in both. In the 
> parent issue, hacking out the scheduler, I am able to get branch-1 to go 3x 
> faster, so will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-15112:
---
Attachment: hbase-15112-v1.patch

Don't see why the JAVA_HOME error was thrown by Hadoop QA, so reattaching to 
trigger a rerun.

> Allow coprocessors to extend 'software attributes' list
> ---
>
> Key: HBASE-15112
> URL: https://issues.apache.org/jira/browse/HBASE-15112
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Nick Dimiduk
>Assignee: Matt Warhaftig
> Attachments: hbase-15112-v1.patch, hbase-15112-v1.patch, 
> hbase-15112.tiff
>
>
> Over on the {{/master-status}} and {{/rs-status}} pages we have a list of 
> release properties, giving details about the cluster deployment. We should 
> make this an extension point, allowing coprocessors to register information 
> about themselves as well. For example, Phoenix, Trafodion, and Tephra might 
> want to advertise their installed version and build details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-06-10 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325666#comment-15325666
 ] 

Enis Soztutar commented on HBASE-15406:
---

The idea is for the master to revert the state because we should not depend on 
another HBCK run to fix it for us. With ephemeral znodes, it is quite easy to 
do so. 

HBASE-16008 is opened for further discussion. I would like to revert this for 
1.3 and decide over there whether to use the proposed approach or recommit a 
more generic version of this (covering CJ+balancer). 
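
A minimal sketch of the ephemeral-znode idea described above; the znode path 
and surrounding wiring here are assumptions, not the actual hbck/master code:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralSwitchSketch {
  // Hypothetical path; the actual switch znodes live elsewhere.
  private static final String ZNODE = "/hbase/switch/split-disabled-by-hbck";

  public static void disableWhileToolRuns(ZooKeeper zk)
      throws KeeperException, InterruptedException {
    // An EPHEMERAL znode vanishes when the creator's session ends, so an
    // early-terminated hbck cannot leave the switch stuck in the off state.
    zk.create(ZNODE, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.EPHEMERAL);
  }
}
{code}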

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
>  Labels: reviewed
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on a cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on the gateway node of the cluster
> Terminate hbck early
> Enter the hbase shell, where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to their 
> default values after hbck exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-15112:
---
Attachment: hbase-15112.tiff

> Allow coprocessors to extend 'software attributes' list
> ---
>
> Key: HBASE-15112
> URL: https://issues.apache.org/jira/browse/HBASE-15112
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Nick Dimiduk
>Assignee: Matt Warhaftig
> Attachments: hbase-15112-v1.patch, hbase-15112.tiff
>
>
> Over on the {{/master-status}} and {{/rs-status}} pages we have a list of 
> release properties, giving details about the cluster deployment. We should 
> make this an extension point, allowing coprocessors to register information 
> about themselves as well. For example, Phoenix, Trafodion, and Tephra might 
> want to advertise their installed version and build details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325660#comment-15325660
 ] 

Hadoop QA commented on HBASE-15344:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 114m 18s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 158m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809583/HBASE-15344.v1.patch |
| JIRA Issue | HBASE-15344 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / bd45cf3 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2181/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2181/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15344.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325654#comment-15325654
 ] 

Hadoop QA commented on HBASE-15112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} pre-patch {color} | {color:red} 0m 0s 
{color} | {color:red} JAVA_HOME is not defined. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809608/hbase-15112-v1.patch |
| JIRA Issue | HBASE-15112 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / bd45cf3 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2183/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Allow coprocessors to extend 'software attributes' list
> ---
>
> Key: HBASE-15112
> URL: https://issues.apache.org/jira/browse/HBASE-15112
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Nick Dimiduk
>Assignee: Matt Warhaftig
> Attachments: hbase-15112-v1.patch
>
>
> Over on the {{/master-status}} and {{/rs-status}} pages we have a list of 
> release properties, giving details about the cluster deployment. We should 
> make this an extension point, allowing coprocessors to register information 
> about themselves as well. For example, Phoenix, Trafodion, and Tephra might 
> want to advertise their installed version and build details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325655#comment-15325655
 ] 

Enis Soztutar commented on HBASE-15971:
---

I can also take a look if you attach the JFR outputs directly. They may be 
sizable though. 

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> Screen Shot 2016-06-10 at 5.08.24 PM.png, Screen Shot 2016-06-10 at 5.08.26 
> PM.png, branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, occupancy is about the same in both. In the 
> parent issue, hacking out the scheduler, I am able to get branch-1 to go 3x 
> faster, so will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-15112:
---
Attachment: hbase-15112-v1.patch

The attached patch 'hbase-15112-v1.patch' allows coprocessors to add attribute 
info via SPI by implementing org.apache.hadoop.hbase.CoprocessorAttributes. The 
coprocessors' attributes are published to the GUI (screenshot attached) and 
available via the API.
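
A hedged sketch of that SPI shape. The interface below is a local stand-in: the 
actual org.apache.hadoop.hbase.CoprocessorAttributes in the patch may have 
different method names:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Local stand-in for the interface named in the patch description.
interface CoprocessorAttributes {
  Map<String, String> getAttributes();
}

// A coprocessor-side provider, registered by listing this class name in
// META-INF/services/CoprocessorAttributes on the classpath.
class PhoenixAttributes implements CoprocessorAttributes {
  @Override
  public Map<String, String> getAttributes() {
    Map<String, String> attrs = new HashMap<String, String>();
    attrs.put("Phoenix Version", "4.7.0"); // example values only
    return attrs;
  }
}

public class SpiDemo {
  public static void main(String[] args) {
    // The status pages would iterate discovered providers like this:
    for (CoprocessorAttributes provider
        : ServiceLoader.load(CoprocessorAttributes.class)) {
      System.out.println(provider.getAttributes());
    }
  }
}
{code}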

> Allow coprocessors to extend 'software attributes' list
> ---
>
> Key: HBASE-15112
> URL: https://issues.apache.org/jira/browse/HBASE-15112
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Nick Dimiduk
>Assignee: Matt Warhaftig
> Attachments: hbase-15112-v1.patch
>
>
> Over on the {{/master-status}} and {{/rs-status}} pages we have a list of 
> release properties, giving details about the cluster deployment. We should 
> make this an extension point, allowing coprocessors to register information 
> about themselves as well. For example, Phoenix, Trafodion, and Tephra might 
> want to advertise their installed version and build details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15112) Allow coprocessors to extend 'software attributes' list

2016-06-10 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig updated HBASE-15112:
---
Status: Patch Available  (was: Open)

> Allow coprocessors to extend 'software attributes' list
> ---
>
> Key: HBASE-15112
> URL: https://issues.apache.org/jira/browse/HBASE-15112
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Nick Dimiduk
>Assignee: Matt Warhaftig
>
> Over on the {{/master-status}} and {{/rs-status}} pages we have a list of 
> release properties, giving details about the cluster deployment. We should 
> make this an extension point, allowing coprocessors to register information 
> about themselves as well. For example, Phoenix, Trafodion, and Tephra might 
> want to advertise their installed version and build details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15484) Correct the semantic of batch and partial

2016-06-10 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325647#comment-15325647
 ] 

Enis Soztutar commented on HBASE-15484:
---

Let's do this for 2.0. The scan API is too confusing now with caching, 
batching, allowPartialResults, setMaxResultsPerColumnFamily, and maxResultSize. 
We are not making life easy for the user. 

Should we get rid of batching, caching, and setMaxResultsPerColumnFamily (turn 
them into no-ops) and only do allowPartialResults and maxResultSize? How 
radical would that be for 2.0? 
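
To make the overlap concrete, a sketch of the knobs in question, all settable 
on a single Scan in the 1.x client API (the values are arbitrary examples):

{code}
import org.apache.hadoop.hbase.client.Scan;

public class ScanKnobs {
  public static Scan confusingScan() {
    Scan scan = new Scan();
    scan.setCaching(100);                    // rows fetched per RPC
    scan.setBatch(10);                       // max cells per Result
    scan.setMaxResultsPerColumnFamily(50);   // cap on cells per family per row
    scan.setMaxResultSize(2L * 1024 * 1024); // size-based chunking, in bytes
    scan.setAllowPartialResults(true);       // permit Results that split a row
    return scan;
  }
}
{code}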

> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, 
> HBASE-15484-v3.patch, HBASE-15484-v4.patch
>
>
> Follow-up to HBASE-15325: as discussed, the meaning of setBatch and 
> setAllowPartialResults should not be the same. We should not regard setBatch 
> as setAllowPartialResults.
> And isPartial should be defined accurately.
> (Considering getBatch==MaxInt if we don't setBatch.) If 
> result.rawCells().length is less than min(batch, the number of cells in the 
> row), isPartial==true; otherwise isPartial==false. So if the user doesn't 
> setAllowPartialResults(true), isPartial should always be false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15484) Correct the semantic of batch and partial

2016-06-10 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-15484:
--
Fix Version/s: 2.0.0

> Correct the semantic of batch and partial
> -
>
> Key: HBASE-15484
> URL: https://issues.apache.org/jira/browse/HBASE-15484
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-15484-v1.patch, HBASE-15484-v2.patch, 
> HBASE-15484-v3.patch, HBASE-15484-v4.patch
>
>
> Follow-up to HBASE-15325: as discussed, the meaning of setBatch and 
> setAllowPartialResults should not be the same. We should not regard setBatch 
> as setAllowPartialResults.
> And isPartial should be defined accurately.
> (Considering getBatch==MaxInt if we don't setBatch.) If 
> result.rawCells().length is less than min(batch, the number of cells in the 
> row), isPartial==true; otherwise isPartial==false. So if the user doesn't 
> setAllowPartialResults(true), isPartial should always be false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325601#comment-15325601
 ] 

Ted Yu commented on HBASE-16009:


Shouldn't the check involve whether the backup being restored is incremental 
or not?

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: 16009.v1.txt, HBASE-16009-v2.patch
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed for restoring full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16009:
--
Attachment: HBASE-16009-v2.patch

I had to reassign it to myself to attach the patch, [~tedyu]. Can you take a 
look at it?



> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: 16009.v1.txt, HBASE-16009-v2.patch
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed for restoring full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov reassigned HBASE-16009:
-

Assignee: Vladimir Rodionov  (was: Ted Yu)

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed for restoring full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15935) Have a separate Walker task running concurrently with Generator

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325572#comment-15325572
 ] 

Hadoop QA commented on HBASE-15935:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 9s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809590/HBASE-15935.patch |
| JIRA Issue | HBASE-15935 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / bd45cf3 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2182/testReport/ |
| modules | C: hbase-it U: hbase-it |
| Console output | 

[jira] [Commented] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325551#comment-15325551
 ] 

Vladimir Rodionov commented on HBASE-16009:
---

Ok, I did a small patch; running tests right now. For your use case (restoring 
the first incremental backup after a full restore), you do not have to specify 
both -automatic and -overwrite. If you specify -automatic, then you will need 
-overwrite if the table exists; otherwise all the data in the table will be 
messed up. 

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed for restoring full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15935) Have a separate Walker task running concurrently with Generator

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15935:
---
Status: Patch Available  (was: Open)

> Have a separate Walker task running concurrently with Generator   
> --
>
> Key: HBASE-15935
> URL: https://issues.apache.org/jira/browse/HBASE-15935
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Joseph
>Assignee: Joseph
>Priority: Minor
> Attachments: HBASE-15935.patch
>
>
> Keep track of which linked lists have been flushed in an HBase table, so that 
> we can concurrently Walk these lists during the Generation phase. This will 
> allow us to test:
> 1. HBase under concurrent read/writes
> 2. Availability of data immediately after flushes (as opposed to waiting till 
> the Verification phase)
> The review diff can be found at:
> https://reviews.apache.org/r/48294/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15935) Have a separate Walker task running concurrently with Generator

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15935:
---
Attachment: HBASE-15935.patch

Not a final draft; just running the Hudson/Jenkins tests on it.

> Have a separate Walker task running concurrently with Generator   
> --
>
> Key: HBASE-15935
> URL: https://issues.apache.org/jira/browse/HBASE-15935
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Joseph
>Assignee: Joseph
>Priority: Minor
> Attachments: HBASE-15935.patch
>
>
> Keep track of which linked lists have been flushed in an HBase table, so that 
> we can concurrently Walk these lists during the Generation phase. This will 
> allow us to test:
> 1. HBase under concurrent read/writes
> 2. Availability of data immediately after flushes (as opposed to waiting till 
> the Verification phase)
> The review diff can be found at:
> https://reviews.apache.org/r/48294/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15935) Have a separate Walker task running concurrently with Generator

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15935:
---
Status: Open  (was: Patch Available)

> Have a separate Walker task running concurrently with Generator   
> --
>
> Key: HBASE-15935
> URL: https://issues.apache.org/jira/browse/HBASE-15935
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Joseph
>Assignee: Joseph
>Priority: Minor
>
> Keep track of which linked lists have been flushed in an HBase table, so that 
> we can concurrently Walk these lists during the Generation phase. This will 
> allow us to test:
> 1. HBase under concurrent read/writes
> 2. Availability of data immediately after flushes (as opposed to waiting till 
> the Verification phase)
> The review diff can be found at:
> https://reviews.apache.org/r/48294/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15935) Have a separate Walker task running concurrently with Generator

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15935:
---
Attachment: (was: HBASE-15935.patch)

> Have a separate Walker task running concurrently with Generator   
> --
>
> Key: HBASE-15935
> URL: https://issues.apache.org/jira/browse/HBASE-15935
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Joseph
>Assignee: Joseph
>Priority: Minor
>
> Keep track of which linked lists have been flushed in an HBase table, so that 
> we can concurrently Walk these lists during the Generation phase. This will 
> allow us to test:
> 1. HBase under concurrent read/writes
> 2. Availability of data immediately after flushes (as opposed to waiting till 
> the Verification phase)
> The review diff can be found at:
> https://reviews.apache.org/r/48294/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15971:
--
    Attachment: Screen Shot 2016-06-10 at 5.08.24 PM.png
                Screen Shot 2016-06-10 at 5.08.26 PM.png

Comparing JFR output under the same load, the block seek takes more CPU when 
passed the 1.0 Cell, and heavy use of thread locals in 1.0 also seems to cost. 
On the other hand, the locking/contention profile looks worse for 0.98 than for 
1.0, with more time lost waiting on locks: 0.98 spends more time waiting on the 
regionscanner registration lock than 1.0 does, and it has the LinkedList 
blocking when doing a response.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> Screen Shot 2016-06-10 at 5.08.24 PM.png, Screen Shot 2016-06-10 at 5.08.26 
> PM.png, branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, occupancy is about the same in both. In the 
> parent issue, hacking out the scheduler, I am able to get branch-1 to go 3x 
> faster, so will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15344:

Status: Patch Available  (was: Open)

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15344.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15344:

Attachment: HBASE-15344.v1.patch

Those are basically the same as for 1.2 at this point; nothing changed.

I'm thinking about moving 2.4 from the S (supported) to the NT (not tested) 
state, since I'm not sure how many people run 2.4.* and want 1.3. Thoughts? 

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-15344.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325502#comment-15325502
 ] 

Hudson commented on HBASE-16004:


FAILURE: Integrated in HBase-Trunk_matrix #1025 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1025/])
HBASE-16004 Update to Netty 4.1.1 (stack: rev 
bd45cf34762332a3a51f605798a3e050e7a1e62e)
* pom.xml
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcClient.java


> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Fix For: 2.0.0
>
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for HBase to migrate.
> It seems to bring great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers (now on by default).
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325494#comment-15325494
 ] 

stack commented on HBASE-15971:
---

Looking at stack traces, they look similar. The only difference was 
hbase.ipc.server.reservoir.direct.buffer: I set it to false in branch-1 and it 
made no difference in throughput (it made the stack traces look the same). 0.98 
has the same ugly contention on registration of region scanners that 1.0 has, 
and this is what shows in stack traces -- adding and removal of Scanners. Will 
be back to fix this (HBASE-15716). With the 0.98 scheduler in place under 
branch-1, the difference is 280k/sec to 335k/sec, about 25% (the scheduler that 
sorts by priority costs us 25% throughput... 210k/sec vs 270k/sec)... so still 
looking for the culprit for this other 25%.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader-thread occupancy metric, occupancy is about the same in both. In the 
> parent issue, hacking out the scheduler, I am able to get branch-1 to go 3x 
> faster, so will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-06-10 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325487#comment-15325487
 ] 

Jonathan Hsieh commented on HBASE-15984:


Can you add a unit test to exercise this path?

Can we add some metrics to count how many times the different updated 
ReplicationSource paths are exercised?

Do the trace calls happen frequently? It would make sense to wrap them so the 
cheaper isTraceEnabled() check guards the string concatenation:

{code}
if (LOG.isTraceEnabled()) {
  // The guard skips building the message string entirely when TRACE is off.
  LOG.trace("something with a string concat operation " + op.getValue());
}
{code}

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 1.1.6, 0.98.21
>
> Attachments: HBASE-15984.1.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can 
> tell that our current offset ought not be the end of parseable entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325484#comment-15325484
 ] 

Jerry He commented on HBASE-16007:
--

+1

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix the ClassNotFoundException thrown when a custom WAL 
> edit Codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars().
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16010) Put draining function through Admin API

2016-06-10 Thread Jerry He (JIRA)
Jerry He created HBASE-16010:


 Summary: Put draining function through Admin API
 Key: HBASE-16010
 URL: https://issues.apache.org/jira/browse/HBASE-16010
 Project: HBase
  Issue Type: Improvement
Reporter: Jerry He
Priority: Minor


Currently, there is no Admin API for the draining function. Clients have to 
interact directly with the ZooKeeper draining node to add and remove draining 
servers.
For example, in draining_servers.rb:
{code}
  zkw = org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.new(config, 
"draining_servers", nil)
  parentZnode = zkw.drainingZNode

  begin
for server in servers
  node = ZKUtil.joinZNode(parentZnode, server)
  ZKUtil.createAndFailSilent(zkw, node)
end
  ensure
zkw.close()
  end
{code}

This is not good in cases like secure clusters with protected ZooKeeper nodes.
Let's expose the draining function through the Admin API.
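
A purely hypothetical sketch of what such an Admin extension might look like; 
the method names below are assumptions for discussion, not an existing API:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;

// Hypothetical additions for discussion; nothing like this exists on
// org.apache.hadoop.hbase.client.Admin today.
public interface DrainingAdmin {
  void drainRegionServers(List<ServerName> servers) throws IOException;
  void removeDrainFromRegionServers(List<ServerName> servers) throws IOException;
  List<ServerName> listDrainingRegionServers() throws IOException;
}
{code}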



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-16009:
--

Assignee: Ted Yu

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed for restoring full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15946) Eliminate possible security concerns in RS web UI's store file metrics

2016-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325453#comment-15325453
 ] 

Hudson commented on HBASE-15946:


SUCCESS: Integrated in HBase-1.2 #647 (See 
[https://builds.apache.org/job/HBase-1.2/647/])
HBASE-15946 Eliminate possible security concerns in RS web UI's store (antonov: 
rev d2d3dcdaec0412614badf77f866b89256296d8f4)
* hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java


> Eliminate possible security concerns in RS web UI's store file metrics
> --
>
> Key: HBASE-15946
> URL: https://issues.apache.org/jira/browse/HBASE-15946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15946-branch-1.3-mantonov.diff, 
> HBASE-15946-v1.patch, HBASE-15946-v2.patch, HBASE-15946-v3.patch
>
>
> More from static code analysis: it warns about invoking a separate command 
> ("hbase hfile -s -f ...") as a possible security issue in 
> hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp.
> It looks to me like one cannot inject an arbitrary shell script or even 
> arbitrary arguments: ProcessBuilder makes that fairly safe and only allows 
> the user to specify the argument that comes after -f. However, that does 
> potentially allow them to have the daemon's user access files they shouldn't 
> be able to touch, albeit only for reading.
> To more explicitly eliminate any threats here, we should add some validation 
> that the file is at least within HBase's root directory and use the Java API 
> directly instead of invoking a separate executable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325444#comment-15325444
 ] 

Vladimir Rodionov commented on HBASE-16009:
---

That is the case for autoRestore = false. Let me check the code.

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325420#comment-15325420
 ] 

Ted Yu edited comment on HBASE-16009 at 6/10/16 10:44 PM:
--

In my experiment, backup_1465575766499 was an incremental backup.
I first restored a full backup (the backup Id was different).

Restoring an incremental backup should not require -overwrite.
Otherwise the user cannot obtain the data from both backups.


was (Author: yuzhih...@gmail.com):
In my experiment, backup_1465575766499 was an incremental backup.
I first restored a full backup (the backup rootdir was different).

Restoring an incremental backup should not require -overwrite.
Otherwise the user cannot obtain the data from both backups.

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325420#comment-15325420
 ] 

Ted Yu commented on HBASE-16009:


In my experiment, backup_1465575766499 was an incremental backup.
I first restored a full backup (the backup rootdir was different).

Restoring an incremental backup should not require -overwrite.
Otherwise the user cannot obtain the data from both backups.

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325412#comment-15325412
 ] 

Vladimir Rodionov edited comment on HBASE-16009 at 6/10/16 10:34 PM:
-

Why should it not require -overwrite, [~tedyu]? Everything must be consistent: 
if we require -overwrite for a restore from a full backup, we must require the 
same for an incremental one.


was (Author: vrodionov):
Why should it not require -overwrite, [~tedyu]? 

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325412#comment-15325412
 ] 

Vladimir Rodionov commented on HBASE-16009:
---

Why should it not require -overwrite, [~tedyu]? 

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-06-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325407#comment-15325407
 ] 

Sean Busbey commented on HBASE-15984:
-

Upon review, I can find no current tests of what ReplicationSource does, so 
that's discouraging.

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 1.1.6, 0.98.21
>
> Attachments: HBASE-15984.1.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", restricted to the case where a WAL 1) was closed cleanly and 2) we 
> can tell that our current offset ought not to be the end of parseable entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16006:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, Vlad.

> FileSystem should be obtained from specified path in 
> WALInputFormat#getSplits()
> ---
>
> Key: HBASE-16006
> URL: https://issues.apache.org/jira/browse/HBASE-16006
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16006.v1.txt
>
>
> I was trying out the restore feature and encountered the following exception:
> {code}
> 2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
> java.io.IOException: java.io.IOException: Can not restore from backup 
> directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
> Caused by: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
>   at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
> expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>   at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
>   ... 9 more
> {code}
> It turned out that the refactoring from HBASE-14140 changed the code:
> {code}
> -

[jira] [Updated] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16009:
---
Attachment: 16009.v1.txt

> Restoring an incremental backup should not require -overwrite
> -
>
> Key: HBASE-16009
> URL: https://issues.apache.org/jira/browse/HBASE-16009
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16009.v1.txt
>
>
> When I tried to restore an incremental backup,
> hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase 
> backup_1465575766499 t1 t2
> I got:
> {code}
> 2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: Existing table found in target while no "-overwrite" 
> option found
> java.io.IOException: Existing table found in target while no "-overwrite" 
> option found
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
> {code}
> The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325367#comment-15325367
 ] 

Vladimir Rodionov commented on HBASE-16006:
---

Looks good to me.

> FileSystem should be obtained from specified path in 
> WALInputFormat#getSplits()
> ---
>
> Key: HBASE-16006
> URL: https://issues.apache.org/jira/browse/HBASE-16006
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16006.v1.txt
>
>
> I was trying out the restore feature and encountered the following exception:
> {code}
> 2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
> java.io.IOException: java.io.IOException: Can not restore from backup 
> directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
> Caused by: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
>   at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
> expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>   at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
>   ... 9 more
> {code}
> It turned out that the refactoring from HBASE-14140 changed the code:
> {code}
> -FileSystem fs = inputDir.getFileSystem(conf);
> -
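
The diff above is truncated in this digest, but the removed line it shows is 
the crux: the FileSystem must be derived from the input path itself, not from 
the default configuration, or listing a path on another authority fails with 
"Wrong FS". A minimal sketch with assumed names:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: resolve the FileSystem from the path being listed, so that
// an authority like hdfs://host:8020 matches, instead of using the default
// FS implied by FileSystem.get(conf).
public class FsFromPath {
  static FileStatus[] listWalFiles(Configuration conf, Path inputDir)
      throws IOException {
    FileSystem fs = inputDir.getFileSystem(conf); // not FileSystem.get(conf)
    return fs.listStatus(inputDir);
  }
}
{code}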

[jira] [Commented] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325316#comment-15325316
 ] 

Stephen Yuan Jiang commented on HBASE-15584:


[~tedyu], looks good (with the assumption that {{tableList.size() > 0}} is 
always true).


> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through a 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325300#comment-15325300
 ] 

Hadoop QA commented on HBASE-16007:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m 41s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 129m 9s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809534/16007.v1.txt |
| JIRA Issue | HBASE-16007 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15584:
---
Attachment: 15584.v2.txt

Patch v2 addressed Stephen's comment.

[~vrodionov]:
Mind taking a look as well?
BackupState#CANCELLED has been dropped.

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt, 15584.v2.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through a 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325274#comment-15325274
 ] 

Stephen Yuan Jiang commented on HBASE-15584:


[~tedyu], the following logic will leave a ',' at the end of the list:

{code}
+for (TableName table : tableList) {
+  sb.append(table).append(",");
+}
{code}
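
One conventional fix, sketched here with assumed names rather than taken from 
the patch, is to emit the delimiter only between elements:
{code}
import java.util.List;
import org.apache.hadoop.hbase.TableName;

public class TableListJoiner {
  static String joinTables(List<TableName> tableList) {
    StringBuilder sb = new StringBuilder();
    for (TableName table : tableList) {
      if (sb.length() > 0) {
        sb.append(","); // delimiter between elements only, never trailing
      }
      sb.append(table);
    }
    return sb.toString();
  }
}
{code}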

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through a 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15950) Fix memstore size estimates to be tighter

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325269#comment-15325269
 ] 

Hadoop QA commented on HBASE-15950:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 15s {color} 
| {color:red} hbase-common-jdk1.8.0 with JDK v1.8.0 generated 3 new + 26 
unchanged - 0 fixed = 29 total (was 26) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 3s {color} 
| {color:red} hbase-common-jdk1.7.0_79 with JDK v1.7.0_79 generated 3 new + 26 
unchanged - 0 fixed = 29 total (was 26) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 52s 
{color} | {color:red} hbase-common-jdk1.7.0_79 with JDK v1.7.0_79 generated 1 
new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 103m 1s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | 

[jira] [Created] (HBASE-16009) Restoring an incremental backup should not require -overwrite

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16009:
--

 Summary: Restoring an incremental backup should not require 
-overwrite
 Key: HBASE-16009
 URL: https://issues.apache.org/jira/browse/HBASE-16009
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


When I tried to restore an incremental backup,

hbase restore hdfs://hbase-test-rc-rerun-6:8020/user/hbase backup_1465575766499 
t1 t2

I got:
{code}
2016-06-10 19:53:11,317 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
failed with error: Existing table found in target while no "-overwrite" option 
found
java.io.IOException: Existing table found in target while no "-overwrite" 
option found
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.checkTargetTables(RestoreClientImpl.java:186)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:108)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
{code}
The above check should only be performed when restoring a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15974) Create a ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325221#comment-15325221
 ] 

Hadoop QA commented on HBASE-15974:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 17s 
{color} | {color:green} hbase-client-jdk1.8.0 with JDK v1.8.0 generated 0 new + 
0 unchanged - 5 fixed = 0 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 6s 
{color} | {color:green} hbase-client-jdk1.7.0_79 with JDK v1.7.0_79 generated 0 
new + 0 unchanged - 5 fixed = 0 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 53s 
{color} | {color:green} hbase-client-jdk1.7.0_79 with JDK v1.7.0_79 generated 0 
new + 13 unchanged - 1 fixed = 13 total (was 14) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} hbase-client in 

[jira] [Commented] (HBASE-15946) Eliminate possible security concerns in RS web UI's store file metrics

2016-06-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325170#comment-15325170
 ] 

Hudson commented on HBASE-15946:


SUCCESS: Integrated in HBase-1.2-IT #529 (See 
[https://builds.apache.org/job/HBase-1.2-IT/529/])
HBASE-15946 Eliminate possible security concerns in RS web UI's store (antonov: 
rev d2d3dcdaec0412614badf77f866b89256296d8f4)
* hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java


> Eliminate possible security concerns in RS web UI's store file metrics
> --
>
> Key: HBASE-15946
> URL: https://issues.apache.org/jira/browse/HBASE-15946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15946-branch-1.3-mantonov.diff, 
> HBASE-15946-v1.patch, HBASE-15946-v2.patch, HBASE-15946-v3.patch
>
>
> More from static code analysis: it warns about invoking a separate 
> command ("hbase hfile -s -f ...") as a possible security issue in 
> hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp.
> It looks to me like one cannot inject an arbitrary shell script or even 
> arbitrary arguments: ProcessBuilder makes that fairly safe and only allows 
> the user to specify the argument that comes after -f. However, that does 
> potentially let them make the daemon's user access files it shouldn't be 
> able to touch, albeit only for reading.
> To more explicitly eliminate any threats here, we should add some validation 
> that the file is at least within HBase's root directory and use the Java API 
> directly instead of invoking a separate executable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15950) Fix memstore size estimates to be tighter

2016-06-10 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325136#comment-15325136
 ] 

Dave Latham commented on HBASE-15950:
-

Release notes look good.  Do we have upgrade notes that we can put a warning in?

> Fix memstore size estimates to be tighter
> --
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png, 
> hbase-15950-v0.patch, hbase-15950-v1.patch
>
>
> While testing something else, I was loading a region with a lot of data: 
> writing 30M cells in 1M rows, with 1-byte values. 
> The memstore size was estimated as 4.5GB, while JFR profiling shows that we 
> are using 2.8GB for all the objects in the memstore (KV + KV byte[] + 
> CSLM.Node + CSLM.Index). 
> This obviously means that there is room in the write cache that we are not 
> using effectively. 
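
To make the gap concrete, quick derived arithmetic from the numbers above 
(illustrative only, not HBase's ClassSize accounting):
{code}
public class MemstoreSlack {
  public static void main(String[] args) {
    long cells = 30_000_000L;
    long measuredPerCell = 2_800_000_000L / cells;  // ~93 bytes/cell per JFR
    long estimatedPerCell = 4_500_000_000L / cells; // ~150 bytes/cell estimated
    // Roughly 57 bytes of accounting slack per cell: write-cache capacity
    // the estimate claims is consumed but actually is not.
    System.out.println((estimatedPerCell - measuredPerCell)
        + " bytes/cell of slack");
  }
}
{code}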



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15950) Fix memstore size estimates to be tighter

2016-06-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325113#comment-15325113
 ] 

stack commented on HBASE-15950:
---

+1 Skimmed. Excellent work [~enis]

> Fix memstore size estimates to be tighter
> --
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png, 
> hbase-15950-v0.patch, hbase-15950-v1.patch
>
>
> While testing something else, I was loading a region with a lot of data: 
> writing 30M cells in 1M rows, with 1-byte values. 
> The memstore size was estimated as 4.5GB, while JFR profiling shows that we 
> are using 2.8GB for all the objects in the memstore (KV + KV byte[] + 
> CSLM.Node + CSLM.Index). 
> This obviously means that there is room in the write cache that we are not 
> using effectively. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16004:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master branch. Thanks [~jurmous]

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Fix For: 2.0.0
>
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems 
> mature enough for HBase to migrate.
> It seems to bring great performance improvements in Cassandra because of 
> optimizations in the cleaning of direct buffers (now enabled by default).
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314
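
For context, a minimal illustration (not taken from the HBase patch) of 
Netty's pooled direct buffers, whose cleanup behavior the links above discuss; 
releasing a buffer returns it to the pool instead of waiting on the GC:
{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class DirectBufDemo {
  public static void main(String[] args) {
    ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(4096);
    try {
      buf.writeBytes(new byte[] {1, 2, 3});
    } finally {
      buf.release(); // return the buffer to the pool deterministically
    }
  }
}
{code}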



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16008) A robust way to deal with early termination of HBCK

2016-06-10 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-16008:
--

 Summary: A robust way to deal with early termination of HBCK
 Key: HBASE-16008
 URL: https://issues.apache.org/jira/browse/HBASE-16008
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang


When HBCK is running, we want to disable the Catalog Janitor, the Balancer and 
the Split/Merge switch. Today, the implementation is not robust: if HBCK is 
terminated early by Control-C, the changed state is not reset to its original 
value.

HBASE-15406 tried to solve this problem for the Split/Merge switch, but the 
implementation is complicated, and it did not cover the Catalog Janitor or 
the Balancer.

Another problem is that, to prevent multiple concurrent HBCK runs, we use a 
file lock to indicate a running HBCK; early termination might not clean up 
the file, and sometimes we have to remove it manually so that a future HBCK 
can run.

The proposal to solve all of these problems is to use a znode to indicate 
that an HBCK instance is running. The Catalog Janitor, the Balancer, and the 
Split/Merge switch would all check for this znode before performing their 
operations, as sketched below.
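
A minimal sketch of the znode-based flag, with a hypothetical path and names 
(the actual design would live in the HBase ZK layer):
{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class HbckRunningFlag {
  // Hypothetical znode path; the real proposal would choose its own.
  private static final String HBCK_ZNODE = "/hbase/hbck-running";

  // HBCK creates an ephemeral node at startup; the node disappears when the
  // HBCK process (and its ZK session) dies, even on Control-C, so no stale
  // lock file is left behind.
  public static void markRunning(ZooKeeper zk)
      throws KeeperException, InterruptedException {
    zk.create(HBCK_ZNODE, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.EPHEMERAL);
  }

  // Catalog Janitor, Balancer and the Split/Merge switch would consult this
  // flag before performing their operations.
  public static boolean isHbckRunning(ZooKeeper zk)
      throws KeeperException, InterruptedException {
    return zk.exists(HBCK_ZNODE, false) != null;
  }
}
{code}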



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15344:

Affects Version/s: (was: 1.3.0)

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15344) add 1.3 to prereq tables in ref guide

2016-06-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15344:

Fix Version/s: (was: 1.3.0)

> add 1.3 to prereq tables in ref guide
> -
>
> Key: HBASE-15344
> URL: https://issues.apache.org/jira/browse/HBASE-15344
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16007:
---
Attachment: 16007.v1.txt

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars():
> job.getConfiguration() should have been used.
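
The one-line nature of the fix, sketched with an assumed surrounding setup 
(the attached 16007.v1.txt is the authoritative change; codecClass stands in 
for the custom WAL edit codec class):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DependencyJarsFix {
  static Job createSubmittableJob(Configuration conf, Class<?> codecClass)
      throws IOException {
    Job job = Job.getInstance(conf, "WALPlayer");
    // Wrong (the typo): TableMapReduceUtil.addDependencyJars(conf, codecClass);
    // Right: register the jars against the Job's own Configuration.
    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), codecClass);
    return job;
  }
}
{code}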



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325074#comment-15325074
 ] 

Ted Yu edited comment on HBASE-16007 at 6/10/16 7:04 PM:
-

Tested the fix on a 7-node cluster running 1.1.x.


was (Author: yuzhih...@gmail.com):
Tested the fix on a 7-node cluster.

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars():
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16007:
---
Attachment: (was: 16007.v1.txt)

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars():
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325091#comment-15325091
 ] 

Hadoop QA commented on HBASE-16004:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 52s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
12s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 110m 32s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325081#comment-15325081
 ] 

Hadoop QA commented on HBASE-16007:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-16007 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809531/16007.v1.txt |
| JIRA Issue | HBASE-16007 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2179/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> Codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars().
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325074#comment-15325074
 ] 

Ted Yu commented on HBASE-16007:


Tested the fix on a 7 node cluster.

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> Codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars().
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16007:
---
Status: Patch Available  (was: Open)

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> Codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars().
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16007:
---
Attachment: 16007.v1.txt

> Job's Configuration should be passed to 
> TableMapReduceUtil#addDependencyJars() in WALPlayer
> ---
>
> Key: HBASE-16007
> URL: https://issues.apache.org/jira/browse/HBASE-16007
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16007.v1.txt
>
>
> HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
> Codec is involved.
> However, it didn't achieve this goal due to a typo in the first parameter 
> passed to TableMapReduceUtil#addDependencyJars().
> job.getConfiguration() should have been used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16007) Job's Configuration should be passed to TableMapReduceUtil#addDependencyJars() in WALPlayer

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16007:
--

 Summary: Job's Configuration should be passed to 
TableMapReduceUtil#addDependencyJars() in WALPlayer
 Key: HBASE-16007
 URL: https://issues.apache.org/jira/browse/HBASE-16007
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


HBASE-15752 tried to fix a ClassNotFoundException when a custom WAL edit 
Codec is involved.

However, it didn't achieve this goal due to a typo in the first parameter passed 
to TableMapReduceUtil#addDependencyJars().

job.getConfiguration() should have been used.
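
A minimal sketch of the described fix (the wrapper method and the codecClass 
parameter are illustrative stand-ins, not the actual patch):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class WALPlayerFixSketch {
  // Illustrative only: ship a custom WAL edit codec class with the job.
  static void shipCodecJar(Configuration conf, Job job, Class<?> codecClass)
      throws IOException {
    // Buggy form: the jars are registered on the tool's conf, but the Job
    // holds its own copy of the Configuration, so they never reach the job:
    // TableMapReduceUtil.addDependencyJars(conf, codecClass);

    // Fixed form: register the jars on the configuration the job actually
    // submits with.
    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), codecClass);
  }
}
{code}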



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15950) Fix memstore size estimates to be more tighter

2016-06-10 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-15950:
--
Release Note: 
The estimates of heap usage by the memstore objects (KeyValue, object and array 
header sizes, etc) have been made more accurate for heap sizes up to 32G (using 
CompressedOops), resulting in them dropping by 10-50% in practice. This also 
results in fewer flushes and compactions due to "fatter" flushes. 
YMMV. As a result, the actual heap usage of the memstore before being flushed 
may increase by up to 100%. If configured memory limits for the region server 
had been tuned based on observed usage, this change could result in worse GC 
behavior or even OutOfMemory errors. Set the environment property (not 
hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable. 



  was:Made the object/array header sizes CompressedOops aware and fixed 
object heap size calculations for important objects like KeyValue and 
ConcurrentSkipListMap. For heap sizes up to 32GB, depending on average 
cell sizes and total memstore size, a 10-50% reduction in memstore size and 
flushes and compactions might be observed. YMMV. Due to tighter than 
before size estimates, total heap space is expected to be utilized more, 
slightly increasing the chance to get OOM if misconfigured. Set the environment 
property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to 
disable. 
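
For reference, a minimal sketch of the opt-out, assuming the switch is read as 
a JVM system property (the note says environment property, not hbase-site.xml); 
the guard below is illustrative, not the actual ClassSize code:

{code}
public class MemoryLayoutSwitchSketch {
  // Pass -Dhbase.memorylayout.use.unsafe=false on the JVM command line (for
  // example via HBASE_OPTS) to fall back to the conservative, pre-change
  // size estimates.
  static boolean useUnsafeLayout() {
    return Boolean.parseBoolean(
        System.getProperty("hbase.memorylayout.use.unsafe", "true"));
  }
}
{code}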


> Fix memstore size estimates to be more tighter
> --
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png, 
> hbase-15950-v0.patch, hbase-15950-v1.patch
>
>
> While testing something else, I was loading a region with a lot of data. 
> Writing 30M cells in 1M rows, with 1 byte values. 
> The memstore size turned out to be estimated as 4.5GB, while with the JFR 
> profiling I can see that we are using 2.8GB for all the objects in the 
> memstore (KV + KV byte[] + CSLM.Node + CSLM.Index). 
> This obviously means that there is room in the write cache that we are not 
> effectively using. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15950) Fix memstore size estimates to be more tighter

2016-06-10 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325008#comment-15325008
 ] 

Enis Soztutar commented on HBASE-15950:
---

Updated the release notes. Let me know. 

> Fix memstore size estimates to be more tighter
> --
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png, 
> hbase-15950-v0.patch, hbase-15950-v1.patch
>
>
> While testing something else, I was loading a region with a lot of data. 
> Writing 30M cells in 1M rows, with 1 byte values. 
> The memstore size turned out to be estimated as 4.5GB, while with the JFR 
> profiling I can see that we are using 2.8GB for all the objects in the 
> memstore (KV + KV byte[] + CSLM.Node + CSLM.Index). 
> This obviously means that there is room in the write cache that we are not 
> effectively using. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15950) Fix memstore size estimates to be more tighter

2016-06-10 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-15950:
--
Attachment: hbase-15950-v1.patch

v1 fixes the failed unit tests as well; some assumptions about memstore sizes 
were hard-coded. 

Javac and javadoc warnings are coming from the use of Unsafe. 

> Fix memstore size estimates to be more tighter
> --
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png, 
> hbase-15950-v0.patch, hbase-15950-v1.patch
>
>
> While testing something else, I was loading a region with a lot of data. 
> Writing 30M cells in 1M rows, with 1 byte values. 
> The memstore size turned out to be estimated as 4.5GB, while with the JFR 
> profiling I can see that we are using 2.8GB for all the objects in the 
> memstore (KV + KV byte[] + CSLM.Node + CSLM.Index). 
> This obviously means that there is room in the write cache that we are not 
> effectively using. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324997#comment-15324997
 ] 

Hadoop QA commented on HBASE-16006:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-16006 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809515/16006.v1.txt |
| JIRA Issue | HBASE-16006 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2177/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> FileSystem should be obtained from specified path in 
> WALInputFormat#getSplits()
> ---
>
> Key: HBASE-16006
> URL: https://issues.apache.org/jira/browse/HBASE-16006
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16006.v1.txt
>
>
> I was trying out restore feature and encountered the following exception:
> {code}
> 2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
> java.io.IOException: java.io.IOException: Can not restore from backup 
> directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
> Caused by: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
>   at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
> expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>   at 
> 

[jira] [Updated] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16006:
---
Attachment: 16006.v1.txt

> FileSystem should be obtained from specified path in 
> WALInputFormat#getSplits()
> ---
>
> Key: HBASE-16006
> URL: https://issues.apache.org/jira/browse/HBASE-16006
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16006.v1.txt
>
>
> I was trying out restore feature and encountered the following exception:
> {code}
> 2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
> java.io.IOException: java.io.IOException: Can not restore from backup 
> directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
> Caused by: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
>   at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
> expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>   at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
>   ... 9 more
> {code}
> It turned out that the refactoring from HBASE-14140 changed the code:
> {code}
> -FileSystem fs = inputDir.getFileSystem(conf);
> -List<FileStatus> files = getFiles(fs, inputDir, startTime, endTime);

[jira] [Updated] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16006:
---
Status: Patch Available  (was: Open)

> FileSystem should be obtained from specified path in 
> WALInputFormat#getSplits()
> ---
>
> Key: HBASE-16006
> URL: https://issues.apache.org/jira/browse/HBASE-16006
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16006.v1.txt
>
>
> I was trying out restore feature and encountered the following exception:
> {code}
> 2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
> failed with error: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
> java.io.IOException: java.io.IOException: Can not restore from backup 
> directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
>   at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
> Caused by: java.io.IOException: Can not restore from backup directory 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
> Hadoop and HBase logs)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
>   at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
>   at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
>   ... 6 more
> Caused by: java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
> expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
>   at 
> org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>   at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
>   ... 9 more
> {code}
> It turned out that the refactoring from HBASE-14140 changed the code:
> {code}
> -FileSystem fs = inputDir.getFileSystem(conf);
> -List<FileStatus> files = getFiles(fs, inputDir, startTime, endTime);

[jira] [Created] (HBASE-16006) FileSystem should be obtained from specified path in WALInputFormat#getSplits()

2016-06-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16006:
--

 Summary: FileSystem should be obtained from specified path in 
WALInputFormat#getSplits()
 Key: HBASE-16006
 URL: https://issues.apache.org/jira/browse/HBASE-16006
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


I was trying out restore feature and encountered the following exception:
{code}
2016-06-10 16:56:57,533 ERROR [main] impl.RestoreClientImpl: ERROR: restore 
failed with error: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
java.io.IOException: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:257)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:112)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:169)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:198)
at 
org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:203)
Caused by: java.io.IOException: Can not restore from backup directory 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs (check 
Hadoop and HBase logs)
at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:92)
at 
org.apache.hadoop.hbase.backup.util.RestoreServerUtil.incrementalRestoreTable(RestoreServerUtil.java:165)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImage(RestoreClientImpl.java:293)
at 
org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreStage(RestoreClientImpl.java:238)
... 6 more
Caused by: java.lang.IllegalArgumentException: Wrong FS: 
hdfs://hbase-test-rc-rerun-6:8020/user/hbase/backup_1465575766499/WALs, 
expected: hdfs://hbase-test-rc-rerun-6.openstacklocal:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:658)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:882)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:951)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:947)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:947)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getFiles(WALInputFormat.java:266)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:246)
at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:227)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceRestoreService.run(MapReduceRestoreService.java:73)
... 9 more
{code}
It turned out that the refactoring from HBASE-14140 changed the code:
{code}
-FileSystem fs = inputDir.getFileSystem(conf);
> -List<FileStatus> files = getFiles(fs, inputDir, startTime, endTime);
-
> -List<InputSplit> splits = new ArrayList<InputSplit>(files.size());
-for (FileStatus file : files) {
+FileSystem fs = FileSystem.get(conf);
{code}
We shouldn't be using the default FileSystem.
Instead, the FileSystem should be obtained from the specified path.
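
A small sketch of the distinction (illustrative, not the patch): resolving the 
FileSystem through the input path avoids the "Wrong FS" failure in the stack 
trace above when the path's authority differs from fs.defaultFS.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsFromPathSketch {
  static FileSystem fsFor(Path inputDir, Configuration conf) throws IOException {
    // FileSystem.get(conf) returns the default filesystem (fs.defaultFS),
    // which rejects paths qualified with a different authority.
    // Resolving through the path itself returns the matching FileSystem:
    return inputDir.getFileSystem(conf);
  }
}
{code}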



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15946) Eliminate possible security concerns in RS web UI's store file metrics

2016-06-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15946:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Eliminate possible security concerns in RS web UI's store file metrics
> --
>
> Key: HBASE-15946
> URL: https://issues.apache.org/jira/browse/HBASE-15946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15946-branch-1.3-mantonov.diff, 
> HBASE-15946-v1.patch, HBASE-15946-v2.patch, HBASE-15946-v3.patch
>
>
> More from static code analysis: it warns about invoking a separate 
> command ("hbase hfile -s -f ...") as a possible security issue in 
> hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp.
> It looks to me like one cannot inject an arbitrary shell script or even 
> arbitrary arguments: ProcessBuilder makes that fairly safe and only allows 
> the user to specify the argument that comes after -f. However, that does 
> potentially allow them to make the daemon's user access files they shouldn't 
> be able to touch, albeit only for reading.
> To more explicitly eliminate any threats here, we should add some validation 
> that the file is at least within HBase's root directory and use the Java API 
> directly instead of invoking a separate executable.
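
A hedged sketch of the kind of validation described (names are hypothetical; 
real code would also want to fully qualify and normalize both paths first):

{code}
import org.apache.hadoop.fs.Path;

public class StoreFilePathCheckSketch {
  // Accept only candidate paths that resolve under the HBase root directory
  // before reading the HFile through the Java API instead of exec'ing
  // "hbase hfile". Assumes both paths are already qualified and normalized.
  static boolean isUnderRootDir(Path candidate, Path rootDir) {
    for (Path p = candidate; p != null; p = p.getParent()) {
      if (p.equals(rootDir)) {
        return true;
      }
    }
    return false;
  }
}
{code}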



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15946) Eliminate possible security concerns in RS web UI's store file metrics

2016-06-10 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324946#comment-15324946
 ] 

Mikhail Antonov commented on HBASE-15946:
-

Thanks [~busbey], pushed to branch-1.2, closing.

> Eliminate possible security concerns in RS web UI's store file metrics
> --
>
> Key: HBASE-15946
> URL: https://issues.apache.org/jira/browse/HBASE-15946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15946-branch-1.3-mantonov.diff, 
> HBASE-15946-v1.patch, HBASE-15946-v2.patch, HBASE-15946-v3.patch
>
>
> More from static code analysis: it warns about invoking a separate 
> command ("hbase hfile -s -f ...") as a possible security issue in 
> hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp.
> It looks to me like one cannot inject an arbitrary shell script or even 
> arbitrary arguments: ProcessBuilder makes that fairly safe and only allows 
> the user to specify the argument that comes after -f. However, that does 
> potentially allow them to make the daemon's user access files they shouldn't 
> be able to touch, albeit only for reading.
> To more explicitly eliminate any threats here, we should add some validation 
> that the file is at least within HBase's root directory and use the Java API 
> directly instead of invoking a separate executable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15611) add examples to shell docs

2016-06-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324913#comment-15324913
 ] 

Sean Busbey commented on HBASE-15611:
-

h3. How can I enable replication for all user tables and column families?

Provided that your HBase deployment includes the command to enable replication 
for all CFs on a table (HBASE-13057 / HBASE-13137), you only need to iterate 
over all user table names and then call that command.

Presuming you have already set up your replication peer correctly, this command 
will also take care of creating the needed tables and column families on the 
destination cluster(s).

{code}
hbase(main):323:0> list.each do |table|
hbase(main):324:1*   enable_table_replication table
hbase(main):325:1> end
{code}

Note that if you do not have replication peer(s) set up, or if connectivity to 
the destination peer(s) is currently not available, these commands will fail.

> add examples to shell docs 
> ---
>
> Key: HBASE-15611
> URL: https://issues.apache.org/jira/browse/HBASE-15611
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, shell
>Reporter: Sean Busbey
>  Labels: beginner
> Fix For: 2.0.0
>
>
> It would be nice if our shell documentation included some additional examples 
> of operational tasks one can perform.
> Things to include will come in comments. When we have a patch to submit, we 
> can update the jira summary to better reflect the scope we end up with.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324906#comment-15324906
 ] 

stack commented on HBASE-16004:
---

Patch LGTM. Wait on hadoopqa.

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for hbase to migrate.
> It seems to have great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers. (Now on by default.)
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15974) Create a ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15974:
---
Attachment: (was: HBASE-15974.patch)

> Create a ReplicationQueuesClientHBaseImpl
> -
>
> Key: HBASE-15974
> URL: https://issues.apache.org/jira/browse/HBASE-15974
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Attachments: HBASE-15974.patch
>
>
> Currently ReplicationQueuesClient utilizes a ZooKeeper implementation 
> ReplicationQueuesClientZkImpl that attempts to read from the ZNode where 
> ReplicationQueuesZkImpl tracked WALs. So we need to create an HBase 
> implementation for ReplicationQueuesClient.
> The review is posted at https://reviews.apache.org/r/48521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15974) Create a ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15974:
---
Status: Patch Available  (was: Open)

> Create a ReplicationQueuesClientHBaseImpl
> -
>
> Key: HBASE-15974
> URL: https://issues.apache.org/jira/browse/HBASE-15974
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Attachments: HBASE-15974.patch
>
>
> Currently ReplicationQueuesClient utilizes a ZooKeeper implementation 
> ReplicationQueuesClientZkImpl that attempts to read from the ZNode where 
> ReplicationQueuesZkImpl tracked WALs. So we need to create an HBase 
> implementation for ReplicationQueuesClient.
> The review is posted at https://reviews.apache.org/r/48521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15974) Create a ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15974:
---
Status: Open  (was: Patch Available)

> Create a ReplicationQueuesClientHBaseImpl
> -
>
> Key: HBASE-15974
> URL: https://issues.apache.org/jira/browse/HBASE-15974
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Attachments: HBASE-15974.patch
>
>
> Currently ReplicationQueuesClient utilizes a ZooKeeper implementation 
> ReplicationQueuesClientZkImpl that attempts to read from the ZNode where 
> ReplicationQueuesZkImpl tracked WALs. So we need to create an HBase 
> implementation for ReplicationQueuesClient.
> The review is posted at https://reviews.apache.org/r/48521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324860#comment-15324860
 ] 

stack commented on HBASE-15971:
---

Oh, I have 8 servers beating up on a single RS.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15974) Create a ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15974:
---
Attachment: HBASE-15974.patch

1. Added the ReplicationQueuesClientZkImpl constructor; without it, a 
MethodDoesNotExist exception would have been thrown.
2. Addressed the style issues flagged by Jenkins.
3. Renamed ReplicationQueuesHbaseImpl and ReplicationQueuesClientHbaseImpl 
to TableBasedReplicationQueuesImpl and TableBasedReplicationQueuesClientImpl.

> Create a ReplicationQueuesClientHBaseImpl
> -
>
> Key: HBASE-15974
> URL: https://issues.apache.org/jira/browse/HBASE-15974
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Joseph
> Attachments: HBASE-15974.patch
>
>
> Currently ReplicationQueuesClient utilizes a ZooKeeper implementation 
> ReplicationQueuesClientZkImpl that attempts to read from the ZNode where 
> ReplicationQueuesZkImpl tracked WALs. So we need to create an HBase 
> implementation for ReplicationQueuesClient.
> The review is posted at https://reviews.apache.org/r/48521/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15971:
--
Attachment: hits.patched1.0.vs.unpatched1.0.vs.098.png

One difference is the sort by priority in the SimpleRpcScheduler.

In 1.0 we do the following as our default scheduler:
{code}
CallPriorityComparator callPriority = new CallPriorityComparator(conf, 
this.priority);
callExecutor = new BalancedQueueRpcExecutor("B.default", handlerCount, 
numCallQueues,
  conf, abortable, BoundedPriorityBlockingQueue.class, maxQueueLength, 
callPriority);
{code}

In 0.98 we do:

{code}
  callExecutor = new BalancedQueueRpcExecutor("B.Default", handlerCount,
numCallQueues, maxQueueLength, conf, abortable);
{code}

In the graph, you see three humps. The first is branch-1 with the same default 
as 0.98. It does 290k with 24% idle. Next is branch-1 default. It does 210k 
with 40% of cpu idle. The third hump is default 0.98 with 21% of cpu idle.

Loading for the record is workloadc using asynchbase (because it seems to be 
able to put up more load):

{code}
% for n in `seq 0 24`; do for i in `cat /tmp/slaves`; do echo $i; ssh $i "sh -c 
'nohup ./bin/run_ycsb.sh > /dev/null 2>&1 &'"; done; done
{code}

The script is attached (stolen from Busbey)



> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15971:
--
Attachment: run_ycsb.sh

Busbey script hacked.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png, 
> hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader thread occupancy metric, it is about the same in both. In the parent 
> issue, hacking out the scheduler, I am able to get branch-1 to go 3x faster, 
> so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15584:
---
Attachment: 15584.v1.txt

Thanks for the feedback.

See if the change to FullTableBackupProcedure is good.
If so, I will modify IncrementalTableBackupProcedure as well.

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 15584.v1.txt
>
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324802#comment-15324802
 ] 

Stephen Yuan Jiang edited comment on HBASE-15584 at 6/10/16 4:58 PM:
-

[~tedyu], the current code in {{FullTableBackupProcedure#toStringClassDetails}} 
only contains the backup root directory; you can add more information there 
(e.g. the backupId, or the list of tables the backup is running on).  

{code}
  @Override
  public void toStringClassDetails(StringBuilder sb) {
sb.append(getClass().getSimpleName());
sb.append(" (targetRootDir=");
sb.append(targetRootDir);
sb.append(")");
{code}

Also, I agree with [~mbertozzi] that we don't need the backup ID for abort.  For 
an in-progress operation, the proc Id should be sufficient.  The backup ID is 
more like metadata stored in the system table for future reference (e.g. it 
allows the user to find the history of a backup, or the backup chain mixed with 
full and incremental backups).  


was (Author: syuanjiang):
[~tedyu], the current code in {{FullTableBackupProcedure#toStringClassDetails}} 
only contains the backup root directory; you can add more information there 
(e.g. the backupId, or the list of tables the backup is running on).  

{code}
  @Override
  public void toStringClassDetails(StringBuilder sb) {
sb.append(getClass().getSimpleName());
sb.append(" (targetRootDir=");
sb.append(targetRootDir);
sb.append(")");
{code}

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15584) Revisit handling of BackupState#CANCELLED

2016-06-10 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324802#comment-15324802
 ] 

Stephen Yuan Jiang commented on HBASE-15584:


[~tedyu], the current code in {{FullTableBackupProcedure#toStringClassDetails}} 
only contains the backup root directory; you can add more information there 
(e.g. the backupId, or the list of tables the backup is running on).  

{code}
  @Override
  public void toStringClassDetails(StringBuilder sb) {
sb.append(getClass().getSimpleName());
sb.append(" (targetRootDir=");
sb.append(targetRootDir);
sb.append(")");
{code}
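
For illustration, the suggested extension might look like the sketch below; 
backupId and tableList are assumed fields of the procedure, not confirmed API:

{code}
  @Override
  public void toStringClassDetails(StringBuilder sb) {
    sb.append(getClass().getSimpleName());
    sb.append(" (targetRootDir=").append(targetRootDir);
    sb.append(", backupId=").append(backupId);   // assumed field
    sb.append(", tables=").append(tableList);    // assumed field
    sb.append(")");
  }
{code}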

> Revisit handling of BackupState#CANCELLED
> -
>
> Key: HBASE-15584
> URL: https://issues.apache.org/jira/browse/HBASE-15584
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Priority: Minor
>
> During review of HBASE-15411, Enis made the following point:
> {code}
> nobody puts the backup in cancelled state. setCancelled() is not used. So if 
> I abort a backup, who writes to the system table the new state? 
> Not sure whether this is a phase 1 patch issue or due to this patch. We can 
> open a new jira and address it there if you do not want to do it in this 
> patch. 
> Also maybe this should be named ABORTED rather than CANCELLED.
> {code}
> This issue is to decide whether this state should be kept (e.g. through 
> notification from the procedure V2 framework in response to an abort).
> If it is to be kept, the state should be renamed ABORTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16005) Implement HFile ref's tracking (bulk loading) in ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl

2016-06-10 Thread Joseph (JIRA)
Joseph created HBASE-16005:
--

 Summary: Implement HFile ref's tracking (bulk loading) in 
ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl
 Key: HBASE-16005
 URL: https://issues.apache.org/jira/browse/HBASE-16005
 Project: HBase
  Issue Type: Sub-task
Reporter: Joseph


Currently ReplicationQueuesHBaseImpl and ReplicationQueuesClientHBaseImpl have 
not implemented any of the HFile ref methods. They currently throw 
NotImplementedExceptions. We should implement them eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324698#comment-15324698
 ] 

Ted Yu commented on HBASE-15871:


{code}
+   * Creates a snapshot of the current memstore.
+   * Snapshot must be cleared by call to {@link #clearSnapshot(long)}
+   * @param flushOpSeqId The sequence id of the flush operation.
+   */
+  @Override
+  public MemStoreSnapshot snapshot(long flushOpSeqId) {
{code}
The id passed to clearSnapshot() is compared against the snapshotId field, not 
snapshotOpSeqId.
Better to explain this clearly in the javadoc.

Some lines are longer than 100 characters; please wrap them.
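
One way the javadoc could spell that out (a sketch only, mirroring the snippet 
above):

{code}
  /**
   * Creates a snapshot of the current memstore. The snapshot must be
   * released via {@link #clearSnapshot(long)}, passing the id of the
   * snapshot itself (it is compared against the snapshotId field), NOT the
   * flushOpSeqId given here.
   * @param flushOpSeqId the sequence id of the flush operation
   */
  @Override
  public MemStoreSnapshot snapshot(long flushOpSeqId) {
{code}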

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
> Fix For: 1.1.2
>
> Attachments: HBASE-15871.branch-1.1.001.patch, 
> HBASE-15871.branch-1.1.002.patch, HBASE-15871.branch-1.1.003.patch, 
> memstore_backwardSeek().PNG
>
>
> Sometimes in our production hbase cluster, it takes a long time to finish a 
> memstore flush (about 30 minutes or more).
> The reason is that a memstore flusher thread calls 
> StoreScanner.updateReaders() and waits to acquire a lock that the store 
> scanner holds in StoreScanner.next(), while backwardseek() in the memstore 
> scanner runs for a long time.
> I think that this condition could occur in a reverse scan by the following 
> process.
> 1) create a reversed store scanner by requesting a reverse scan.
> 2) flush a memstore in the same HStore.
> 3) put a lot of cells in the memstore, so the memstore is almost full.
> 4) call the reverse scanner.next() and re-create all scanners in this store, 
> because all scanners were already closed by 2)'s flush(), then backwardseek() 
> with the store's lastTop for all new scanners.
> 5) in this state, the memstore is almost full by 2) and all cells in the 
> memstore have a sequenceID greater than this scanner's readPoint because of 
> 2)'s flush(). This condition causes searching all cells in the memstore, and 
> seekToPreviousRow() repeatedly searches cells that were already searched if a 
> row has one column. (Described in more detail in an attached file.)
> 6) flush a memstore again in the same HStore, and wait until the 4-5) process 
> has finished, to update store files in the same HStore after flushing.
> I searched HBase jira and found a similar issue (HBASE-14497), but 
> HBASE-14497's fix can't solve this issue because that fix just changed a 
> recursive call to a loop (and it is already applied to our HBase version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Jurriaan Mous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurriaan Mous updated HBASE-16004:
--
Status: Patch Available  (was: Open)

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for hbase to migrate.
> It seems to have great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers. (Now on by default.)
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Jurriaan Mous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jurriaan Mous updated HBASE-16004:
--
Attachment: HBASE-16004.patch

Updated Netty to 4.1.1

Changed some deprecated methods to the currently supported ones.

> Update to Netty 4.1.1
> -
>
> Key: HBASE-16004
> URL: https://issues.apache.org/jira/browse/HBASE-16004
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jurriaan Mous
>Assignee: Jurriaan Mous
> Attachments: HBASE-16004.patch
>
>
> Netty 4.1 is out and has received its first bug-fix release, so it seems good 
> enough for hbase to migrate.
> It seems to have great performance improvements in Cassandra because of 
> optimizations in cleaning direct buffers. (Now on by default.)
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
> https://github.com/netty/netty/pull/5314



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16004) Update to Netty 4.1.1

2016-06-10 Thread Jurriaan Mous (JIRA)
Jurriaan Mous created HBASE-16004:
-

 Summary: Update to Netty 4.1.1
 Key: HBASE-16004
 URL: https://issues.apache.org/jira/browse/HBASE-16004
 Project: HBase
  Issue Type: Improvement
Reporter: Jurriaan Mous
Assignee: Jurriaan Mous


Netty 4.1 is out and has received its first bug-fix release, so it seems
stable enough for HBase to migrate.

It has shown great performance improvements in Cassandra because of
optimizations in cleaning direct buffers (now on by default):
https://issues.apache.org/jira/plugins/servlet/mobile#issue/CASSANDRA-11818/comment/15306030
https://github.com/netty/netty/pull/5314
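
For context, the direct-buffer change referenced above concerns how Netty
reclaims direct memory behind its pooled allocator. A minimal sketch of the
ordinary allocation pattern that benefits (standard Netty API usage, not code
from this issue):

{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class DirectBufferExample {
  public static void main(String[] args) {
    // Allocate a pooled direct buffer; in 4.1 Netty can reclaim the backing
    // direct memory itself instead of waiting on the JDK's cleaner.
    ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(4096);
    try {
      buf.writeLong(42L);
      System.out.println("readable bytes: " + buf.readableBytes());
    } finally {
      buf.release();  // reference-counted: returns the memory to the pool
    }
  }
}
{code}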




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2016-06-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324666#comment-15324666
 ] 

stack commented on HBASE-15971:
---

Not yet [~mantonov], not until I figure out what the difference is. Working on this now.

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> branch-1.hits.png, branch-1.png, handlers.fp.png, hits.fp.png
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to
> be doing about half the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a
> reader-thread occupancy metric shows it is about the same in both. In the
> parent issue, hacking out the scheduler, I am able to get branch-1 to go 3x
> faster, so I will dig in here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-10 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324502#comment-15324502
 ] 

Reid Chan commented on HBASE-14743:
---

Hoping to get reviews and suggestions.
It is also available on Review Board:
https://reviews.apache.org/r/48475/
Thanks!

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch
>
>
> It would be good to know how many invocations there have been:
> how many decided to expand the memstore,
> how many decided to expand the block cache,
> how many decided to do nothing,
> etc.
> When that's done, use those metrics to clean up the tests.
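
A minimal sketch of the counter shape the description asks for (class and
method names here are hypothetical; judging by the QA run, the real patch
wires its metrics through the hbase-hadoop-compat modules instead):

{code}
import java.util.concurrent.atomic.LongAdder;

public final class HeapMemoryTunerMetricsSketch {
  private final LongAdder tunerInvocations = new LongAdder();
  private final LongAdder memstoreExpansions = new LongAdder();
  private final LongAdder blockCacheExpansions = new LongAdder();
  private final LongAdder noOpDecisions = new LongAdder();

  /** Call once per HeapMemoryManager tuner run with the decided size deltas. */
  public void onTunerRun(float memstoreDelta, float blockCacheDelta) {
    tunerInvocations.increment();
    if (memstoreDelta > 0) {
      memstoreExpansions.increment();       // tuner grew the memstore
    } else if (blockCacheDelta > 0) {
      blockCacheExpansions.increment();     // tuner grew the block cache
    } else {
      noOpDecisions.increment();            // tuner decided to do nothing
    }
  }

  public long invocations() { return tunerInvocations.sum(); }
  public long memstoreExpansions() { return memstoreExpansions.sum(); }
  public long blockCacheExpansions() { return blockCacheExpansions.sum(); }
  public long doNothingDecisions() { return noOpDecisions.sum(); }
}
{code}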



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324488#comment-15324488
 ] 

Hadoop QA commented on HBASE-14743:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 54s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
43s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809433/HBASE-14743.007.patch 
|
| JIRA Issue | HBASE-14743 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | 

[jira] [Commented] (HBASE-15946) Eliminate possible security concerns in RS web UI's store file metrics

2016-06-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324465#comment-15324465
 ] 

Sean Busbey commented on HBASE-15946:
-

+1 for branch-1.2

> Eliminate possible security concerns in RS web UI's store file metrics
> --
>
> Key: HBASE-15946
> URL: https://issues.apache.org/jira/browse/HBASE-15946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 1.3.0, 1.2.2
>
> Attachments: HBASE-15946-branch-1.3-mantonov.diff, 
> HBASE-15946-v1.patch, HBASE-15946-v2.patch, HBASE-15946-v3.patch
>
>
> More from static code analysis: it warns about the invocation of a separate
> command ("hbase hfile -s -f ...") as a possible security issue in
> hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp.
> It looks to me like one cannot inject arbitrary shell script or even
> arbitrary arguments: ProcessBuilder makes that fairly safe and only allows
> the user to specify the argument that comes after -f. However, that does
> potentially allow them to have the daemon's user access files they shouldn't
> be able to touch, albeit only for reading.
> To more explicitly eliminate any threat here, we should validate that the
> file is at least within HBase's root directory and use the Java API directly
> instead of invoking a separate executable.
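
A minimal sketch of the suggested validation, assuming a containment check
against hbase.rootdir before reading the file in-process (method and variable
names are illustrative, not the committed fix):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoreFilePathCheck {
  /**
   * Resolve the user-supplied path and reject it unless it lives under the
   * HBase root directory, so the web UI can never read arbitrary files.
   */
  static Path validateStoreFile(FileSystem fs, Path rootDir, String requested)
      throws IOException {
    Path resolved = fs.resolvePath(new Path(requested)); // normalizes the path
    String root = fs.resolvePath(rootDir).toUri().getPath();
    if (!resolved.toUri().getPath().startsWith(root + Path.SEPARATOR)) {
      throw new IOException("Requested file is not under hbase.rootdir: "
          + requested);
    }
    return resolved;
  }
}
{code}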



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2016-06-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324392#comment-15324392
 ] 

Ted Yu commented on HBASE-15871:


{code}
Failed tests: 
  TestDistributedLogSplitting.testSameVersionUpdatesRecoveryWithCompaction:1412 
expected:<2000> but was:<1936>
  TestHBaseFsck.testHbckThreadpooling:529 expected:<[]> but 
was:<[NOT_IN_META_OR_DEPLOYED, HOLE_IN_REGION_CHAIN]>
Tests in error: 
  TestWALLockup.testLockupWhenSyncInMiddleOfZigZagSetup » DamagedWAL Append 
sequ...
  
TestHBaseFsck.testQuarantineMissingRegionDir:2296->doQuarantineTest:2208->cleanupTable:488->deleteTable:2882
 » TableNotDisabled
{code}
Please investigate the above test failures.

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
> Fix For: 1.1.2
>
> Attachments: HBASE-15871.branch-1.1.001.patch, 
> HBASE-15871.branch-1.1.002.patch, HBASE-15871.branch-1.1.003.patch, 
> memstore_backwardSeek().PNG
>
>
> Sometimes in our production HBase cluster, it takes a long time to finish a
> memstore flush (more than 30 minutes in some cases).
> The reason is that a memstore flusher thread calls
> StoreScanner.updateReaders() and waits to acquire a lock that the store
> scanner holds in StoreScanner.next(), while backwardSeek() in the memstore
> scanner runs for a long time.
> I think this condition could occur in a reverse scan through the following
> process:
> 1) create a reversed store scanner by requesting a reverse scan.
> 2) flush a memstore in the same HStore.
> 3) put a lot of cells into the memstore until it is almost full.
> 4) call the reverse scanner's next(), which re-creates all scanners in this
> store because they were already closed by 2)'s flush(), and calls
> backwardSeek() with the store's lastTop for all new scanners.
> 5) in this state, the memstore is almost full per 3), and all cells in the
> memstore have a sequenceId greater than this scanner's readPoint because of
> 2)'s flush(). This condition causes searching all cells in the memstore, and
> seekToPreviousRow() repeatedly searches cells that were already searched if a
> row has one column. (Described in more detail in an attached file.)
> 6) flush the memstore again in the same HStore; this flush waits until the
> 4)-5) process finishes before it can update store files in the same HStore.
> I searched HBase JIRA and found a similar issue (HBASE-14497), but
> HBASE-14497's fix can't solve this issue because that fix just changed a
> recursive call to a loop (and it is already applied to our HBase version).
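
To make the cost concrete, here is a toy model (plain Java, not HBase code) of
the quadratic re-search described in step 5): when every memstore cell carries
a sequenceId above the scanner's readPoint, each backward seek walks past the
same invisible cells again.

{code}
import java.util.NavigableSet;
import java.util.TreeSet;

public class BackwardSeekModel {
  static final class Cell implements Comparable<Cell> {
    final long row, seqId;
    Cell(long row, long seqId) { this.row = row; this.seqId = seqId; }
    public int compareTo(Cell o) { return Long.compare(row, o.row); }
  }

  public static void main(String[] args) {
    NavigableSet<Cell> memstore = new TreeSet<>();
    final long readPoint = 0;               // scanner opened before the puts
    final int rows = 2000;
    for (long r = 1; r <= rows; r++) {
      memstore.add(new Cell(r, r));         // every seqId exceeds readPoint
    }
    long visited = 0;
    // One seekToPreviousRow() per row of the reverse scan: each call skips
    // backward over invisible cells and, since none are visible, re-walks
    // everything below the target on every call -> O(rows^2) work overall.
    for (long target = rows; target >= 1; target--) {
      Cell c = memstore.lower(new Cell(target, 0));
      while (c != null && c.seqId > readPoint) {
        visited++;
        c = memstore.lower(c);
      }
    }
    System.out.println("cells visited: " + visited);  // ~rows*rows/2
  }
}
{code}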



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2016-06-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324360#comment-15324360
 ] 

Hadoop QA commented on HBASE-15871:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
59s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s 
{color} | {color:red} hbase-server in branch-1.1 has 78 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 14 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 53s 
{color} | {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 1 
new + 16 unchanged - 0 fixed = 17 total (was 16) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 9s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 109m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.util.TestHBaseFsck |
|   | hadoop.hbase.regionserver.TestWALLockup |
|   | hadoop.hbase.master.TestDistributedLogSplitting |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809427/HBASE-15871.branch-1.1.003.patch
 |
| JIRA Issue | HBASE-15871 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu 

[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-10 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14743:
--
Attachment: HBASE-14743.007.patch

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

