[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095589#comment-15095589
 ] 

Anoop Sam John commented on HBASE-15089:


With the release notes explicitly describing the source compatibility, I think 
we cannot take it as a bug. When the source is recompiled, the required changes 
also need to be made in the application code. Thanks for sharing your thoughts, guys.

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98, HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but in 1.1.2 this method signature has 
> been changed to throw IOException, which forces application code changes for 
> exception handling (a previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException becomes invalid). HTable#put has the 
> same problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.
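
A minimal sketch of the breakage (hypothetical application code; the HTable and 
Put setup is assumed):

{code}
import java.io.InterruptedIOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;

public class FlushCommitsCompatSketch {
  // Compiles against 0.98, where put/flushCommits declare exactly these two
  // checked exceptions. Recompiled against 1.1.2 it fails: the methods now
  // declare IOException, which the two catch clauses below do not fully
  // cover, so a catch (IOException e) must be added to the application code.
  static void writeRow(HTable table, Put put) {
    try {
      table.put(put);
      table.flushCommits();
    } catch (InterruptedIOException e) {
      Thread.currentThread().interrupt(); // handle interruption
    } catch (RetriesExhaustedWithDetailsException e) {
      // inspect per-row failures via e.getNumExceptions(), e.getRow(i), ...
    }
  }
}
{code}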



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14747) Make it possible to build Javadoc and xref reports for 0.94 again

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095738#comment-15095738
 ] 

Hudson commented on HBASE-14747:


FAILURE: Integrated in HBase-0.94 #1483 (See 
[https://builds.apache.org/job/HBase-0.94/1483/])
HBASE-14747 Addendum, do not generate a complete cross reference of all (larsh: 
rev 34487ecc6f90f486325b625a6909de888008f4b2)
* pom.xml


> Make it possible to build Javadoc and xref reports for 0.94 again
> -
>
> Key: HBASE-14747
> URL: https://issues.apache.org/jira/browse/HBASE-14747
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.94.27
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.94.28
>
> Attachments: 14747-addendum.txt, 14747-addendum2.txt, 
> HBASE-14747-0.94.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Attachment: HBASE-15055-v11.patch

To address the comments from Anoop and Ted, the patch (including some test 
cases) has been updated.

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v11.patch, HBASE-15055-v2.patch, HBASE-15055-v3.patch, 
> HBASE-15055-v4.patch, HBASE-15055-v5.patch, HBASE-15055-v6.patch, 
> HBASE-15055-v7.patch, HBASE-15055-v8.patch, HBASE-15055-v9.patch, 
> HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.
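
A self-contained sketch of the proposed rule (the FileInfo class and select() 
helper are hypothetical, not the patch's actual code): a file larger than 
hbase.hstore.compaction.max.size is skipped only when it holds no TTL-expired 
cells, so expired data still forces a compaction that can reclaim it.

{code}
import java.util.ArrayList;
import java.util.List;

public class CompactionSelectionSketch {
  /** Minimal stand-in for a store file: its size and oldest cell timestamp. */
  static class FileInfo {
    final long sizeBytes;
    final long minTimestampMs;
    FileInfo(long sizeBytes, long minTimestampMs) {
      this.sizeBytes = sizeBytes;
      this.minTimestampMs = minTimestampMs;
    }
  }

  static List<FileInfo> select(List<FileInfo> candidates, long maxCompactSize,
                               long ttlMs, long nowMs) {
    long expiryBoundary = nowMs - ttlMs;
    List<FileInfo> selected = new ArrayList<>();
    for (FileInfo f : candidates) {
      boolean tooLarge = f.sizeBytes > maxCompactSize;
      // The file holds at least one TTL-expired cell if its oldest cell
      // falls before the expiry boundary.
      boolean hasExpiredCells = f.minTimestampMs < expiryBoundary;
      if (tooLarge && !hasExpiredCells) {
        continue; // nothing to reclaim; safe to skip this large file
      }
      selected.add(f);
    }
    return selected;
  }
}
{code}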



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Status: Patch Available  (was: Open)

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v11.patch, HBASE-15055-v2.patch, HBASE-15055-v3.patch, 
> HBASE-15055-v4.patch, HBASE-15055-v5.patch, HBASE-15055-v6.patch, 
> HBASE-15055-v7.patch, HBASE-15055-v8.patch, HBASE-15055-v9.patch, 
> HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15096) precommit protoc check runs twice

2016-01-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15096:

Attachment: HBASE-15075.yetus.log

Attaching the full log from the test run.

> precommit protoc check runs twice
> -
>
> Key: HBASE-15096
> URL: https://issues.apache.org/jira/browse/HBASE-15096
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
> Attachments: HBASE-15075.yetus.log
>
>
> Our check for protoc runs once pre-patch and again for the patch. In both 
> cases it acts as though it is working on the patch.
> Log sample from the pre-patch check:
> {code}
> 
> 
> Pre-patch master maven eclipse verification
> 
> 
> cd /Users/busbey/projects/hbase/hbase-protocol
> mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
> /private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-protocol.txt 2>&1
> Elapsed:   0m 11s
> cd /Users/busbey/projects/hbase/hbase-client
> mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
> /private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-client.txt 2>&1
> Elapsed:   0m 19s
> cd /Users/busbey/projects/hbase/hbase-server
> mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
> /private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-server.txt 2>&1
> Elapsed:   0m 30s
> 
> 
>  Patch HBase protoc plugin
> 
> 
> cd /Users/busbey/projects/hbase/hbase-protocol
> mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
> -DHBasePatchProcess > 
> /private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-protocol.txt 2>&1
> Elapsed:   0m 31s
> cd /Users/busbey/projects/hbase/hbase-client
> mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
> -DHBasePatchProcess > 
> /private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-client.txt 2>&1
> Elapsed:   0m 19s
> cd /Users/busbey/projects/hbase/hbase-server
> mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
> -DHBasePatchProcess > 
> /private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-server.txt 2>&1
> Elapsed:   0m 27s
> 
> 
>Pre-patch findbugs detection
> 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13372) Unit tests for SplitTransaction and RegionMergeTransaction listeners

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095711#comment-15095711
 ] 

Hadoop QA commented on HBASE-13372:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s 
{color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 7s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 100m 55s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 242m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781977/HBASE-13372.4.patch |
| JIRA Issue | HBASE-13372 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5e89ebc |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Commented] (HBASE-14457) Umbrella: Improve Multiple WAL for production usage

2016-01-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095513#comment-15095513
 ] 

Sean Busbey commented on HBASE-14457:
-

What version of YCSB did y'all use?

> Umbrella: Improve Multiple WAL for production usage
> ---
>
> Key: HBASE-14457
> URL: https://issues.apache.org/jira/browse/HBASE-14457
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Action in Multiple WAL.pdf, Action in Multiple WAL.pdf
>
>
> HBASE-5699 proposed the idea of running with multiple WALs in a regionserver 
> and did great initial work there, but when trying to use it in our production 
> cluster, we still found several issues to resolve, like tracking multiple WAL 
> paths in replication (HBASE-6617), fixing UTs with the multiwal provider 
> (HBASE-14411), introducing a namespace-based strategy for 
> RegionGroupingProvider (HBASE-14456), etc. This is an umbrella including (but 
> not limited to) all the work and effort needed to make multiple WALs ready 
> for production usage and give users a clear picture of it.
> Besides the development work, I'd also like to share in this JIRA some 
> scenarios and testing/online data about our usage and the performance of 
> multiple WALs, to (hopefully) help people better judge whether to enable 
> multiple WALs in their own cluster and what they might gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15095) isReturnResult=false on fast path in branch-1.1 and branch-1.0 is not respected

2016-01-12 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15095:
--
Fix Version/s: 1.0.0
   1.1.0
   Status: Patch Available  (was: Open)

> isReturnResult=false  on fast path in branch-1.1 and branch-1.0 is not 
> respected
> 
>
> Key: HBASE-15095
> URL: https://issues.apache.org/jira/browse/HBASE-15095
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3, 1.1.2
>Reporter: stack
> Fix For: 1.1.0, 1.0.0
>
> Attachments: HBASE-15095-branch-1.0.patch
>
>
> We don't pay attention to isReturnResult when we go down the fast-path 
> increment. Fix.
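
A minimal client-side sketch (standard 1.x public client API; the Table setup 
is assumed) of the flag in question:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementReturnResultSketch {
  static void fastIncrement(Table table) throws IOException {
    Increment inc = new Increment(Bytes.toBytes("row1"));
    inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), 1L);
    inc.setReturnResults(false);     // ask the server not to ship the result back
    Result r = table.increment(inc); // the fast-path increment ignored the flag
                                     // and returned the post-increment value anyway
  }
}
{code}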



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15096) precommit protoc check runs twice

2016-01-12 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-15096:
---

 Summary: precommit protoc check runs twice
 Key: HBASE-15096
 URL: https://issues.apache.org/jira/browse/HBASE-15096
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: Sean Busbey


Our check for protoc runs once pre-patch and again for the patch. In both 
cases it acts as though it is working on the patch.

Log sample from the pre-patch check:

{code}


Pre-patch master maven eclipse verification




cd /Users/busbey/projects/hbase/hbase-protocol
mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
/private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-protocol.txt 2>&1
Elapsed:   0m 11s
cd /Users/busbey/projects/hbase/hbase-client
mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
/private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-client.txt 2>&1
Elapsed:   0m 19s
cd /Users/busbey/projects/hbase/hbase-server
mvn -DHBasePatchProcess eclipse:clean eclipse:eclipse > 
/private/tmp/yetus-26424.18057/branch-mvneclipse-hbase-server.txt 2>&1
Elapsed:   0m 30s




 Patch HBase protoc plugin




cd /Users/busbey/projects/hbase/hbase-protocol
mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess > 
/private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-protocol.txt 2>&1
Elapsed:   0m 31s
cd /Users/busbey/projects/hbase/hbase-client
mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess > 
/private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-client.txt 2>&1
Elapsed:   0m 19s
cd /Users/busbey/projects/hbase/hbase-server
mvn -DHBasePatchProcess compile -DskipTests -Pcompile-protobuf -X 
-DHBasePatchProcess > 
/private/tmp/yetus-26424.18057/patch-hbaseprotoc-hbase-server.txt 2>&1
Elapsed:   0m 27s




   Pre-patch findbugs detection


{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14457) Umbrella: Improve Multiple WAL for production usage

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095593#comment-15095593
 ] 

Yu Li commented on HBASE-14457:
---

bq.  What are the units for throughput on the last table?
As Ted explained, it's a Chinese abbreviation for 10k.

bq. Was the network ever saturated in the test?
No; from the monitoring data, the network peak is less than 140MB/s, which 
should be fine for a 2Gb network card.
Regarding the relatively high average latency, I think it's because there are 
16 column qualifiers in our test table (to simulate our specific online 
scenario).

bq. Were flushes ever the bottleneck?
No, we didn't observe blocking updates in the test.

> Umbrella: Improve Multiple WAL for production usage
> ---
>
> Key: HBASE-14457
> URL: https://issues.apache.org/jira/browse/HBASE-14457
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Action in Multiple WAL.pdf, Action in Multiple WAL.pdf
>
>
> HBASE-5699 proposed the idea of running with multiple WALs in a regionserver 
> and did great initial work there, but when trying to use it in our production 
> cluster, we still found several issues to resolve, like tracking multiple WAL 
> paths in replication (HBASE-6617), fixing UTs with the multiwal provider 
> (HBASE-14411), introducing a namespace-based strategy for 
> RegionGroupingProvider (HBASE-14456), etc. This is an umbrella including (but 
> not limited to) all the work and effort needed to make multiple WALs ready 
> for production usage and give users a clear picture of it.
> Besides the development work, I'd also like to share in this JIRA some 
> scenarios and testing/online data about our usage and the performance of 
> multiple WALs, to (hopefully) help people better judge whether to enable 
> multiple WALs in their own cluster and what they might gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095622#comment-15095622
 ] 

Yu Li commented on HBASE-15089:
---

Thanks for the detailed explanation [~elserj], [~enis] and [~busbey]! I agree 
that this could be closed as "Not a problem" since the source compatibility 
description in the release notes is clear (my fault for not reading it 
carefully enough). And thanks [~anoop.hbase] for coordinating. The HBase team 
is excellent as always! :-)

bq. Alternatively, would an addition to the upgrade docs that gives an example 
of moving from HTable in 0.98 to BufferedMutator in 1.0 help ease this pain Yu 
Li?
Yes, I think this would be helpful, thanks.
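
For reference, a minimal sketch of that migration (1.0-era public client API; 
the Configuration setup is assumed):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedMutatorMigrationSketch {
  // The buffered-write pattern that HTable#put + flushCommits used to cover.
  static void writeBuffered(Configuration conf) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator =
             conn.getBufferedMutator(TableName.valueOf("t"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      mutator.mutate(put); // buffered client-side, as HTable was with autoflush off
      mutator.flush();     // explicit flush, the flushCommits() equivalent
    }
  }
}
{code}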

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98, HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but in 1.1.2 this method signature has 
> been changed to throw IOException, which forces application code changes for 
> exception handling (a previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException becomes invalid). HTable#put has the 
> same problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-12 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
Attachment: HBASE-14877.patch

Patch: HBASE-14877 - add Maven archetype infrastructure and hbase-client 
archetype

> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner
> Attachments: HBASE-14877.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14876) Provide maven archetypes

2016-01-12 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095660#comment-15095660
 ] 

Daniel Vimont commented on HBASE-14876:
---

Just submitted a patch for the first sub-task -- HBASE-14877.
This patch introduces the overall infrastructure for the creation and 
maintenance of Maven archetypes, and also the first archetype (which end-users 
will use to generate a simple Maven project with an hbase-client dependency).

After this first sub-task patch is committed, I'll submit patches for the other 
two originally planned archetypes (hbase-shaded-client and hbase-mapreduce), 
and then (time permitting) I'll also work on the fourth archetype (for 
hbase-spark examples).

> Provide maven archetypes
> 
>
> Key: HBASE-14876
> URL: https://issues.apache.org/jira/browse/HBASE-14876
> Project: HBase
>  Issue Type: New Feature
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner, maven
> Attachments: HBASE-14876-v2.patch, HBASE-14876.patch, 
> archetype_prototype.zip, archetype_prototype02.zip, 
> archetype_shaded_prototype01.zip
>
>
> To help onboard new users, we should provide Maven archetypes for hbase 
> client applications. Off the top of my head, we should have templates for
>  - an hbase client application with all dependencies
>  - an hbase client application using the client-shaded jar
>  - a mapreduce application with hbase as input and output (i.e., copy table)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15095) isReturnResult=false on fast path in branch-1.1 and branch-1.0 is not respected

2016-01-12 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15095:
--
Attachment: HBASE-15095-branch-1.0.patch

Made a patch on branch-1.0.

As for the test case, I found that restarting the RS did not load the new 
config; I am not sure why...

So I just start a miniCluster directly for each instance.

> isReturnResult=false  on fast path in branch-1.1 and branch-1.0 is not 
> respected
> 
>
> Key: HBASE-15095
> URL: https://issues.apache.org/jira/browse/HBASE-15095
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.1.2, 1.0.3
>Reporter: stack
> Attachments: HBASE-15095-branch-1.0.patch
>
>
> We don't pay attention to isReturnResult when we go down the fast-path 
> increment. Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15075) Allow region split request to carry identification information

2016-01-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095572#comment-15095572
 ] 

Ted Yu commented on HBASE-15075:


Ran test suite locally based on patch v4.
No regression was found.

> Allow region split request to carry identification information
> --
>
> Key: HBASE-15075
> URL: https://issues.apache.org/jira/browse/HBASE-15075
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15075-v0.txt, 15075-v1.txt, 15075-v2.txt, 
> HBASE-15075.v2.patch, HBASE-15075.v3.patch, HBASE-15075.v4.patch
>
>
> While improving the region normalization feature, I found that if a region 
> split request triggered by the execution of a SplitNormalizationPlan fails, 
> there is no way of knowing whether the failed split originated from region 
> normalization.
> Associating a particular split request with the outcome of the split would 
> give the RegionNormalizer information so that it can make better 
> normalization decisions in subsequent invocations.
> One approach is to embed metadata, such as a UUID, in the SplitRequest, which 
> gets passed through RegionStateTransitionContext when 
> RegionServerServices#reportRegionStateTransition() is called.
> This way, the RegionStateListener can be notified with the metadata (the id 
> of the requester).
> See the discussion on the dev mailing list:
> http://search-hadoop.com/m/YGbbCXdkivihp2
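
A self-contained sketch of the idea (the TaggedSplitRequest class and helper 
are hypothetical, not HBase's actual classes):

{code}
import java.util.UUID;

public class SplitRequestTaggingSketch {
  /** Hypothetical request carrying a requester id alongside the split details. */
  static class TaggedSplitRequest {
    final byte[] regionName;
    final byte[] splitPoint;
    final UUID requesterId;
    TaggedSplitRequest(byte[] regionName, byte[] splitPoint, UUID requesterId) {
      this.regionName = regionName;
      this.splitPoint = splitPoint;
      this.requesterId = requesterId;
    }
  }

  // The normalizer tags its request; when the region state transition is
  // reported, a listener can match the outcome back to this id and record
  // whether the normalization-triggered split succeeded.
  static TaggedSplitRequest fromNormalizer(byte[] regionName, byte[] splitPoint) {
    return new TaggedSplitRequest(regionName, splitPoint, UUID.randomUUID());
  }
}
{code}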



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15082:
--
Attachment: 15082v13.patch

Working on the forward-port of the branch-1.1 patch, I found a bug in this 
patch and an explanation for something that had baffled me; fix and doc added.

Let me start up some perf comparisons of this patch against what we have 
currently, to see if the reordering and holding on to MVCC longer change our 
perf profile significantly.

> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15082.patch, 15082v10.patch, 15082v12.patch, 
> 15082v13.patch, 15082v2.patch, 15082v2.txt, 15082v3.txt, 15082v4.patch, 
> 15082v5.patch, 15082v6.patch, 15082v7.patch, 15082v8.patch
>
>
> This is the general fix for the increment (append, checkAnd*) perf 
> regression identified in the parent issue. HBASE-15031 has a narrow fix for 
> branch-1.1 and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14457) Umbrella: Improve Multiple WAL for production usage

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095481#comment-15095481
 ] 

Yu Li commented on HBASE-14457:
---

Thanks [~tedyu] for helping clarify the throughput units; I have just updated 
the doc and changed the unit to k for better understanding. Also thanks for 
reviewing the doc offline before I uploaded it here.

> Umbrella: Improve Multiple WAL for production usage
> ---
>
> Key: HBASE-14457
> URL: https://issues.apache.org/jira/browse/HBASE-14457
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Action in Multiple WAL.pdf, Action in Multiple WAL.pdf
>
>
> HBASE-5699 proposed the idea of running with multiple WALs in a regionserver 
> and did great initial work there, but when trying to use it in our production 
> cluster, we still found several issues to resolve, like tracking multiple WAL 
> paths in replication (HBASE-6617), fixing UTs with the multiwal provider 
> (HBASE-14411), introducing a namespace-based strategy for 
> RegionGroupingProvider (HBASE-14456), etc. This is an umbrella including (but 
> not limited to) all the work and effort needed to make multiple WALs ready 
> for production usage and give users a clear picture of it.
> Besides the development work, I'd also like to share in this JIRA some 
> scenarios and testing/online data about our usage and the performance of 
> multiple WALs, to (hopefully) help people better judge whether to enable 
> multiple WALs in their own cluster and what they might gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Status: Open  (was: Patch Available)

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v2.patch, HBASE-15055-v3.patch, HBASE-15055-v4.patch, 
> HBASE-15055-v5.patch, HBASE-15055-v6.patch, HBASE-15055-v7.patch, 
> HBASE-15055-v8.patch, HBASE-15055-v9.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14457) Umbrella: Improve Multiple WAL for production usage

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095606#comment-15095606
 ] 

Yu Li commented on HBASE-14457:
---

Oh yes, I forgot to mention that in the doc. The YCSB version is 0.3.1, and 
the hbase version in the last test (PCIe SSD) is our 0.98.12 with the multiple 
WAL function backported. We used 0.98.12 since we needed a comparison with our 
online data, JFYI.

> Umbrella: Improve Multiple WAL for production usage
> ---
>
> Key: HBASE-14457
> URL: https://issues.apache.org/jira/browse/HBASE-14457
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0
>
> Attachments: Action in Multiple WAL.pdf, Action in Multiple WAL.pdf
>
>
> HBASE-5699 proposed the idea of running with multiple WALs in a regionserver 
> and did great initial work there, but when trying to use it in our production 
> cluster, we still found several issues to resolve, like tracking multiple WAL 
> paths in replication (HBASE-6617), fixing UTs with the multiwal provider 
> (HBASE-14411), introducing a namespace-based strategy for 
> RegionGroupingProvider (HBASE-14456), etc. This is an umbrella including (but 
> not limited to) all the work and effort needed to make multiple WALs ready 
> for production usage and give users a clear picture of it.
> Besides the development work, I'd also like to share in this JIRA some 
> scenarios and testing/online data about our usage and the performance of 
> multiple WALs, to (hopefully) help people better judge whether to enable 
> multiple WALs in their own cluster and what they might gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14877) maven archetype: client application

2016-01-12 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14877:
--
   Labels: archetype beginner maven  (was: beginner)
 Release Note: 
This patch introduces a new infrastructure for creation and maintenance of 
Maven archetypes in the context of the hbase project, and it also introduces 
the first archetype, which end-users may utilize to generate a simple 
hbase-client dependent project.

NOTE that this patch should introduce two new WARNINGs ("Using platform 
encoding ... to copy filtered resources") into the hbase install process. These 
warnings are hard-wired into the maven-archetype-plugin:create-from-project 
goal. See hbase/hbase-archetypes/README.txt, footnote [7] for details.

After applying the patch, see hbase/hbase-archetypes/README.txt for details 
regarding the new archetype infrastructure introduced by this patch. (The 
README text is also conveniently positioned at the top of the patch itself.) 

Here is the opening paragraph of the README.txt file: 
= 
The hbase-archetypes subproject of hbase provides an infrastructure for 
creation and maintenance of Maven archetypes pertinent to HBase. Upon 
deployment to the archetype catalog of the central Maven repository, these 
archetypes may be used by end-user developers to autogenerate completely 
configured Maven projects (including fully-functioning sample code) through 
invocation of the archetype:generate goal of the maven-archetype-plugin. 
 
The README.txt file also contains several paragraphs under the heading "Notes 
for contributors to the HBase project", which explain the layout of 
'hbase-archetypes' and how archetypes are created and installed into the local 
Maven repository, ready for deployment to the central Maven repository. It also 
outlines how new archetypes may be developed and added to the collection in the 
future.
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

Inserting Release Notes for this sub-task, and submitting HBASE-14877.patch

> maven archetype: client application
> ---
>
> Key: HBASE-14877
> URL: https://issues.apache.org/jira/browse/HBASE-14877
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner, maven, archetype
> Attachments: HBASE-14877.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14876) Provide maven archetypes

2016-01-12 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-14876:
--
Release Note: 
HBASE-14876 - new infrastructure for Maven archetypes

Patches are being submitted via the sub-tasks of this task. Please see release 
notes of the sub-tasks for details.

  was:
HBASE-14876 - new infrastructure for Maven archetypes

After applying the patch, see README.txt in the directory 'hbase-archetypes' 
(new subdirectory of 'hbase') for details. (The README text is also 
conveniently positioned at the top of the patch itself.)

Here is the opening paragraph of the README.txt file:
=
The hbase-archetypes subproject of hbase provides an infrastructure for 
creation and maintenance of Maven archetypes pertinent to HBase. Upon 
deployment to the archetype catalog of the central Maven repository, these 
archetypes may be used by end-user developers to autogenerate completely 
configured Maven projects (including fully-functioning sample code) through 
invocation of the archetype:generate goal of the maven-archetype-plugin.

The README.txt file also contains several paragraphs under the heading "Notes 
for contributors to the HBase project", which explain the layout of 
'hbase-archetypes' and how archetypes are created and installed into the local 
Maven repository, ready for deployment to the central Maven repository. It also 
outlines how new archetypes may be developed and added to the collection in the 
future.


> Provide maven archetypes
> 
>
> Key: HBASE-14876
> URL: https://issues.apache.org/jira/browse/HBASE-14876
> Project: HBase
>  Issue Type: New Feature
>  Components: build, Usability
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Daniel Vimont
>  Labels: beginner, maven
> Attachments: HBASE-14876-v2.patch, HBASE-14876.patch, 
> archetype_prototype.zip, archetype_prototype02.zip, 
> archetype_shaded_prototype01.zip
>
>
> To help onboard new users, we should provide Maven archetypes for hbase 
> client applications. Off the top of my head, we should have templates for
>  - an hbase client application with all dependencies
>  - an hbase client application using the client-shaded jar
>  - a mapreduce application with hbase as input and output (i.e., copy table)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13372) Unit tests for SplitTransaction and RegionMergeTransaction listeners

2016-01-12 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13372:
-
Attachment: HBASE-13372.4.patch

> Unit tests for SplitTransaction and RegionMergeTransaction listeners
> 
>
> Key: HBASE-13372
> URL: https://issues.apache.org/jira/browse/HBASE-13372
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Andrew Purtell
>Assignee: Gabor Liptak
>  Labels: beginner
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-13372.2.patch, HBASE-13372.3.patch, 
> HBASE-13372.4.patch
>
>
> We have new Listener interfaces in SplitTransaction and 
> RegionMergeTransaction. There are no use cases for these yet, nor unit tests. 
> We should have unit tests for these that do something just a bit nontrivial 
> so as to provide a useful example.
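
A self-contained sketch of the kind of test intended (the interface shape here 
is assumed; the real SplitTransaction/RegionMergeTransaction listener APIs may 
differ): record every phase transition the listener sees, then assert on the 
sequence.

{code}
import java.util.ArrayList;
import java.util.List;

public class PhaseRecordingListenerSketch {
  /** Hypothetical stand-in for the transaction listener interface. */
  interface TransactionListener<P> {
    void transition(P from, P to);
    void rollback(P from, P to);
  }

  // A test double that records events; a unit test would register it on the
  // transaction, drive a split/merge, and assert the expected phase sequence.
  static class RecordingListener<P> implements TransactionListener<P> {
    final List<String> events = new ArrayList<>();
    public void transition(P from, P to) { events.add("transition:" + from + "->" + to); }
    public void rollback(P from, P to)   { events.add("rollback:" + from + "->" + to); }
  }
}
{code}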



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095694#comment-15095694
 ] 

Hadoop QA commented on HBASE-15055:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s 
{color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 31s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 28s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 235m 1s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781975/HBASE-15055-v11.patch 
|
| JIRA Issue | HBASE-15055 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5e89ebc |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Commented] (HBASE-15094) Selection of WAL files eligible for incremental backup is broken

2016-01-12 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095718#comment-15095718
 ] 

Jerry He commented on HBASE-15094:
--

bq. Backup T1 (WAL1 becomes eligible for deletion)
Is this a full backup or an incremental one?

> Selection of WAL files eligible for incremental backup is broken
> 
>
> Key: HBASE-15094
> URL: https://issues.apache.org/jira/browse/HBASE-15094
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> We should add to the selection WAL files which have been copied over by 
> different backup sessions and which are newer than the most recent files 
> processed for a given table. Currently, we look for WAL files only in the 
> file system (the WALs and oldWALs directories).
> Scenario:
> Backup T1
> Backup T2
> add WAL1
> Backup T1 (WAL1 becomes eligible for deletion)
> some time later
> Backup T2 (WAL1 may have been deleted by this time and won't get into the 
> WAL file list for T2)
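
A self-contained sketch of the intended selection rule (the WalFile class and 
eligible() helper are hypothetical, not the backup code's actual classes): 
merge WALs still on the filesystem with WALs recorded by earlier backup 
sessions, keeping files newer than the last one processed for the table.

{code}
import java.util.ArrayList;
import java.util.List;

public class WalSelectionSketch {
  /** Minimal stand-in for a WAL file: its path and write time. */
  static class WalFile {
    final String path;
    final long writeTimeMs;
    WalFile(String path, long writeTimeMs) {
      this.path = path;
      this.writeTimeMs = writeTimeMs;
    }
  }

  static List<WalFile> eligible(List<WalFile> onFilesystem,
                                List<WalFile> copiedByOtherSessions,
                                long lastProcessedMs) {
    List<WalFile> out = new ArrayList<>();
    for (WalFile w : onFilesystem) {
      if (w.writeTimeMs > lastProcessedMs) {
        out.add(w);
      }
    }
    // The part currently missing: files already gone from WALs/oldWALs but
    // preserved by another session's backup copy (WAL1 in the scenario above).
    for (WalFile w : copiedByOtherSessions) {
      if (w.writeTimeMs > lastProcessedMs) {
        out.add(w);
      }
    }
    return out;
  }
}
{code}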



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15095) isReturnResult=false on fast path in branch-1.1 and branch-1.0 is not respected

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095728#comment-15095728
 ] 

Hadoop QA commented on HBASE-15095:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
1s {color} | {color:green} branch-1.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} branch-1.0 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} branch-1.0 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} branch-1.0 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} branch-1.0 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 55s 
{color} | {color:red} hbase-client in branch-1.0 has 14 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 36s 
{color} | {color:red} hbase-server in branch-1.0 has 60 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-client in branch-1.0 failed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hbase-server in branch-1.0 failed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} branch-1.0 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hbase-client in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 0s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 30s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
41s 

[jira] [Updated] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HBASE-15055:

Release Note: If the major compaction period has elapsed, we should run a 
major compaction irrespective of hbase.hstore.compaction.max.size.  (was: Do 
not skip large files when the total size of the TTL-expired store files is 
greater than the threshold. Major compaction should be triggered to delete 
TTL-expired records.

One parameter is added, "hbase.hstore.expired.size.ratio". When this parameter 
is set to 0.0, major compaction is triggered if there are any TTL-expired 
records. When it is set to 1.0, all large files are skipped regardless of the 
existence of TTL-expired records. The default is 0.5: major compaction is 
triggered if more than 50% of the store files by size have TTL-expired records.
)

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v11.patch, HBASE-15055-v2.patch, HBASE-15055-v3.patch, 
> HBASE-15055-v4.patch, HBASE-15055-v5.patch, HBASE-15055-v6.patch, 
> HBASE-15055-v7.patch, HBASE-15055-v8.patch, HBASE-15055-v9.patch, 
> HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. This causes major compaction to be skipped, so 
> TTL-expired records remain on disk and keep consuming disk space.
> To resolve this issue, I suggest skipping large files only if there are no 
> TTL-expired records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15089:
---
Assignee: (was: Yu Li)

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98, HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but in 1.1.2 this method signature has 
> been changed to throw IOException, which forces application code changes for 
> exception handling (a previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException becomes invalid). HTable#put has the 
> same problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15089:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98, HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but in 1.1.2 this method signature has 
> been changed to throw IOException, which forces application code changes for 
> exception handling (a previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException becomes invalid). HTable#put has the 
> same problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15083) Gets from Multiactions are not counted in metrics for gets.

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093501#comment-15093501
 ] 

Hudson commented on HBASE-15083:


SUCCESS: Integrated in HBase-1.2-IT #389 (See 
[https://builds.apache.org/job/HBase-1.2-IT/389/])
HBASE-15083 Gets from Multiactions are not counted in metrics for gets 
(chenheng: rev c1d916d83fad74801a77de3d0e77aa8db721b85f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Gets from Multiactions are not counted in metrics for gets.
> ---
>
> Key: HBASE-15083
> URL: https://issues.apache.org/jira/browse/HBASE-15083
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Heng Chen
> Fix For: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15083-branch-1.patch, HBASE-15083.patch, 
> HBASE-15083.patch
>
>
> RSRpcServices#get updates the get metrics. However, multi-actions do not.
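
A self-contained sketch of the pattern the fix applies (the GetMetrics 
interface and timedGet helper are hypothetical; the real 
RSRpcServices/MetricsRegionServer calls may differ): time each get the same 
way whether it arrives as a single RPC or inside a multi-action, and feed one 
shared get-latency metric.

{code}
import java.util.concurrent.Callable;

public class GetMetricsSketch {
  /** Hypothetical stand-in for the regionserver's get-latency metric. */
  interface GetMetrics {
    void updateGet(long millis);
  }

  // Wrap any get, single or multi-action, so both paths update the metric.
  static <T> T timedGet(Callable<T> get, GetMetrics metrics) throws Exception {
    long start = System.currentTimeMillis();
    try {
      return get.call();
    } finally {
      metrics.updateGet(System.currentTimeMillis() - start);
    }
  }
}
{code}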



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15087) Fix hbase-common findbugs complaints

2016-01-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093532#comment-15093532
 ] 

stack commented on HBASE-15087:
---

Thanks for the +1s, lads.

I pushed the patch. This fixes the findbugs complaints noted above in common, 
though findbugs complains they still exist. Let's see if the commit changes the 
hbase-common findbugs counts.

> Fix hbase-common findbugs complaints
> 
>
> Key: HBASE-15087
> URL: https://issues.apache.org/jira/browse/HBASE-15087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: stack
>Assignee: Stack
> Fix For: 2.0.0
>
> Attachments: 15087.patch, 15087v2.patch, 15087v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14159) Resolve warning introduced by HBase-Spark module

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093534#comment-15093534
 ] 

Hadoop QA commented on HBASE-14159:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hbase-spark in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hbase-spark in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781629/HBASE-14159-master-v1.patch
 |
| JIRA Issue | HBASE-14159 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8ee9158 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/76/testReport/ |
| modules | C: hbase-spark U: hbase-spark |
| Max memory used | 191MB |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/76/console |


This message was automatically generated.



> Resolve warning introduced by HBase-Spark module
> 
>
> Key: HBASE-14159
> URL: 

[jira] [Updated] (HBASE-15087) Fix hbase-common findbugs complaints

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15087:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

> Fix hbase-common findbugs complaints
> 
>
> Key: HBASE-15087
> URL: https://issues.apache.org/jira/browse/HBASE-15087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: stack
>Assignee: Stack
> Fix For: 2.0.0
>
> Attachments: 15087.patch, 15087v2.patch, 15087v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on HTable#flushCommits

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093553#comment-15093553
 ] 

Yu Li commented on HBASE-15089:
---

As a compatibility fix, it's by design that no new UT is added for this JIRA. 
Regarding the findbugs warnings, none of them are introduced by this patch.

> Compatibility issue on HTable#flushCommits
> --
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid).
> In this JIRA we propose to recover the compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093561#comment-15093561
 ] 

ramkrishna.s.vasudevan commented on HBASE-15085:


+1, will commit this. Pls attach the corresponding branch-1 patches as well, 
with the updated checkstyle fix. If you can't, let me know and I can update the 
patch while committing to branch-1.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> StoreFile.Writer halfWriter = null;
> try {
>

[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15085:
---
   Resolution: Fixed
Fix Version/s: 1.0.4
   0.98.17
   1.1.3
   1.2.1
   1.3.0
   1.2.0
   2.0.0
 Release Note: Pushed to 0.98, 1.0 and master branches. Thanks for the 
patch [~victorunique].
   Status: Resolved  (was: Patch Available)

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws 

[jira] [Updated] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15089:
--
Summary: Compatibility issue on flushCommits and put methods in HTable  
(was: Compatibility issue on HTable#flushCommits)

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid).
> In this JIRA we propose to recover the compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093644#comment-15093644
 ] 

ramkrishna.s.vasudevan commented on HBASE-15085:


Any FileInfo entry that is not explicitly written by the halfWriter but is 
present in the bulk-loaded source file will have this issue. Other entries may 
get overwritten during the halfWriter.close() call. So in that case Bloom will 
be a problem.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, 

[jira] [Updated] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15089:
--
Attachment: HBASE-15089.v2.patch

More fixes, on the put methods.

Notice that the flushCommits and put method signatures in HTableInterface are 
all declared to throw IOException, in both 0.98 and 1.x, but the implementing 
methods in HTable narrow this to InterruptedIOException and 
RetriesExhaustedWithDetailsException in the 0.98 code. The fix in this patch 
follows the 0.98 way.
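
An editor's sketch (hypothetical type names) of why this restores source 
compatibility: Java lets an overriding method narrow the checked exceptions it 
declares, so the interface can keep "throws IOException" while the concrete 
class declares the two specific subtypes.

{code}
interface TableSketch {                       // stands in for HTableInterface
  void flushCommits() throws IOException;
}

class ConcreteTableSketch implements TableSketch {  // stands in for HTable
  @Override
  public void flushCommits()
      throws InterruptedIOException, RetriesExhaustedWithDetailsException {
    // Narrowing the throws clause in an override is legal because both
    // exceptions are IOException subtypes. Callers holding the concrete
    // type can again catch exactly these two, as 0.98 application code did.
  }
}
{code}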

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15083) Gets from Multiactions are not counted in metrics for gets.

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093663#comment-15093663
 ] 

Hudson commented on HBASE-15083:


SUCCESS: Integrated in HBase-1.2 #500 (See 
[https://builds.apache.org/job/HBase-1.2/500/])
HBASE-15083 Gets from Multiactions are not counted in metrics for gets 
(chenheng: rev c1d916d83fad74801a77de3d0e77aa8db721b85f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Gets from Multiactions are not counted in metrics for gets.
> ---
>
> Key: HBASE-15083
> URL: https://issues.apache.org/jira/browse/HBASE-15083
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Heng Chen
> Fix For: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15083-branch-1.patch, HBASE-15083.patch, 
> HBASE-15083.patch
>
>
> RSRpcServices#get updates the get metrics. However Multiactions do not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093674#comment-15093674
 ] 

Anoop Sam John commented on HBASE-15085:


Ya, makes sense. Just a question: we may have to handle possible issues with 
the other meta info as well (Bloom being the main one). And, seeing the later 
comments, I am wondering why we need to copy any of it. All the meta should be 
written by the writer which is doing the split now.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093528#comment-15093528
 ] 

Hadoop QA commented on HBASE-15085:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s 
{color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 13s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server 
(total was 18, now 19). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 26s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 46s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 6s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 28s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 49s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 13s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 37s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 59s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 23s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093536#comment-15093536
 ] 

ramkrishna.s.vasudevan commented on HBASE-15085:


bq. import org.apache.hadoop.hbase.io.hfile.*;
There is this checkstyle warning. I think the test case failures are unrelated. 
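
For readers unfamiliar with the warning: checkstyle's AvoidStarImport rule 
flags the wildcard, and the usual fix is to list the needed classes 
explicitly, e.g. (class list illustrative):

{code}
// instead of: import org.apache.hadoop.hbase.io.hfile.*;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;
{code}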

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> StoreFile.Writer halfWriter = null;
> try {
>   halfReader = new HalfStoreFileReader(fs, inFile, cacheConf, 

[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2016-01-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093546#comment-15093546
 ] 

stack commented on HBASE-14962:
---

Will take a look at it in the morning, Sir.

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: stack
>Priority: Blocker
> Fix For: 1.2.0
>
> Attachments: 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss-output.txt, 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss-output.txt
>
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093606#comment-15093606
 ] 

ramkrishna.s.vasudevan commented on HBASE-15085:


>>Like the DBE, the bloom type also can get mismatch. So we should not copy the 
>>src file's bloom type also into the written file's FileInfo?
Maybe yes. We can check whether the other info also needs some change, in 
another JIRA?

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, 

[jira] [Updated] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15089:
--
Description: 
Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
has been changed to throw IOException, which will force application code 
changes for exception handling (previous catches on InterruptedIOException and 
RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
problem.

After a check, the compatibility issue was introduced by HBASE-12728. Will 
recover the compatibility in this JIRA.

  was:
Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
has been changed to throw IOException, which will force application code 
changes for exception handling (previous catches on InterruptedIOException and 
RetriesExhaustedWithDetailsException become invalid).

In this JIRA we propose to recover the compatibility.


> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093671#comment-15093671
 ] 

Anoop Sam John commented on HBASE-15089:


[~ndimiduk]  The change went in without any deprecation path, as we don't have 
compatibility guidelines for older versions like 0.98? Was it intended?

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15085:
---
Fix Version/s: (was: 1.2.1)

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093613#comment-15093613
 ] 

Anoop Sam John commented on HBASE-15085:


I think we should handle the inconsistency with the bloom type also. And is 
there any other meta info we are taking from the src file unnecessarily?

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093625#comment-15093625
 ] 

Anoop Sam John commented on HBASE-15085:


Ya, fine for another Jira.
Looking at the FileInfo items we have, I am wondering why we should copy any of 
them from the split src file at all. We have these items (a rough whitelist 
sketch follows the list):
MOB_CELLS_COUNT -> When the file is split into 2, this count from the src gets 
split across the 2 new files
MAJOR_COMPACTION_KEY -> In the src table it might be a major compacted file, but 
does that make any sense in the destination table where the file is bulk loaded?
MAX_SEQ_ID_KEY -> Again, the src file contains the max over all its cells, and 
those cells are now being split into 2 files
MAX_MEMSTORE_TS_KEY -> Same as above

BLOOM_FILTER_TYPE_KEY -> Already an issue anyway
DELETE_FAMILY_COUNT -> Again, this count from the src is now split across 2 files
LAST_BLOOM_KEY -> This also changes
KEY_VALUE_VERSION -> Ideally this version does not change. Still, in the file 
being written now, the version used is the one from the current table's cluster, 
not the one from the src file
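
For illustration, a rough sketch of that whitelist idea (the helper name is made 
up, not from any patch; the constants are the existing StoreFile and 
HFileDataBlockEncoder FileInfo keys):

{code}
// Sketch only: copy a FileInfo entry into the new half file only when it
// stays meaningful after the split. Encoding, bloom and seqid metadata are
// regenerated by the new writer, so they are filtered out here.
private static boolean shouldCopyHFileMetaKey(byte[] key) {
  return !Bytes.equals(key, HFileDataBlockEncoder.DATA_BLOCK_ENCODING)
      && !Bytes.equals(key, StoreFile.BLOOM_FILTER_TYPE_KEY)
      && !Bytes.equals(key, StoreFile.MAX_SEQ_ID_KEY)
      && !Bytes.equals(key, StoreFile.MAJOR_COMPACTION_KEY);
}
{code}

The copy loop would then call halfWriter.appendFileInfo(key, value) only for keys 
passing this check.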


> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> 

[jira] [Commented] (HBASE-10742) Data temperature aware compaction policy

2016-01-12 Thread Orange (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093497#comment-15093497
 ] 

Orange commented on HBASE-10742:


Can HBase determine data temperature now?

> Data temperature aware compaction policy
> 
>
> Key: HBASE-10742
> URL: https://issues.apache.org/jira/browse/HBASE-10742
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Andrew Purtell
>
> Reading "Identifying Hot and Cold Data in Main-Memory Databases" (Levandoski, 
> Larson, and Stoica), it occurred to me that some of the motivation applies to 
> HBase and some of the results can inform a data temperature aware compaction 
> policy implementation.
> We also wish to optimize retention of cells in the working set in memory, in 
> blockcache. 
> We can also consider further and related performance optimizations in HBase 
> that awareness of hot and cold data can enable, even for cases where the 
> working set does not fit in memory. If we could partition HFiles into hot and 
> cold (cold+lukewarm) and move cells between them at compaction time, then we 
> could:
> - Migrate hot HFiles onto alternate storage tiers with improved read latency 
> and throughput characteristics. This has been discussed before on HBASE-6572. 
> Or, migrate cold HFiles to an archival tier.
> - Preload hot HFiles into blockcache to increase cache hit rates, especially 
> when regions are first brought online. And/or add another LRU priority to 
> increase the likelihood of retention of blocks in hot HFiles. This could be 
> sufficiently different from ARC to avoid issues there. 
> - Reduce the compaction priorities of cold HFiles, with proportional 
> reduction in priority IO and write amplification, since cold files would less 
> frequently participate in reads.
> Levandoski et al. describe determining data temperature with low overhead 
> using an out of band estimation process running in the background over an 
> access log. We could consider logging reads along with mutations and 
> similarly process the result in the background. The WAL could be overloaded 
> to carry access log records, or we could follow the approach described in the 
> paper and maintain an in memory access log only. 
> {quote}
> We chose the offline approach for several reasons. First, as mentioned 
> earlier, the overhead of even the simplest caching scheme is very high. 
> Second, the offline approach is generic and requires minimum changes to the 
> database engine. Third, logging imposes very little overhead during normal 
> operation. Finally, it allows flexibility in when, where, and how to analyze 
> the log and estimate access frequencies. For instance, the analysis can be 
> done on a separate machine, thus reducing overhead on the system running the 
> transactional workloads.
> {quote}
> Importantly, they only log a sample of all accesses.
> {quote}
> To implement sampling, we have each worker thread flip a biased coin before 
> starting a new query (where bias correlates with sample rate). The thread 
> records its accesses in log buffers (or not) based on the outcome of the coin 
> flip. In Section V, we report experimental results showing that sampling 10% 
> of the accesses reduces the accuracy by only 2.5%,
> {quote}
> Likewise we would only record a subset of all accesses to limit overheads.
> The offline process estimates access frequencies over discrete time slices 
> using exponential smoothing. (Markers representing time slice boundaries are 
> interleaved with access records in the log.) Forward and backward 
> classification algorithms are presented. The forward algorithm requires a 
> full scan over the log and storage proportional to the number of unique cell 
> addresses, while the backward algorithm requires reading at least the tail of 
> the log in reverse order.
> If we overload the WAL to carry the access log, offline data temperature 
> estimation can piggyback as a WAL listener. The forward algorithm would then 
> be a natural choice. The HBase master is fairly idle most of the time and 
> less memory hungry than a regionserver, at least in today's architecture. We 
> could probably get away with considering only row+family as a unique 
> coordinate to minimize space overhead.  Or if instead we maintain the access 
> logs in memory at the RegionServer, then there is a parallel formulation and 
> we could benefit from the reverse algorithm's ability to terminate early once 
> confidence bounds are reached and backwards scanning IO wouldn't be a 
> concern. This handwaves over a lot of details.
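
To make the forward pass described above concrete, here is a hedged sketch (all 
names are illustrative and independent of any HBase API) of sampled access 
logging plus exponentially smoothed per-key frequency estimation over discrete 
time slices:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Illustrative sketch of the paper's forward algorithm: sample accesses with
// a biased coin, count per key within a time slice, then fold each slice into
// an exponentially smoothed frequency estimate.
public class TemperatureEstimator {
  private static final double ALPHA = 0.05;       // smoothing factor
  private static final double SAMPLE_RATE = 0.10; // log roughly 10% of accesses

  private final Map<String, Double> estimate = new HashMap<>();     // smoothed freq
  private final Map<String, Integer> sliceCounts = new HashMap<>(); // current slice
  private final Random coin = new Random();

  // Read path: a biased coin flip decides whether this access is logged.
  public void maybeRecord(String rowKey) {
    if (coin.nextDouble() < SAMPLE_RATE) {
      sliceCounts.merge(rowKey, 1, Integer::sum);
    }
  }

  // Called at each time-slice boundary marker interleaved in the access log.
  public void endSlice() {
    for (Map.Entry<String, Double> e : estimate.entrySet()) {
      int observed = sliceCounts.getOrDefault(e.getKey(), 0);
      e.setValue(ALPHA * observed + (1 - ALPHA) * e.getValue());
    }
    for (Map.Entry<String, Integer> e : sliceCounts.entrySet()) {
      estimate.putIfAbsent(e.getKey(), ALPHA * e.getValue());
    }
    sliceCounts.clear();
  }

  // Classification: a key is "hot" once its smoothed frequency clears a threshold.
  public boolean isHot(String rowKey, double threshold) {
    return estimate.getOrDefault(rowKey, 0.0) >= threshold;
  }
}
{code}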



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on HTable#flushCommits

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093541#comment-15093541
 ] 

Hadoop QA commented on HBASE-15089:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s 
{color} | {color:red} hbase-client in master has 13 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781764/HBASE-15089.patch |
| JIRA Issue | HBASE-15089 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Victor Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Xu updated HBASE-15085:
--
Attachment: HBASE-15085-v5.patch

Fix checkstyle error.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> StoreFile.Writer halfWriter = null;
> try {
>   halfReader = new HalfStoreFileReader(fs, inFile, cacheConf, reference, 
> conf);
> Map<byte[], byte[]> fileInfo = halfReader.loadFileInfo();
>   int blocksize = 

[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15085:
---
Release Note:   (was: Pushed to 0.98, 1.0 and master  branches. Thanks for 
the patch [~victorunique].)

Pushed to 0.98, 1.0 and master branches. Thanks for the patch [~victorunique].

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> 

[jira] [Commented] (HBASE-15089) Compatibility issue on HTable#flushCommits

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093586#comment-15093586
 ] 

Yu Li commented on HBASE-15089:
---

Checking the commit history, it should be HBASE-12728, commit 8556e25 in 
branch-1. By checking the changes in that commit, I found the exception thrown 
for put also got changed (we only found the issue on flushCommits, not on put); 
let me try to fix that part as well.

> Compatibility issue on HTable#flushCommits
> --
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catches on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid).
> In this JIRA we propose to recover the compatibility.
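
As a hedged illustration of the break (the caller code here is ours, not from 
any patch): against the 0.98 signatures the two catch clauses below covered 
everything flushCommits() and put() could declare, so this compiled cleanly; 
once the signature widens to throws IOException, javac rejects it.

{code}
try {
  htable.put(put);
  htable.flushCommits();
} catch (InterruptedIOException e) {
  // handle the interruption
} catch (RetriesExhaustedWithDetailsException e) {
  // inspect the per-row failures
}
// Compiled against 1.1.2:
//   error: unreported exception IOException; must be caught or declared to be thrown
{code}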



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093598#comment-15093598
 ] 

Hadoop QA commented on HBASE-15085:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
3s {color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} branch-1.2 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1.2 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-1.2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hbase-server in branch-1.2 has 83 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} branch-1.2 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1.2 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 1 new checkstyle issues in hbase-server 
(total was 8, now 9). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 36s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 59s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s 
{color} | {color:red} Patch generated 24 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 188m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0 Failed junit tests | hadoop.hbase.master.TestTableLockManager |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hbase.mapreduce.TestTableInputFormatScan1 |
|   | hadoop.hbase.mapreduce.TestTableInputFormatScan2 |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781742/HBASE-15085-branch-1.2-v1.patch
 |
| JIRA Issue | HBASE-15085 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu 

[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-15085:
---
Priority: Critical  (was: Major)

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Victor Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093634#comment-15093634
 ] 

Victor Xu commented on HBASE-15085:
---

Thanks for your question. 
If there is a region crossover, the bulk load process generates two split 
hfiles directly, with the DBE defined by the table.
If there is no region crossover, the bulk loaded hfile keeps its own DBE even 
though the table has none. Sooner or later, major compaction will make them 
match. 
Thus, it is eventually consistent.
Even if there is no major compaction (for whatever reason), the hfile can still 
be scanned and read without any exception caused by this kind of DBE mismatch. 

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that 

[jira] [Updated] (HBASE-15090) Remove the throws clause from OnlineRegions.getOnlineRegions(TableName)

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15090:
---
Description: The implementation of OnlineRegions.getOnlineRegions(TableName 
tableName) does not throw any error. So there is no need for the interface to 
throw Exception.   (was: The implementation of 
OnlineRegions.getOnlineRegions(TableName tableName) does not throw any error. 
So there is not need for the interface to throw error. )

> Remove the throws clause from OnlineRegions.getOnlineRegions(TableName)
> ---
>
> Key: HBASE-15090
> URL: https://issues.apache.org/jira/browse/HBASE-15090
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
>
> The implementation of OnlineRegions.getOnlineRegions(TableName tableName) 
> does not throw any error. So there is no need for the interface to throw 
> Exception. 
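
For illustration, a simplified sketch of the change (not the exact interface 
source). Note the knock-on effects: callers no longer need to catch anything, 
and any implementation still declaring the checked exception has to drop it, 
since an override may not declare checked exceptions that the overridden method 
does not.

{code}
interface OnlineRegions {
  // Before: List<Region> getOnlineRegions(TableName tableName) throws IOException;
  // After: no throws clause, since no implementation actually throws it.
  List<Region> getOnlineRegions(TableName tableName);
}
{code}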



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15090) Remove the throws clause from OnlineRegions.getOnlineRegions(TableName)

2016-01-12 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-15090:
--

 Summary: Remove the throws clause from 
OnlineRegions.getOnlineRegions(TableName)
 Key: HBASE-15090
 URL: https://issues.apache.org/jira/browse/HBASE-15090
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor


The implementation of OnlineRegions.getOnlineRegions(TableName tableName) does 
not throw any error. So there is no need for the interface to throw an error. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15082:
--
Attachment: 15082v8.patch

Addresses [~anoop.hbase] and [~ramkrishna] feedback. The mvccNum not being 
assigned was a good catch in particular.

Unifies how we set sequenceid on a Cell post WAL write.

Adds in the missing READ_UNCOMMITTED when doing the Get as part of a 
read-modify op like increment/append.

Adds a wait on sequenceid assignment, needed when no sync is going on (a rare 
event, but it loops/sleeps because there are no more latches); a rough sketch 
follows below.

Moves WAL messing into WALUtil and does some cleanup.

Cleans up some checkstyle issues.

Let's see how this does.
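
Roughly, the no-sync wait looks like this (a sketch with made-up names, not the 
patch itself):

{code}
// Sketch only: with the latches gone, the rare caller that needs its
// sequenceid before any sync has run simply polls until it is assigned.
// (InterruptedException handling elided; "entry" is a hypothetical name.)
long seqId;
while ((seqId = entry.getSequenceId()) == -1) { // -1 == not yet assigned
  Thread.sleep(1); // rare path, so a short sleep is acceptable
}
{code}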

> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 15082.patch, 15082v2.patch, 15082v2.txt, 15082v3.txt, 
> 15082v4.patch, 15082v5.patch, 15082v6.patch, 15082v7.patch, 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Victor Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Xu updated HBASE-15085:
--
Attachment: HBASE-15085-branch-1.2-v2.patch
HBASE-15085-branch-1.1-v2.patch
HBASE-15085-branch-1.0-v2.patch
HBASE-15085-0.98-v5.patch

Fix checkstyle errors for branch-0.98, 1.0, 1.1, 1.2.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093630#comment-15093630
 ] 

Anoop Sam John commented on HBASE-15085:


Suggest raise another Jira and check all the file info items and decide we need 
to copy any.

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used the 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in the file info was not consistent with the one in the 
> data blocks, which means there must be something wrong with the bulkload process.
> After debugging each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig 

[jira] [Commented] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093545#comment-15093545
 ] 

Hadoop QA commented on HBASE-15082:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-15082 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/latest/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781769/15082v8.patch |
| JIRA Issue | HBASE-15082 |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/79/console |


This message was automatically generated.



> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 15082.patch, 15082v2.patch, 15082v2.txt, 15082v3.txt, 
> 15082v4.patch, 15082v5.patch, 15082v6.patch, 15082v7.patch, 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093544#comment-15093544
 ] 

Anoop Sam John commented on HBASE-15082:


Can you post it in RB now?

> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 15082.patch, 15082v2.patch, 15082v2.txt, 15082v3.txt, 
> 15082v4.patch, 15082v5.patch, 15082v6.patch, 15082v7.patch, 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on HTable#flushCommits

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093555#comment-15093555
 ] 

Anoop Sam John commented on HBASE-15089:


Which JIRA introduced this compatibility break, do you know?

> Compatibility issue on HTable#flushCommits
> --
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid).
> In this JIRA we propose to recover the compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093588#comment-15093588
 ] 

Anoop Sam John commented on HBASE-15085:


Like the DBE, the bloom type can also get mismatched. So should we avoid 
copying the src file's bloom type into the written file's FileInfo as well?

> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new HFile.
>*/
>   private static void copyHFileHalf(
>   Configuration conf, Path inFile, Path outFile, Reference reference,
>   HColumnDescriptor familyDescriptor)
>   throws IOException {
> FileSystem fs = inFile.getFileSystem(conf);
> CacheConfig cacheConf = new CacheConfig(conf);
> HalfStoreFileReader halfReader = null;
> StoreFile.Writer 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093596#comment-15093596
 ] 

Anoop Sam John commented on HBASE-15085:


Also one more question.
The src HFile had one DBE type while the destination table has no DBE. Do you 
want the resulting HFiles to be DBE'd? During bulk load, if there was no need to 
split the loaded file (no region crossover), the file added to the table would 
have been DBE'd; if the split happens, it is not. Isn't that inconsistent 
behavior?
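
A minimal sketch of one possible direction (an editor's illustration, not the 
committed patch; the helper name is hypothetical): derive the half-file 
writer's settings from the destination column family's schema, so the FileInfo 
and the actual data blocks can never disagree, for the DBE and the bloom type 
alike.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
import org.apache.hadoop.hbase.regionserver.StoreFile;

final class HalfFileWriterSketch {
  // Build the half-file writer from the destination family's schema, not the
  // source file's FileInfo, so written blocks and FileInfo always agree.
  static StoreFile.Writer newHalfWriter(Configuration conf, FileSystem fs,
      Path outFile, HColumnDescriptor family) throws IOException {
    HFileContext context = new HFileContextBuilder()
        .withCompression(family.getCompressionType())
        .withDataBlockEncoding(family.getDataBlockEncoding())
        .withBlockSize(family.getBlocksize())
        .build();
    return new StoreFile.WriterBuilder(conf, new CacheConfig(conf), fs)
        .withFilePath(outFile)
        .withBloomType(family.getBloomFilterType()) // same reasoning for blooms
        .withFileContext(context)
        .build();
  }
}
{code}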



> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had a bug when loading an hfile into a split region. 
> {code}
> /**
>* Copy half of an HFile into a new 

[jira] [Commented] (HBASE-15075) Allow region split request to carry identification information

2016-01-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093664#comment-15093664
 ] 

Ted Yu commented on HBASE-15075:


From 
https://builds.apache.org/job/PreCommit-HBASE-Build/69/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
{code}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SplitNormalizationPlan.java:[95,12]
 no suitable method found for splitRegion(byte[],byte[],java.util.UUID)
method org.apache.hadoop.hbase.client.Admin.splitRegion(byte[]) is not 
applicable
  (actual and formal argument lists differ in length)
method org.apache.hadoop.hbase.client.Admin.splitRegion(byte[],byte[]) is 
not applicable
  (actual and formal argument lists differ in length)
{code}
Looks like the following addition to Admin.java was not effective:
{code}
+  void splitRegion(final byte[] regionName, final byte[] splitPoint, final 
UUID id)
{code}
I built / ran the test suite locally using 1.7.0_60 but didn't reproduce the above.
[~busbey]:
Do you have any idea?

> Allow region split request to carry identification information
> --
>
> Key: HBASE-15075
> URL: https://issues.apache.org/jira/browse/HBASE-15075
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15075-v0.txt, 15075-v1.txt, 15075-v2.txt, 
> HBASE-15075.v2.patch, HBASE-15075.v3.patch
>
>
> During the process of improving region normalization feature, I found that if 
> region split request triggered by the execution of SplitNormalizationPlan 
> fails, there is no way of knowing whether the failed split originated from 
> region normalization.
> The association of particular split request with outcome of split would give 
> RegionNormalizer information so that it can make better normalization 
> decisions in the subsequent invocations.
> One approach is to embed metadata, such as a UUID, in SplitRequest which gets 
> passed through RegionStateTransitionContext when 
> RegionServerServices#reportRegionStateTransition() is called.
> This way, RegionStateListener can be notified with the metadata (id of the 
> requester).
> See discussion on dev mailing list
> http://search-hadoop.com/m/YGbbCXdkivihp2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15052) Use EnvironmentEdgeManager in ReplicationSource

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093807#comment-15093807
 ] 

Hudson commented on HBASE-15052:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1159 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1159/])
HBASE-15052 Use EnvironmentEdgeManager in ReplicationSource (matteo.bertozzi: 
rev 43fb23527e8c53f096c2d876d2a919c7d3bc33ea)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


> Use EnvironmentEdgeManager in ReplicationSource 
> 
>
> Key: HBASE-15052
> URL: https://issues.apache.org/jira/browse/HBASE-15052
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.0.3, 0.98.16.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15052-v0.patch, HBASE-15052-v00.patch
>
>
> ReplicationSource is passing System.currentTimeMillis() to 
> MetricsSource.setAgeOfLastShippedOp() which is subtracting that from 
> EnvironmentEdgeManager.currentTime().
> {code}
> // if there was nothing to ship and it's not an error
> // set "ageOfLastShippedOp" to <now> to indicate that we're current
> metrics.setAgeOfLastShippedOp(System.currentTimeMillis(), walGroupId);
> public void setAgeOfLastShippedOp(long timestamp, String walGroup) {
> long age = EnvironmentEdgeManager.currentTime() - timestamp;
> {code}
>  we should just use EnvironmentEdgeManager.currentTime() in ReplicationSource
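
A tiny runnable sketch of the principle (editor's illustration; the class name 
is hypothetical): both the timestamp and the age computation should go through 
the same injectable clock, so tests that swap in a custom EnvironmentEdge see 
consistent ages.
{code}
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

public class AgeOfLastShippedSketch {
  public static void main(String[] args) {
    // Take the timestamp from the injectable clock, never System.currentTimeMillis().
    long shippedAt = EnvironmentEdgeManager.currentTime();
    // ... shipping work happens here ...
    long age = EnvironmentEdgeManager.currentTime() - shippedAt;
    System.out.println("ageOfLastShippedOp=" + age + "ms");
  }
}
{code}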



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15083) Gets from Multiactions are not counted in metrics for gets.

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093715#comment-15093715
 ] 

Hudson commented on HBASE-15083:


SUCCESS: Integrated in HBase-1.3 #491 (See 
[https://builds.apache.org/job/HBase-1.3/491/])
HBASE-15083 Gets from Multiactions are not counted in metrics for gets 
(chenheng: rev 417e3c4a73a8efcc7a212b1cf77bee7a691cbe24)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Gets from Multiactions are not counted in metrics for gets.
> ---
>
> Key: HBASE-15083
> URL: https://issues.apache.org/jira/browse/HBASE-15083
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Heng Chen
> Fix For: 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15083-branch-1.patch, HBASE-15083.patch, 
> HBASE-15083.patch
>
>
> RSRpcServices#get updates the get metrics. However Multiactions do not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13590) TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey

2016-01-12 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093783#comment-15093783
 ] 

Yu Li commented on HBASE-13590:
---

Checking the UT failures and findbugs/asflicense warnings, I don't think any is 
related to the patch here.

> TestEnableTableHandler.testEnableTableWithNoRegionServers is flakey
> ---
>
> Key: HBASE-13590
> URL: https://issues.apache.org/jira/browse/HBASE-13590
> Project: HBase
>  Issue Type: Test
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.4
>
> Attachments: HBASE-13590.branch-1.patch
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 429, 431, 439.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093794#comment-15093794
 ] 

Hudson commented on HBASE-15085:


FAILURE: Integrated in HBase-1.2-IT #390 (See 
[https://builds.apache.org/job/HBase-1.2-IT/390/])
HBASE-15085 IllegalStateException was thrown when scanning on (ramkrishna: rev 
4b9f8f44fef51e70381dcd022a870c8939422f55)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java


> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 
> had 

[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093808#comment-15093808
 ] 

Hadoop QA commented on HBASE-15085:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 35s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 11s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 43s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 27s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 58s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 36s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 21s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 52s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 24s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 29s 
{color} | {color:green} 

[jira] [Commented] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093911#comment-15093911
 ] 

Anoop Sam John commented on HBASE-15055:


The new method isPeriodicMC() has code copied from isMajorCompaction(); the 
latter does some more work indeed. No problem in adding the new method, but can 
the old one make use of the new one and avoid the duplication?
{code}
boolean isTryingMajor = (forceMajor && isAllFiles && isUserCompaction)
    || (((forceMajor && isAllFiles) || (isAllFiles && isPeriodicMC)
    || isMajorCompaction(candidateSelection))
{code}
The || condition may bypass the call to isMajorCompaction and avoid the further 
checks it does.
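
A minimal sketch of the suggested refactor (editor's illustration with 
hypothetical method names, not the actual compaction-policy code): the periodic 
check lives in one place, and isMajorCompaction() consults it first, 
short-circuiting before the more expensive per-file checks.
{code}
import java.util.Collection;

abstract class CompactionPolicySketch<F> {

  // Hypothetical extraction of the periodic-major-compaction test.
  abstract boolean isPeriodicMajorCompaction(Collection<F> filesToCompact);

  // Hypothetical remaining per-file checks from isMajorCompaction().
  abstract boolean perFileMajorCompactionChecks(Collection<F> filesToCompact);

  boolean isMajorCompaction(Collection<F> filesToCompact) {
    // Short-circuit: if the periodic condition already holds, skip the rest.
    return isPeriodicMajorCompaction(filesToCompact)
        || perFileMajorCompactionChecks(filesToCompact);
  }
}
{code}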

> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v2.patch, HBASE-15055-v3.patch, HBASE-15055-v4.patch, 
> HBASE-15055-v5.patch, HBASE-15055-v6.patch, HBASE-15055-v7.patch, 
> HBASE-15055-v8.patch, HBASE-15055-v9.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size in 
> candidate selection. It causes skipping of major compaction. So the TTL 
> expired records are still remained in the disks and keep consuming disks.
> To resolve this issue, I suggest that to skip large files only if there is no 
> TTL expired record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093888#comment-15093888
 ] 

Hudson commented on HBASE-15085:


SUCCESS: Integrated in HBase-1.2 #501 (See 
[https://builds.apache.org/job/HBase-1.2/501/])
HBASE-15085 IllegalStateException was thrown when scanning on bulkloaded 
(ramkrishna: rev 4b9f8f44fef51e70381dcd022a870c8939422f55)
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 

[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093896#comment-15093896
 ] 

Anoop Sam John commented on HBASE-14970:


Other than these 2 comments remaining patch looks ok.

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch, 
> HBASE-14970_branch-1_4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14161) Add hbase-spark integration tests to IT jenkins job

2016-01-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093910#comment-15093910
 ] 

Sean Busbey commented on HBASE-14161:
-

/cc [~ted.m]

> Add hbase-spark integration tests to IT jenkins job
> ---
>
> Key: HBASE-14161
> URL: https://issues.apache.org/jira/browse/HBASE-14161
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0
>
>
> expand the set of ITs we run to include the new hbase-spark tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14159) Resolve warning introduced by HBase-Spark module

2016-01-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14159:

Resolution: Fixed
  Assignee: Appy  (was: Sean Busbey)
Status: Resolved  (was: Patch Available)

pushed to master. Thanks [~appy]!

> Resolve warning introduced by HBase-Spark module
> 
>
> Key: HBASE-14159
> URL: https://issues.apache.org/jira/browse/HBASE-14159
> Project: HBase
>  Issue Type: Improvement
>  Components: build, spark
>Reporter: Ted Malaska
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14159-master-v1.patch
>
>
> Fix the following warning that is a result of something in the modules pom 
> file
> [WARNING] warning: Class org.apache.hadoop.mapred.MiniMRCluster not found - 
> continuing with a stub.
> [WARNING] one warning found



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093862#comment-15093862
 ] 

Anoop Sam John commented on HBASE-14970:


Quick comment
bq. private CompactedHFilesDischarger compactedFileDischarger;
Why do we need this ref in HRegion? I don't see it getting initialized anywhere.

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch, 
> HBASE-14970_branch-1_4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14970) Backport HBASE-13082 and its sub-jira to branch-1

2016-01-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093895#comment-15093895
 ] 

Anoop Sam John commented on HBASE-14970:


bq. int cleanerInterval =
conf.getInt("hbase.hfile.compaction.discharger.interval", 2 * 60 * 1000);
This is from HRegionServer, where the config name is hard-coded; you have also 
added it into CompactionConfiguration. Refer to it from there? How about this in 
the trunk code base?
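
A small sketch of the suggestion (editor's illustration; the constant names are 
hypothetical): keep the key and its default next to the other compaction 
settings and reference them from HRegionServer instead of re-typing the string.
{code}
import org.apache.hadoop.conf.Configuration;

final class CompactionConfigSketch {
  static final String HFILE_DISCHARGER_INTERVAL_KEY =
      "hbase.hfile.compaction.discharger.interval";
  static final int HFILE_DISCHARGER_INTERVAL_DEFAULT = 2 * 60 * 1000; // 2 min

  // Call sites (e.g. HRegionServer) read through the shared constants.
  static int dischargerInterval(Configuration conf) {
    return conf.getInt(HFILE_DISCHARGER_INTERVAL_KEY,
        HFILE_DISCHARGER_INTERVAL_DEFAULT);
  }
}
{code}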

> Backport HBASE-13082 and its sub-jira to branch-1
> -
>
> Key: HBASE-14970
> URL: https://issues.apache.org/jira/browse/HBASE-14970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-13082-branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1.patch, HBASE-14970_branch-1.patch, 
> HBASE-14970_branch-1_1.patch, HBASE-14970_branch-1_2.patch, 
> HBASE-14970_branch-1_4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15085) IllegalStateException was thrown when scanning on bulkloaded HFiles

2016-01-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093937#comment-15093937
 ] 

Hudson commented on HBASE-15085:


FAILURE: Integrated in HBase-1.3 #492 (See 
[https://builds.apache.org/job/HBase-1.3/492/])
HBASE-15085 IllegalStateException was thrown when scanning on bulkloaded 
(ramkrishna: rev 89eba459f290ba3f30541db4c79c7bfd291fe78b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/HFileTestUtil.java


> IllegalStateException was thrown when scanning on bulkloaded HFiles
> ---
>
> Key: HBASE-15085
> URL: https://issues.apache.org/jira/browse/HBASE-15085
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.12, 1.1.2
> Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Critical
>  Labels: hfile
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-0.98-v5.patch, 
> HBASE-15085-branch-1.0-v1.patch, HBASE-15085-branch-1.0-v2.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.1-v2.patch, 
> HBASE-15085-branch-1.2-v1.patch, HBASE-15085-branch-1.2-v2.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v5.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR 
> com.taobao.kart.coprocessor.server.KartCoprocessor: 
> icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. 
> batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data 
> blocks
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
> I used 'hbase hfile' command to analyse the meta and block info of the hfile, 
> finding that even though the DATA_BLOCK_ENCODING was 'DIFF' in FileInfo, the 
> actual data blocks were written without any encoding algorithm (BlockType was 
> 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
> BLOOM_FILTER_TYPE = ROW
> BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_12_0
> BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
> DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA 
> onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 
> prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 
> bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 
> getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true 
> buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], 
> array().length=65657, arrayOffset()=0 ] 
> dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:18\x01dprod 
> fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 
> bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true 
> includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ 
> cipher=NONE keyHash=NONE ] ] ]
> {code}
> The data block encoding in file info was not consistent with the one in data 
> block, which means there must be something wrong with the bulkload process.
> After debugging on each step of bulkload, I found that LoadIncrementalHFiles 

[jira] [Commented] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094468#comment-15094468
 ] 

stack commented on HBASE-15082:
---

Removed 1.2.0 as target. [~eclark] found a flaw in review. I have updates 
completing mvcc only, rather than completing AND waiting on the mvcc transaction 
to catch up. Means no guarantee that you can read your own writes. Need to go 
back to perf testing before making progress here. Means for sure it doesn't make 
1.2.0. I opened HBASE-15091 to do the forward-port of the branch-1 patch (though 
HBASE-12751 changes the landscape).



> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15082.patch, 15082v10.patch, 15082v2.patch, 15082v2.txt, 
> 15082v3.txt, 15082v4.patch, 15082v5.patch, 15082v6.patch, 15082v7.patch, 
> 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094521#comment-15094521
 ] 

Sean Busbey commented on HBASE-15089:
-

{quote}
We had made the explicit choice for the API clean up, and deprecated, cleaned 
up and moved some of the APIs to Private intentionally in various issues. I 
suggest we close this as "Not a Problem".
{quote}

Alternatively, would an addition to the upgrade docs that gives an example of 
moving from HTable in 0.98 to BufferedMutator in 1.0 help ease this pain, 
[~carp84]?
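
For reference, a minimal sketch of what such an upgrade-docs example might look 
like against the 1.0 client API (table and cell names are placeholders):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedMutatorExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator =
             conn.getBufferedMutator(TableName.valueOf("my_table"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      mutator.mutate(put); // buffered, like autoflush=false HTable#put in 0.98
      mutator.flush();     // explicit flush, replacing HTable#flushCommits
    }
  }
}
{code}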

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility In this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094440#comment-15094440
 ] 

Sean Busbey commented on HBASE-15089:
-

Generally, our promises are for wire and source compatibility (so that those 
who need binary can just keep the same bits). I believe you're correct in this 
case though. 0.98 -> 1.0 was a major version increment and HTable went from 
IA.Public to IA.Private; that was the break for downstream folks. It's worth 
noting that even after this patch, the replacement class Table still throws 
IOException generally.

I presume the point of that was to give us more breathing room on what *could* 
be thrown without breaking the behavior for folks client side farther down the 
line. As is, clients could keep the 0.98 client code to have the same behavior 
while talking to a 1.y server (though I'm not sure what our plans are for 0.98 
-> 2.0.) Having them continue to rely on HTable directly is dangerous, as 
there's no promise it won't change in breaking ways even in patch releases 
post-1.0.0.

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility In this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15082:
--
Fix Version/s: (was: 1.2.0)

> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15082.patch, 15082v10.patch, 15082v2.patch, 15082v2.txt, 
> 15082v3.txt, 15082v4.patch, 15082v5.patch, 15082v6.patch, 15082v7.patch, 
> 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15091) Forward-port HBASE-15031 "Fix merge of MVCC and SequenceID performance regression in branch-1.0"

2016-01-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15091:

Fix Version/s: 1.2.0

> Forward-port HBASE-15031 "Fix merge of MVCC and SequenceID performance 
> regression in branch-1.0"
> 
>
> Key: HBASE-15091
> URL: https://issues.apache.org/jira/browse/HBASE-15091
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Blocker
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15091) Forward-port HBASE-15031 "Fix merge of MVCC and SequenceID performance regression in branch-1.0"

2016-01-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15091:

Priority: Blocker  (was: Major)

> Forward-port HBASE-15031 "Fix merge of MVCC and SequenceID performance 
> regression in branch-1.0"
> 
>
> Key: HBASE-15091
> URL: https://issues.apache.org/jira/browse/HBASE-15091
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Blocker
> Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094431#comment-15094431
 ] 

Josh Elser commented on HBASE-15089:


[~ndimiduk] asked if I could take a look here because we had talked about 
compat in earlier 1.1 releases.

bq.  The change went in with out any deprecation path as we dont have 
compatibility guidelines for older versions like 98? It was intended?

From the book:

bq. There are no known issues running a rolling upgrade from HBase 0.98.x to 
HBase 1.0.0.

Which implies to me that only binary compatibility, not source 
compatibility, is guaranteed between 0.98 and 1.0. Looking at the original 
changes: (for context, I'm looking at 
https://issues.apache.org/jira/secure/attachment/12694007/HBASE-12728.06-branch-1.0.patch
 as the basis of my opinion).

{code}
-  public void put(final List<Put> puts)
-      throws InterruptedIOException, RetriesExhaustedWithDetailsException {
-    for (Put put : puts) {
-      doPut(put);
-    }
+  public void put(final List<Put> puts) throws IOException {
+    getBufferedMutator().mutate(puts);
{code}

This is definitely an issue with source compatibility on HTable, because 
callers who were catching IIOException and REWDException would now fail to 
compile, given the more general IOException being thrown instead. However, 
there is binary compatibility with 0.98 as long as the methods whose "throws" 
signature was changed did not actually throw any IOExceptions other than 
IIOException and REWDException. I feel good about this because I see the 
original two exceptions being thrown in BufferedMutatorImpl.

The changes to BufferedMutator are just fine. It was a new API addition. There 
are no expectations on its compatibility for this change going into 1.0.0.

As such, I don't think this actually violated any compatibility agreements 
between 0.98 and 1.0 (explicitly -- binary compatibility is retained, so 
rolling upgrades are possible). I think these changes followed the policy that 
HBase intended to adhere to. I understand the pain, but this cross-over 
period was bound to have some issues like this. 1.0+ compatibility has more 
stringent requirements than before, which is a step in the right direction.

Would be nice if [~enis] and [~busbey] could verify this too (to make sure I'm 
not talking out of turn).
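
To make the source break concrete, a minimal sketch against the 1.x Table API 
(editor's illustration; the helper is hypothetical). The specific catches 
preserve the 0.98-style handling, while the method's own throws clause 
satisfies the widened 1.x signature:
{code}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Table;

final class PutErrorHandlingSketch {
  static void putAll(Table table, List<Put> puts) throws IOException {
    try {
      table.put(puts);
    } catch (RetriesExhaustedWithDetailsException e) {
      // per-mutation failures: e.getNumExceptions(), e.getRow(i), ...
      throw e;
    } catch (InterruptedIOException e) {
      // interrupted while flushing buffered mutations
      throw e;
    }
  }
}
{code}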

> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98 HTable#flushCommits throws InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but now in 1.1.2 this method signature 
> has been changed to throw IOException, which will force application code 
> changes for exception handling (previous catch on InterruptedIOException and 
> RetriesExhaustedWithDetailsException become invalid). HTable#put has the same 
> problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> recover the compatibility In this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14460) [Perf Regression] Merge of MVCC and SequenceId (HBASE-8763) slowed Increments, CheckAndPuts, batch operations

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14460:
--
Summary: [Perf Regression] Merge of MVCC and SequenceId (HBASE-8763) slowed 
Increments, CheckAndPuts, batch operations  (was: [Perf Regression] Merge of 
MVCC and SequenceId (HBASE-HBASE-8763) slowed Increments, CheckAndPuts, batch 
operations)

> [Perf Regression] Merge of MVCC and SequenceId (HBASE-8763) slowed 
> Increments, CheckAndPuts, batch operations
> -
>
> Key: HBASE-14460
> URL: https://issues.apache.org/jira/browse/HBASE-14460
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 0.94.test.patch, 0.98.test.patch, 
> 1.0.80.flamegraph-7932.svg, 14460.txt, 14460.v0.branch-1.0.patch, 
> 98.80.flamegraph-11428.svg, HBASE-14460-discussion.patch, client.test.patch, 
> flamegraph-13120.svg.master.singlecell.svg, flamegraph-26636.094.100.svg, 
> flamegraph-28066.098.singlecell.svg, flamegraph-28767.098.100.svg, 
> flamegraph-31647.master.100.svg, flamegraph-9466.094.singlecell.svg, 
> hack.flamegraph-16593.svg, hack.uncommitted.patch, m.test.patch, 
> region_lock.png, testincrement.094.patch, testincrement.098.patch, 
> testincrement.master.patch
>
>
> As reported by 鈴木俊裕 up on the mailing list -- see "Performance degradation 
> between CDH5.3.1(HBase0.98.6) and CDH5.4.5(HBase1.0.0)" -- our unification of 
> sequenceid and MVCC slows Increments (and other ops) as the mvcc needs to 
> 'catch up' to our current point before we can read the last Increment value 
> that we need to update.
> We can say that our Increment is just done wrong, we should just be writing 
> Increments and summing on read, but checkAndPut as well as batching 
> operations have the same issue. Fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15075) Allow region split request to carry identification information

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094448#comment-15094448
 ] 

Hadoop QA commented on HBASE-15075:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s 
{color} | {color:red} hbase-client in master has 13 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 16s 
{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 26s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} 
|
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 26s {color} | 
{color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 26s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 24s {color} | 
{color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 9m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 21s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 41s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 3s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 25s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 44s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 6s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 27s 
{color} | {color:red} Patch causes 11 errors with 

[jira] [Commented] (HBASE-15089) Compatibility issue on flushCommits and put methods in HTable

2016-01-12 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094508#comment-15094508
 ] 

Enis Soztutar commented on HBASE-15089:
---

Thanks [~carp84]. We did some cleanup and tidying of the method signatures 
and, as in this case, generalized some exception declarations so that the new 
Connection / Table / BufferedMutator APIs are as future-proof as possible. As 
has been said above, there is no guarantee that 0.98 -> 1.0 is source 
compatible. Here is the relevant section from the 1.0 release notes: 
{quote}
Compatibility
-
Source Compatibility:
Client side code in HBase-1.0.0 is (mostly) source compatible with earlier
versions. Some minor API changes might be needed from the client side.

Wire Compatibility:
HBase-1.0.0 release is wire compatible with 0.98.x releases. Clients and
servers running in different versions as long as new features are not used
should be possible.
A rolling upgrade from 0.98.x clusters to 1.0.0 is supported as well. 1.0.0
introduces a new file format (hfile v3) that is enabled by default that
0.96.x code cannot read. Thus, rolling upgrade from 0.96 directly to 1.0.0
is
not supported.
1.0.0 is NOT wire compatible with earlier releases (0.94, etc).

Binary Compatibility:
Binary compatibility at the Java API layer with earlier versions (0.98.x,
0.96.x and 0.94.x) is NOT supported. You may have to recompile your client
code and any server side code (coprocessors, filters etc) referring to
hbase jars.
{quote}

Full release notes: http://markmail.org/message/u43qluenc7soxloe. 

We made an explicit choice to clean up the API, and in various issues 
intentionally deprecated, cleaned up, and moved some of the APIs to Private. 
I suggest we close this as "Not a Problem". 


> Compatibility issue on flushCommits and put methods in HTable
> -
>
> Key: HBASE-15089
> URL: https://issues.apache.org/jira/browse/HBASE-15089
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Attachments: HBASE-15089.patch, HBASE-15089.v2.patch
>
>
> Previously in 0.98, HTable#flushCommits threw InterruptedIOException and 
> RetriesExhaustedWithDetailsException, but in 1.1.2 this method signature 
> has been changed to throw IOException, which forces application code 
> changes for exception handling (previous catches on InterruptedIOException 
> and RetriesExhaustedWithDetailsException become invalid). HTable#put has 
> the same problem.
> After a check, the compatibility issue was introduced by HBASE-12728. Will 
> restore compatibility in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15038) ExportSnapshot should support separate configurations for source and destination clusters

2016-01-12 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-15038:
--
Fix Version/s: (was: 1.2.1)
   1.2.0

> ExportSnapshot should support separate configurations for source and 
> destination clusters
> -
>
> Key: HBASE-15038
> URL: https://issues.apache.org/jira/browse/HBASE-15038
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: hbase-15038.patch
>
>
> Currently ExportSnapshot uses a single Configuration instance for both the 
> source and destination FileSystem instances to use.  It should allow 
> overriding properties for each filesystem connection separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15082) Fix merge of MVCC and SequenceID performance regression

2016-01-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15082:
--
Attachment: 15082v12.patch

Address [~eclark] review. In particular:

 * Change mvcc commit in most places to mvcc commitAndWait ("read your writes")
 * Because of above, change the Get under increment/append/etc to do a straight 
Get, not a READ_UNCOMMITTED (can do this now that all updates wait on mvcc to 
catch up before letting go of locks and proceeding).

This patch should be good to go now but needs perf testing again since a 
basic tenet has changed (see commitAndWait vs commit above). Will do that in 
a sibling issue. In the meantime, reviews are appreciated. Loads of cleanup 
and removal of unused and dup'd code. Updated rb. Thanks.
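
For readers following along, a simplified sketch of the commit vs 
commitAndWait semantics described above (names are illustrative, not HBase's 
actual MultiVersionConcurrencyControl API):

{code}
class SimpleMvcc {
  private long readPoint = 0;   // highest seqid visible to readers
  private long writePoint = 0;  // highest seqid handed to writers
  private final java.util.TreeSet<Long> pending = new java.util.TreeSet<>();

  synchronized long begin() {
    long id = ++writePoint;
    pending.add(id);
    return id;
  }

  // commit: mark our write done and advance the read point over the
  // completed prefix, but do NOT wait for it to reach our own seqid.
  synchronized void commit(long seqId) {
    pending.remove(seqId);
    readPoint = pending.isEmpty() ? writePoint : pending.first() - 1;
    notifyAll();
  }

  // commitAndWait ("read your writes"): additionally block until the read
  // point has caught up, so a Get issued next sees this write.
  synchronized void commitAndWait(long seqId) throws InterruptedException {
    commit(seqId);
    while (readPoint < seqId) {
      wait();
    }
  }
}
{code}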

> Fix merge of MVCC and SequenceID performance regression
> ---
>
> Key: HBASE-15082
> URL: https://issues.apache.org/jira/browse/HBASE-15082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15082.patch, 15082v10.patch, 15082v12.patch, 
> 15082v2.patch, 15082v2.txt, 15082v3.txt, 15082v4.patch, 15082v5.patch, 
> 15082v6.patch, 15082v7.patch, 15082v8.patch
>
>
> This is general fix for increments (appends, checkAnd*) perf-regression 
> identified in the parent issue. HBASE-15031 has a narrow fix for branch-1.1 
> and branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095029#comment-15095029
 ] 

Hadoop QA commented on HBASE-14872:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-14872 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/latest/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781924/HBASE-14872-0.98-v1.patch
 |
| JIRA Issue | HBASE-14872 |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/85/console |


This message was automatically generated.



> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 0.98.18
>
> Attachments: HBASE-14872-0.98-v1.patch, HBASE-14872-0.98.patch, 
> HBASE-14872-v1.patch, HBASE-14872.patch
>
>
> HBASE-14355's scan-different-time-range-per-column-family feature was not 
> applied to the memstore; it was only done for the store files. This breaks 
> the contract.
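
For context, a usage sketch of the per-family time range added in 
HBASE-14355, which this fix makes effective in the memstore (table and 
family names are placeholders):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class CfTimeRangeSketch {
  static Scan buildScan() {
    Scan scan = new Scan();
    // Only family "cf1" is restricted to this range; the fix makes the
    // restriction apply to the memstore as well as the store files.
    scan.setColumnFamilyTimeRange(Bytes.toBytes("cf1"),
        1420070400000L, 1451606400000L);
    return scan;
  }
}
{code}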



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14747) Make it possible to build Javadoc and xref reports for 0.94 again

2016-01-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095064#comment-15095064
 ] 

Lars Hofhansl commented on HBASE-14747:
---

 [~misty] :)

> Make it possible to build Javadoc and xref reports for 0.94 again
> -
>
> Key: HBASE-14747
> URL: https://issues.apache.org/jira/browse/HBASE-14747
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.94.27
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 0.94.28
>
> Attachments: 14747-addendum.txt, 14747-addendum2.txt, 
> HBASE-14747-0.94.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15075) Allow region split request to carry identification information

2016-01-12 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095082#comment-15095082
 ] 

Jerry He commented on HBASE-15075:
--

Hi, [~tedyu]

A couple of comments.
Can the UUID be generated on the server?  In the Proc v2 implementation, the 
proc-id is generated on the server side.
I guess using a UUID is ok in this JIRA. The proc-id in Proc v2 is a 'long'.  
If it will eventually be implemented with Proc v2, it will just be an 
implementation detail.
Also, in this JIRA the client does not really need to know the UUID, since it 
is not doing any tracking or waiting.

I see that RegionStateTransitionContext is getting the UUID, but the code 
that uses this info is not in the patch, right?
For example, the RegionStateListener you mentioned in the description.

 

> Allow region split request to carry identification information
> --
>
> Key: HBASE-15075
> URL: https://issues.apache.org/jira/browse/HBASE-15075
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15075-v0.txt, 15075-v1.txt, 15075-v2.txt, 
> HBASE-15075.v2.patch, HBASE-15075.v3.patch, HBASE-15075.v3.patch
>
>
> During the process of improving region normalization feature, I found that if 
> region split request triggered by the execution of SplitNormalizationPlan 
> fails, there is no way of knowing whether the failed split originated from 
> region normalization.
> The association of particular split request with outcome of split would give 
> RegionNormalizer information so that it can make better normalization 
> decisions in the subsequent invocations.
> One approach is to embed metadata, such as a UUID, in SplitRequest which gets 
> passed through RegionStateTransitionContext when 
> RegionServerServices#reportRegionStateTransition() is called.
> This way, RegionStateListener can be notified with the metadata (id of the 
> requester).
> See discussion on dev mailing list
> http://search-hadoop.com/m/YGbbCXdkivihp2
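
A rough sketch of the proposed plumbing (all types below are simplified, 
hypothetical stand-ins for the classes named above; the real signatures may 
differ):

{code}
import java.util.UUID;

class SplitRequest {
  final UUID requesterId; // e.g. set by SplitNormalizationPlan
  SplitRequest(UUID requesterId) { this.requesterId = requesterId; }
}

class RegionStateTransitionContext {
  final UUID requesterId; // carried through reportRegionStateTransition()
  RegionStateTransitionContext(SplitRequest req) {
    this.requesterId = req.requesterId;
  }
}

interface RegionStateListener {
  // The listener can correlate the split outcome with its originator and
  // feed that back into subsequent normalization decisions.
  void onRegionSplit(RegionStateTransitionContext ctx, boolean succeeded);
}
{code}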



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15055) Major compaction is not triggered when both of TTL and hbase.hstore.compaction.max.size are set

2016-01-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094990#comment-15094990
 ] 

Ted Yu commented on HBASE-15055:


Some minor comments:
{code}
+boolean isPeriodicMC = isPeriodicMC(candidateSelection);
{code}
Please use MajorCompaction instead of MC so that the meaning of the method name 
is clearer.
{code}
+if (lowTimestamp > 0l && lowTimestamp < (now - mcTime)) {
+  LOG.debug("Major compaction period has elapsed. Need to run a major 
compaction");
{code}
Please log the major compaction period to make the message more informative.
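
Applied together, the two suggestions would read roughly like this (a 
sketch; mcTime, now, lowTimestamp and candidateSelection are assumed from 
the patch under review):
{code}
boolean isPeriodicMajorCompaction = isPeriodicMajorCompaction(candidateSelection);

if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
  LOG.debug("Major compaction period " + mcTime
      + "ms has elapsed. Need to run a major compaction");
}
{code}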


> Major compaction is not triggered when both of TTL and 
> hbase.hstore.compaction.max.size are set
> ---
>
> Key: HBASE-15055
> URL: https://issues.apache.org/jira/browse/HBASE-15055
> Project: HBase
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Assignee: Eungsop Yoo
>Priority: Minor
> Attachments: HBASE-15055-v1.patch, HBASE-15055-v10.patch, 
> HBASE-15055-v2.patch, HBASE-15055-v3.patch, HBASE-15055-v4.patch, 
> HBASE-15055-v5.patch, HBASE-15055-v6.patch, HBASE-15055-v7.patch, 
> HBASE-15055-v8.patch, HBASE-15055-v9.patch, HBASE-15055.patch
>
>
> Some large files may be skipped by hbase.hstore.compaction.max.size during 
> candidate selection, which causes major compaction to be skipped. As a 
> result, TTL-expired records remain on disk and keep consuming space.
> To resolve this issue, I suggest skipping large files only if they contain 
> no TTL-expired records.
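
A minimal sketch of the proposed selection rule (names assumed for 
illustration, not taken from the patch):

{code}
class CompactionSkipRule {
  // A file over hbase.hstore.compaction.max.size is skipped only when it
  // holds no TTL-expired cells; otherwise it stays in the candidates so
  // major compaction can reclaim the expired data.
  static boolean shouldSkip(long fileSize, long maxCompactSize,
                            long minTimestamp, long ttlMs, long now) {
    boolean tooLarge = fileSize > maxCompactSize;
    boolean hasExpiredCells = minTimestamp < now - ttlMs;
    return tooLarge && !hasExpiredCells;
  }
}
{code}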



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2016-01-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095001#comment-15095001
 ] 

Ted Yu commented on HBASE-14623:


Review comments are appreciated.

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: wal
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt, 14623-v2.txt, 
> 14623-v2.txt, 14623-v3.txt, 14623-v4.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.
> Below is summary of discussion:
> For system table to have its own WAL, we would recover system table faster 
> (fast log split, fast log replay). It would probably benefit 
> AssignmentManager on system table region assignment. At this time, the new 
> AssignmentManager is not planned to change WAL. So the existence of this JIRA 
> is good for overall system, not specific to AssignmentManager.
> There are 3 strategies for implementing system table WAL:
> 1. one WAL for all non-meta system tables
> 2. one WAL for each non-meta system table
> 3. one WAL for each region of non-meta system table
> Currently most system tables are one region table (only ACL table may become 
> big). Choices 2 and 3 basically are the same.
> From implementation point of view, choices 2 and 3 are cleaner than choice 1 
> (as we have already had 1 WAL for META table and we can reuse the logic). 
> With choice 2 or 3, assignment manager performance should not be impacted and 
> it would be easier for assignment manager to assign system table region (eg. 
> without waiting for user table log split to complete for assigning system 
> table region).
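
To make strategy 2 concrete, a hypothetical sketch (WAL here is a simplified 
stand-in, not HBase's actual WAL interface):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SystemTableWalRouter {
  static class WAL {
    final String scope;
    WAL(String scope) { this.scope = scope; }
  }

  private final Map<String, WAL> wals = new ConcurrentHashMap<>();

  WAL walFor(String tableName, boolean isSystemTable) {
    // User tables share one WAL; each system table gets its own, so its
    // log can be split and replayed without waiting on user-table logs.
    String scope = isSystemTable ? "system." + tableName : "default";
    return wals.computeIfAbsent(scope, WAL::new);
  }
}
{code}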



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2016-01-12 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14872:
---
Status: Patch Available  (was: Open)

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 0.98.18
>
> Attachments: HBASE-14872-0.98-v1.patch, HBASE-14872-0.98.patch, 
> HBASE-14872-v1.patch, HBASE-14872.patch
>
>
> HBASE-14355's scan-different-time-range-per-column-family feature was not 
> applied to the memstore; it was only done for the store files. This breaks 
> the contract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14872) Scan different timeRange per column family doesn't percolate down to the memstore

2016-01-12 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-14872:
---
Attachment: HBASE-14872-0.98-v1.patch

[~apurtell] attached is a new patch with the Client.proto fixes and the 
lazily initialized map. I left the exception message alone, as I really 
didn't know what to do with that one, but I'm open to suggestions; I don't 
know why TimeRange threw an IOException on construction originally.
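
The lazily initialized map mentioned above, roughly (a generic sketch, not 
the patch's exact field or types):

{code}
import java.util.HashMap;
import java.util.Map;

class LazyMapHolder {
  private Map<String, Long> attrs; // intentionally null until first use

  Map<String, Long> getAttrs() {
    if (attrs == null) {
      attrs = new HashMap<>(); // pay the allocation only when needed
    }
    return attrs;
  }
}
{code}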

> Scan different timeRange per column family doesn't percolate down to the 
> memstore 
> --
>
> Key: HBASE-14872
> URL: https://issues.apache.org/jira/browse/HBASE-14872
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.2.0, 0.98.18
>
> Attachments: HBASE-14872-0.98-v1.patch, HBASE-14872-0.98.patch, 
> HBASE-14872-v1.patch, HBASE-14872.patch
>
>
> HBASE-14355's scan-different-time-range-per-column-family feature was not 
> applied to the memstore; it was only done for the store files. This breaks 
> the contract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

