[jira] [Commented] (HBASE-20335) nightly jobs no longer contain machine information

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440397#comment-16440397
 ] 

Hudson commented on HBASE-20335:


Results for branch HBASE-20335
[build #7 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20335/7/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20335/7//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20335/7//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20335/7//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> nightly jobs no longer contain machine information
> --
>
> Key: HBASE-20335
> URL: https://issues.apache.org/jira/browse/HBASE-20335
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 1.4.4, 2.0.1
>
> Attachments: HBASE-20335.0.patch, HBASE-20335.1.patch
>
>
> Something is up with the nightly jobs. They no longer have the machine 
> information from HBASE-19228.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-04-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440409#comment-16440409
 ] 

ramkrishna.s.vasudevan commented on HBASE-19893:


LGTM. Since this patch also solves [~nihaljain.cs]'s issue, I think this is good 
to go. Thanks [~brfrn169].

> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Attachments: HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> the master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  





[jira] [Resolved] (HBASE-20341) Nothing in refguide on hedgedreads; fix

2018-04-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-20341.
---
   Resolution: Invalid
 Assignee: Wei-Chiu Chuang
Fix Version/s: (was: 2.0.0)

It is not valid [~jojochuang]. I must have mis-searched. The Hedged Read doc has 
been around forever, since before..

commit 1a21c1684c5d68cb2d1da8ed33500993b0965f8a
Author: Misty Stanley-Jones 
Date:   Wed Jan 7 14:02:16 2015 +1000

HBASE-11533 Asciidoc Proof of Concept
...

And then got updates with the likes of the


commit 6ee2dcf480dd95877a20e33086a020eb1a19e41f
Author: Michael Stack 
Date:   Mon Nov 14 10:27:58 2016 -0800

HBASE-17089 Add doc on experience running with hedged reads


Which is Yu Li's experience w/ hedged reads... and then below


commit 86df89b01608052dad4ef75abde5a3fe79447ac0
Author: Michael Stack 
Date:   Mon Nov 14 21:06:29 2016 -0800

HBASE-17089 Add doc on experience running with hedged reads; ADDENDUM 
adding in Ashu Pachauri's experience


I'm not sure how I got it so wrong.

Thanks [~jojochuang]. Assigning you the issue because you noticed the mess-up.

> Nothing in refguide on hedgedreads; fix
> ---
>
> Key: HBASE-20341
> URL: https://issues.apache.org/jira/browse/HBASE-20341
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> There are even metrics from HBASE-12220 that expose counts. Talk up both them 
> and hedged reads in the refguide.





[jira] [Commented] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439693#comment-16439693
 ] 

Hadoop QA commented on HBASE-20420:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}107m 
28s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
14s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919222/HBASE-20420_3.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b55b3bbfb049 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+

2018-04-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439742#comment-16439742
 ] 

Wei-Chiu Chuang commented on HBASE-19963:
-

Thanks [~mdrob] Really appreciate it.

> TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
> 
>
> Key: HBASE-19963
> URL: https://issues.apache.org/jira/browse/HBASE-19963
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Mike Drob
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-19963.master.001.patch, 
> HBASE-19963.master.002.patch
>
>
> We try to accommodate HDFS changing ports when testing if it is the same FS 
> in our tests:
> https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162
> {code}
> if (isHadoop3) {
>   // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427
>   testIsSameHdfs(9820);
> } else {
>   // pre hadoop 3.0.0 defaults to port 8020
>   testIsSameHdfs(8020);
> }
> {code}
> But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990.
> So our tests will fail against the snapshot and against future releases.
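The switch above broke because it keys only on the major version. A hedged sketch of a port choice that accounts for the 3.0.1 revert (illustrative only; not necessarily what the attached patch does):

```java
public class NnPortGuess {
    // Default NameNode RPC port by Hadoop version: the 3.0.0 line moved it
    // to 9820 (HDFS-9427), then 3.0.1 moved it back to 8020 (HDFS-12990).
    static int defaultNnPort(int major, int minor, int patch) {
        boolean movedPort = major == 3 && minor == 0 && patch == 0;
        return movedPort ? 9820 : 8020;
    }

    public static void main(String[] args) {
        System.out.println(defaultNnPort(2, 7, 4)); // pre-3.0 default: 8020
        System.out.println(defaultNnPort(3, 0, 0)); // 3.0.0: 9820
        System.out.println(defaultNnPort(3, 0, 1)); // reverted: 8020
    }
}
```

Testing both candidate ports, rather than predicting the default, would also survive future port changes.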





[jira] [Updated] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20411:
--
Attachment: 2.more.patch.12010.lock.svg

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 41901.lock.svg, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch
>
>
> This item is migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence that shows this synchronize is a stake in our 
> write-time perf. I'll migrate the patch I posted, with updates that came of 
> comments posted by [~mdrob] on the HBASE-20236 issue.





[jira] [Commented] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439828#comment-16439828
 ] 

stack commented on HBASE-20411:
---

.007 fixes tests. The memstore accounting has good coverage, so fixing the above 
failing tests uncovered problems in my patch that I was unable to find via 
inspection. The patch runs to completion on a cluster doing ycsb now... Doesn't 
get stuck any more. Ready for review.

{code}
Change the MemStore size accounting so we don't synchronize across three
volatiles applying deltas. Instead:

 * Make MemStoreSize, a datastructure of our memstore size longs, immutable.
   Create a new instance on every increment.
 * Undo MemStoreSizing being an instance of MemStoreSize; instead it has-a.
 * Make two MemStoreSizing implementations; one thread-safe, the other not.
 * Use an AtomicReference#compareAndSet (lockless) where there are concurrent
   updates.
 * Otherwise, use unsynchronized accounting.
 * Review all use of MemStoreSizing. Many are single-threaded and do
   not need to be synchronized.

TODO: Use this technique accounting at the global level too.

M 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSizing.java
 Make this an Interface. Implementations are a thread-safe instance and
 a non-thread-safe version.
{code}
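The lockless accounting described above can be sketched roughly as follows (names are illustrative, not the real HBase classes; the lockless primitive on AtomicReference is compareAndSet):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: an immutable size record plus an AtomicReference CAS loop, instead
// of synchronizing across several volatiles while applying deltas.
public class LocklessSizing {
    // Immutable: every increment produces a new instance.
    record MemStoreSize(long dataSize, long heapSize) {}

    static final AtomicReference<MemStoreSize> SIZE =
        new AtomicReference<>(new MemStoreSize(0, 0));

    static void incMemStoreSize(long dataDelta, long heapDelta) {
        MemStoreSize current;
        MemStoreSize next;
        do {                            // retry if another thread won the race
            current = SIZE.get();
            next = new MemStoreSize(current.dataSize() + dataDelta,
                                    current.heapSize() + heapDelta);
        } while (!SIZE.compareAndSet(current, next));
    }

    public static void main(String[] args) {
        incMemStoreSize(100, 160);
        incMemStoreSize(50, 80);
        System.out.println(SIZE.get().dataSize()); // 150
        System.out.println(SIZE.get().heapSize()); // 240
    }
}
```

Single-threaded call sites can skip the CAS loop entirely, which is the point of having a non-thread-safe implementation alongside this one.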






[jira] [Commented] (HBASE-20419) Fix potential NPE in ZKUtil#listChildrenAndWatchForNewChildren callers

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439697#comment-16439697
 ] 

Hudson commented on HBASE-20419:


Results for branch branch-2
[build #621 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/621/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/621//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/621//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/621//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix potential NPE in ZKUtil#listChildrenAndWatchForNewChildren callers
> --
>
> Key: HBASE-20419
> URL: https://issues.apache.org/jira/browse/HBASE-20419
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.1, 2.0.0-beta-2, 1.1.13
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20419.v3.patch, HBASE-20419_1.patch, 
> HBASE-20419_2.patch
>
>
> We have developed a static analysis tool 
> [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find potential 
> NPEs. Our analysis shows that some callees may return null in corner cases 
> (e.g. node crash, IO exception); some of their callers have a _!=null_ check 
> but some do not. For example:
> Callee ZKUtil#listChildrenAndWatchForNewChildren may return null. It has 8 
> callers, 6 of which have a null check like:
> {code:java}
> List<String> children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, 
> zkw.znodePaths.rsZNode);
> if (children == null) {
>   return Collections.emptyList();
> }
> {code}
> but the other two callers do not have a null check: 
> RSGroupInfoManagerImpl#retrieveGroupListFromZookeeper and 
> ZKProcedureMemberRpcs#watchForAbortedProcedures.
> We attach the patch to fix this problem.
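The guard the issue recommends can be sketched stand-alone as follows (the ZooKeeper call is simulated, not the real ZKUtil API; method names here are hypothetical):

```java
import java.util.Collections;
import java.util.List;

// Sketch of the defensive pattern: treat a null child list from ZooKeeper
// as "no children" instead of letting callers dereference null.
public class NullSafeChildren {
    // Stand-in for ZKUtil.listChildrenAndWatchForNewChildren, which per the
    // issue may return null on node crash or IO exception.
    static List<String> listChildren(boolean zkAvailable) {
        return zkAvailable ? List.of("rs1", "rs2") : null;
    }

    static List<String> safeListChildren(boolean zkAvailable) {
        List<String> children = listChildren(zkAvailable);
        if (children == null) {        // the check the two callers were missing
            return Collections.emptyList();
        }
        return children;
    }

    public static void main(String[] args) {
        System.out.println(safeListChildren(true).size());  // 2
        System.out.println(safeListChildren(false).size()); // 0
    }
}
```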





[jira] [Commented] (HBASE-19994) Create a new class for RPC throttling exception, make it retryable.

2018-04-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439803#comment-16439803
 ] 

huaxiang sun commented on HBASE-19994:
--

Thanks [~esteban], I will add the release notes and commit it later.

> Create a new class for RPC throttling exception, make it retryable. 
> 
>
> Key: HBASE-19994
> URL: https://issues.apache.org/jira/browse/HBASE-19994
> Project: HBase
>  Issue Type: Improvement
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
> Attachments: HBASE-19994-master-v01.patch, 
> HBASE-19994-master-v02.patch, HBASE-19994-master-v03.patch, 
> HBASE-19994-master-v04.patch, HBASE-19994-master-v05.patch, 
> HBASE-19994-master-v06.patch, HBASE-19994-master-v07.patch
>
>
> Based on a discussion on the dev mailing list.
>  
> {code:java}
> Thanks Andrew.
> +1 for the second option, I will create a jira for this change.
> Huaxiang
> On Feb 9, 2018, at 1:09 PM, Andrew Purtell  wrote:
> We have
> public class ThrottlingException extends QuotaExceededException
> public class QuotaExceededException extends DoNotRetryIOException
> Let the storage quota limits throw QuotaExceededException directly (based
> on DNRIOE). That seems fine.
> However, ThrottlingException is thrown as a result of a temporal quota,
> so it is inappropriate for this to inherit from DNRIOE, it should inherit
> IOException instead so the client is allowed to retry until successful, or
> until the retry policy is exhausted.
> We are in a bit of a pickle because we've released with this inheritance
> hierarchy, so to change it we will need a new minor, or we will want to
> deprecate ThrottlingException and use a new exception class instead, one
> which does not inherit from DNRIOE.
> On Feb 7, 2018, at 9:25 AM, Huaxiang Sun  wrote:
> Hi Mike,
>   You are right. For rpc throttling, definitely it is retryable. For storage 
> quota, I think it will be fail faster (non-retryable).
>   We probably need to separate these two types of exceptions, I will do some 
> more research and follow up.
>   Thanks,
>   Huaxiang
> On Feb 7, 2018, at 9:16 AM, Mike Drob  wrote:
> I think, philosophically, there can be two kinds of QEE -
> For throttling, we can retry. The quota is a temporal quota - you have done
> too many operations this minute, please try again next minute and
> everything will work.
> For storage, we shouldn't retry. The quota is a fixed quote - you have
> exceeded your allotted disk space, please do not try again until you have
> remedied the situation.
> Our current usage conflates the two, sometimes it is correct, sometimes not.
> On Wed, Feb 7, 2018 at 11:00 AM, Huaxiang Sun  wrote:
> Hi Stack,
>  I run into a case that a mapreduce job in hive cannot finish because
> it runs into a QEE.
> I need to look into the hive mr task to see if QEE is not handled
> correctly in hbase code or in hive code.
> I am thinking that if  QEE is a retryable exception, then it should be
> taken care of by the hbase code.
> I will check more and report back.
> Thanks,
> Huaxiang
> On Feb 7, 2018, at 8:23 AM, Stack  wrote:
> QEE being a DNRIOE seems right on the face of it.
> But if throttling, a DNRIOE is inappropriate. Where you seeing a QEE in a
> throttling scenario Huaxiang?
> Thanks,
> S
> {code}
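The hierarchy change the thread converges on can be sketched as follows (stand-in classes with assumed names, not the real HBase ones): the throttling exception extends plain IOException so clients may retry, while the storage-quota exception stays non-retryable.

```java
import java.io.IOException;

// Stand-ins modeling the inheritance split discussed above.
class DoNotRetryIOException extends IOException {}            // client gives up
class QuotaExceededException extends DoNotRetryIOException {} // storage quota: fail fast
class RpcThrottlingException extends IOException {}           // temporal quota: retryable

public class QuotaHierarchyDemo {
    public static void main(String[] args) {
        // Storage-quota breaches remain non-retryable...
        System.out.println(
            new QuotaExceededException() instanceof DoNotRetryIOException); // true
        // ...while throttling is a plain IOException, so the client retries
        // until successful or until the retry policy is exhausted.
        System.out.println(
            new RpcThrottlingException() instanceof DoNotRetryIOException); // false
    }
}
```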





[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439456#comment-16439456
 ] 

Sean Busbey commented on HBASE-20369:
-

please fix the whitespace issues pointed out by qabot.

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document of the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone 
> please review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Commented] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439734#comment-16439734
 ] 

stack commented on HBASE-19963:
---

+1






[jira] [Updated] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+

2018-04-16 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19963:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch! pushed to branch-2.0+

FYI [~stack]






[jira] [Updated] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20332:

Status: Patch Available  (was: In Progress)

Okay, fell down a bit of a rabbit hole with this one, so here's what I have so 
far. I'm testing this on a cluster now, so might end up with more changes. 
Feedback on the direction of the approach so far please.

Note that I've stumbled onto what seems to be a bug in the maven-shade-plugin: 
activation clauses for profiles are stripped in the dependency-reduced pom. 
That means that rather than defaulting to the hadoop 2 profile's provided 
dependencies, our shaded mapreduce artifact defaults to not showing any of the 
provided-scope hadoop dependencies.

To see the actual dependency tree/list for use with a particular hadoop, you 
have to manually activate the relevant profile, e.g. {{mvn -Phadoop-2.0 
dependency:tree -f 
/path/to/maven/repo/org/apache/hbase/hbase-shaded-mapreduce/3.0.0-SNAPSHOT/hbase-shaded-mapreduce-3.0.0-SNAPSHOT.pom}}.

I think this is fine since the vast majority of users will not programmatically 
look at the pom to figure out specific jars to get from the environment, given 
that our expressed goal is usage via the Hadoop commands.

-v0
 * modify the jar checking script to take args; make hadoop stuff optional
 * separate out checking the artifacts that have hadoop vs those that don't.
 ** Unfortunately means we need two modules for checking things
 ** put in a safety check that the support script for checking jar contents is 
maintained in both modules
 * move hadoop deps for the mapreduce module to provided. we should be getting 
stuff from hadoop at runtime for the non-shaded artifact as well.
 ** have to carve out an exception for o.a.hadoop.metrics2. :(
 * fix duplicated class warning

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides) then our 
> inclusion of *some but not all* hadoop classes then causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space





[jira] [Commented] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439840#comment-16439840
 ] 

stack commented on HBASE-20411:
---

2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (this makes our 
locking profile look like 1.2.7 again). Not much by way of perf improvement 
though.






[jira] [Commented] (HBASE-19761) Fix Checkstyle errors in hbase-zookeeper

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439687#comment-16439687
 ] 

Hadoop QA commented on HBASE-19761:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hbase-zookeeper: The patch generated 0 new + 0 
unchanged - 13 fixed = 0 total (was 13) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} hbase-replication: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
31s{color} | {color:red} hbase-server: The patch generated 16 new + 475 
unchanged - 4 fixed = 491 total (was 479) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch hbase-rsgroup passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} The patch hbase-it passed checkstyle {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
29s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hbase-zookeeper in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 

[jira] [Commented] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439731#comment-16439731
 ] 

Hadoop QA commented on HBASE-20420:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
30s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
34s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
31s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919227/HBASE-20420_4.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d38858d2c925 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-20217) Make MoveRegionProcedure hold region lock for life of the procedure

2018-04-16 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20217:
--
Fix Version/s: (was: 2.0.0)

> Make MoveRegionProcedure hold region lock for life of the procedure
> ---
>
> Key: HBASE-20217
> URL: https://issues.apache.org/jira/browse/HBASE-20217
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Critical
>
> From HBASE-20202, make procedure hold lock.





[jira] [Commented] (HBASE-20416) [DOC] Fix hbck option intros

2018-04-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439801#comment-16439801
 ] 

huaxiang sun commented on HBASE-20416:
--

Copy [~esteban]. 

> [DOC] Fix hbck option intros
> 
>
> Key: HBASE-20416
> URL: https://issues.apache.org/jira/browse/HBASE-20416
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-20416.master.001.patch
>
>
> {quote}In this case, you can use the -fixSplitParents 
>  This option should not normally be used, and it is not in -fixAll.
> {quote}
> There is no such option "-fixAll". From the context, it seems to refer to 
> -repair
> In addition, -repair option also covers -fixReferenceFiles, -fixHFileLinks, 
> which are not introduced in the doc.
> {code:java|title=HBaseFsck#exec}
> else if (cmd.equals("-repair")) {
> // this attempts to merge overlapping hdfs regions, needs testing
> // under load
> setFixHdfsHoles(true);
> setFixHdfsOrphans(true);
> setFixMeta(true);
> setFixAssignments(true);
> setFixHdfsOverlaps(true);
> setFixVersionFile(true);
> setSidelineBigOverlaps(true);
> setFixSplitParents(false);
> setCheckHdfs(true);
> setFixReferenceFiles(true);
> setFixHFileLinks(true);
> {code}
> {quote}-repair includes all the region consistency options and only the hole 
> repairing table integrity options.
> {quote}
> ... seems untrue to me.
>  
> Finally,
>  {quote}
> In this case there is a special -fixMetaOnly option that can try to fix meta 
> assignments.
> {quote}
> -fixMetaOnly option no longer exists.





[jira] [Updated] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20332:

Attachment: HBASE-20332.0.patch

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides), then our 
> inclusion of *some but not all* hadoop classes causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space





[jira] [Updated] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20411:
--
Attachment: HBASE-20411.branch-2.0.007.patch

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 41901.lock.svg, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch
>
>
> This item is migrated from HBASE-20236 so it gets dedicated issue.
> Let me upload evidence that has this synchronize as a stake in our write-time 
> perf. I'll migrate the patch I posted with updates that come of comments 
> posted by [~mdrob] on the HBASE-20236 issue.





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439575#comment-16439575
 ] 

Dequan Chen commented on HBASE-7129:


To [~mdrob],


Thanks for your comment the other day. I just double-checked the checkAndPut 
examples in both xml and json format. They are correct and different from a 
regular Put. I will explain the json example below so that you and others can 
fully understand how the checkAndPut examples work:

curl -vi -X PUT \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d 
'{"Row":[{"key":"cm93MQ==","Cell":[{"column":"Y2ZhOmFsaWFz","$":"T2xkR3V5"},{"column":"Y2ZhOmFsaWFz",
 "$":"TmV3R3V5"}] }]}' \
"http://example.com:8000/users/row1/?check=put"

In the above json-format example: 
(1) {"column":"Y2ZhOmFsaWFz", "$":"TmV3R3V5"} at the end of the -d option is 
the check cell: "Y2ZhOmFsaWFz" is Base-64 for "cfa:alias" (the check cell name) 
and "TmV3R3V5" is Base-64 for "NewGuy" (the check cell value).
(2) {"column":"Y2ZhOmFsaWFz","$":"T2xkR3V5"} is the new Put cell: 
"Y2ZhOmFsaWFz" is Base-64 for "cfa:alias" (the cell name) and "T2xkR3V5" is 
Base-64 for "OldGuy" (the cell value).
(3) "cm93MQ==" is Base-64 for "row1", the checkAndPut row key.
(4) "/?check=put" after the row key in the request URL is required for the 
checkAndPut WebHBase operation to work.
Note: "cfa" is the column family name and "alias" is the column (qualifier) 
name in the non-Base64-encoded cell name.

Basically, the xml-format example is the same as the json-format example, so I 
will not duplicate the explanation here.
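To make the encoding concrete, here is a small Python sketch (the helper names are my own, not part of the patch or of HBase) that rebuilds the JSON body of the curl example above from the plain-text values, following the ordering convention described in points (1) and (2):

```python
import base64
import json

def b64(text):
    """WebHBase requires row keys, column names, and cell values in Base-64."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def check_and_put_body(row, column, new_value, expected_value):
    """Build a checkAndPut JSON body: the new Put cell comes first,
    the check cell (the expected current value) comes last."""
    return json.dumps({
        "Row": [{
            "key": b64(row),
            "Cell": [
                {"column": b64(column), "$": b64(new_value)},       # new Put cell
                {"column": b64(column), "$": b64(expected_value)},  # check cell
            ],
        }]
    })

body = check_and_put_body("row1", "cfa:alias", "OldGuy", "NewGuy")
print(body)
```

Printing body should reproduce the payload of the -d argument above, modulo whitespace.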

In addition, can you tell me how to put all the values in a monospaced font in 
the patch? Thanks.

I hope that the above explanation can help.

Have a Good Day!

Dequan

 

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439580#comment-16439580
 ] 

Ted Yu commented on HBASE-20406:


lgtm

> HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
> --
>
> Key: HBASE-20406
> URL: https://issues.apache.org/jira/browse/HBASE-20406
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Thrift
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-20406.master.001.patch, 
> HBASE-20406.master.002.patch
>
>
> HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to 
> prevent Jetty from answering on TRACE and OPTIONS methods. This should be 
> added to Thrift in HTTP mode as well.





[jira] [Updated] (HBASE-19761) Fix Checkstyle errors in hbase-zookeeper

2018-04-16 Thread maoling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maoling updated HBASE-19761:

Status: Patch Available  (was: Open)

> Fix Checkstyle errors in hbase-zookeeper
> 
>
> Key: HBASE-19761
> URL: https://issues.apache.org/jira/browse/HBASE-19761
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: maoling
>Priority: Minor
> Attachments: HBASE-19761-master-v0.patch, HBASE-19761-master-v1.patch
>
>
> Fix the remaining Checkstyle errors in the *hbase-zookeeper* module and 
> enable Checkstyle to fail on violations.





[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439433#comment-16439433
 ] 

Sean Busbey commented on HBASE-20369:
-

Hurm. Why hasn't qabot come through yet? Let me go kick off a run manually.

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439462#comment-16439462
 ] 

Sean Busbey commented on HBASE-20369:
-

Looking at the reference guide after the patch is applied (it's in the footer 
of the qabot report from the "refguide" plugin: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/artifact/patchprocess/patch-site/book.html),
I don't see the appendix?

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Updated] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly

2018-04-16 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-20427:

Attachment: HBASE-20427.master.001.patch

> thrift.jsp displays "Framed transport" incorrectly 
> ---
>
> Key: HBASE-20427
> URL: https://issues.apache.org/jira/browse/HBASE-20427
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
>Priority: Major
> Fix For: 3.0.0, 2.0.0
>
> Attachments: HBASE-20427.master.001.patch
>
>
> According to thrift usage text:
> {code}
>  -nonblocking  Use the TNonblockingServer This implies the
>framed transport.
> {code}
> But the web page at port 9095 indicates {{framed = false}} when I start it 
> with {{-nonblocking}}.





[jira] [Commented] (HBASE-20169) NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is flakey)

2018-04-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439563#comment-16439563
 ] 

Chia-Ping Tsai commented on HBASE-20169:


{quote}Trying to understand the severity here, sounds like not something that 
can happen on production (or even dev) deployment?
{quote}
It may happen in production. If HMaster#stop is called by any component in the 
shutdown path, this issue will happen. For example, if no live regionservers 
are in the cluster, ServerManager#shutdownCluster will call HMaster#stop.
{quote}The trick always works here is make timeoutExecutor volatile, and assign 
it to a local variable, and then do the null check and call its method, or just 
do not set it to null...But I prefer we analysis the shutdown method again to 
see if we really need to call procedureExecutor.stop? 

We use timeout executor in a lot of places without null checks, so adding a 
single check here definitely feels insufficient.
{quote}
You are right. I'm trying to understand why we have to stop the timeout 
executor in HMaster#shutdown... the code was introduced by HBASE-19840. I ran 
TestMetaWithReplicas 30 times without stopping the timeout executor in 
HMaster#shutdown. All passed. [~stack] WDYT?

BTW, the NPE is not related to this issue. Perhaps we can push the fix to 
TestAssignmentManagerMetrics first and discuss the NPE in a follow-up.

> NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is 
> flakey)
> --
>
> Key: HBASE-20169
> URL: https://issues.apache.org/jira/browse/HBASE-20169
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20169.branch-2.001.patch, 
> HBASE-20169.branch-2.002.patch, HBASE-20169.branch-2.003.patch, 
> HBASE-20169.branch-2.004.patch, HBASE-20169.branch-2.005.patch, 
> HBASE-20169.v0.addendum.patch
>
>
> This usually happens when some master or rs has already been down before we 
> call shutdownMiniCluster.
> See
> https://builds.apache.org/job/HBASE-Flaky-Tests/27223/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManagerMetrics/org_apache_hadoop_hbase_master_TestAssignmentManagerMetrics/
> and also
> http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34873/testReport/junit/org.apache.hadoop.hbase.master/TestRestartCluster/testRetainAssignmentOnRestart/
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.after(TestAssignmentManagerMetrics.java:100)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestRestartCluster.testRetainAssignmentOnRestart(TestRestartCluster.java:156)
> {noformat}





[jira] [Commented] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439586#comment-16439586
 ] 

Hadoop QA commented on HBASE-20427:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hbase-thrift generated 2 new + 14 unchanged - 2 fixed = 16 total 
(was 16) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hbase-thrift in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20427 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919229/HBASE-20427.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8b9ba6366ee3 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 773aff90fd |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12475/artifact/patchprocess/diff-compile-javac-hbase-thrift.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12475/testReport/ |
| Max. process+thread 

[jira] [Commented] (HBASE-20169) NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is flakey)

2018-04-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439530#comment-16439530
 ] 

Mike Drob commented on HBASE-20169:
---

It's a race between the master shutting down when there are no RS and the 
master shutting down because our test scaffolding is cleaning up? Trying to 
understand the severity here, sounds like not something that can happen on 
production (or even dev) deployment?

We use timeout executor in a lot of places without null checks, so adding a 
single check here definitely feels insufficient.

There might be a separate bug in ProcedureExecutor where even if we send a stop 
signal, if somebody keeps adding chores the executor will never shut down.

> NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is 
> flakey)
> --
>
> Key: HBASE-20169
> URL: https://issues.apache.org/jira/browse/HBASE-20169
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20169.branch-2.001.patch, 
> HBASE-20169.branch-2.002.patch, HBASE-20169.branch-2.003.patch, 
> HBASE-20169.branch-2.004.patch, HBASE-20169.branch-2.005.patch, 
> HBASE-20169.v0.addendum.patch
>
>
> This usually happens when some master or rs has already been down before we 
> call shutdownMiniCluster.
> See
> https://builds.apache.org/job/HBASE-Flaky-Tests/27223/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManagerMetrics/org_apache_hadoop_hbase_master_TestAssignmentManagerMetrics/
> and also
> http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34873/testReport/junit/org.apache.hadoop.hbase.master/TestRestartCluster/testRetainAssignmentOnRestart/
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.after(TestAssignmentManagerMetrics.java:100)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestRestartCluster.testRetainAssignmentOnRestart(TestRestartCluster.java:156)
> {noformat}





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439611#comment-16439611
 ] 

Mike Drob commented on HBASE-7129:
--

Oh, the expected value and the new value are not explicitly labelled as such in 
the check-and-put operation? Is it based on order then? First value for the 
column is always current, second value is always new?

The explanation that you wrote up is helpful, and including the base64 
decodings in the document (maybe after the table?) would be a good addition so 
that folks have both an example that they can copy/paste and also understand 
what it means.

> In addition, can you tell me how to change all the values in monospaced font 
> in the patch. Thanks.
In asciidoc, you can use backticks to format text as monospaced: {{`like this`}}
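For readers who want to check such base64 decodings themselves, here is a minimal sketch in Java (the encoded strings below are made-up sample values for illustration, not taken from the REST documentation):

```java
import java.util.Base64;

public class DecodeExample {
    public static void main(String[] args) {
        // Hypothetical column/value pair as it might appear, base64-encoded,
        // in a REST check-and-put request body.
        String encodedColumn = "Y2Y6YQ==";  // decodes to "cf:a"
        String encodedValue = "dmFsdWUx";   // decodes to "value1"

        String column = new String(Base64.getDecoder().decode(encodedColumn));
        String value = new String(Base64.getDecoder().decode(encodedValue));
        System.out.println(column + " = " + value);  // prints "cf:a = value1"
    }
}
```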

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439465#comment-16439465
 ] 

Sean Busbey commented on HBASE-20395:
-

How about an update to the python examples that talk to the thrift services to 
have each check that it's talking to the one it expects? They're in 
{{hbase-examples/src/main/python}}

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot determine the 
> thrift server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for the users.





[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Thiriguna Bharat Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439512#comment-16439512
 ] 

Thiriguna Bharat Rao commented on HBASE-20369:
--

Many thanks [~busbey], for firing a manual build and verifying it. I highly 
appreciate it.

 

I see the change that I've included in the v1 patch in book.adoc. Not sure why 
it's not appearing in the build. 

= Appendix include::_chapters/appendix_contributing_to_documentation.adoc[] 
include::_chapters/faq.adoc[] include::_chapters/hbck_in_depth.adoc[] 
include::_chapters/appendix_acl_matrix.adoc[] 
include::_chapters/compression.adoc[] include::_chapters/sql.adoc[] 
include::_chapters/ycsb.adoc[] include::_chapters/appendix_hfile_format.adoc[] 
include::_chapters/other_info.adoc[] include::_chapters/hbase_history.adoc[] 
include::_chapters/asf.adoc[] include::_chapters/orca.adoc[] 
{color:#f691b2}include::_chapters/tracing.adoc[] include::_chapters/rpc.adoc[] 
include::_chapters/appendix_hbase_incompatibilities.adoc[] -{color}-

Best,

Triguna

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Commented] (HBASE-20414) TestLockProcedure#testMultipleLocks may fail on slow machine

2018-04-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439424#comment-16439424
 ] 

Ted Yu commented on HBASE-20414:


[~chia7712]:
Can you take a look ?

Thanks

> TestLockProcedure#testMultipleLocks may fail on slow machine
> 
>
> Key: HBASE-20414
> URL: https://issues.apache.org/jira/browse/HBASE-20414
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20414.v1.txt
>
>
> Here was recent failure : 
> https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/172/testReport/junit/org.apache.hadoop.hbase.master.locking/TestLockProcedure/health_checks___yetus_jdk8_hadoop2_checks___testMultipleLocks/
> {code}
> java.lang.AssertionError: expected: but was:
>   at 
> org.apache.hadoop.hbase.master.locking.TestLockProcedure.sendHeartbeatAndCheckLocked(TestLockProcedure.java:221)
>   at 
> org.apache.hadoop.hbase.master.locking.TestLockProcedure.testMultipleLocks(TestLockProcedure.java:311)
> {code}
> In the test output, we can see this:
> {code}
> 2018-04-13 20:19:54,230 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 22 : LOCKED.
> ...
> 2018-04-13 20:19:55,529 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(865): Stored pid=26, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure 
> regions=a7f9adfd047350eabb199a39564ba4db,c1e297609590b707233a2f9c8bb51fa1, 
> type=EXCLUSIVE
> 2018-04-13 20:19:56,230 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=22, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=22, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> {code}
> After the pid=26 log, the code does this (1 second wait):
> {code}
> // Assert tables & region locks are waiting because of namespace lock.
> Thread.sleep(HEARTBEAT_TIMEOUT / 2);
> {code}
> On a slow machine (in the case above), there was only 730 msec between the 
> queueing of regionsLock2 and the WAITING_TIMEOUT state of the nsLock. The 1 
> second wait was too long, leading to assertion failure.





[jira] [Assigned] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-20420:
--

Assignee: lujie

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to 
> find another six problems similar to HBASE-20419.
> They are listed here, and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap





[jira] [Commented] (HBASE-20417) Do not read wal entries when peer is disabled

2018-04-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439469#comment-16439469
 ] 

Guanghao Zhang commented on HBASE-20417:


Why remove "if (LOG.isTraceEnabled())"? 

> Do not read wal entries when peer is disabled
> -
>
> Key: HBASE-20417
> URL: https://issues.apache.org/jira/browse/HBASE-20417
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20417-v1.patch, HBASE-20417.patch
>
>
> Now, the disabled check is in ReplicationSourceShipper. If peer is disabled, 
> then we will not take entry batch from ReplicationSourceWALReader. But 
> ReplicationSourceWALReader will keep reading wal entries until the buffer is 
> full.
> For serial replication, the canPush check is in ReplicationSourceWALReader, 
> so even when we disabled the peer during the modification for a serial peer, 
> we could still run into the SerialReplicationChecker. Theoretically there 
> will be no problem, since in the procedure we will only update last pushed 
> sequence ids to a greater value. If canPush is true then a greater value does 
> not make any difference, if canPush is false then we are still safe since the 
> ReplicationSourceWALReader will be blocked.
> But this still makes me a little nervous, and also, it does not make sense to 
> still read wal entries when the peer is disabled. So let's change the 
> behavior.





[jira] [Comment Edited] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Thiriguna Bharat Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439512#comment-16439512
 ] 

Thiriguna Bharat Rao edited comment on HBASE-20369 at 4/16/18 2:34 PM:
---

Many thanks [~busbey], for firing a manual build and verifying it. I highly 
appreciate it.

 

I see the change that I've included in the v1 patch in book.adoc. Not sure why 
it's not appearing in the build. 

= Appendix include::_chapters/appendix_contributing_to_documentation.adoc[] 
include::_chapters/faq.adoc[] include::_chapters/hbck_in_depth.adoc[] 
include::_chapters/appendix_acl_matrix.adoc[] 
include::_chapters/compression.adoc[] include::_chapters/sql.adoc[] 
include::_chapters/ycsb.adoc[] include::_chapters/appendix_hfile_format.adoc[] 
include::_chapters/other_info.adoc[] include::_chapters/hbase_history.adoc[] 
include::_chapters/asf.adoc[] include::_chapters/orca.adoc[]

{color:#33}{color:#f691b2}include::_chapters/tracing.adoc[] 
include::_chapters/rpc.adoc[]{color} {color}

{color:#f691b2}include::_chapters/appendix_hbase_incompatibilities.adoc[]{color}
 --

Best,

Triguna


was (Author: trigunab):
Many thanks [~busbey], for firing a manual build and verifying it. I highly 
appreciate it.

 

I see the change that I've included in v1 patch in book.adoc. Not sure, why 
it's not appearing in the build. 

= Appendix include::_chapters/appendix_contributing_to_documentation.adoc[] 
include::_chapters/faq.adoc[] include::_chapters/hbck_in_depth.adoc[] 
include::_chapters/appendix_acl_matrix.adoc[] 
include::_chapters/compression.adoc[] include::_chapters/sql.adoc[] 
include::_chapters/ycsb.adoc[] include::_chapters/appendix_hfile_format.adoc[] 
include::_chapters/other_info.adoc[] include::_chapters/hbase_history.adoc[] 
include::_chapters/asf.adoc[] include::_chapters/orca.adoc[] 
{color:#f691b2}{color:#33}include::_chapters/tracing.adoc[] 
include::_chapters/rpc.adoc[]{color} 
include::_chapters/appendix_hbase_incompatibilities.adoc[] -{color}-

Best,

Triguna

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Assigned] (HBASE-18812) Recategorize some of classes used as tools

2018-04-16 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng reassigned HBASE-18812:
-

Assignee: Guangxu Cheng  (was: Chia-Ping Tsai)

> Recategorize some of classes used as tools
> --
>
> Key: HBASE-18812
> URL: https://issues.apache.org/jira/browse/HBASE-18812
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Guangxu Cheng
>Priority: Major
>
> The classes used from cmd line should be made as LimitedPrivate.TOOLS. The 
> candidates are shown below.
> # BackupDriver
> # RestoreDriver
> # CreateSnapshot
> # SnapshotInfo
> # ExportSnapshot
> # Canary
> # VersionInfo
> # RegionMover
> # CellCounter
> # CopyTable
> # DumpReplicationQueues
> # Export
> # HashTable
> # Import
> # ImportTsv
> # LoadIncrementalHFiles
> # ReplicationSyncUp
> # SyncTable
> # VerifyReplication
> # WALPlayer
> # ZkAclReset





[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-16 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439306#comment-16439306
 ] 

Guangxu Cheng commented on HBASE-20395:
---

bq. Since this is for checking programmatically, can you verify that thrift2 
client calling thrift server can get the type correctly ?
It's necessary. I will try to add unit tests for this scenario. Thanks 
[~yuzhih...@gmail.com]

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot determine the 
> thrift server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for the users.





[jira] [Commented] (HBASE-20169) NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is flakey)

2018-04-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439405#comment-16439405
 ] 

Duo Zhang commented on HBASE-20169:
---

OK, so there is a race. Then a simple null check cannot solve the problem 
perfectly. It could still happen that timeoutExecutor is set to null right 
after your null check...

The trick that always works here is to make timeoutExecutor volatile, assign it 
to a local variable, and then do the null check and call the method on the 
local copy. Or just do not set it to null...

But I prefer that we analyze the shutdown method again to see if we really need 
to call procedureExecutor.stop. BTW the clusterConnection.close is needed, as 
we need to interrupt the thread which is accessing meta.

Thanks.
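The volatile-plus-local-copy pattern described above can be sketched as follows (a minimal illustration under stated assumptions; the Holder class and its timeoutExecutor field are hypothetical stand-ins, not actual HMaster code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class Holder {
    // volatile so every thread sees the latest write, including the null
    private volatile ExecutorService timeoutExecutor =
        Executors.newSingleThreadExecutor();

    void stop() {
        // Copy the field to a local before checking: another thread may set
        // the field to null between the check and the call, but the local
        // reference cannot change underneath us, so no NPE is possible here.
        ExecutorService executor = timeoutExecutor;
        if (executor != null) {
            executor.shutdown();
        }
    }

    void clear() {
        timeoutExecutor = null;  // the racing write the comment describes
    }
}
```

With this pattern, calling stop() concurrently with clear() cannot throw a NullPointerException; as noted above, simply never setting the field to null avoids the race entirely.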

> NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is 
> flakey)
> --
>
> Key: HBASE-20169
> URL: https://issues.apache.org/jira/browse/HBASE-20169
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20169.branch-2.001.patch, 
> HBASE-20169.branch-2.002.patch, HBASE-20169.branch-2.003.patch, 
> HBASE-20169.branch-2.004.patch, HBASE-20169.branch-2.005.patch, 
> HBASE-20169.v0.addendum.patch
>
>
> This usually happens when some master or rs has already gone down before we 
> call shutdownMiniCluster.
> See
> https://builds.apache.org/job/HBASE-Flaky-Tests/27223/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManagerMetrics/org_apache_hadoop_hbase_master_TestAssignmentManagerMetrics/
> and also
> http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34873/testReport/junit/org.apache.hadoop.hbase.master/TestRestartCluster/testRetainAssignmentOnRestart/
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.after(TestAssignmentManagerMetrics.java:100)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestRestartCluster.testRetainAssignmentOnRestart(TestRestartCluster.java:156)
> {noformat}





[jira] [Updated] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-20420:
--
Attachment: HBASE-20420_3.patch

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, HBASE-20420_3.patch, 
> hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to 
> find another six problems similar to HBASE-20419.
> They are listed here, and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap





[jira] [Commented] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439473#comment-16439473
 ] 

Duo Zhang commented on HBASE-20420:
---

I think for a scan we will not get an empty result?

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, HBASE-20420_3.patch, 
> hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to 
> find another six problems similar to HBASE-20419.
> They are listed here, and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap





[jira] [Commented] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439493#comment-16439493
 ] 

lujie commented on HBASE-20420:
---

[~Apache9]:

I have deleted the null check for the scan in _AccessControlLists_.

Thanks for your review! Any suggestions for the other checkers?

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, HBASE-20420_3.patch, 
> HBASE-20420_4.patch, hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to 
> find another six problems similar to HBASE-20419.
> They are listed here, and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap





[jira] [Comment Edited] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Thiriguna Bharat Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439512#comment-16439512
 ] 

Thiriguna Bharat Rao edited comment on HBASE-20369 at 4/16/18 2:33 PM:
---

Many thanks [~busbey], for firing a manual build and verifying it. I highly 
appreciate it.

 

I see the change that I've included in v1 patch in book.adoc. Not sure, why 
it's not appearing in the build. 

= Appendix include::_chapters/appendix_contributing_to_documentation.adoc[] 
include::_chapters/faq.adoc[] include::_chapters/hbck_in_depth.adoc[] 
include::_chapters/appendix_acl_matrix.adoc[] 
include::_chapters/compression.adoc[] include::_chapters/sql.adoc[] 
include::_chapters/ycsb.adoc[] include::_chapters/appendix_hfile_format.adoc[] 
include::_chapters/other_info.adoc[] include::_chapters/hbase_history.adoc[] 
include::_chapters/asf.adoc[] include::_chapters/orca.adoc[] 
{color:#f691b2}{color:#33}include::_chapters/tracing.adoc[] 
include::_chapters/rpc.adoc[]{color} 
include::_chapters/appendix_hbase_incompatibilities.adoc[] -{color}-

Best,

Triguna


was (Author: trigunab):
Many thanks [~busbey], for firing a manual build and verifying it. I highly 
appreciate it.

 

I see the change that I've included in v1 patch in book.adoc. Not sure, why 
it's not appearing in the build. 

= Appendix include::_chapters/appendix_contributing_to_documentation.adoc[] 
include::_chapters/faq.adoc[] include::_chapters/hbck_in_depth.adoc[] 
include::_chapters/appendix_acl_matrix.adoc[] 
include::_chapters/compression.adoc[] include::_chapters/sql.adoc[] 
include::_chapters/ycsb.adoc[] include::_chapters/appendix_hfile_format.adoc[] 
include::_chapters/other_info.adoc[] include::_chapters/hbase_history.adoc[] 
include::_chapters/asf.adoc[] include::_chapters/orca.adoc[] 
{color:#f691b2}include::_chapters/tracing.adoc[] include::_chapters/rpc.adoc[] 
include::_chapters/appendix_hbase_incompatibilities.adoc[] -{color}-

Best,

Triguna

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Created] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly

2018-04-16 Thread Balazs Meszaros (JIRA)
Balazs Meszaros created HBASE-20427:
---

 Summary: thrift.jsp displays "Framed transport" incorrectly 
 Key: HBASE-20427
 URL: https://issues.apache.org/jira/browse/HBASE-20427
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 2.0.0
Reporter: Balazs Meszaros
 Fix For: 3.0.0, 2.0.0


According to thrift usage text:
{code}
 -nonblocking  Use the TNonblockingServer This implies the
   framed transport.
{code}

But the web page at port 9095 indicates {{framed = false}} when I start it with 
{{-nonblocking}}.





[jira] [Updated] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly

2018-04-16 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-20427:

Status: Patch Available  (was: Open)

> thrift.jsp displays "Framed transport" incorrectly 
> ---
>
> Key: HBASE-20427
> URL: https://issues.apache.org/jira/browse/HBASE-20427
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
>Priority: Major
> Fix For: 3.0.0, 2.0.0
>
> Attachments: HBASE-20427.master.001.patch
>
>
> According to thrift usage text:
> {code}
>  -nonblocking  Use the TNonblockingServer This implies the
>framed transport.
> {code}
> But the web page at port 9095 indicates {{framed = false}} when I start it 
> with {{-nonblocking}}.





[jira] [Commented] (HBASE-20419) Fix potential NPE in ZKUtil#listChildrenAndWatchForNewChildren callers

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439360#comment-16439360
 ] 

Hudson commented on HBASE-20419:


Results for branch master
[build #301 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/301/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/301//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/301//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/301//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix potential NPE in ZKUtil#listChildrenAndWatchForNewChildren callers
> --
>
> Key: HBASE-20419
> URL: https://issues.apache.org/jira/browse/HBASE-20419
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.1, 2.0.0-beta-2, 1.1.13
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20419.v3.patch, HBASE-20419_1.patch, 
> HBASE-20419_2.patch
>
>
> We have developed a static analysis tool 
> [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential 
> NPE. Our analysis shows that some callees may return null in corner cases 
> (e.g. node crash, IO exception); some of their callers have a _!= null_ check 
> but some do not. For example:
> Callee ZKUtil#listChildrenAndWatchForNewChildren may return null, it has 8 
> callers, 6 of the caller have null checker like:
> {code:java}
> List<String> children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, 
> zkw.znodePaths.rsZNode);
> if (children == null) {
> return Collections.emptyList();
> }
> {code}
> but another two callers do not have a null 
> check: RSGroupInfoManagerImpl#retrieveGroupListFromZookeeper and 
> ZKProcedureMemberRpcs#watchForAbortedProcedures.
>  
> We attach the patch to fix this problem.





[jira] [Commented] (HBASE-20169) NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is flakey)

2018-04-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439385#comment-16439385
 ] 

Chia-Ping Tsai commented on HBASE-20169:


{quote} if (procedureExecutor != null) {
  
configurationManager.deregisterObserver(procedureExecutor.getEnvironment());
  procedureExecutor.getEnvironment().getRemoteDispatcher().stop();
  procedureExecutor.stop();
  procedureExecutor.join();
  procedureExecutor = null;
}{quote}
That (#join) is executed by the first thread I have mentioned. The second 
thread is the test thread used to run the @AfterClass method.
{code:java}
TestAssignmentManagerMetrics class

@AfterClass
public static void after() throws Exception {
  LOG.info("AFTER {} <= IS THIS NULL?", TEST_UTIL);
  TEST_UTIL.shutdownMiniCluster();  // here
}{code}
The JVMClusterUtil#shutdown invoked by TEST_UTIL#shutdownMiniCluster calls 
HMaster#shutdown, and the execution of procedureExecutor#stop ensues.
{code:java}
JVMClusterUtil class

public static void shutdown(final List<MasterThread> masters,
    final List<RegionServerThread> regionservers) {
  LOG.debug("Shutting down HBase Cluster");
  if (masters != null) {
...
// Do active after.
if (activeMaster != null) {
  try {
activeMaster.master.shutdown();  // here
  } catch (IOException e) {
LOG.error("Exception occurred in HMaster.shutdown()", e);
  }
}
  }{code}
{code:java}
HMaster class

public void shutdown() throws IOException {
  ...
  // Stop the procedure executor. Will stop any ongoing assign, unassign, 
server crash etc.,
  // processing so we can go down.
  if (this.procedureExecutor != null) {
this.procedureExecutor.stop();  // here
  }{code}
 

> NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is 
> flakey)
> --
>
> Key: HBASE-20169
> URL: https://issues.apache.org/jira/browse/HBASE-20169
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20169.branch-2.001.patch, 
> HBASE-20169.branch-2.002.patch, HBASE-20169.branch-2.003.patch, 
> HBASE-20169.branch-2.004.patch, HBASE-20169.branch-2.005.patch, 
> HBASE-20169.v0.addendum.patch
>
>
> This usually happens when some master or rs has already gone down before we 
> call shutdownMiniCluster.
> See
> https://builds.apache.org/job/HBASE-Flaky-Tests/27223/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManagerMetrics/org_apache_hadoop_hbase_master_TestAssignmentManagerMetrics/
> and also
> http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34873/testReport/junit/org.apache.hadoop.hbase.master/TestRestartCluster/testRetainAssignmentOnRestart/
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.after(TestAssignmentManagerMetrics.java:100)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestRestartCluster.testRetainAssignmentOnRestart(TestRestartCluster.java:156)
> {noformat}





[jira] [Commented] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439435#comment-16439435
 ] 

Hadoop QA commented on HBASE-20411:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 58s{color} 
| {color:red} hbase-server generated 6 new + 182 unchanged - 6 fixed = 188 
total (was 188) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
39s{color} | {color:red} hbase-server: The patch generated 5 new + 391 
unchanged - 1 fixed = 396 total (was 392) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
43s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
35s{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 2 fixed 
= 0 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Dead store to flushableDataSize in 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(WAL, 
MonitoredTask, HRegion$PrepareFlushResult, Collection)  At 
HRegion.java:org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(WAL,
 MonitoredTask, HRegion$PrepareFlushResult, Collection)  At HRegion.java:[line 
2710] |
|  |  Dead store to flushableHeapSize in 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(WAL, 
MonitoredTask, HRegion$PrepareFlushResult, Collection)  At 
HRegion.java:org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(WAL,
 MonitoredTask, HRegion$PrepareFlushResult, Collection)  At HRegion.java:[line 
2711] |
| Failed junit tests | 
hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush |
|   | hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |
|   | hadoop.hbase.regionserver.TestHRegionReplayEvents |
|   | 

[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439436#comment-16439436
 ] 

Sean Busbey commented on HBASE-20369:
-

[here's the precommit run I 
started|https://builds.apache.org/job/PreCommit-HBASE-Build/12471/], in case 
something goes wrong and it never comes back to comment.

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439453#comment-16439453
 ] 

Hadoop QA commented on HBASE-20369:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  3m 
25s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 107 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 4 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  3m 
31s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918901/HBASE-20369_v1.patch |
| Optional Tests |  asflicense  refguide  |
| uname | Linux 5fda43ec5805 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1339ff9666 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/artifact/patchprocess/branch-site/book.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/artifact/patchprocess/whitespace-tabs.txt
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 93 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12471/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly

2018-04-16 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros reassigned HBASE-20427:
---

Assignee: Balazs Meszaros

> thrift.jsp displays "Framed transport" incorrectly 
> ---
>
> Key: HBASE-20427
> URL: https://issues.apache.org/jira/browse/HBASE-20427
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
>Priority: Major
> Fix For: 3.0.0, 2.0.0
>
>
> According to thrift usage text:
> {code}
>  -nonblocking  Use the TNonblockingServer This implies the
>framed transport.
> {code}
> But the web page at port 9095 indicates {{framed = false}} when I start it 
> with {{-nonblocking}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439515#comment-16439515
 ] 

Sean Busbey commented on HBASE-20369:
-

try applying the patch to a fresh checkout of the master branch and see if the 
changes are still in the correct place.

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20169) NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is flakey)

2018-04-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439374#comment-16439374
 ] 

Duo Zhang commented on HBASE-20169:
---

But stop is called before join I believe?

{code}
if (procedureExecutor != null) {
  
configurationManager.deregisterObserver(procedureExecutor.getEnvironment());
  procedureExecutor.getEnvironment().getRemoteDispatcher().stop();
  procedureExecutor.stop();
  procedureExecutor.join();
  procedureExecutor = null;
}
{code}


And this method is called in HRegionServer.run so it is not likely to be called 
twice? It is a bit strange.

> NPE when calling HBTU.shutdownMiniCluster (TestAssignmentManagerMetrics is 
> flakey)
> --
>
> Key: HBASE-20169
> URL: https://issues.apache.org/jira/browse/HBASE-20169
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20169.branch-2.001.patch, 
> HBASE-20169.branch-2.002.patch, HBASE-20169.branch-2.003.patch, 
> HBASE-20169.branch-2.004.patch, HBASE-20169.branch-2.005.patch, 
> HBASE-20169.v0.addendum.patch
>
>
> This usually happens when some master or rs has already been down before we 
> calling shutdownMiniCluster.
> See
> https://builds.apache.org/job/HBASE-Flaky-Tests/27223/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManagerMetrics/org_apache_hadoop_hbase_master_TestAssignmentManagerMetrics/
> and also
> http://104.198.223.121:8080/job/HBASE-Flaky-Tests/34873/testReport/junit/org.apache.hadoop.hbase.master/TestRestartCluster/testRetainAssignmentOnRestart/
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.after(TestAssignmentManagerMetrics.java:100)
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.TestRestartCluster.testRetainAssignmentOnRestart(TestRestartCluster.java:156)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20417) Do not read wal entries when peer is disabled

2018-04-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439476#comment-16439476
 ] 

Duo Zhang commented on HBASE-20417:
---

With the '{}' placeholder, we can delay the string concatenation until after 
the isTraceEnabled check inside LOG.trace, so the explicit LOG.isTraceEnabled 
guard is not necessary any more.
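The deferred-formatting idea can be sketched with a minimal stand-in logger (an assumption for illustration: this mimics the slf4j-style parameterized API that HBase's LOG uses, it is not the real Logger class):

```java
// Minimal stand-in demonstrating why '{}' placeholders make an explicit
// isTraceEnabled guard redundant: the message string is only assembled
// after the level check inside trace().
public class LazyTraceDemo {
    static int formatCalls = 0; // counts how often a message is actually built

    static String format(String template, Object arg) {
        formatCalls++;
        return template.replace("{}", String.valueOf(arg));
    }

    // trace() receives the raw template plus the argument; formatting is
    // deferred until after the enabled check, so a disabled logger pays no
    // string-concatenation cost.
    static void trace(boolean traceEnabled, String template, Object arg) {
        if (traceEnabled) {
            System.out.println(format(template, arg));
        }
    }

    public static void main(String[] args) {
        trace(false, "skip wal entry {}", 42); // disabled: nothing formatted
        trace(true, "ship wal entry {}", 42);  // enabled: message built once
        System.out.println("format calls: " + formatCalls); // prints "format calls: 1"
    }
}
```

With the real slf4j API the caller simply writes `LOG.trace("ship wal entry {}", entry)` and relies on the same internal check.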

> Do not read wal entries when peer is disabled
> -
>
> Key: HBASE-20417
> URL: https://issues.apache.org/jira/browse/HBASE-20417
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20417-v1.patch, HBASE-20417.patch
>
>
> Now, the disabled check is in ReplicationSourceShipper. If peer is disabled, 
> then we will not take entry batch from ReplicationSourceWALReader. But 
> ReplicationSourceWALReader will keep reading wal entries until the buffer is 
> full.
> For serial replication, the canPush check is in ReplicationSourceWALReader, 
> so even when we disabled the peer during the modification for a serial peer, 
> we could still run into the SerialReplicationChecker. Theoretically there 
> will be no problem, since in the procedure we will only update last pushed 
> sequence ids to a greater value. If canPush is true then a greater value does 
> not make any difference, if canPush is false then we are still safe since the 
> ReplicationSourceWALReader will be blocked.
> But this still makes me a little nervous, and also, it does not make sense to 
> still read wal entries when the peer is disabled. So let's change the 
> behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19761) Fix Checkstyle errors in hbase-zookeeper

2018-04-16 Thread maoling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maoling updated HBASE-19761:

Status: Open  (was: Patch Available)

> Fix Checkstyle errors in hbase-zookeeper
> 
>
> Key: HBASE-19761
> URL: https://issues.apache.org/jira/browse/HBASE-19761
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: maoling
>Priority: Minor
> Attachments: HBASE-19761-master-v0.patch, HBASE-19761-master-v1.patch
>
>
> Fix the remaining Checkstyle errors in the *hbase-zookeeper* module and 
> enable Checkstyle to fail on violations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20390) IMC Default Parameters for 2.0.0

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439316#comment-16439316
 ] 

Hadoop QA commented on HBASE-20390:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
26s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:369877d |
| JIRA Issue | HBASE-20390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919186/HBASE-20390.branch-2.0.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d6d95fbf32f1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / bbe15510ec |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12469/testReport/ |
| Max. process+thread count | 4053 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12469/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-19761) Fix Checkstyle errors in hbase-zookeeper

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439440#comment-16439440
 ] 

Sean Busbey commented on HBASE-19761:
-

I manually resubmitted the jenkins job for qabot.

> Fix Checkstyle errors in hbase-zookeeper
> 
>
> Key: HBASE-19761
> URL: https://issues.apache.org/jira/browse/HBASE-19761
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: maoling
>Priority: Minor
> Attachments: HBASE-19761-master-v0.patch, HBASE-19761-master-v1.patch
>
>
> Fix the remaining Checkstyle errors in the *hbase-zookeeper* module and 
> enable Checkstyle to fail on violations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439460#comment-16439460
 ] 

Sean Busbey commented on HBASE-20369:
-

{code}

diff --git a/src/main/asciidoc/_chapters/security.adoc 
b/src/main/asciidoc/_chapters/security.adoc
index ef7d6c46b5..0ab26d4cd1 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -663,6 +663,7 @@ You can enable compression of each tag in the WAL, if WAL 
compression is also en
 Tag compression uses dictionary encoding.
 
 Tag compression is not supported when using WAL encryption.
+Tags are not available for get/set from client operations including 
coprocessors.
{code}

I thought we could _set_ tags from the client; we just wouldn't be able to read 
them back out again afterwards?

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch, HBASE-20369_v1.patch, book.adoc
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439459#comment-16439459
 ] 

lujie commented on HBASE-20420:
---

Fixed the checkstyle error.

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, HBASE-20420_3.patch, 
> hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] 
> to find another six problems similar to HBASE-20419.
> They are listed here and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20420) Fix Some Potential NPE

2018-04-16 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-20420:
--
Attachment: HBASE-20420_4.patch

> Fix Some Potential NPE 
> ---
>
> Key: HBASE-20420
> URL: https://issues.apache.org/jira/browse/HBASE-20420
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: HBASE-20420_2.patch, HBASE-20420_3.patch, 
> HBASE-20420_4.patch, hbase-20420.patch
>
>
> We have used the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] 
> to find another six problems similar to HBASE-20419.
> They are listed here and the patch is attached.
>  CommonFSUtils#listStatus
> RSGroupInfoManagerImpl#getRSGroupOfServer
> BackupSystemTable#readBackupInfo
> SnapshotManifest#getRegionManifestsMap
> HRegionFileSystem#getFamilies
> Result#getFamilyMap



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20417) Do not read wal entries when peer is disabled

2018-04-16 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20417:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks [~zghaobac] for reviewing.

> Do not read wal entries when peer is disabled
> -
>
> Key: HBASE-20417
> URL: https://issues.apache.org/jira/browse/HBASE-20417
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20417-v1.patch, HBASE-20417.patch
>
>
> Now, the disabled check is in ReplicationSourceShipper. If peer is disabled, 
> then we will not take entry batch from ReplicationSourceWALReader. But 
> ReplicationSourceWALReader will keep reading wal entries until the buffer is 
> full.
> For serial replication, the canPush check is in ReplicationSourceWALReader, 
> so even when we disabled the peer during the modification for a serial peer, 
> we could still run into the SerialReplicationChecker. Theoretically there 
> will be no problem, since in the procedure we will only update last pushed 
> sequence ids to a greater value. If canPush is true then a greater value does 
> not make any difference, if canPush is false then we are still safe since the 
> ReplicationSourceWALReader will be blocked.
> But this still makes me a little nervous, and also, it does not make sense to 
> still read wal entries when the peer is disabled. So let's change the 
> behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20428) [shell] list.first method in HBase2 shell fails with "NoMethodError: undefined method `first' for nil:NilClass"

2018-04-16 Thread Arpit Jindal (JIRA)
Arpit Jindal created HBASE-20428:


 Summary: [shell] list.first method in HBase2 shell fails with 
"NoMethodError: undefined method `first' for nil:NilClass"
 Key: HBASE-20428
 URL: https://issues.apache.org/jira/browse/HBASE-20428
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 2.0.0-beta-2
Reporter: Arpit Jindal


list.first in the HBase shell does not return the first table
{code}
hbase(main):001:0> list.first
TABLE
IntegrationTestBigLinkedList_20180331004141
IntegrationTestBigLinkedList_20180403004104
IntegrationTestBigLinkedList_20180409123038
IntegrationTestBigLinkedList_20180409172704
IntegrationTestBigLinkedList_20180410103309
IntegrationTestBigLinkedList_20180411151159
IntegrationTestBigLinkedList_20180411172500
IntegrationTestBigLinkedList_20180412095403
8 row(s)
Took 0.5432 seconds
NoMethodError: undefined method `first' for nil:NilClass
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-16 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439949#comment-16439949
 ] 

Umesh Agashe commented on HBASE-20403:
--

Out of 4 ITBLL runs, this stack trace showed up in the log files for 3 of them. 
After disabling prefetch, 4 runs passed and none of the logs contain the above 
stack trace.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-20429:
--

 Summary: Support for mixed or write-heavy workloads on non-HDFS 
filesystems
 Key: HBASE-20429
 URL: https://issues.apache.org/jira/browse/HBASE-20429
 Project: HBase
  Issue Type: Umbrella
Reporter: Andrew Purtell


We can support reasonably well use cases on non-HDFS filesystems, like S3, 
where an external writer has loaded (and continues to load) HFiles via the bulk 
load mechanism, and then we serve out a read only workload at the HBase API.

Mixed workloads or write-heavy workloads won't fare as well. In fact, data loss 
seems certain. It will depend on the specific filesystem, but all of the S3 
backed Hadoop filesystems suffer from a couple of obvious problems, notably a 
lack of atomic rename. 

This umbrella will serve to collect some related ideas for consideration.
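The eventual-consistency problem mentioned above can be sketched with a toy model (illustrative only; EventuallyConsistentStore and its settling window are made-up names and behavior, not real S3 semantics):

```python
import time

class EventuallyConsistentStore:
    """Toy model of an eventually consistent object store (NOT real S3).

    Writes take effect immediately in the backing dict, but reads keep
    returning the previous value until a settling window has elapsed,
    mimicking the temporary metadata inconsistency described above.
    """

    def __init__(self, settle_seconds):
        self.settle_seconds = settle_seconds
        self._objects = {}   # key -> (value, write_time)
        self._old = {}       # key -> value before the latest write

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._old[key] = self._objects.get(key, (None, 0.0))[0]
        self._objects[key] = (value, now)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        value, write_time = self._objects[key]
        if now - write_time < self.settle_seconds:
            return self._old[key]   # stale read inside the settling window
        return value


store = EventuallyConsistentStore(settle_seconds=1.0)
store.put("hfile-meta", "v1", now=0.0)
store.put("hfile-meta", "v2", now=10.0)
stale = store.get("hfile-meta", now=10.5)    # inside settling window
settled = store.get("hfile-meta", now=12.0)  # after settling
```

A reader that assumes read-your-writes (as HBase does on HDFS) would act on the stale value here.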





[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439990#comment-16439990
 ] 

Andrew Purtell commented on HBASE-20429:


Let me just state this to get it out of the way. As you can imagine, reading 
between the lines, the motivation to look at this where I work is the good 
probability our storage stack is either going to utilize Amazon's S3 service 
"where applicable" or a compatible API analogue. Please don't take this to 
imply anything about business relationships, or not. Really, I would personally 
have no idea one way or the other. 

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.





[jira] [Comment Edited] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439840#comment-16439840
 ] 

stack edited comment on HBASE-20411 at 4/16/18 6:34 PM:


2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed (It does 
not show in this svg and in jfr, it no longer shows up in events or in 
'contention' list). We are left with the Semaphore on RPC scheduling and mvcc 
completion (This makes our locking profile look like 1.2.7 again <= This 
statement is wrong; in 1.2.7 our locking profile does not have the rpc 
semaphore; it has the mvcc completion and then a bunch of blocking in dfsclient 
that we don't have in 2.0.0).

Not much by way of throughput improvement after this goes in.


was (Author: stack):
2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (This makes our 
locking profile look like 1.2.7 again <= This statement is wrong; in 1.2.7 our 
locking profile does not have the rpc semaphore; it has the mvcc completion and 
then a bunch of blocking in dfsclient that we don't have in 2.0.0).

Not much by way of throughput improvement after this goes in.

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 41901.lock.svg, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch
>
>
> This item was migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence that has this synchronize as a stake in our write-time 
> perf. I'll migrate the patch I posted with updates that come of comments 
> posted by [~mdrob] on the HBASE-20236 issue.





[jira] [Comment Edited] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439840#comment-16439840
 ] 

stack edited comment on HBASE-20411 at 4/16/18 6:32 PM:


2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (This makes our 
locking profile look like 1.2.7 again <= This statement is wrong; in 1.2.7 our 
locking profile does not have the rpc semaphore; it has the mvcc completion and 
then a bunch of blocking in dfsclient that we don't have in 2.0.0).

Not much by way of throughput improvement after this goes in.


was (Author: stack):
2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (This makes our 
locking profile look like 1.2.7 again <= This statement is wrong; in 1.2.7 our 
locking profile does not have the rpc semaphore; it has the mvcc completion and 
then a bunch of blocking in dfsclient that we don't have in 2.0.0).

Not much by way of perf improvement after this goes in.

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 41901.lock.svg, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch
>
>
> This item was migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence that has this synchronize as a stake in our write-time 
> perf. I'll migrate the patch I posted with updates that come of comments 
> posted by [~mdrob] on the HBASE-20236 issue.





[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439846#comment-16439846
 ] 

Sean Busbey commented on HBASE-20332:
-

to test out, first do a local install so you can get what the pom/jar will look 
like:
{code}
mvn -Psite-install-step -Prelease install
 {code}

Now you can look in your local maven repo for the jar(s) and the poms that a 
client will get (default user repo listed in this example):
{code}
mvn dependency:list -f 
~/.m2/repository/org/apache/hbase/hbase-shaded-mapreduce/3.0.0-SNAPSHOT/hbase-shaded-mapreduce-3.0.0-SNAPSHOT.pom
mvn dependency:tree -f 
~/.m2/repository/org/apache/hbase/hbase-shaded-mapreduce/3.0.0-SNAPSHOT/hbase-shaded-mapreduce-3.0.0-SNAPSHOT.pom
{code}

junit shows up because of our root parent pom giving it as a dependency. I 
tried a few things to get rid of it but nothing worked. I think we need to fix 
that generally (i.e. remove the top level listing of it as a dependency) rather 
than try to do it here.

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides) then our 
> inclusion of *some but not all* hadoop classes then causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space





[jira] [Comment Edited] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439891#comment-16439891
 ] 

Dequan Chen edited comment on HBASE-7129 at 4/16/18 7:14 PM:
-

[^HBASE-7129.0003.patch]

To [~mdrob] ,

Thanks for your instructions on monospaced font. I have followed your suggestion 
to put all critical values in the monospaced font while adding a Detailed 
Explanation section after the Table for the endpoint of Check-And-PUT - 
basically all the explanation points I put in the previous comment.

I believe that with the above changes, it is easier for a user to follow the 
examples to perform the checkAndPut WebHBase operation on their own HBase 
cluster.

Have a good day!

Dequan

 

 


was (Author: dequanchen):
[^HBASE-7129.0003.patch]

To [~mdrob] ,

Thanks for your instructions on monospaced font. I have followed your suggestion 
to put all critical values in the monospaced font while adding a Detailed 
Explanation section after the Table for the endpoint of Check-And-PUT - 
basically all the explanation points I put in the previous comment.

I believe that with the above changes, it is easier for a user to follow the 
examples to perform the checkAndPut WebHBase operation on their own HBase 
cluster.

Have a good day!

Dequan

 

 

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439911#comment-16439911
 ] 

Hadoop QA commented on HBASE-20332:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
43s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m  
6s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hbase-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hbase-shaded-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hbase-shaded-check-invariants in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hbase-shaded-with-hadoop-check-invariants in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20332 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919266/HBASE-20332.0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  shellcheck  shelldocs  |
| uname | Linux 73d3fed7d2c7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 

[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439974#comment-16439974
 ] 

Dequan Chen commented on HBASE-7129:


[^HBASE-7129.0004.patch]

Just updated the patch and reloaded here as ...0004.patch to remove the newly 
introduced 4 white spaces in the Detailed Explanation section at the end of the 
Check-And-Put EndPoint.

Dequan

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.0004.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Comment Edited] (HBASE-20430) Improve store file management for non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439985#comment-16439985
 ] 

Andrew Purtell edited comment on HBASE-20430 at 4/16/18 8:42 PM:
-

I suppose this could also be handled with an enhancement to Hadoop's S3A to 
multiplex open files among the internal resource pools it already maintains. 
[~steve_l]


was (Author: apurtell):
I suppose this could also be handled with an enhancement to Hadoop's S3A to 
multiplex open files among the internal resource pools it already maintains.

> Improve store file management for non-HDFS filesystems
> --
>
> Key: HBASE-20430
> URL: https://issues.apache.org/jira/browse/HBASE-20430
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Major
>
> HBase keeps a file open for every active store file so no additional round 
> trips to the NameNode are needed after the initial open. HDFS internally 
> multiplexes open files, but the Hadoop S3 filesystem implementations do not, 
> or, at least, not as well. As the bulk of data under management increases we 
> observe the required number of concurrently open connections will rise, and 
> expect it will eventually exhaust a limit somewhere (the client, the OS file 
> descriptor table or open file limits, or the S3 service).
> Initially we can simply introduce an option to close every store file after 
> the reader has finished, and determine the performance impact. Use cases 
> backed by non-HDFS filesystems will already have to cope with a different 
> read performance profile. Based on experiments with the S3 backed Hadoop 
> filesystems, notably S3A, even with aggressively tuned options simple reads 
> can be very slow when there are blockcache misses, 15-20 seconds observed for 
> Get of a single small row, for example. We expect extensive use of the 
> BucketCache to mitigate in this application already. Could be backed by 
> offheap storage, but more likely a large number of cache files managed by the 
> file engine on local SSD storage. If misses are already going to be super 
> expensive, then the motivation to do more than simply open store files on 
> demand is largely absent.
> Still, we could employ a predictive cache. Where frequent access to a given 
> store file (or, at least, its store) is predicted, keep a reference to the 
> store file open. Can keep statistics about read frequency, write it out to 
> HFiles during compaction, and note these stats when opening the region, 
> perhaps by reading all meta blocks of region HFiles when opening. Otherwise, 
> close the file after reading and open again on demand. Need to be careful not 
> to use ARC or equivalent as cache replacement strategy as it is encumbered. 
> The size of the cache can be determined at startup after detecting the 
> underlying filesystem. Eg. setCacheSize(VERY_LARGE_CONSTANT) if (fs 
> instanceof DistributedFileSystem), so we don't lose much when on HDFS still.
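The open-on-demand idea above can be sketched with a toy model (illustrative only; OnDemandStoreFileReader, open_fn, close_fn, and hot_threshold are made-up names standing in for real filesystem handles and the proposed read-frequency statistics):

```python
class OnDemandStoreFileReader:
    """Sketch: open store files on demand and keep only hot ones open.

    A file whose read count reaches hot_threshold keeps its handle
    cached (the "predictive cache" idea); cold files are opened per
    read and closed immediately afterwards.
    """

    def __init__(self, open_fn, close_fn, hot_threshold=2):
        self.open_fn = open_fn
        self.close_fn = close_fn
        self.hot_threshold = hot_threshold
        self.read_counts = {}
        self.open_handles = {}

    def read(self, path):
        self.read_counts[path] = self.read_counts.get(path, 0) + 1
        handle = self.open_handles.get(path)
        if handle is None:
            handle = self.open_fn(path)          # round trip to the filesystem
        if self.read_counts[path] >= self.hot_threshold:
            self.open_handles[path] = handle     # hot: keep the handle open
        else:
            self.close_fn(handle)                # cold: close after the read
        return handle


opens, closes = [], []
reader = OnDemandStoreFileReader(
    open_fn=lambda p: (opens.append(p), p)[1],
    close_fn=lambda h: closes.append(h),
    hot_threshold=2)
reader.read("hot"); reader.read("hot"); reader.read("hot")
reader.read("cold")
```

On HDFS one would set hot_threshold to effectively always-keep-open, recovering today's behavior.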





[jira] [Comment Edited] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439840#comment-16439840
 ] 

stack edited comment on HBASE-20411 at 4/16/18 6:32 PM:


2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (This makes our 
locking profile look like 1.2.7 again <= This statement is wrong; in 1.2.7 our 
locking profile does not have the rpc semaphore; it has the mvcc completion and 
then a bunch of blocking in dfsclient that we don't have in 2.0.0).

Not much by way of perf improvement after this goes in.


was (Author: stack):
2.simple.patch.69074.lock.svg is what our locking profile looks like after this 
patch has been applied. Blocking on MutableSegment has been removed. We are 
left with the Semaphore on RPC scheduling and mvcc completion (This makes our 
locking profile look like 1.2.7 again). Not much by way of perf improvement 
though.

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 41901.lock.svg, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch
>
>
> This item was migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence that has this synchronize as a stake in our write-time 
> perf. I'll migrate the patch I posted with updates that come of comments 
> posted by [~mdrob] on the HBASE-20236 issue.





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439891#comment-16439891
 ] 

Dequan Chen commented on HBASE-7129:


[^HBASE-7129.0003.patch]

To [~mdrob] ,

Thanks for your instructions on monospaced font. I have followed your suggestion 
to put all critical values in the monospaced font while adding a Detailed 
Explanation section after the Table for the endpoint of Check-And-PUT - 
basically all the explanation points I put in the previous comment.

I believe that with the above changes, it is easier for a user to follow the 
examples to perform the checkAndPut WebHBase operation on their own HBase 
cluster.

Have a good day!

Dequan

 

 

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-19994) Create a new class for RPC throttling exception, make it retryable.

2018-04-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439896#comment-16439896
 ] 

stack commented on HBASE-19994:
---

+1

Signature changes are in audience private classes.

> Create a new class for RPC throttling exception, make it retryable. 
> 
>
> Key: HBASE-19994
> URL: https://issues.apache.org/jira/browse/HBASE-19994
> Project: HBase
>  Issue Type: Improvement
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
> Attachments: HBASE-19994-master-v01.patch, 
> HBASE-19994-master-v02.patch, HBASE-19994-master-v03.patch, 
> HBASE-19994-master-v04.patch, HBASE-19994-master-v05.patch, 
> HBASE-19994-master-v06.patch, HBASE-19994-master-v07.patch
>
>
> Based on a discussion at dev mailing list.
>  
> {code:java}
> Thanks Andrew.
> +1 for the second option, I will create a jira for this change.
> Huaxiang
> On Feb 9, 2018, at 1:09 PM, Andrew Purtell  wrote:
> We have
> public class ThrottlingException extends QuotaExceededException
> public class QuotaExceededException extends DoNotRetryIOException
> Let the storage quota limits throw QuotaExceededException directly (based
> on DNRIOE). That seems fine.
> However, ThrottlingException is thrown as a result of a temporal quota,
> so it is inappropriate for this to inherit from DNRIOE, it should inherit
> IOException instead so the client is allowed to retry until successful, or
> until the retry policy is exhausted.
> We are in a bit of a pickle because we've released with this inheritance
> hierarchy, so to change it we will need a new minor, or we will want to
> deprecate ThrottlingException and use a new exception class instead, one
> which does not inherit from DNRIOE.
> On Feb 7, 2018, at 9:25 AM, Huaxiang Sun  wrote:
> Hi Mike,
>   You are right. For rpc throttling, definitely it is retryable. For storage 
> quota, I think it will be fail faster (non-retryable).
>   We probably need to separate these two types of exceptions, I will do some 
> more research and follow up.
>   Thanks,
>   Huaxiang
> On Feb 7, 2018, at 9:16 AM, Mike Drob  wrote:
> I think, philosophically, there can be two kinds of QEE -
> For throttling, we can retry. The quota is a temporal quota - you have done
> too many operations this minute, please try again next minute and
> everything will work.
> For storage, we shouldn't retry. The quota is a fixed quota - you have
> exceeded your allotted disk space, please do not try again until you have
> remedied the situation.
> Our current usage conflates the two, sometimes it is correct, sometimes not.
> On Wed, Feb 7, 2018 at 11:00 AM, Huaxiang Sun  wrote:
> Hi Stack,
>  I ran into a case where a mapreduce job in hive cannot finish because
> it runs into a QEE.
> I need to look into the hive mr task to see if QEE is not handled
> correctly in hbase code or in hive code.
> I am thinking that if  QEE is a retryable exception, then it should be
> taken care of by the hbase code.
> I will check more and report back.
> Thanks,
> Huaxiang
> On Feb 7, 2018, at 8:23 AM, Stack  wrote:
> QEE being a DNRIOE seems right on the face of it.
> But if throttling, a DNRIOE is inappropriate. Where you seeing a QEE in a
> throttling scenario Huaxiang?
> Thanks,
> S
> {code}
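The retry semantics debated in the thread above can be sketched as follows (illustrative only; the Python class names model HBase's DoNotRetryIOException / QuotaExceededException and the proposed retryable throttling exception, they are not the actual Java classes):

```python
class DoNotRetryIOError(IOError):
    """Client should fail fast (models HBase's DoNotRetryIOException)."""

class QuotaExceededError(DoNotRetryIOError):
    """Fixed quota (e.g. storage) exhausted: retrying cannot help."""

class RpcThrottlingError(IOError):
    """Temporal quota: deliberately NOT a DoNotRetryIOError, so the
    client may retry until the throttling window passes."""

def call_with_retries(op, max_attempts=3):
    # Retry plain IOErrors (throttling), fail fast on DoNotRetryIOError.
    for attempt in range(max_attempts):
        try:
            return op(attempt)
        except DoNotRetryIOError:
            raise                      # fixed quota: do not retry
        except IOError:
            if attempt == max_attempts - 1:
                raise                  # retry policy exhausted

def throttled_then_ok(attempt):
    if attempt < 2:
        raise RpcThrottlingError("rate limit exceeded, try again later")
    return "ok"

attempts = []
def storage_full(attempt):
    attempts.append(attempt)
    raise QuotaExceededError("disk quota exhausted")
```

With this split, the throttled call eventually succeeds while the storage-quota call fails after a single attempt.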





[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439961#comment-16439961
 ] 

Andrew Purtell commented on HBASE-20403:


AFAIK, prefetch does a pass over all blocks of the hfile, reading in index and 
data blocks, in a manner similar to HFileReader but not 100% reusing reader 
code for the purpose. Maybe a refactor would help. Maybe the reader was updated 
for some reason but the prefetch code was not. It's quite unlikely the prefetch 
code has been exercised as well as the reader code. 
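The Buffer.limit IllegalArgumentException in the quoted trace suggests an arithmetic mismatch between a recorded block size and what the encrypted stream actually supplies. A toy illustration of that failure mode (a guess at the mechanism; set_limit and the 16-byte crypto overhead are made up, only the validity check mirrors java.nio.Buffer.limit):

```python
def set_limit(capacity, new_limit):
    """Mimics java.nio.Buffer.limit(): reject a limit that is negative
    or larger than the buffer's capacity, as the JDK does with
    IllegalArgumentException."""
    if new_limit < 0 or new_limit > capacity:
        raise ValueError(
            "IllegalArgumentException: limit=%d capacity=%d"
            % (new_limit, capacity))
    return new_limit


capacity = 65536                 # buffer sized from recorded on-disk size
ok = set_limit(capacity, 65536)  # sizes agree: fine
try:
    # Hypothetical: encryption overhead inflates the byte count the
    # prefetcher computes, pushing the limit past the capacity.
    set_limit(capacity, 65536 + 16)
    overflow = False
except ValueError:
    overflow = True
```

If the prefetch path computes its read length from the unencrypted size while the reader path accounts for the crypto stream, only the prefetch pass would trip this check.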

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20431) Store commit transaction for filesystems that do not support an atomic rename

2018-04-16 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-20431:
--

 Summary: Store commit transaction for filesystems that do not 
support an atomic rename
 Key: HBASE-20431
 URL: https://issues.apache.org/jira/browse/HBASE-20431
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell


HBase expects the Hadoop filesystem implementation to support an atomic 
rename() operation. HDFS does. The S3 backed filesystems do not. The 
fundamental issue is the non-atomic and eventually consistent nature of the S3 
service. An S3 bucket is not a filesystem. S3 is not always immediately 
read-your-writes. Object metadata can be temporarily inconsistent just after 
new objects are stored. There can be a settling period to ride over. 
Renaming/moving objects from one path to another are copy operations with 
O(file) complexity and O(data) time followed by a series of deletes with 
O(file) complexity. Failures at any point prior to completion will leave the 
operation in an inconsistent state. The missing atomic rename semantic opens 
opportunities for corruption and data loss, which may or may not be repairable 
with HBCK.

Handling this at the HBase level could be done with a new multi-step filesystem 
transaction framework. Call it StoreCommitTransaction. SplitTransaction and 
MergeTransaction are well established cases where even on HDFS we have 
non-atomic filesystem changes and are our implementation template for the new 
work. In this new StoreCommitTransaction we'd be moving flush and compaction 
temporaries out of the temporary directory into the region store directory. On 
HDFS the implementation would be easy. We can rely on the filesystem's atomic 
rename semantics. On S3 it would be work: First we would build the list of 
objects to move, then copy each object into the destination, and then finally 
delete all objects at the original path. We must handle transient errors with 
retry strategies appropriate for the action at hand. We must handle serious or 
permanent errors where the RS doesn't need to be aborted with a rollback that 
cleans it all up. Finally, we must handle permanent errors where the RS must be 
aborted with a rollback during region open/recovery. Note that after all 
objects have been copied and we are deleting obsolete source objects we must 
roll forward, not back. To support recovery after an abort we must utilize the 
WAL to track transaction progress. Put markers in for StoreCommitTransaction 
start and completion state, with details of the store file(s) involved, so it 
can be rolled back during region recovery at open. This will be significant 
work in HFile, HStore, flusher, compactor, and HRegion. Wherever we use HDFS's 
rename now we would substitute the running of this new multi-step filesystem 
transaction.

We need to determine this for certain, but I believe the PUT or multipart 
upload of an object must complete before the object is visible, so we don't 
have to worry about the case where an object is visible before fully uploaded 
as part of normal operations. So an individual object copy will either happen 
entirely and the target will then become visible, or it won't and the target 
won't exist.

S3 has an optimization, PUT COPY 
(https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html), which 
the AmazonClient embedded in S3A utilizes for moves. When designing the 
StoreCommitTransaction be sure to allow for filesystem implementations that 
leverage a server side copy operation. Doing a get-then-put should be optional. 
(Not sure Hadoop has an interface that advertises this capability yet; we can 
add one if not.)
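The copy-then-delete sequence described above can be sketched with an in-memory stand-in for the bucket. All class and method names below are hypothetical, and a real implementation would need the retry strategies and WAL progress markers the description calls for:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the proposed StoreCommitTransaction on a store without atomic
// rename: copy every object to the destination, then delete the sources.
// A Map stands in for the S3 bucket; all names here are illustrative.
public class StoreCommitSketch {
  static void commit(Map<String, byte[]> bucket, List<String> sources,
      String destPrefix) {
    List<String> copied = new ArrayList<>();
    try {
      // Phase 1: copy each object to the destination path (PUT COPY analogue).
      for (String src : sources) {
        String dst = destPrefix + src.substring(src.lastIndexOf('/') + 1);
        bucket.put(dst, bucket.get(src));
        copied.add(dst);
      }
    } catch (RuntimeException e) {
      // Rollback is still safe before any source delete: drop the copies.
      for (String dst : copied) {
        bucket.remove(dst);
      }
      throw e;
    }
    // Phase 2: all copies are visible, so roll forward and delete the sources.
    for (String src : sources) {
      bucket.remove(src);
    }
  }
}
```

Once phase 2 begins, the transaction must roll forward rather than back, mirroring the recovery rule stated above.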



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20430) Improve store file management for non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439986#comment-16439986
 ] 

Andrew Purtell commented on HBASE-20430:


I'm curious what the EMR S3 filesystem might or might not do here [~zyork]. Is 
there anything you can say about that?

> Improve store file management for non-HDFS filesystems
> --
>
> Key: HBASE-20430
> URL: https://issues.apache.org/jira/browse/HBASE-20430
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Major
>
> HBase keeps a file open for every active store file so no additional round 
> trips to the NameNode are needed after the initial open. HDFS internally 
> multiplexes open files, but the Hadoop S3 filesystem implementations do not, 
> or, at least, not as well. As the bulk of data under management increases we 
> observe the required number of concurrently open connections will rise, and 
> expect it will eventually exhaust a limit somewhere (the client, the OS file 
> descriptor table or open file limits, or the S3 service).
> Initially we can simply introduce an option to close every store file after 
> the reader has finished, and determine the performance impact. Use cases 
> backed by non-HDFS filesystems will already have to cope with a different 
> read performance profile. Based on experiments with the S3 backed Hadoop 
> filesystems, notably S3A, even with aggressively tuned options simple reads 
> can be very slow when there are blockcache misses, 15-20 seconds observed for 
> Get of a single small row, for example. We expect extensive use of the 
> BucketCache to mitigate in this application already. Could be backed by 
> offheap storage, but more likely a large number of cache files managed by the 
> file engine on local SSD storage. If misses are already going to be super 
> expensive, then the motivation to do more than simply open store files on 
> demand is largely absent.
> Still, we could employ a predictive cache. Where frequent access to a given 
> store file (or, at least, its store) is predicted, keep a reference to the 
> store file open. Can keep statistics about read frequency, write it out to 
> HFiles during compaction, and note these stats when opening the region, 
> perhaps by reading all meta blocks of region HFiles when opening. Otherwise, 
> close the file after reading and open again on demand. Need to be careful not 
> to use ARC or equivalent as cache replacement strategy as it is encumbered. 
> The size of the cache can be determined at startup after detecting the 
> underlying filesystem. Eg. setCacheSize(VERY_LARGE_CONSTANT) if (fs 
> instanceof DistributedFileSystem), so we don't lose much when on HDFS still.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20430) Improve store file management for non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439985#comment-16439985
 ] 

Andrew Purtell commented on HBASE-20430:


I suppose this could also be handled with an enhancement to Hadoop's S3A to 
multiplex open files among the internal resource pools it already maintains.

> Improve store file management for non-HDFS filesystems
> --
>
> Key: HBASE-20430
> URL: https://issues.apache.org/jira/browse/HBASE-20430
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Major
>
> HBase keeps a file open for every active store file so no additional round 
> trips to the NameNode are needed after the initial open. HDFS internally 
> multiplexes open files, but the Hadoop S3 filesystem implementations do not, 
> or, at least, not as well. As the bulk of data under management increases we 
> observe the required number of concurrently open connections will rise, and 
> expect it will eventually exhaust a limit somewhere (the client, the OS file 
> descriptor table or open file limits, or the S3 service).
> Initially we can simply introduce an option to close every store file after 
> the reader has finished, and determine the performance impact. Use cases 
> backed by non-HDFS filesystems will already have to cope with a different 
> read performance profile. Based on experiments with the S3 backed Hadoop 
> filesystems, notably S3A, even with aggressively tuned options simple reads 
> can be very slow when there are blockcache misses, 15-20 seconds observed for 
> Get of a single small row, for example. We expect extensive use of the 
> BucketCache to mitigate in this application already. Could be backed by 
> offheap storage, but more likely a large number of cache files managed by the 
> file engine on local SSD storage. If misses are already going to be super 
> expensive, then the motivation to do more than simply open store files on 
> demand is largely absent.
> Still, we could employ a predictive cache. Where frequent access to a given 
> store file (or, at least, its store) is predicted, keep a reference to the 
> store file open. Can keep statistics about read frequency, write it out to 
> HFiles during compaction, and note these stats when opening the region, 
> perhaps by reading all meta blocks of region HFiles when opening. Otherwise, 
> close the file after reading and open again on demand. Need to be careful not 
> to use ARC or equivalent as cache replacement strategy as it is encumbered. 
> The size of the cache can be determined at startup after detecting the 
> underlying filesystem. Eg. setCacheSize(VERY_LARGE_CONSTANT) if (fs 
> instanceof DistributedFileSystem), so we don't lose much when on HDFS still.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20431) Store commit transaction for filesystems that do not support an atomic rename

2018-04-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20431:
---
Description: 
HBase expects the Hadoop filesystem implementation to support an atomic 
rename() operation. HDFS does. The S3 backed filesystems do not. The 
fundamental issue is the non-atomic and eventually consistent nature of the S3 
service. An S3 bucket is not a filesystem. S3 is not always immediately 
read-your-writes. Object metadata can be temporarily inconsistent just after 
new objects are stored. There can be a settling period to ride over. 
Renaming/moving objects from one path to another are copy operations with 
O(file) complexity and O(data) time followed by a series of deletes with 
O(file) complexity. Failures at any point prior to completion will leave the 
operation in an inconsistent state. The missing atomic rename semantic opens 
opportunities for corruption and data loss, which may or may not be repairable 
with HBCK.

Handling this at the HBase level could be done with a new multi-step filesystem 
transaction framework. Call it StoreCommitTransaction. SplitTransaction and 
MergeTransaction are well established cases where even on HDFS we have 
non-atomic filesystem changes and are our implementation template for the new 
work. In this new StoreCommitTransaction we'd be moving flush and compaction 
temporaries out of the temporary directory into the region store directory. On 
HDFS the implementation would be easy. We can rely on the filesystem's atomic 
rename semantics. On S3 it would be work: First we would build the list of 
objects to move, then copy each object into the destination, and then finally 
delete all objects at the original path. We must handle transient errors with 
retry strategies appropriate for the action at hand. We must handle serious or 
permanent errors where the RS doesn't need to be aborted with a rollback that 
cleans it all up. Finally, we must handle permanent errors where the RS must be 
aborted with a rollback during region open/recovery. Note that after all 
objects have been copied and we are deleting obsolete source objects we must 
roll forward, not back. To support recovery after an abort we must utilize the 
WAL to track transaction progress. Put markers in for StoreCommitTransaction 
start and completion state, with details of the store file(s) involved, so it 
can be rolled back during region recovery at open. This will be significant 
work in HFile, HStore, flusher, compactor, and HRegion. Wherever we use HDFS's 
rename now we would substitute the running of this new multi-step filesystem 
transaction.

We need to determine this for certain, but I believe on S3 the PUT or multipart 
upload of an object must complete before the object is visible, so we don't 
have to worry about the case where an object is visible before fully uploaded 
as part of normal operations. So an individual object copy will either happen 
entirely and the target will then become visible, or it won't and the target 
won't exist.

S3 has an optimization, PUT COPY 
(https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html), which 
the AmazonClient embedded in S3A utilizes for moves. When designing the 
StoreCommitTransaction be sure to allow for filesystem implementations that 
leverage a server side copy operation. Doing a get-then-put should be optional. 
(Not sure Hadoop has an interface that advertises this capability yet; we can 
add one if not.)

  was:
HBase expects the Hadoop filesystem implementation to support an atomic 
rename() operation. HDFS does. The S3 backed filesystems do not. The 
fundamental issue is the non-atomic and eventually consistent nature of the S3 
service. An S3 bucket is not a filesystem. S3 is not always immediately 
read-your-writes. Object metadata can be temporarily inconsistent just after 
new objects are stored. There can be a settling period to ride over. 
Renaming/moving objects from one path to another are copy operations with 
O(file) complexity and O(data) time followed by a series of deletes with 
O(file) complexity. Failures at any point prior to completion will leave the 
operation in an inconsistent state. The missing atomic rename semantic opens 
opportunities for corruption and data loss, which may or may not be repairable 
with HBCK.

Handling this at the HBase level could be done with a new multi-step filesystem 
transaction framework. Call it StoreCommitTransaction. SplitTransaction and 
MergeTransaction are well established cases where even on HDFS we have 
non-atomic filesystem changes and are our implementation template for the new 
work. In this new StoreCommitTransaction we'd be moving flush and compaction 
temporaries out of the temporary directory into the region store directory. On 
HDFS the implementation would be easy. We can rely on the filesystem's atomic 
rename semantics. On S3 it would be 

[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439925#comment-16439925
 ] 

Sean Busbey commented on HBASE-20332:
-

those shellcheck and whitespace things should be easy enough to fix. I'll take 
care of that after I have something from the cluster testing to incorporate.

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides) then our 
> inclusion of *some but not all* hadoop classes causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20428) [shell] list.first method in HBase2 shell fails with "NoMethodError: undefined method `first' for nil:NilClass"

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439928#comment-16439928
 ] 

Sean Busbey commented on HBASE-20428:
-

what's the commit hash this shell is based on? this looks like before 
HBASE-20276 landed.

> [shell] list.first method in HBase2 shell fails with "NoMethodError: 
> undefined method `first' for nil:NilClass"
> ---
>
> Key: HBASE-20428
> URL: https://issues.apache.org/jira/browse/HBASE-20428
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: Arpit Jindal
>Priority: Minor
>
> list.first in hbase shell does not list the first table
> {code}
> hbase(main):001:0> list.first
> TABLE
> IntegrationTestBigLinkedList_20180331004141
> IntegrationTestBigLinkedList_20180403004104
> IntegrationTestBigLinkedList_20180409123038
> IntegrationTestBigLinkedList_20180409172704
> IntegrationTestBigLinkedList_20180410103309
> IntegrationTestBigLinkedList_20180411151159
> IntegrationTestBigLinkedList_20180411172500
> IntegrationTestBigLinkedList_20180412095403
> 8 row(s)
> Took 0.5432 seconds
> NoMethodError: undefined method `first' for nil:NilClass
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20428) [shell] list.first method in HBase2 shell fails with "NoMethodError: undefined method `first' for nil:NilClass"

2018-04-16 Thread Arpit Jindal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Jindal resolved HBASE-20428.
--
Resolution: Duplicate

Fixed in HBASE-20276

> [shell] list.first method in HBase2 shell fails with "NoMethodError: 
> undefined method `first' for nil:NilClass"
> ---
>
> Key: HBASE-20428
> URL: https://issues.apache.org/jira/browse/HBASE-20428
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: Arpit Jindal
>Priority: Minor
>
> list.first in hbase shell does not list the first table
> {code}
> hbase(main):001:0> list.first
> TABLE
> IntegrationTestBigLinkedList_20180331004141
> IntegrationTestBigLinkedList_20180403004104
> IntegrationTestBigLinkedList_20180409123038
> IntegrationTestBigLinkedList_20180409172704
> IntegrationTestBigLinkedList_20180410103309
> IntegrationTestBigLinkedList_20180411151159
> IntegrationTestBigLinkedList_20180411172500
> IntegrationTestBigLinkedList_20180412095403
> 8 row(s)
> Took 0.5432 seconds
> NoMethodError: undefined method `first' for nil:NilClass
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dequan Chen updated HBASE-7129:
---
Attachment: HBASE-7129.0004.patch

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.0004.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-16 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe reassigned HBASE-20403:


Assignee: Umesh Agashe

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.
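The first proposed fix amounts to a guard consulted before scheduling the prefetch. The nested FileStatus stub below merely imitates Hadoop's FileStatus#isEncrypted(), so treat the names as illustrative:

```java
// Sketch of the first proposed fix: consult an isEncrypted flag before
// scheduling a prefetch. In HBase the flag would come from Hadoop's
// FileStatus#isEncrypted(); the nested stub here just stands in for it.
public class PrefetchGuard {
  // Stand-in for org.apache.hadoop.fs.FileStatus.
  static final class FileStatus {
    private final boolean encrypted;
    FileStatus(boolean encrypted) { this.encrypted = encrypted; }
    boolean isEncrypted() { return encrypted; }
  }

  // Prefetch only when enabled and the file is not encrypted, since the
  // size-on-disk calculations go wrong for encrypted files.
  static boolean shouldPrefetch(boolean prefetchOnOpen, FileStatus status) {
    return prefetchOnOpen && !status.isEncrypted();
  }

  public static void main(String[] args) {
    System.out.println(shouldPrefetch(true, new FileStatus(false))); // true
    System.out.println(shouldPrefetch(true, new FileStatus(true)));  // false
  }
}
```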



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20430) Improve store file management for non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-20430:
--

 Summary: Improve store file management for non-HDFS filesystems
 Key: HBASE-20430
 URL: https://issues.apache.org/jira/browse/HBASE-20430
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell


HBase keeps a file open for every active store file so no additional round 
trips to the NameNode are needed after the initial open. HDFS internally 
multiplexes open files, but the Hadoop S3 filesystem implementations do not, 
or, at least, not as well. As the bulk of data under management increases we 
observe the required number of concurrently open connections will rise, and 
expect it will eventually exhaust a limit somewhere (the client, the OS file 
descriptor table or open file limits, or the S3 service).

Initially we can simply introduce an option to close every store file after the 
reader has finished, and determine the performance impact. Use cases backed by 
non-HDFS filesystems will already have to cope with a different read 
performance profile. Based on experiments with the S3 backed Hadoop 
filesystems, notably S3A, even with aggressively tuned options simple reads can 
be very slow when there are blockcache misses, 15-20 seconds observed for Get 
of a single small row, for example. We expect extensive use of the BucketCache 
to mitigate in this application already. Could be backed by offheap storage, 
but more likely a large number of cache files managed by the file engine on 
local SSD storage. If misses are already going to be super expensive, then the 
motivation to do more than simply open store files on demand is largely absent.

Still, we could employ a predictive cache. Where frequent access to a given 
store file (or, at least, its store) is predicted, keep a reference to the 
store file open. Can keep statistics about read frequency, write it out to 
HFiles during compaction, and note these stats when opening the region, perhaps 
by reading all meta blocks of region HFiles when opening. Otherwise, close the 
file after reading and open again on demand. Need to be careful not to use ARC 
or equivalent as cache replacement strategy as it is encumbered. The size of 
the cache can be determined at startup after detecting the underlying 
filesystem. Eg. setCacheSize(VERY_LARGE_CONSTANT) if (fs instanceof 
DistributedFileSystem), so we don't lose much when on HDFS still.
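The open-on-demand idea with a bounded set of kept-open readers can be sketched with a plain LRU (deliberately not ARC, which as noted is encumbered). The String "reader" handles and all names below are illustrative only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of open-on-demand store file readers with a small LRU of files
// kept open, sized per filesystem (huge on HDFS, small on S3-like stores).
// LinkedHashMap's access-order mode gives plain LRU eviction.
public class ReaderCache {
  private final Map<String, String> open;

  ReaderCache(int maxOpen) {
    this.open = new LinkedHashMap<String, String>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        // "Close" the least-recently-used reader once over capacity.
        return size() > maxOpen;
      }
    };
  }

  // Return an open reader for the path, opening one on demand.
  String get(String path) {
    return open.computeIfAbsent(path, p -> "reader:" + p);
  }

  int openCount() {
    return open.size();
  }
}
```

A real implementation would pick maxOpen at startup after detecting the underlying filesystem, e.g. a very large value when fs instanceof DistributedFileSystem.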



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439982#comment-16439982
 ] 

Hadoop QA commented on HBASE-7129:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  2m 
58s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  2m 
59s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-7129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919278/HBASE-7129.0004.patch 
|
| Optional Tests |  asflicense  refguide  |
| uname | Linux 9c426d205838 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 44ebd28093 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12479/artifact/patchprocess/branch-site/book.html
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12479/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 93 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12479/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.0004.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19994) Create a new class for RPC throttling exception, make it retryable.

2018-04-16 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19994:
-
Release Note: A new RpcThrottlingException deprecates ThrottlingException. 
The new RpcThrottlingException is a retryable Exception that clients will retry 
when the RPC throttling quota is exceeded. The deprecated ThrottlingException is 
a non-retryable exception.

> Create a new class for RPC throttling exception, make it retryable. 
> 
>
> Key: HBASE-19994
> URL: https://issues.apache.org/jira/browse/HBASE-19994
> Project: HBase
>  Issue Type: Improvement
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
> Attachments: HBASE-19994-master-v01.patch, 
> HBASE-19994-master-v02.patch, HBASE-19994-master-v03.patch, 
> HBASE-19994-master-v04.patch, HBASE-19994-master-v05.patch, 
> HBASE-19994-master-v06.patch, HBASE-19994-master-v07.patch
>
>
> Based on a discussion at dev mailing list.
>  
> {code:java}
> Thanks Andrew.
> +1 for the second option, I will create a jira for this change.
> Huaxiang
> On Feb 9, 2018, at 1:09 PM, Andrew Purtell  wrote:
> We have
> public class ThrottlingException extends QuotaExceededException
> public class QuotaExceededException extends DoNotRetryIOException
> Let the storage quota limits throw QuotaExceededException directly (based
> on DNRIOE). That seems fine.
> However, ThrottlingException is thrown as a result of a temporal quota,
> so it is inappropriate for this to inherit from DNRIOE, it should inherit
> IOException instead so the client is allowed to retry until successful, or
> until the retry policy is exhausted.
> We are in a bit of a pickle because we've released with this inheritance
> hierarchy, so to change it we will need a new minor, or we will want to
> deprecate ThrottlingException and use a new exception class instead, one
> which does not inherit from DNRIOE.
> On Feb 7, 2018, at 9:25 AM, Huaxiang Sun  wrote:
> Hi Mike,
>   You are right. For RPC throttling, it is definitely retryable. For storage 
> quota, I think it will fail fast (non-retryable).
>   We probably need to separate these two types of exceptions, I will do some 
> more research and follow up.
>   Thanks,
>   Huaxiang
> On Feb 7, 2018, at 9:16 AM, Mike Drob  wrote:
> I think, philosophically, there can be two kinds of QEE -
> For throttling, we can retry. The quota is a temporal quota - you have done
> too many operations this minute, please try again next minute and
> everything will work.
> For storage, we shouldn't retry. The quota is a fixed quota - you have
> exceeded your allotted disk space, please do not try again until you have
> remedied the situation.
> Our current usage conflates the two, sometimes it is correct, sometimes not.
> On Wed, Feb 7, 2018 at 11:00 AM, Huaxiang Sun  wrote:
> Hi Stack,
>  I run into a case that a mapreduce job in hive cannot finish because
> it runs into a QEE.
> I need to look into the hive mr task to see if QEE is not handled
> correctly in hbase code or in hive code.
> I am thinking that if  QEE is a retryable exception, then it should be
> taken care of by the hbase code.
> I will check more and report back.
> Thanks,
> Huaxiang
> On Feb 7, 2018, at 8:23 AM, Stack  wrote:
> QEE being a DNRIOE seems right on the face of it.
> But if throttling, a DNRIOE is inappropriate. Where you seeing a QEE in a
> throttling scenario Huaxiang?
> Thanks,
> S
> {code}
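The retry semantics debated in the thread above can be sketched in client-side code. The exception classes below are simplified stand-ins for the real HBase hierarchy (the actual classes live in org.apache.hadoop.hbase and have richer constructors); the point is only that a temporal-quota exception is retried while a DoNotRetryIOException fails fast:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Simplified stand-ins for the hierarchy discussed above (illustrative only).
class DoNotRetryIOException extends IOException {}
class QuotaExceededException extends DoNotRetryIOException {}   // storage quota: fail fast
class RpcThrottlingException extends IOException {}             // temporal quota: retryable

public class RetryPolicySketch {
    /**
     * Runs up to maxAttempts attempts; each element of `outcomes` is the
     * exception an attempt produced, or null for success. Retryable
     * IOExceptions are retried; DoNotRetryIOException propagates immediately.
     */
    static int runWithRetries(int maxAttempts, Iterator<IOException> outcomes)
            throws IOException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            IOException result = outcomes.hasNext() ? outcomes.next() : null;
            if (result == null) {
                return attempt;                 // success on this attempt
            }
            if (result instanceof DoNotRetryIOException) {
                throw result;                   // storage quota exceeded: do not retry
            }
            // RpcThrottlingException (plain IOException): retry on the next attempt
        }
        throw new IOException("retries exhausted");
    }

    public static void main(String[] args) throws IOException {
        // Two throttled attempts, then success: the client succeeds on attempt 3.
        List<IOException> outcomes = Arrays.<IOException>asList(
            new RpcThrottlingException(), new RpcThrottlingException(), null);
        System.out.println(runWithRetries(5, outcomes.iterator())); // prints 3
    }
}
```

This is a sketch of the policy, not the actual client retry machinery; real clients also apply backoff between attempts.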





[jira] [Updated] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dequan Chen updated HBASE-7129:
---
Attachment: HBASE-7129.0003.patch

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439909#comment-16439909
 ] 

Hadoop QA commented on HBASE-7129:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 5s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 36s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 3m 22s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 50s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-7129 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919271/HBASE-7129.0003.patch |
| Optional Tests | asflicense refguide |
| uname | Linux d03d906e9984 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 44ebd28093 |
| maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/12478/artifact/patchprocess/branch-site/book.html |
| whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/12478/artifact/patchprocess/whitespace-eol.txt |
| refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/12478/artifact/patchprocess/patch-site/book.html |
| Max. process+thread count | 83 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/12478/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439957#comment-16439957
 ] 

Andrew Purtell commented on HBASE-20403:


What about prefetch triggers this specifically?

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and do 
> not prefetch if it is.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.
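The first proposed fix amounts to a guard before scheduling prefetch-on-open. Below is a minimal sketch using a stub in place of Hadoop's FileStatus (the real check would call FileStatus#isEncrypted() on the HFile's status; names here are illustrative):

```java
public class PrefetchGuardSketch {
    // Minimal stand-in for the relevant bit of Hadoop's FileStatus
    // (hypothetical; the real class is org.apache.hadoop.fs.FileStatus).
    static class FileStatusStub {
        final boolean encrypted;
        FileStatusStub(boolean encrypted) { this.encrypted = encrypted; }
        boolean isEncrypted() { return encrypted; }
    }

    /** Proposed guard: never prefetch-on-open when the file is encrypted. */
    static boolean shouldPrefetch(boolean prefetchOnOpen, FileStatusStub status) {
        return prefetchOnOpen && !status.isEncrypted();
    }

    public static void main(String[] args) {
        System.out.println(shouldPrefetch(true, new FileStatusStub(true)));  // encrypted: skip
        System.out.println(shouldPrefetch(true, new FileStatusStub(false))); // plain: prefetch
    }
}
```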





[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-04-16 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1643#comment-1643
 ] 

Zach York commented on HBASE-20429:
---

[~apurtell] I'll look into the specifics in a little bit, but I feel like 
relying less on the FS (atomic renames for example) might be the right way to 
go here. A while back there was some work done (or proposed) to have HBase 
handle the file metadata somewhere to avoid the necessity of renames (HBase 
would update the path/location in this table so in effect, the rename would be 
atomic). I didn't spend a ton of time looking for the old issues, but I think 
this one was related: HBASE-14090. 

[~stack] and [~uagashe] did some initial planning on it and I planned to help 
out, but got sidelined by other stuff. They might be able to chime in here as 
well.
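The metadata-table idea described above can be caricatured in a few lines: the "rename" becomes an atomic pointer update in a metadata map, independent of the underlying filesystem's rename semantics. Names and structure here are entirely hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;

public class MetadataRenameSketch {
    // Hypothetical illustration: instead of relying on an FS-level rename
    // (not atomic on S3), track each store file's current location in a
    // metadata table and "rename" by atomically updating that pointer.
    static final ConcurrentHashMap<String, String> fileLocation = new ConcurrentHashMap<>();

    /** Committing a finished store file is a single metadata update. */
    static void commitStoreFile(String logicalName, String finalPath) {
        fileLocation.put(logicalName, finalPath);  // atomic regardless of the FS
    }

    public static void main(String[] args) {
        fileLocation.put("cf1/hfile-0001", "/tmp/.writing/hfile-0001"); // being written
        commitStoreFile("cf1/hfile-0001", "/data/cf1/hfile-0001");      // "rename" on commit
        System.out.println(fileLocation.get("cf1/hfile-0001"));
    }
}
```

In the real proposal this map would itself live in a durable HBase-managed table, which is where the hard problems (bootstrapping, recovery) sit.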

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3-backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Dequan Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440076#comment-16440076
 ] 

Dequan Chen commented on HBASE-7129:


To [~mdrob] ,

It appears that the newly submitted [^HBASE-7129.0004.patch] is good. Do you 
think we can conclude the present Jira now? If not, what else do I need to 
do? Thanks.

Have a good night!

Dequan

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.0004.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-19963) TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440119#comment-16440119
 ] 

Hudson commented on HBASE-19963:


Results for branch branch-2
[build #622 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> TestFSHDFSUtils assumes wrong default port for Hadoop 3.0.1+
> 
>
> Key: HBASE-19963
> URL: https://issues.apache.org/jira/browse/HBASE-19963
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Mike Drob
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-19963.master.001.patch, 
> HBASE-19963.master.002.patch
>
>
> We try to accommodate HDFS changing ports when testing if it is the same FS 
> in our tests:
> https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java#L156-L162
> {code}
> if (isHadoop3) {
>   // Hadoop 3.0.0 alpha1+ change default nn port to 9820. See HDFS-9427
>   testIsSameHdfs(9820);
> } else {
>   // pre hadoop 3.0.0 defaults to port 8020
>   testIsSameHdfs(8020);
> }
> {code}
> But in Hadoop 3.0.1, they decided to go back to the old port - see HDFS-12990.
> So our tests will fail against the snapshot and against future releases.
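A version-aware sketch of what the test needs instead: only Hadoop 3.0.0 (including its alpha/beta releases) used port 9820 per HDFS-9427, while 3.0.1+ reverted to 8020 per HDFS-12990. The version parsing below is deliberately simplified and illustrative, not the actual patch:

```java
public class DefaultNnPortSketch {
    /**
     * Expected default NameNode port for a Hadoop version string.
     * Simplified: any "3.0.0*" version (3.0.0 and its alphas/betas) is 9820;
     * everything else, including 3.0.1+, is 8020.
     */
    static int expectedDefaultNnPort(String hadoopVersion) {
        return hadoopVersion.startsWith("3.0.0") ? 9820 : 8020;
    }

    public static void main(String[] args) {
        System.out.println(expectedDefaultNnPort("2.8.5"));        // 8020
        System.out.println(expectedDefaultNnPort("3.0.0-alpha1")); // 9820
        System.out.println(expectedDefaultNnPort("3.0.1"));        // 8020
    }
}
```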





[jira] [Commented] (HBASE-20417) Do not read wal entries when peer is disabled

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440118#comment-16440118
 ] 

Hudson commented on HBASE-20417:


Results for branch branch-2
[build #622 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/622//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Do not read wal entries when peer is disabled
> -
>
> Key: HBASE-20417
> URL: https://issues.apache.org/jira/browse/HBASE-20417
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20417-v1.patch, HBASE-20417.patch
>
>
> Now, the disabled check is in ReplicationSourceShipper. If peer is disabled, 
> then we will not take entry batch from ReplicationSourceWALReader. But 
> ReplicationSourceWALReader will keep reading wal entries until the buffer is 
> full.
> For serial replication, the canPush check is in ReplicationSourceWALReader, 
> so even when we disabled the peer during the modification for a serial peer, 
> we could still run into the SerialReplicationChecker. Theoretically there 
> will be no problem, since in the procedure we will only update last pushed 
> sequence ids to a greater value. If canPush is true then a greater value does 
> not make any difference, if canPush is false then we are still safe since the 
> ReplicationSourceWALReader will be blocked.
> But this still makes me a little nervous, and also, it does not make sense to 
> still read wal entries when the peer is disabled. So let's change the 
> behavior.
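The proposed behavior change can be modeled in a few lines: the reader itself checks the peer state and reads nothing while the peer is disabled, instead of filling its buffer. This is an illustrative model, not the actual ReplicationSourceWALReader code:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class WalReaderSketch {
    // Hypothetical minimal peer abstraction for the sketch.
    interface Peer { boolean isEnabled(); }

    /**
     * Reads up to maxBatch entries from the WAL queue. With the proposed
     * change, a disabled peer means the reader takes nothing at all,
     * rather than buffering entries the shipper won't take.
     */
    static int readBatch(Queue<String> wal, Peer peer, int maxBatch) {
        if (!peer.isEnabled()) {
            return 0;  // new behavior: do not read wal entries while disabled
        }
        int read = 0;
        while (read < maxBatch && !wal.isEmpty()) {
            wal.poll();
            read++;
        }
        return read;
    }

    public static void main(String[] args) {
        Queue<String> wal = new ArrayDeque<>(Arrays.asList("e1", "e2", "e3"));
        System.out.println(readBatch(wal, () -> false, 2)); // 0: peer disabled
        System.out.println(readBatch(wal, () -> true, 2));  // 2: peer enabled
    }
}
```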





[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-04-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440006#comment-16440006
 ] 

Andrew Purtell commented on HBASE-20429:


bq. A while back there was some work done (or proposed) to have HBase handle 
the file metadata somewhere to avoid the necessity of renames (HBase would 
update the path/location in this table so in effect, the rename would be 
atomic).

This is interesting. Of course I missed it, mired in 0.98 fleet maintenance. :-/ 
We could do this instead of, or in addition to, HBASE-20431, which proposes 
something like SplitTransaction but for commits of store files after compaction 
or flush.

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3-backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.





[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440030#comment-16440030
 ] 

Sean Busbey commented on HBASE-20332:
-

things to check still

# the export snapshot failure
# WALPlayer / verifyrep
# Use with an MR job that's not built-in
# Make sure we're not doing something with the Configuration object that's 
causing the classloader issue. See if there's an easy workaround to avoid the 
extra HADOOP_CLASSPATH entry.
# ref guide addition

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides) then our 
> inclusion of *some but not all* hadoop classes then causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space





[jira] [Commented] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2018-04-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440043#comment-16440043
 ] 

Mike Drob commented on HBASE-7129:
--

Can you add a similarly detailed explanation for check-and-delete? It's fine to 
refer back ("see check-and-put") for the common pieces, but good to explain the 
additional detail.

So that I better understand the limitations - the row name appears in both the 
URL and base-64 encoded in the payload? So we currently can only operate on 
rows that consist entirely of url allowable characters? If this is the case, 
can you add it as a known limitation? We should file a JIRA to figure out how 
to improve that generally.

Similarly for the permutations on the check-and-delete stuff? Because the row 
key, column, etc. are in the URL? Yeah, probably a code issue that needs to be 
addressed here.
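The dual encoding being asked about can be demonstrated with plain JDK calls (hypothetical row key, for illustration only): the same row key is percent-encoded when it appears in the URL path but base-64 encoded in the request payload, so the practical constraint is whatever the URL side of the encoding can carry through:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RestRowKeySketch {
    public static void main(String[] args) throws Exception {
        String rowKey = "user/42 ";  // contains characters that need escaping in a URL

        // In the URL path, the row key would appear percent-encoded...
        String inUrl = URLEncoder.encode(rowKey, StandardCharsets.UTF_8.name());
        // ...while in the XML/JSON payload the same row key is base-64 encoded.
        String inPayload = Base64.getEncoder()
                .encodeToString(rowKey.getBytes(StandardCharsets.UTF_8));

        System.out.println(inUrl);      // user%2F42+
        System.out.println(inPayload);  // dXNlci80MiA=
    }
}
```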

> Need documentation for REST atomic operations (HBASE-4720)
> --
>
> Key: HBASE-7129
> URL: https://issues.apache.org/jira/browse/HBASE-7129
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation, REST
>Reporter: Joe Pallas
>Assignee: Dequan Chen
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-7129.0001.patch, HBASE-7129.0002.patch, 
> HBASE-7129.0003.patch, HBASE-7129.0004.patch, HBASE-7129.patch
>
>
> HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
> but the REST documentation (in the package summary) needs to be updated so 
> people know that this feature exists and how to use it.
> http://wiki.apache.org/hadoop/Hbase/Stargate
> http://hbase.apache.org/book/rest.html





[jira] [Commented] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440060#comment-16440060
 ] 

Hudson commented on HBASE-20364:


Results for branch branch-1.2
[build #302 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/302/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/302//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/302//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/302//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 1.4.4, 2.0.1
>
> Attachments: HBASE-20364.0.patch
>
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-04-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440079#comment-16440079
 ] 

Sean Busbey commented on HBASE-20332:
-

Okay, the export snapshot failure is due to a polluted classpath in my YARN 
installation. I would like to confirm I can work around it, if only so that I 
can check the actual command, but I think that command is probably fine.

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20332.0.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides) then our 
> inclusion of *some but not all* hadoop classes then causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) if you don't use "user classpath first" then all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyways so we're 
> just wasting space




