[jira] [Commented] (HBASE-19064) Synchronous replication for HBase

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491515#comment-16491515
 ] 

Hudson commented on HBASE-19064:


Results for branch HBASE-19064
[build #142 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/142/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/142//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/142//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/142//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> The team from Alibaba gave a presentation at HBaseCon Asia about synchronous 
> replication for HBase. We (Xiaomi) think this is a very useful feature for 
> HBase, so we want to bring it into the community version.
> This is a big feature, so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491494#comment-16491494
 ] 

Hudson commented on HBASE-20645:


Results for branch branch-2
[build #783 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/783/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/783//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/783//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/783//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and an 
> exception (e.g. MasterNotRunning) occurs while checking security 
> capabilities, then instead of seeing the underlying exception, the user sees 
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}
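
For illustration, a minimal Java sketch of the type mismatch described above (the class name AclTableNameExample is just for this example; the HBase client API calls are real): TableName.valueOf only has String and byte[] overloads, so the TableName constant has to be converted with getNameAsString first.

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.security.access.AccessControlLists;

public class AclTableNameExample {
  public static void main(String[] args) {
    // ACL_TABLE_NAME is already a TableName, not a String.
    TableName acl = AccessControlLists.ACL_TABLE_NAME;

    // TableName.valueOf(...) only accepts String or byte[], which is why
    // passing the TableName object itself blows up in the JRuby shell.
    TableName roundTripped = TableName.valueOf(acl.getNameAsString());

    System.out.println(roundTripped); // hbase:acl
  }
}
{code}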



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20079) Report all the new test classes missing HBaseClassTestRule in one patch

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-20079.

Resolution: Later

> Report all the new test classes missing HBaseClassTestRule in one patch
> ---
>
> Key: HBASE-20079
> URL: https://issues.apache.org/jira/browse/HBASE-20079
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Trivial
>
> Currently, if a single patch contains both new small and new large tests 
> without HBaseClassTestRule, the QA bot reports only the small test class as 
> missing HBaseClassTestRule, not the large test.
> All new test classes missing HBaseClassTestRule should be reported in the 
> same QA run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20081) TestDisableTableProcedure sometimes hung in MiniHBaseCluster#waitUntilShutDown

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-20081.

Resolution: Cannot Reproduce

> TestDisableTableProcedure sometimes hung in MiniHBaseCluster#waitUntilShutDown
> --
>
> Key: HBASE-20081
> URL: https://issues.apache.org/jira/browse/HBASE-20081
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Major
>
> https://builds.apache.org/job/HBase-2.0-hadoop3-tests/lastCompletedBuild/org.apache.hbase$hbase-server/testReport/org.apache.hadoop.hbase.master.procedure/TestDisableTableProcedure/org_apache_hadoop_hbase_master_procedure_TestDisableTableProcedure/
>  was one recent occurrence.
> I noticed two things in test output:
> {code}
> 2018-02-25 18:12:45,053 WARN  [Time-limited test-EventThread] 
> master.RegionServerTracker(136): asf912.gq1.ygridcore.net,45649,1519582305777 
> is not online or isn't known to the master.The latter could be caused by a 
> DNS misconfiguration.
> {code}
> Since DNS misconfiguration was very unlikely on Apache Jenkins nodes, the 
> above should not have been logged.
> {code}
> 2018-02-25 18:16:51,531 WARN  [master/asf912:0.Chore.1] 
> master.CatalogJanitor(127): Failed scan of catalog table
> java.io.IOException: connection is closed
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.getMetaHTable(MetaTableAccessor.java:263)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:761)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:680)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:675)
>   at 
> org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:188)
>   at 
> org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:140)
>   at 
> org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:246)
>   at 
> org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:119)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
> {code}
> The above was possibly related to the lost region server.
> I searched the test output of a successful run; neither of the two issues 
> above appears there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20578) Support region server group in target cluster

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20578:
---
Component/s: Replication

> Support region server group in target cluster
> -
>
> Key: HBASE-20578
> URL: https://issues.apache.org/jira/browse/HBASE-20578
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Albert Lee
>Priority: Major
>
> When source tables belong to non-default region server group(s) and there are 
> region server group counterparts in the target cluster, we should support 
> replicating to the target cluster using the region server group mapping.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20257:
---
Component/s: spark

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> {code}
> Dependency on jsr305 should be dropped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18116) Replication source in-memory accounting should not include bulk transfer hfiles

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491479#comment-16491479
 ] 

Hadoop QA commented on HBASE-18116:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
57s{color} | {color:blue} hbase-server in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
37s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-18116 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925230/HBASE-18116.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 286bf86741d0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 874f1e8e6a |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12969/testReport/ |
| Max. process+thread count | 4854 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491477#comment-16491477
 ] 

Hadoop QA commented on HBASE-19722:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
52s{color} | {color:blue} hbase-server in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
11s{color} | {color:red} hbase-server: The patch generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hbase-server generated 1 new + 2 unchanged - 0 fixed = 
3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}156m  0s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}210m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  org.apache.hadoop.hbase.util.LossyCounting.sweep() makes inefficient use 
of keySet iterator instead of entrySet iterator  At LossyCounting.java:keySet 
iterator instead of entrySet iterator  At LossyCounting.java:[line 101] |
| Failed junit tests | hadoop.hbase.master.procedure.TestTruncateTableProcedure 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-19722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925226/HBASE-19722.master.014.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  

[jira] [Commented] (HBASE-20478) move import checks from hbaseanti to checkstyle

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491458#comment-16491458
 ] 

Hudson commented on HBASE-20478:


Results for branch HBASE-20478
[build #6 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/6/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/6//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/6//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/6//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> move import checks from hbaseanti to checkstyle
> ---
>
> Key: HBASE-20478
> URL: https://issues.apache.org/jira/browse/HBASE-20478
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Minor
> Attachments: HBASE-20478.0.patch, HBASE-20478.1.patch, 
> HBASE-20478.2.patch, HBASE-20478.3.patch, HBASE-20478.4.patch, 
> HBASE-20478.WIP.2.patch, HBASE-20478.WIP.2.patch, HBASE-20478.WIP.patch, 
> HBASE-anti-check.patch
>
>
> Came up in discussion on HBASE-20332. Our check of "don't do this" things in 
> the codebase doesn't log the specifics of complaints anywhere, which forces 
> those who want to follow up to reverse-engineer the check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491437#comment-16491437
 ] 

Sean Busbey commented on HBASE-18118:
-

+1 for a new JIRA.

> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as the default storage policy when none is configured, 
> because HDFS supports no such policy. This policy name was probably available 
> in a precommit or early version of the HDFS-side support for heterogeneous 
> storage. Now the best default is 'HOT'. 
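
For illustration, a small sketch of applying a valid HDFS storage policy such as 'HOT' (assuming a Hadoop version where FileSystem#setStoragePolicy is available; the path is just an example, and this is not the HBase WAL code itself). Passing 'NONE' here would fail because HDFS defines no policy by that name.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path walDir = new Path("/hbase/WALs"); // example path only

    // 'HOT' is a real HDFS storage policy; 'NONE' is not, so it cannot
    // serve as the unconfigured default.
    fs.setStoragePolicy(walDir, "HOT");
  }
}
{code}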



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18116) Replication source in-memory accounting should not include bulk transfer hfiles

2018-05-25 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-18116:

Attachment: HBASE-18116.master.002.patch

> Replication source in-memory accounting should not include bulk transfer 
> hfiles
> ---
>
> Key: HBASE-18116
> URL: https://issues.apache.org/jira/browse/HBASE-18116
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-18116.master.001.patch, 
> HBASE-18116.master.002.patch
>
>
> In ReplicationSourceWALReaderThread we maintain a global quota on enqueued 
> replication work, to prevent OOM from queuing up too many edits in on-heap 
> queues. When calculating the size of a given replication queue entry, if it 
> has associated hfiles (is a bulk load to be replicated as a batch of hfiles), 
> we get the file sizes and include the sum. We then apply that result to the 
> quota. This isn't quite right. Those hfiles will be pulled by the sink as a 
> file copy, not pushed by the source. The cells in those files are not queued 
> in memory at the source and therefore shouldn't be counted against the quota.
> Related, the sum of the hfile sizes is also included when checking if queued 
> work exceeds the configured replication queue capacity, which is 64 MB by 
> default. HFiles are commonly much larger than this. 
> So what happens is that when we encounter a bulk load replication entry, 
> typically both the quota and capacity limits are exceeded, we break out of 
> the loops, and send right away. What is transferred on the wire via HBase 
> RPC, though, has only a partial relationship to the calculation. 
> Depending on how you look at it, it makes sense to factor hfile sizes 
> against replication queue capacity limits. The sink will be occupied 
> transferring those files at the HDFS level. Anyway, this is how we have been 
> doing it and it is too late to change now. I do not, however, think it is 
> correct to apply hfile sizes against a quota for in-memory state on the 
> source. The source doesn't queue or even transfer those bytes. 
> Something I noticed while working on HBASE-18027.
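
A minimal sketch of the accounting split argued for above; the class and field names here are illustrative, not the actual ReplicationSourceWALReaderThread code. Only the edit payload counts against the in-memory quota, while bulk-load hfile sizes still factor into the per-batch capacity decision.

{code:java}
/**
 * Illustrative only: separates the in-memory quota (cells actually buffered
 * on the source) from the per-batch capacity estimate (which may still
 * include bulk-load hfile sizes).
 */
class ReplicationAccountingSketch {
  private final long bufferQuotaBytes;    // limit on edits buffered on heap
  private final long batchCapacityBytes;  // e.g. 64 MB per shipped batch
  private long bufferedBytes;

  ReplicationAccountingSketch(long bufferQuotaBytes, long batchCapacityBytes) {
    this.bufferQuotaBytes = bufferQuotaBytes;
    this.batchCapacityBytes = batchCapacityBytes;
  }

  /** Cells are queued on heap, so they count against the memory quota. */
  boolean tryAccountEdit(long editHeapSizeBytes) {
    if (bufferedBytes + editHeapSizeBytes > bufferQuotaBytes) {
      return false; // back off: too much already queued in memory
    }
    bufferedBytes += editHeapSizeBytes;
    return true;
  }

  /**
   * HFiles referenced by a bulk-load entry are pulled by the sink as a file
   * copy and never buffered on the source, so they only factor into the
   * per-batch capacity decision, not into the memory quota above.
   */
  boolean batchFull(long editBytesInBatch, long hfileBytesInBatch) {
    return editBytesInBatch + hfileBytesInBatch >= batchCapacityBytes;
  }
}
{code}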



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20582) Bump up JRuby version because of some reported vulnerabilities

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491427#comment-16491427
 ] 

Sean Busbey commented on HBASE-20582:
-

+1 presuming QABot doesn't find something surprising in the shell tests.

> Bump up JRuby version because of some reported vulnerabilities
> --
>
> Key: HBASE-20582
> URL: https://issues.apache.org/jira/browse/HBASE-20582
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Ankit Singhal
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20582.002.patch, HBASE-20582.addendum.patch, 
> HBASE-20582.patch
>
>
> There are some vulnerabilities reported with two of the libraries used in 
> HBase.
> {code:java}
> Jruby(version:9.1.10.0):
> CVE-2009-5147
> CVE-2013-4363
> CVE-2014-4975
> CVE-2014-8080
> CVE-2014-8090
> CVE-2015-3900
> CVE-2015-7551
> CVE-2015-9096
> CVE-2017-0899
> CVE-2017-0900
> CVE-2017-0901
> CVE-2017-0902
> CVE-2017-0903
> CVE-2017-10784
> CVE-2017-14064
> CVE-2017-9224
> CVE-2017-9225
> CVE-2017-9226
> CVE-2017-9227
> CVE-2017-9228
> {code}
> The tool was somehow able to relate Ruby vulnerabilities to JRuby (the Java 
> implementation). (Jackson will be handled in a different issue.)
> Not all of them directly affect HBase, but [~elserj] suggested that it is 
> better to be on an updated version to avoid issues during audits in 
> security-sensitive organizations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491404#comment-16491404
 ] 

stack commented on HBASE-20642:
---

bq. If the master is swapped, nonces map will be rebuilt from uncompleted 
procedure during the replay so we should not have a problem checking on the new 
master as well. right?

That is not my understanding. The nonces are in an in-memory-only map in the 
Master process. They will not be migrated from one Master to the new one 
so, even if you put calls behind a nonce-check, it'll fail since the nonce-map 
is empty on new Master.

bq. Yes, they will get this on their first submission if the master goes down 
in between.

Because the Master is failing, which broke the synchronous wait on add column? 
Maybe add a check for whether the master is going down and, if it is, throw 
that as the exception instead of doing this pre-flight check against the 
current state of the table descriptor? Would that be more meaningful?

bq. This is addColumnFamily() synchronous call and it is getting moved to the 
new master.

It is pretty cool that the call keeps going though the Master has crashed...  I 
think it is a bit much to expect that this call can pick up where it left off 
on the old Master though. It has no reference to the original transaction (it 
does not have a Future  ). We want to move folks over to the async calls 
where they check to see if the Procedure is completed. Thats the style we'd 
prefer.

Meantime, I agree this exception message is confusing. Let's fix it (see above 
for suggestion).

bq. No problem, probably I'm not putting the problem in right words stack

Nah. I think it's the receiving end that has the problem (smile).

Thanks.



> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491398#comment-16491398
 ] 

Ankit Singhal commented on HBASE-20642:
---

bq.On their first submission, they get this? I don't follow. Is it something to 
do w/ Master going down?
Yes, they will get this on their first submission if the master goes down in 
between.

bq. What retrying mechanism is this? Is this the (deprecated) addColumn – 
synchronous call? Are you seeing the call move from the dead Master to the new 
Master?
This is addColumnFamily() synchronous call and it is getting moved to the new 
master. 

bq. I like the idea of putting all behind Nonces but nonce are no good if the 
Master is swapped during the call?
If the master is swapped, nonces map will be rebuilt from uncompleted procedure 
during the replay so we should not have a problem checking on the new master as 
well. right?

{quote}Thanks for helping me understand.{quote}
No problem, probably I'm not putting the problem in right words [~stack]


> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-19722:

Attachment: HBASE-19722.master.014.patch

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491374#comment-16491374
 ] 

stack commented on HBASE-20642:
---

bq. , it's just that the user will get InvalidFamilyOperationException even for 
the first attempt.

On their first submission, they get this? I don't follow. Is it something to do 
w/ Master going down?

bq. It's actually not the user; the user makes the call only once, but the 
HBase client itself retries the call while the master is restarting. If the 
master comes back in between and the procedure has completed, the user will 
see InvalidFamilyOperationException because HBase considers it a second call 
from the user, although it is coming as part of the retry by the HBase client.

What retrying mechanism is this? Is this the (deprecated) addColumn -- 
synchronous call? Are you seeing the call move from the dead Master to the new 
Master?

I like the idea of putting all behind Nonces but nonce are no good if the 
Master is swapped during the call?

Thanks for helping me understand.



> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20548) Master fails to startup on large clusters, refreshing block distribution

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491370#comment-16491370
 ] 

Hudson commented on HBASE-20548:


Results for branch branch-1.3
[build #341 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/341/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/341//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/341//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/341//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Master fails to startup on large clusters, refreshing block distribution
> 
>
> Key: HBASE-20548
> URL: https://issues.apache.org/jira/browse/HBASE-20548
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.4
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20548.branch-1.4.001.patch, 
> HBASE-20548.branch-2.0.001.patch, HBASE-20548.master.001.patch
>
>
> On our large clusters, the master has failed to start up within the specified 
> time and aborted itself, since it was still initializing HDFS block 
> distribution. Enabling a table also takes time for larger tables for the same 
> reason. My proposal is to refresh the HDFS block distribution at the end of 
> master initialization and not in retainAssignment()'s createCluster(). This 
> would address HBASE-16570's intention, but avoid the problems we ran into.
> cc [~aoxiang] [~tedyu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20331) clean up shaded packaging for 2.1

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491367#comment-16491367
 ] 

Hudson commented on HBASE-20331:


Results for branch HBASE-20331
[build #20 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/20/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/20//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/20//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/20//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/20//artifacts/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20556) Backport HBASE-16490 to branch-1

2018-05-25 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491358#comment-16491358
 ] 

Zach York commented on HBASE-20556:
---

+1 I will commit if nobody has any more comments.

> Backport HBASE-16490 to branch-1
> 
>
> Key: HBASE-20556
> URL: https://issues.apache.org/jira/browse/HBASE-20556
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Affects Versions: 1.4.4, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Major
> Attachments: HBASE-20556.branch-1.001.patch, 
> HBASE-20556.branch-1.002.patch, HBASE-20556.branch-1.003.patch, 
> HBASE-20556.branch-1.004.patch
>
>
> As part of HBASE-20555, HBASE-16490 is the first patch that is needed for 
> backporting HBASE-18083



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491348#comment-16491348
 ] 

Ankit Singhal commented on HBASE-20642:
---

bq. The procedure was submitted, right, and started to make progress (it got as 
far as changing the table descriptor?). Did the procedure not succeed? Though 
there was a crash of Master in the middle of its running? If it did not 
complete, that is a problem.
bq. Sounds like the original procedure did not complete? Is that so? That it 
died in the middle of its running and so you tried to resubmit the add 
column... but it fails because the original procedure died half-way through? Is 
this what is happening?
No, the procedure will eventually succeed after replaying the procedure WALs; 
it's just that the user will get InvalidFamilyOperationException even for the 
first attempt. 

bq.You mean, a user will retry because they think their original submission did 
not take? In this case, if a Procedure in-flight modifying the table, this 
second submission should fail.
It's actually not the user; the user makes the call only once, but the HBase 
client itself retries the call while the master is restarting. If the master 
comes back in between and the procedure has completed, the user will see 
InvalidFamilyOperationException because HBase considers it a second call from 
the user, although it is coming as part of the retry by the HBase client.

So the patch moves all the checks into the Procedure, so that we do a nonce 
check to differentiate whether it is a retry or a new call before actually 
executing them.
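
A hedged sketch of that idea (the class and method names here are hypothetical, not the actual master/procedure API): if the client-supplied nonce has already been seen, the call is treated as a retry and the original procedure id is returned instead of re-running the pre-flight "family already exists" check.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative nonce bookkeeping; not the actual HBase master code. */
class NonceRetrySketch {
  /** Maps "nonceGroup:nonce" to the procedure id that first handled it. */
  private final Map<String, Long> seenNonces = new ConcurrentHashMap<>();

  /**
   * Returns the original procedure id when the call is a client retry;
   * otherwise registers the new procedure and runs the pre-flight check.
   */
  long submitAddColumnFamily(long nonceGroup, long nonce, long newProcId,
                             boolean familyAlreadyExists) {
    String key = nonceGroup + ":" + nonce;
    Long original = seenNonces.putIfAbsent(key, newProcId);
    if (original != null) {
      // Retry (e.g. after a master restart): hand back the original result.
      return original;
    }
    if (familyAlreadyExists) {
      // Only a genuinely new request should see this error.
      throw new IllegalStateException("column family already exists");
    }
    return newProcId;
  }
}
{code}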





> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491343#comment-16491343
 ] 

Xu Cang edited comment on HBASE-19722 at 5/25/18 10:42 PM:
---

"7 / e number of items   (350 in this example with error rate being 0.02)"  is 
the extreme case that all elements' frequency is evenly distributed. 

(For example: we have 17,500 data points and each item appears exactly 50 times.)

 

Another interesting characteristic of this algorithm is that items with a 
frequency lower than 'CurrentTerm - errorRate' will be swept out of the bucket. 

E.g. 1

For example, let's say we have 10k data points. error rate is 0.05. Then, 
bucket size is 1 / 0.05 = 20 

CurrentTerm after inputting all data will be 10k / 20 = 500.

So, all data with occurrence less than 499.95 will be removed. 

 

E.g.2

Let's change error rate to 0.02 from the last example.  

Bucket size will be 1 / 0.02 = 50

CurrentTerm will be 10k / 50 = 200

So, only data with occurrence less than 199.98 will be removed.

 

An intuitive observation from the above is that if the error rate is too big, 
it may exclude many elements with fairly high frequency. 

So, this algorithm is a great fit for finding HOT CLIENTS/ HOT TOPIC kind of 
things. Not a good candidate for other things... 

 

 Also, this algorithm by design does not let you specify the *K* in *top-K*; 
instead, it keeps all elements with occurrence above the *e* fraction.
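
A compact, generic sketch of the lossy-counting behaviour described above (this is an illustration, not the actual LossyCounting class in the patch): the bucket width is 1/e, and at each bucket boundary entries whose count has fallen behind the current term are swept out.

{code:java}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/** Illustrative lossy-counting sketch; not the HBase LossyCounting class. */
class LossyCounterSketch {
  private final long bucketSize;   // 1 / errorRate, e.g. 50 for e = 0.02
  private long currentTerm = 1;    // index of the current bucket
  private long totalDataCount = 0;
  private final Map<String, Long> counts = new HashMap<>();

  LossyCounterSketch(double errorRate) {
    this.bucketSize = (long) Math.ceil(1.0 / errorRate);
  }

  void add(String key) {
    counts.merge(key, 1L, Long::sum);
    totalDataCount++;
    if (totalDataCount % bucketSize == 0) {
      currentTerm++;
      sweep();
    }
  }

  /** Drop keys whose count has fallen behind the current term. */
  private void sweep() {
    Iterator<Map.Entry<String, Long>> it = counts.entrySet().iterator();
    while (it.hasNext()) {
      if (it.next().getValue() < currentTerm) {
        it.remove();
      }
    }
  }
}
{code}

With 10k inputs and e = 0.05 this gives a bucket size of 20 and a final term of about 500, matching the arithmetic above: at the last sweep, keys whose count is below that term are dropped.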

 

 

 


was (Author: xucang):
"7 / e number of items   (350 in this example with error rate being 0.02)"  is 
the extreme case that all elements' frequency is evenly distributed. 

(such as we have 17500 data points and each data appears exactly 50 times. )

 

Another interesting character from this algorithm is: Item with frequency lower 
than 'CurrentTerm - errorRate' will be swept out of this bucket. 

E.g. 1

For example, let's say we have 10k data points. error rate is 0.05. Then, 
bucket size is 1 / 0.05 = 20 

CurrentTerm after inputting all data will be 10k / 20 = 500.

So, all data with occurrence less than 499.95 will be removed. 

 

E.g.2

Let's change error rate to 0.02 from the last example.  

Bucket size will be 1 / 0.02 = 50

CurrentTerm will be 10k / 50 = 200

So, only data with occurrence less than 199.98 will be removed.

 

Intuitive observation from above is, if the error rate is too big, it may 
exclude many elements with fairly high frequency. 

So, this algorithm is a great fit for finding HOT CLIENTS/ HOT TOPIC kind of 
things. Not a good candidate for other things... 

 

 Also, this algorithm by design cannot let you specify *K from TopK.* It lets 
you specify ** keep all elements with occurrence more than *e percentage.*

 

 

 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491343#comment-16491343
 ] 

Xu Cang edited comment on HBASE-19722 at 5/25/18 10:38 PM:
---

"7 / e number of items   (350 in this example with error rate being 0.02)"  is 
the extreme case that all elements' frequency is evenly distributed. 

(such as we have 17500 data points and each data appears exactly 50 times. )

 

Another interesting character from this algorithm is: Item with frequency lower 
than 'CurrentTerm - errorRate' will be swept out of this bucket. 

E.g. 1

For example, let's say we have 10k data points. error rate is 0.05. Then, 
bucket size is 1 / 0.05 = 20 

CurrentTerm after inputting all data will be 10k / 20 = 500.

So, all data with occurrence less than 499.95 will be removed. 

 

E.g.2

Let's change error rate to 0.02 from the last example.  

Bucket size will be 1 / 0.02 = 50

CurrentTerm will be 10k / 50 = 200

So, only data with occurrence less than 199.98 will be removed.

 

Intuitive observation from above is, if the error rate is too big, it may 
exclude many elements with fairly high frequency. 

So, this algorithm is a great fit for finding HOT CLIENTS/ HOT TOPIC kind of 
things. Not a good candidate for other things... 

 

 Also, this algorithm by design cannot let you specify *K from TopK.* It lets 
you specify ** keep all elements with occurrence more than *e percentage.*

 

 

 


was (Author: xucang):
"7 / e number of items   (350 in this example with error rate being 0.02)"  is 
the extreme case that all elements' frequency is evenly distributed. 

(such as we have 17500 data points and each data appears exactly 50 times. )

 

Another interesting character from this algorithm is: Item with frequency lower 
than 'CurrentTerm - errorRate' will be swept out of this bucket. 

E.g. 1

For example, let's say we have 10k data points. error rate is 0.05. Then, 
bucket size is 1 / 0.05 = 20 

CurrentTerm after inputting all data will be 10k / 20 = 500.

So, all data with occurrence less than 499.95 will be removed. 

 

E.g.2

Let's change error rate to 0.02 from the last example.  

Bucket size will be 1 / 0.02 = 50

CurrentTerm will be 10k / 50 = 200

So, only data with occurrence less than 199.98 will be removed.

 

Intuitive observation from above is, if the error rate is too big, it may 
exclude many elements with fairly high frequency. 

So, this algorithm is a great fit for finding HOT CLIENTS/ HOT TOPIC kind of 
things. Not a good candidate for other things... 

 

 

 

 

 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491343#comment-16491343
 ] 

Xu Cang commented on HBASE-19722:
-

"7 / e number of items   (350 in this example with error rate being 0.02)"  is 
the extreme case that all elements' frequency is evenly distributed. 

(such as we have 17500 data points and each data appears exactly 50 times. )

 

Another interesting character from this algorithm is: Item with frequency lower 
than 'CurrentTerm - errorRate' will be swept out of this bucket. 

E.g. 1

For example, let's say we have 10k data points. error rate is 0.05. Then, 
bucket size is 1 / 0.05 = 20 

CurrentTerm after inputting all data will be 10k / 20 = 500.

So, all data with occurrence less than 499.95 will be removed. 

 

E.g.2

Let's change error rate to 0.02 from the last example.  

Bucket size will be 1 / 0.02 = 50

CurrentTerm will be 10k / 50 = 200

So, only data with occurrence less than 199.98 will be removed.

 

Intuitive observation from above is, if the error rate is too big, it may 
exclude many elements with fairly high frequency. 

So, this algorithm is a great fit for finding HOT CLIENTS/ HOT TOPIC kind of 
things. Not a good candidate for other things... 

 

 

 

 

 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491340#comment-16491340
 ] 

stack commented on HBASE-20642:
---

bq. so as the call is not completed, 

The procedure was submitted, right, and started to make progress (it got as far 
as changing the table descriptor?). Did the procedure not succeed? Though there 
was a crash of Master in the middle of its running? If it did not complete, 
that is a problem.

bq. HBase client will retry the call but it will fail with 
InvalidFamilyOperationException because we don't differentiate if it is retry 
or a new call.

You mean, a user will retry because they think their original submission did 
not take? In this case, if a Procedure in-flight modifying the table, this 
second submission should fail.

Sounds like the original procedure did not complete? Is that so? That it died 
in the middle of its running and so you tried to resubmit the add column... but 
it fails because the original procedure died half-way through? Is this what is 
happening?

Thanks.

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20582) Bump up JRuby version because of some reported vulnerabilities

2018-05-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491339#comment-16491339
 ] 

Josh Elser commented on HBASE-20582:


[~busbey], better late than never..

> Bump up JRuby version because of some reported vulnerabilities
> --
>
> Key: HBASE-20582
> URL: https://issues.apache.org/jira/browse/HBASE-20582
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Ankit Singhal
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20582.002.patch, HBASE-20582.addendum.patch, 
> HBASE-20582.patch
>
>
> There are some vulnerabilities reported with two of the libraries used in 
> HBase.
> {code:java}
> Jruby(version:9.1.10.0):
> CVE-2009-5147
> CVE-2013-4363
> CVE-2014-4975
> CVE-2014-8080
> CVE-2014-8090
> CVE-2015-3900
> CVE-2015-7551
> CVE-2015-9096
> CVE-2017-0899
> CVE-2017-0900
> CVE-2017-0901
> CVE-2017-0902
> CVE-2017-0903
> CVE-2017-10784
> CVE-2017-14064
> CVE-2017-9224
> CVE-2017-9225
> CVE-2017-9226
> CVE-2017-9227
> CVE-2017-9228
> {code}
> The tool was somehow able to relate Ruby vulnerabilities to JRuby (the Java 
> implementation). (Jackson will be handled in a different issue.)
> Not all of them directly affect HBase, but [~elserj] suggested that it is 
> better to be on an updated version to avoid issues during audits in 
> security-sensitive organizations.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491337#comment-16491337
 ] 

Ankit Singhal commented on HBASE-20642:
---

Thanks [~mdrob] for taking a look.
bq. I think the fix is correct, but I also think we need a unit test before we 
can commit this.
bq. Take a look at ModifyTableProcedure::testRecoveryAndDoubleExecutionOnline
bq. Need to do something similar, probably add another method in 
ProcedureTestingUtility similar to setKillBeforeStoreUpdate, but to kill at 
whatever point breaks this?
Let me try to add some test.

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491329#comment-16491329
 ] 

Ankit Singhal edited comment on HBASE-20642 at 5/25/18 10:30 PM:
-

bq. I don't see what is wrong. You are trying to modify a table adding a column 
but you can't because a previous attempt succeeded?
[~stack], actually the scenario is: a user is trying to add a column family to 
the table, but the master went down after completing half of the states in the 
procedure (let's assume the columnFamily was already updated in the 
tableDescriptor), so as the call is not completed, the HBase client will retry 
the call but it will fail with InvalidFamilyOperationException 
because we don't differentiate if it is a retry or a new call. 


was (Author: an...@apache.org):
bq. I don't see what is wrong. You are trying to modify a table adding a column 
but you can't because a previous attempt succeeded?
Actually, the scenario is, a user is trying to add a column family in the table 
but the master went down after completing half of the states in the 
procedure(let's assume the columnFamily was updated in tableDescriptor) , so as 
the call is not completed, HBase client will retry the call but it will fail 
with InvalidFamilyOperationException 
because we don't differentiate if it is retry or a new call. 

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover fails when 
> adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20582) Bump up JRuby version because of some reported vulnerabilities

2018-05-25 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20582:
---
Attachment: HBASE-20582.addendum.patch

> Bump up JRuby version because of some reported vulnerabilities
> --
>
> Key: HBASE-20582
> URL: https://issues.apache.org/jira/browse/HBASE-20582
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Ankit Singhal
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20582.002.patch, HBASE-20582.addendum.patch, 
> HBASE-20582.patch
>
>
> There are some vulnerabilities reported with two of the libraries used in 
> HBase.
> {code:java}
> Jruby(version:9.1.10.0):
> CVE-2009-5147
> CVE-2013-4363
> CVE-2014-4975
> CVE-2014-8080
> CVE-2014-8090
> CVE-2015-3900
> CVE-2015-7551
> CVE-2015-9096
> CVE-2017-0899
> CVE-2017-0900
> CVE-2017-0901
> CVE-2017-0902
> CVE-2017-0903
> CVE-2017-10784
> CVE-2017-14064
> CVE-2017-9224
> CVE-2017-9225
> CVE-2017-9226
> CVE-2017-9227
> CVE-2017-9228
> {code}
> The tool was somehow able to relate the Ruby vulnerabilities to JRuby (the 
> Java implementation). (Jackson will be handled in a different issue.)
> Not all of them directly affect HBase, but [~elserj] suggested that it is 
> better to be on an updated version to avoid issues during an audit in a 
> security-sensitive organization.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20605) Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission check

2018-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491335#comment-16491335
 ] 

Ted Yu commented on HBASE-20605:


Good to know

> Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission 
> check
> -
>
> Key: HBASE-20605
> URL: https://issues.apache.org/jira/browse/HBASE-20605
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.5
>
> Attachments: HBASE-20605.001.branch-1.patch, 
> HBASE-20605.002.branch-1.patch
>
>
> Some folks in Hadoop are working on landing a new FileSystem from the Azure 
> team: HADOOP-15407
> At present, this FileSystem doesn't support permissions, which causes the 
> SecureBulkLoadEndpoint to balk because the staging directory doesn't have 
> the proper 711 permissions.
> We have a static list of FileSystem schemes which we ignore this check on. I 
> have a patch on an HBase 1.1ish which:
>  # Adds the new FileSystem scheme
>  # Makes this list configurable for the future



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491329#comment-16491329
 ] 

Ankit Singhal commented on HBASE-20642:
---

bq. I don't see what is wrong. You are trying to modify a table adding a column 
but you can't because a previous attempt succeeded?
Actually the scenario is: a user is trying to add a column family to the 
table, but the master went down after completing half of the states in the 
procedure (let's assume the column family was already updated in the table 
descriptor). Since the call did not complete, the HBase client retries it, but 
the retry fails with InvalidFamilyOperationException because we don't 
differentiate between a retry and a new call.

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover fails when 
> adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20556) Backport HBASE-16490 to branch-1

2018-05-25 Thread Tak Lon (Stephen) Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491320#comment-16491320
 ] 

Tak Lon (Stephen) Wu commented on HBASE-20556:
--

The `compile` check passed after *HBASE-20608*.

For the unit tests, as mentioned in the [review board 
link|https://reviews.apache.org/r/67045/], I ran the surefire tests and mvn 
clean test, and they all passed.

 

> Backport HBASE-16490 to branch-1
> 
>
> Key: HBASE-20556
> URL: https://issues.apache.org/jira/browse/HBASE-20556
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Affects Versions: 1.4.4, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Major
> Attachments: HBASE-20556.branch-1.001.patch, 
> HBASE-20556.branch-1.002.patch, HBASE-20556.branch-1.003.patch, 
> HBASE-20556.branch-1.004.patch
>
>
> As part of HBASE-20555, HBASE-16490 is the first patch that is needed for 
> backporting HBASE-18083



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20556) Backport HBASE-16490 to branch-1

2018-05-25 Thread Tak Lon (Stephen) Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491320#comment-16491320
 ] 

Tak Lon (Stephen) Wu edited comment on HBASE-20556 at 5/25/18 10:20 PM:


The `compile` check passed after *HBASE-20608*.

For the unit tests, as mentioned in the [review board 
link|https://reviews.apache.org/r/67045/], I ran the surefire tests locally 
with -Dsurefire.Xmx=4000m and mvn clean test, and they all passed.

 


was (Author: taklwu):
The `compile` check passed after *HBASE-20608*.

For the unit tests, as mentioned in the [review board 
link|https://reviews.apache.org/r/67045/], I ran the surefire tests and mvn 
clean test, and they all passed.

 

> Backport HBASE-16490 to branch-1
> 
>
> Key: HBASE-20556
> URL: https://issues.apache.org/jira/browse/HBASE-20556
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Affects Versions: 1.4.4, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Major
> Attachments: HBASE-20556.branch-1.001.patch, 
> HBASE-20556.branch-1.002.patch, HBASE-20556.branch-1.003.patch, 
> HBASE-20556.branch-1.004.patch
>
>
> As part of HBASE-20555, HBASE-16490 is the first patch that is needed for 
> backporting HBASE-18083



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20605) Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission check

2018-05-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491321#comment-16491321
 ] 

Josh Elser commented on HBASE-20605:


Funny stuff. It seems that when SecureBulkLoadEndpoint was consolidated into 
"core", the permission check on the staging directory was dropped. As a 
result, this exclusion to the permission checking was also not applied in 
HBASE-17861.

Going off of that, we can just target this one for branch-1. The problem we're 
worried about doesn't exist in 2.x+.
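
To make the branch-1 change concrete, here is a hedged sketch of the technique 
described in the issue: skip the 711 check when the staging directory lives on 
a filesystem whose scheme is in a configurable ignore list. The config key, 
the default scheme list, and the method names below are assumptions for 
illustration, not the actual patch.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class StagingDirPermissionCheck {

  // Hypothetical config key; the real key name comes from the patch.
  static final String LAX_PERMISSION_FS_KEY = "hbase.secure.bulkload.fs.permission.lax";

  static final FsPermission EXPECTED = new FsPermission((short) 0711);

  /** Returns true if the staging directory permissions are acceptable. */
  static boolean stagingDirOk(Configuration conf, FileSystem fs, Path stagingDir)
      throws IOException {
    // Schemes whose FileSystems don't support permissions; defaults are illustrative.
    Set<String> laxSchemes = new HashSet<>(Arrays.asList(
        conf.getStrings(LAX_PERMISSION_FS_KEY, "s3", "s3a", "s3n", "wasb", "abfs")));
    if (laxSchemes.contains(fs.getScheme())) {
      return true; // skip the 711 check entirely for these filesystems
    }
    return EXPECTED.equals(fs.getFileStatus(stagingDir).getPermission());
  }
}
{code}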

> Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission 
> check
> -
>
> Key: HBASE-20605
> URL: https://issues.apache.org/jira/browse/HBASE-20605
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.5
>
> Attachments: HBASE-20605.001.branch-1.patch, 
> HBASE-20605.002.branch-1.patch
>
>
> Some folks in Hadoop are working on landing a new FileSystem from the Azure 
> team: HADOOP-15407
> At present, this FileSystem doesn't support permissions, which causes the 
> SecureBulkLoadEndpoint to balk because the staging directory doesn't have 
> the proper 711 permissions.
> We have a static list of FileSystem schemes which we ignore this check on. I 
> have a patch on an HBase 1.1ish which:
>  # Adds the new FileSystem scheme
>  # Makes this list configurable for the future



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20605) Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission check

2018-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491317#comment-16491317
 ] 

Ted Yu commented on HBASE-20605:


Looks good to me.

Please attach patch for master branch.

> Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission 
> check
> -
>
> Key: HBASE-20605
> URL: https://issues.apache.org/jira/browse/HBASE-20605
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.5
>
> Attachments: HBASE-20605.001.branch-1.patch, 
> HBASE-20605.002.branch-1.patch
>
>
> Some folks in Hadoop are working on landing a new FileSystem from the Azure 
> team: HADOOP-15407
> At present, this FileSystem doesn't support permissions, which causes the 
> SecureBulkLoadEndpoint to balk because the staging directory doesn't have 
> the proper 711 permissions.
> We have a static list of FileSystem schemes which we ignore this check on. I 
> have a patch on an HBase 1.1ish which:
>  # Adds the new FileSystem scheme
>  # Makes this list configurable for the future



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488223#comment-16488223
 ] 

Xu Cang edited comment on HBASE-19722 at 5/25/18 10:06 PM:
---

I thought about the comments regarding TopK client metrics again. I believe 
it makes sense to implement it. I will try to implement a version based on the 
lossy counting TopK algorithm:

http://www.vldb.org/conf/2002/S10P03.pdf


was (Author: xucang):
I thought about the comments regarding TopK client metrics again. I believe 
it makes sense to implement it. I will try to implement a version based on the 
lossy counting TopK algorithm:
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf)]
 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491048#comment-16491048
 ] 

Xu Cang edited comment on HBASE-19722 at 5/25/18 10:06 PM:
---

"at most 1/e meters will be kept"  — There is a typo here.  It should be '  7 / 
e'  according to the paper:  (http://www.vldb.org/conf/2002/S10P03.pdf    See 
paragraph above chapter 4.3)

This 7 / e space is still good to me. For example, if we use 0.02 as error 
rate, at most 350 meters will be kept.

 

Lossy counting algorithm designed the sweeping happens every  "1 / errorRate" 
items arrived. For example, if e is 0.02, sweep() method will be called every 
50 times. 

Yes, I can try to make 'e' configurable from site config.  

 

"Ideally the operator could be able to set the number of expected top-N, e.g. N 
= 100" – that's a good idea. Let me see if there is a good conversion from e to 
N in topN.

 

Yes, I will fix findbugs errors. And add ASF header. 

 

Thanks for the review, Andrew.
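
For context, here is a small self-contained sketch of the lossy counting 
scheme discussed above: meters are swept every ceil(1/e) arrivals, and an 
entry is dropped once its count plus its maximum possible undercount can no 
longer exceed the current bucket id. The class and method names are invented 
for illustration; this is not the HBase patch.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Minimal lossy counting sketch (Manku and Motwani, VLDB 2002). */
public class LossyCounter {
  /** Observed count plus the maximum possible undercount (delta) of one key. */
  private static final class Meter {
    long count;
    final long delta;
    Meter(long delta) { this.delta = delta; }
  }

  private final int bucketWidth;                  // ceil(1/e) items per bucket
  private final Map<String, Meter> meters = new HashMap<>();
  private long itemsSeen = 0;
  private long currentBucket = 1;

  public LossyCounter(double errorRate) {         // e.g. errorRate = 0.02
    this.bucketWidth = (int) Math.ceil(1.0 / errorRate);
  }

  public void add(String key) {
    meters.computeIfAbsent(key, k -> new Meter(currentBucket - 1)).count++;
    itemsSeen++;
    if (itemsSeen % bucketWidth == 0) {           // sweep every 1/e arrivals
      sweep();
      currentBucket++;
    }
  }

  /** Drop meters whose true frequency can no longer matter; bounds the map size. */
  private void sweep() {
    meters.entrySet().removeIf(
        en -> en.getValue().count + en.getValue().delta <= currentBucket);
  }

  /** Keys whose count is at least (support - errorRate) * itemsSeen. */
  public Map<String, Long> frequentKeys(double support) {
    Map<String, Long> out = new HashMap<>();
    double cutoff = (support - 1.0 / bucketWidth) * itemsSeen;
    meters.forEach((k, m) -> { if (m.count >= cutoff) out.put(k, m.count); });
    return out;
  }
}
{code}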

 


was (Author: xucang):
"at most 1/e meters will be kept"  — There is a typo here.  It should be '  7 / 
e'  according to the paper:  
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf]
    See paragraph above chapter 4.3)

This 7 / e space is still good to me. For example, if we use 0.02 as error 
rate, at most 350 meters will be kept.

 

Lossy counting algorithm designed the sweeping happens every  "1 / errorRate" 
items arrived. For example, if e is 0.02, sweep() method will be called every 
50 times. 

Yes, I can try to make 'e' configurable from site config.  

 

"Ideally the operator could be able to set the number of expected top-N, e.g. N 
= 100" – that's a good idea. Let me see if there is a good conversion from e to 
N in topN.

 

Yes, I will fix findbugs errors. And add ASF header. 

 

Thanks for the review, Andrew.

 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20640) TestQuotaGlobalsSettingsBypass missing test category and ClassRule

2018-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491315#comment-16491315
 ] 

Ted Yu commented on HBASE-20640:


lgtm

> TestQuotaGlobalsSettingsBypass missing test category and ClassRule
> --
>
> Key: HBASE-20640
> URL: https://issues.apache.org/jira/browse/HBASE-20640
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20640.001.patch
>
>
> {noformat}
> # Created on 2018-05-24T12:55:49.432
> org.apache.maven.surefire.testset.TestSetFailedException: Test mechanism :: 0
>   at 
> org.apache.maven.surefire.common.junit4.JUnit4RunListener.rethrowAnyTestMechanismFailures(JUnit4RunListener.java:223)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:167)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.hadoop.hbase.HBaseClassTestRuleChecker.testStarted(HBaseClassTestRuleChecker.java:45)
>   at 
> org.junit.runner.notification.RunNotifier$3.notifyListener(RunNotifier.java:121)
>   at 
> org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
>   at 
> org.junit.runner.notification.RunNotifier.fireTestStarted(RunNotifier.java:118)
>   at 
> org.apache.maven.surefire.common.junit4.Notifier.fireTestStarted(Notifier.java:100)
>   at 
> org.junit.internal.runners.model.EachTestNotifier.fireTestStarted(EachTestNotifier.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:323)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   ... 4 more{noformat}
> Looks like I missed a test category in HBASE-18807
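
For readers unfamiliar with the convention the checker enforces: HBase test 
classes are expected to declare both a JUnit category and the 
HBaseClassTestRule class rule, which appears to be what the failure above 
trips over when they are missing. A minimal sketch of the expected shape (the 
test body is just a placeholder):

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category({SmallTests.class})
public class TestQuotaGlobalsSettingsBypass {

  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(TestQuotaGlobalsSettingsBypass.class);

  @Test
  public void testSomething() {
    assertTrue(true); // placeholder; the real assertions live in the patch
  }
}
{code}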



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491313#comment-16491313
 ] 

stack commented on HBASE-20642:
---

I don't see what is wrong. You are trying to modify a table adding a column but 
you can't because a previous attempt succeeded? Thanks [~an...@apache.org]

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover fails when 
> adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20478) move import checks from hbaseanti to checkstyle

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491258#comment-16491258
 ] 

Mike Drob commented on HBASE-20478:
---

posted question to checkstyle community: 
https://groups.google.com/forum/#!topic/checkstyle/IVRI2CLk8k4

> move import checks from hbaseanti to checkstyle
> ---
>
> Key: HBASE-20478
> URL: https://issues.apache.org/jira/browse/HBASE-20478
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Minor
> Attachments: HBASE-20478.0.patch, HBASE-20478.1.patch, 
> HBASE-20478.2.patch, HBASE-20478.3.patch, HBASE-20478.4.patch, 
> HBASE-20478.WIP.2.patch, HBASE-20478.WIP.2.patch, HBASE-20478.WIP.patch, 
> HBASE-anti-check.patch
>
>
> came up in discussion on HBASE-20332. our check of "don't do this" things in 
> the codebase doesn't log the specifics of complaints anywhere, which forces 
> those who want to follow up to reverse engineer the check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20478) move import checks from hbaseanti to checkstyle

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491239#comment-16491239
 ] 

Mike Drob commented on HBASE-20478:
---

I've been playing around with this locally; the problem, according to 
checkstyle, is that we're not consistent about whether we have blank lines 
between imports or not. That strikes me as overly pedantic, personally, but I 
can't find a way to turn off that checking entirely. It will either complain 
that we have too many extra lines, or not enough. I may end up reaching out to 
the checkstyle devs or hacking up a fork.

> move import checks from hbaseanti to checkstyle
> ---
>
> Key: HBASE-20478
> URL: https://issues.apache.org/jira/browse/HBASE-20478
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Minor
> Attachments: HBASE-20478.0.patch, HBASE-20478.1.patch, 
> HBASE-20478.2.patch, HBASE-20478.3.patch, HBASE-20478.4.patch, 
> HBASE-20478.WIP.2.patch, HBASE-20478.WIP.2.patch, HBASE-20478.WIP.patch, 
> HBASE-anti-check.patch
>
>
> came up in discussion on HBASE-20332. our check of "don't do this" things in 
> the codebase doesn't log the specifics of complaints anywhere, which forces 
> those who want to follow up to reverse engineer the check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20638) nightly source artifact testing should fail the stage if it's going to report an error on jira

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491216#comment-16491216
 ] 

Hudson commented on HBASE-20638:


Results for branch branch-1.4
[build #334 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> nightly source artifact testing should fail the stage if it's going to report 
> an error on jira
> --
>
> Key: HBASE-20638
> URL: https://issues.apache.org/jira/browse/HBASE-20638
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20638.0.patch
>
>
> Looks like the source artifact testing is properly reporting failures on 
> jira, but is marking the stage on jenkins as successful. That makes it much 
> harder to track over time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20548) Master fails to startup on large clusters, refreshing block distribution

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491217#comment-16491217
 ] 

Hudson commented on HBASE-20548:


Results for branch branch-1.4
[build #334 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Master fails to startup on large clusters, refreshing block distribution
> 
>
> Key: HBASE-20548
> URL: https://issues.apache.org/jira/browse/HBASE-20548
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.4
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20548.branch-1.4.001.patch, 
> HBASE-20548.branch-2.0.001.patch, HBASE-20548.master.001.patch
>
>
> On our large clusters, the master has failed to start up within the 
> specified time and aborted itself because it was initializing the HDFS block 
> distribution. Enabling a table also takes time for larger tables for the 
> same reason. My proposal is to refresh the HDFS block distribution at the 
> end of master initialization and not in retainAssignment()'s 
> createCluster(). This would address HBASE-16570's intention, but avoid the 
> problems we ran into.
> cc [~aoxiang] [~tedyu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20646) TestWALProcedureStoreOnHDFS failing on branch-1

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491218#comment-16491218
 ] 

Hudson commented on HBASE-20646:


Results for branch branch-1.4
[build #334 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/334//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> TestWALProcedureStoreOnHDFS failing on branch-1
> ---
>
> Key: HBASE-20646
> URL: https://issues.apache.org/jira/browse/HBASE-20646
> Project: HBase
>  Issue Type: Test
>Affects Versions: 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.5.0, 1.4.5
>
> Attachments: HBASE-20646-branch-1.patch
>
>
> TestWALProcedureStoreOnHDFS fails sometimes on branch-1 depending on junit 
> particulars. An @After annotation was improperly added; remove it to fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20594) provide utility to compare old and new descriptors

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491205#comment-16491205
 ] 

Hadoop QA commented on HBASE-20594:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
53s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925192/HBASE-20594.v7.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 54c3097755ab 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b1089e8310 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12967/testReport/ |
| Max. process+thread count | 259 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12967/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> provide utility to compare old and 

[jira] [Commented] (HBASE-20602) hbase.master.quota.observer.ignore property seems to be not taking effect

2018-05-25 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491204#comment-16491204
 ] 

Biju Nair commented on HBASE-20602:
---

Sorry for the delay, [~elserj]. I've attached a patch file; could you please 
take a look? Thanks.

> hbase.master.quota.observer.ignore property seems to be not taking effect
> -
>
> Key: HBASE-20602
> URL: https://issues.apache.org/jira/browse/HBASE-20602
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Major
> Attachments: HBASE-20602.patch
>
>
> Per the [doc|https://hbase.apache.org/book.html#ops.space.quota.deletion], 
> setting the {{hbase.master.quota.observer.ignore}} property to {{true}} will 
> retain the space quota even after the table is deleted. But that doesn't 
> seem to be the case: whether the property is left undefined (which defaults 
> it to {{false}}) or set to {{true}} in {{site.xml}}, the quota gets removed 
> when the corresponding table is dropped. I will verify whether it works in 
> 1.x. A grep of the master source does get a hit on the property in the code.
> Steps to reproduce
>  * Add this property and restart {{hbase}}
> {noformat}
> <property>
>   <name>hbase.master.quota.observer.ignore</name>
>   <value>true</value>
> </property>
> {noformat}
>  * Through {{hbase}} shell
>  * 
> {noformat}
> hbase(main):003:0> set_quota TYPE => SPACE, TABLE => 't1', LIMIT => '1G', 
> POLICY => NO_INSERTS
> Took 0.0317 seconds{noformat}
>  * 
> {noformat}
> hbase(main):005:0> create 't1','cf1'
> Created table t1
> Took 0.7904 seconds{noformat}
>  * 
> {noformat}
> hbase(main):006:0> list_quotas
> OWNER QUOTAS
> TABLE => t1 TYPE => SPACE, TABLE => t1, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_INSERTS
> 1 row(s){noformat}
>  * 
> {noformat}
> hbase(main):007:0> disable 't1'
> Took 0.4909 seconds
> hbase(main):008:0> list_quotas
> OWNER QUOTAS
> TABLE => t1 TYPE => SPACE, TABLE => t1, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_INSERTS
> 1 row(s)
> Took 0.0420 seconds{noformat}
>  * 
> {noformat}
> hbase(main):009:0> drop 't1'
> Took 0.1407 seconds{noformat}
>  * 
> {noformat}
> hbase(main):010:0> list_quotas
> OWNER QUOTAS
> 0 row(s)
> Took 0.0307 seconds{noformat}
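
For context on what the property is expected to control, here is a hedged 
sketch of the observer pattern involved: a master observer that removes a 
table's space quota on delete unless the ignore property is set. The class 
name is invented and the actual quota-removal call is elided; this is not the 
attached patch.

{code:java}
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

/** Simplified sketch of an observer that drops a table's space quota on delete. */
public class SpaceQuotaCleanupObserverSketch implements MasterCoprocessor, MasterObserver {

  @Override
  public Optional<MasterObserver> getMasterObserver() {
    return Optional.of(this);
  }

  @Override
  public void postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
      TableName tableName) throws IOException {
    // The property from the report: when true, leave the quota in place.
    boolean ignore = ctx.getEnvironment().getConfiguration()
        .getBoolean("hbase.master.quota.observer.ignore", false);
    if (ignore) {
      return;
    }
    // ... remove the space quota settings for tableName here (call elided) ...
  }
}
{code}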



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20602) hbase.master.quota.observer.ignore property seems to be not taking effect

2018-05-25 Thread Biju Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-20602:
--
Attachment: HBASE-20602.patch

> hbase.master.quota.observer.ignore property seems to be not taking effect
> -
>
> Key: HBASE-20602
> URL: https://issues.apache.org/jira/browse/HBASE-20602
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Major
> Attachments: HBASE-20602.patch
>
>
> Per the [doc|https://hbase.apache.org/book.html#ops.space.quota.deletion], 
> setting the {{hbase.master.quota.observer.ignore}} property to {{true}} will 
> retain the space quota even after the table is deleted. But that doesn't 
> seem to be the case: whether the property is left undefined (which defaults 
> it to {{false}}) or set to {{true}} in {{site.xml}}, the quota gets removed 
> when the corresponding table is dropped. I will verify whether it works in 
> 1.x. A grep of the master source does get a hit on the property in the code.
> Steps to reproduce
>  * Add this property and restart {{hbase}}
> {noformat}
> <property>
>   <name>hbase.master.quota.observer.ignore</name>
>   <value>true</value>
> </property>
> {noformat}
>  * Through {{hbase}} shell
>  * 
> {noformat}
> hbase(main):003:0> set_quota TYPE => SPACE, TABLE => 't1', LIMIT => '1G', 
> POLICY => NO_INSERTS
> Took 0.0317 seconds{noformat}
>  * 
> {noformat}
> hbase(main):005:0> create 't1','cf1'
> Created table t1
> Took 0.7904 seconds{noformat}
>  * 
> {noformat}
> hbase(main):006:0> list_quotas
> OWNER QUOTAS
> TABLE => t1 TYPE => SPACE, TABLE => t1, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_INSERTS
> 1 row(s){noformat}
>  * 
> {noformat}
> hbase(main):007:0> disable 't1'
> Took 0.4909 seconds
> hbase(main):008:0> list_quotas
> OWNER QUOTAS
> TABLE => t1 TYPE => SPACE, TABLE => t1, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_INSERTS
> 1 row(s)
> Took 0.0420 seconds{noformat}
>  * 
> {noformat}
> hbase(main):009:0> drop 't1'
> Took 0.1407 seconds{noformat}
>  * 
> {noformat}
> hbase(main):010:0> list_quotas
> OWNER QUOTAS
> 0 row(s)
> Took 0.0307 seconds{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491201#comment-16491201
 ] 

Mike Drob commented on HBASE-18118:
---

Can we file a new JIRA for the discussion around this, whether it ends up 
needing a revert, docs, or an addendum? This issue has already gone out in a 
release, and for my sanity in branch tracking it would be nice to see it 
explicitly as a separate unit of work.

> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as default storage policy if not configured because 
> HDFS supports no such policy. This policy name was probably available in a 
> precommit or early version of the HDFS side support for heterogeneous 
> storage. Now the best default is 'HOT'. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491196#comment-16491196
 ] 

Hadoop QA commented on HBASE-20597:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
15s{color} | {color:red} hbase-server: The patch generated 1 new + 2 unchanged 
- 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  0m 
52s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m 
46s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 49s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-20597 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491192#comment-16491192
 ] 

Mike Drob commented on HBASE-20642:
---

I think the fix is correct, but I also think we need a unit test before we can 
commit this.

Take a look at ModifyTableProcedure::testRecoveryAndDoubleExecutionOnline

Need to do something similar, probably add another method in 
ProcedureTestingUtility similar to setKillBeforeStoreUpdate, but to kill at 
whatever point breaks this?

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover fails when 
> adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-05-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491181#comment-16491181
 ] 

Josh Elser commented on HBASE-20642:


Given my understanding, I'd say +1

[~stack], [~uagashe], [~mdrob], any of you folks want to take a look?

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover fails when 
> adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20594) provide utility to compare old and new descriptors

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491178#comment-16491178
 ] 

Mike Drob commented on HBASE-20594:
---

v7: moved the logic into TableDescriptorDelta and switched to unmodifiable sets 
over immutable sets.
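
On the unmodifiable-versus-immutable point, here is a generic sketch of the 
distinction (not the patch itself): an unmodifiable view wraps a set the delta 
object already owns, so callers cannot mutate it and no defensive copy is 
made, whereas building a Guava ImmutableSet would copy the elements.

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Generic delta: which elements were added and removed between two snapshots. */
public final class SetDelta<T> {
  private final Set<T> added;
  private final Set<T> removed;

  public SetDelta(Set<T> before, Set<T> after) {
    Set<T> add = new HashSet<>(after);
    add.removeAll(before);
    Set<T> rem = new HashSet<>(before);
    rem.removeAll(after);
    // Unmodifiable views over sets this object owns: no extra copy, read-only to callers.
    this.added = Collections.unmodifiableSet(add);
    this.removed = Collections.unmodifiableSet(rem);
  }

  public Set<T> getAdded() { return added; }
  public Set<T> getRemoved() { return removed; }
}
{code}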

> provide utility to compare old and new descriptors
> --
>
> Key: HBASE-20594
> URL: https://issues.apache.org/jira/browse/HBASE-20594
> Project: HBase
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20594.patch, HBASE-20594.v2.patch, 
> HBASE-20594.v3.patch, HBASE-20594.v4.patch, HBASE-20594.v5.patch, 
> HBASE-20594.v6.patch, HBASE-20594.v7.patch
>
>
> HBASE-20567 gives us hooks that give both the old and new descriptor in 
> pre/postModify* events, but comparing them is still cumbersome. We should 
> provide users some kind of utility for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20640) TestQuotaGlobalsSettingsBypass missing test category and ClassRule

2018-05-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491171#comment-16491171
 ] 

Josh Elser commented on HBASE-20640:


[~stack] another trivial one to come back to 2.0 if you have the cycles to 
glance at.

> TestQuotaGlobalsSettingsBypass missing test category and ClassRule
> --
>
> Key: HBASE-20640
> URL: https://issues.apache.org/jira/browse/HBASE-20640
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20640.001.patch
>
>
> {noformat}
> # Created on 2018-05-24T12:55:49.432
> org.apache.maven.surefire.testset.TestSetFailedException: Test mechanism :: 0
>   at 
> org.apache.maven.surefire.common.junit4.JUnit4RunListener.rethrowAnyTestMechanismFailures(JUnit4RunListener.java:223)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:167)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.hadoop.hbase.HBaseClassTestRuleChecker.testStarted(HBaseClassTestRuleChecker.java:45)
>   at 
> org.junit.runner.notification.RunNotifier$3.notifyListener(RunNotifier.java:121)
>   at 
> org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
>   at 
> org.junit.runner.notification.RunNotifier.fireTestStarted(RunNotifier.java:118)
>   at 
> org.apache.maven.surefire.common.junit4.Notifier.fireTestStarted(Notifier.java:100)
>   at 
> org.junit.internal.runners.model.EachTestNotifier.fireTestStarted(EachTestNotifier.java:42)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:323)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   ... 4 more{noformat}
> Looks like I missed a test category in HBASE-18807



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491166#comment-16491166
 ] 

Josh Elser commented on HBASE-20645:


+1

[~stack], I think you'd want this for 2.0 as well.

> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and 
> there is an exception (MasterNotRunning) while checking security 
> capabilities, then instead of the underlying exception the user sees
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20645:
---
Fix Version/s: 2.1.0
   3.0.0

> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and 
> there is an exception (MasterNotRunning) while checking security 
> capabilities, then instead of the underlying exception the user sees
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20594) provide utility to compare old and new descriptors

2018-05-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20594:
--
Attachment: HBASE-20594.v7.patch

> provide utility to compare old and new descriptors
> --
>
> Key: HBASE-20594
> URL: https://issues.apache.org/jira/browse/HBASE-20594
> Project: HBase
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20594.patch, HBASE-20594.v2.patch, 
> HBASE-20594.v3.patch, HBASE-20594.v4.patch, HBASE-20594.v5.patch, 
> HBASE-20594.v6.patch, HBASE-20594.v7.patch
>
>
> HBASE-20567 gives us hooks that give both the old and new descriptor in 
> pre/postModify* events, but comparing them is still cumbersome. We should 
> provide users some kind of utility for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491130#comment-16491130
 ] 

Hadoop QA commented on HBASE-20628:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
59s{color} | {color:blue} hbase-server in branch-2.0 has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 46s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush |
|   | hadoop.hbase.TestAcidGuaranteesWithBasicPolicy |
|   | hadoop.hbase.regionserver.TestCompactingMemStore |
|   | hadoop.hbase.TestAcidGuaranteesWithAdaptivePolicy |
|   | hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |
|   | hadoop.hbase.TestAcidGuaranteesWithEagerPolicy |
|   | hadoop.hbase.regionserver.TestHStore |
|   | hadoop.hbase.TestIOFencing |
|   | hadoop.hbase.regionserver.TestMajorCompaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:369877d |
| JIRA Issue | HBASE-20628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925163/HBASE-20628.branch-2.0.001%20%281%29.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 88af5e30ee6a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491121#comment-16491121
 ] 

Andrew Purtell commented on HBASE-20645:


Thanks!

> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and there 
> is an exception (MasterNotRunning) while checking security capabilities, then 
> instead of seeing the underlying exception, the user sees
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491119#comment-16491119
 ] 

Andrew Purtell edited comment on HBASE-20597 at 5/25/18 6:20 PM:
-

FWIW {{mvn clean install -DskipITs -Dtest=\*Replication\*}} passes everything 
after  HBASE-20597-branch-1.addendum-v2.0.patch is applied.

Test environment:
{noformat}
Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 
2018-02-24T11:49:05-08:00)
Maven home: /usr/local/Cellar/maven/3.5.3/libexec
Java version: 1.8.0_162, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.13.4", arch: "x86_64", family: "mac"
{noformat}



was (Author: apurtell):
FWIW {{mvn clean install -DskipITs -Dtest=\*Replication\*}} passes everything 
after  HBASE-20597-branch-1.addendum-v2.0.patch is applied.

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.addendum-v2.0.patch, 
> HBASE-20597-branch-1.patch, HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491119#comment-16491119
 ] 

Andrew Purtell commented on HBASE-20597:


FWIW {{mvn clean install -DskipITs -Dtest=\*Replication\*}} passes everything 
after  HBASE-20597-branch-1.addendum-v2.0.patch is applied.

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.addendum-v2.0.patch, 
> HBASE-20597-branch-1.patch, HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491118#comment-16491118
 ] 

Ankit Singhal commented on HBASE-20645:
---

 bq. Does this affect earlier versions than 2.0?
branch-1 is not affected, as the "HBaseAdmin.tableExists" method is overloaded to 
accept both String and TableName.
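
For readers less familiar with the API, a minimal, hedged Java sketch of the conversion the patch relies on (names per the 2.0 client API as I understand it; verify against the actual branch):

{code}
import org.apache.hadoop.hbase.TableName;

// Hedged sketch: TableName#getNameAsString yields the "namespace:qualifier"
// String form, which is what the shell's exists? helper expects to be handed.
public class TableNameToStringSketch {
  public static void main(String[] args) {
    TableName acl = TableName.valueOf("hbase", "acl");
    System.out.println(acl.getNameAsString());  // prints "hbase:acl"
  }
}
{code}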

> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and there 
> is an exception (MasterNotRunning) while checking security capabilities, then 
> instead of seeing the underlying exception, the user sees
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20617) Upgrade/remove jetty-jsp

2018-05-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491117#comment-16491117
 ] 

Mike Drob commented on HBASE-20617:
---

Statically compiled JSP lets us catch some errors at compile time: if a field or 
method used by a JSP is renamed, that surfaces at build time instead of throwing 
lots of inscrutable errors the next time somebody happens to look at the web UI 
(which could be days or weeks later).

I am sympathetic to the idea of enabling hot fixes, so maybe there is a hybrid 
approach we could take, where we still statically compile but also allow raw JSP 
in the web container? I'm not up to date on the state of the art for front-end 
work, but I imagine there have been improvements to the tooling since we initially 
set up the JSP stuff around 2014.

> Upgrade/remove jetty-jsp
> 
>
> Key: HBASE-20617
> URL: https://issues.apache.org/jira/browse/HBASE-20617
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sakthi
>Priority: Minor
>
> jetty-jsp was removed after jetty-9.2.x, and we are on the 9.2 line. Research 
> so far suggests that apache-jsp in jetty-9.4.x might be of interest to us 
> (since JettyJspServlet.class lives in apache-jsp). Still to figure out the 
> situation for jetty-9.3.x.
> Filing this issue to track the work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20644) Master shutdown due to service ClusterSchemaServiceImpl failing to start

2018-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491093#comment-16491093
 ] 

Ted Yu edited comment on HBASE-20644 at 5/25/18 6:04 PM:
-

101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log was the active 
master log collected today.

The AM re-queued regions since the former servers were offline (there was a 
cluster restart):
{code}
2018-05-25 15:41:51,543 INFO  [PEWorker-5] assignment.AssignProcedure: Server 
not online, re-queuing pid=974, ppid=957, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure
table=SYSTEM.FUNCTION, region=122fca58c6ac8626bd29868348b8d7d7; rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-02.hwx.site,16020,1527261630657
{code}
The region was assigned to server 006:
{code}
2018-05-25 15:41:51,769 INFO  [PEWorker-2] assignment.RegionStateStore: pid=974 
updating hbase:meta row=122fca58c6ac8626bd29868348b8d7d7, regionState=OPENING, 
regionLocation=ctr-e138-   
1518143905142-329221-01-06.hwx.site,16020,1527262866049
{code}
3 Phoenix table regions were still stuck in transition:
{code}
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=SYSTEM.FUNCTION, region=122fca58c6ac8626bd29868348b8d7d7
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=GRAMMAR_TABLE_INDEX, 
region=577e8deb885052ec45b8727603bd5cf9
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=SYSTEM.CATALOG, region=8c8a2fa7b3de53bdb1a8c2066deac5ac
{code}
Found the following in master log (might be related to HBASE-20492):
{code}
2018-05-25 17:26:27,441 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=2] 
procedure2.ProcedureExecutor: Stored pid=1007, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH;   UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049
2018-05-25 17:26:27,443 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=1007, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049 
checking lock on 577e8deb885052ec45b8727603bd5cf9
2018-05-25 17:26:27,445 WARN  [PEWorker-15] 
assignment.RegionTransitionProcedure: Failed transition, suspend 1secs 
pid=1007, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049; 
rit=OPENING, location=ctr-e138-  
1518143905142-329221-01-06.hwx.site,16020,1527262866049; waiting on 
rectified condition fixed by other Procedure or operator intervention
org.apache.hadoop.hbase.exceptions.UnexpectedStateException: Expected 
[SPLITTING, SPLIT, MERGING, OPEN, CLOSING] so could move to CLOSING but current 
state=OPENING
  at 
org.apache.hadoop.hbase.master.assignment.RegionStates$RegionStateNode.transitionState(RegionStates.java:158)
  at 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
  at 
org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:203)
  at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:349)
  at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:94)
  at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1472)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1240)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
{code}



was (Author: yuzhih...@gmail.com):
101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log was the active 
master log collected today.

The AM re-queued regions since the former servers were offline (there was a 
cluster restart):
{code}
2018-05-25 15:41:51,543 INFO  [PEWorker-5] assignment.AssignProcedure: Server 
not online, re-queuing pid=974, ppid=957, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure
table=SYSTEM.FUNCTION, region=122fca58c6ac8626bd29868348b8d7d7; rit=OPENING, 

[jira] [Commented] (HBASE-20644) Master shutdown due to service ClusterSchemaServiceImpl failing to start

2018-05-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491093#comment-16491093
 ] 

Ted Yu commented on HBASE-20644:


101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log was the active 
master log collected today.

The AM re-queued regions since the former servers were offline (there was a 
cluster restart):
{code}
2018-05-25 15:41:51,543 INFO  [PEWorker-5] assignment.AssignProcedure: Server 
not online, re-queuing pid=974, ppid=957, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure
table=SYSTEM.FUNCTION, region=122fca58c6ac8626bd29868348b8d7d7; rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-02.hwx.site,16020,1527261630657

The region was assigned to server 006:

2018-05-25 15:41:51,769 INFO  [PEWorker-2] assignment.RegionStateStore: pid=974 
updating hbase:meta row=122fca58c6ac8626bd29868348b8d7d7, regionState=OPENING, 
regionLocation=ctr-e138-   
1518143905142-329221-01-06.hwx.site,16020,1527262866049
{code}
3 Phoenix table regions were still stuck in transition:
{code}
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=SYSTEM.FUNCTION, region=122fca58c6ac8626bd29868348b8d7d7
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=GRAMMAR_TABLE_INDEX, 
region=577e8deb885052ec45b8727603bd5cf9
2018-05-25 15:42:56,546 WARN  [ProcExecTimeout] assignment.AssignmentManager: 
STUCK Region-In-Transition rit=OPENING, 
location=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,
1527262866049, table=SYSTEM.CATALOG, region=8c8a2fa7b3de53bdb1a8c2066deac5ac
{code}
Found the following in master log (might be related to HBASE-20492):
{code}
2018-05-25 17:26:27,441 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=2] 
procedure2.ProcedureExecutor: Stored pid=1007, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH;   UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049
2018-05-25 17:26:27,443 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=1007, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049 
checking lock on 577e8deb885052ec45b8727603bd5cf9
2018-05-25 17:26:27,445 WARN  [PEWorker-15] 
assignment.RegionTransitionProcedure: Failed transition, suspend 1secs 
pid=1007, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=GRAMMAR_TABLE_INDEX, region=577e8deb885052ec45b8727603bd5cf9, 
server=ctr-e138-1518143905142-329221-01-06.hwx.site,16020,1527262866049; 
rit=OPENING, location=ctr-e138-  
1518143905142-329221-01-06.hwx.site,16020,1527262866049; waiting on 
rectified condition fixed by other Procedure or operator intervention
org.apache.hadoop.hbase.exceptions.UnexpectedStateException: Expected 
[SPLITTING, SPLIT, MERGING, OPEN, CLOSING] so could move to CLOSING but current 
state=OPENING
  at 
org.apache.hadoop.hbase.master.assignment.RegionStates$RegionStateNode.transitionState(RegionStates.java:158)
  at 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
  at 
org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:203)
  at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:349)
  at 
org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:94)
  at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1472)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1240)
  at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
{code}


> Master shutdown due to service ClusterSchemaServiceImpl failing to start
> 
>
> Key: HBASE-20644
> URL: https://issues.apache.org/jira/browse/HBASE-20644
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Romil Choksi
>Priority: Critical
> Attachments: 
> 101383-master-ctr-e138-1518143905142-329221-01-03.hwx.site.log, 
> 101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log, 
> 

[jira] [Commented] (HBASE-20617) Upgrade/remove jetty-jsp

2018-05-25 Thread Sakthi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491090#comment-16491090
 ] 

Sakthi commented on HBASE-20617:


Looks like jetty-9.3.x also has apache-jsp. [~allan163], I think the patch could be 
uploaded, though I am not sure whether it would pose any safety issue. 
Maybe [~mdrob] can guide us here? 

> Upgrade/remove jetty-jsp
> 
>
> Key: HBASE-20617
> URL: https://issues.apache.org/jira/browse/HBASE-20617
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sakthi
>Priority: Minor
>
> jetty-jsp was removed after jetty-9.2.x, and we are on the 9.2 line. Research 
> so far suggests that apache-jsp in jetty-9.4.x might be of interest to us 
> (since JettyJspServlet.class lives in apache-jsp). Still to figure out the 
> situation for jetty-9.3.x.
> Filing this issue to track the work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20644) Master shutdown due to service ClusterSchemaServiceImpl failing to start

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20644:
---
Attachment: 
101383-regionserver-ctr-e138-1518143905142-329221-01-06.hwx.site.log

101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log

> Master shutdown due to service ClusterSchemaServiceImpl failing to start
> 
>
> Key: HBASE-20644
> URL: https://issues.apache.org/jira/browse/HBASE-20644
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Romil Choksi
>Priority: Critical
> Attachments: 
> 101383-master-ctr-e138-1518143905142-329221-01-03.hwx.site.log, 
> 101383-master-ctr-e138-1518143905142-329221-01-04.hwx.site.log, 
> 101383-regionserver-ctr-e138-1518143905142-329221-01-02.hwx.site.log, 
> 101383-regionserver-ctr-e138-1518143905142-329221-01-06.hwx.site.log, 
> 101383-regionserver-ctr-e138-1518143905142-329221-01-07.hwx.site.log
>
>
> From hbase-hbase-master-ctr-e138-1518143905142-329221-01-03.hwx.site.log :
> {code}
> 2018-05-23 22:14:29,750 ERROR 
> [master/ctr-e138-1518143905142-329221-01-03:2] master.HMaster: Failed 
> to become active master
> java.lang.IllegalStateException: Expected the service 
> ClusterSchemaServiceImpl [FAILED] to be RUNNING, but the service has FAILED
> at 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.checkCurrentState(AbstractService.java:345)
> at 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.awaitRunning(AbstractService.java:291)
> at 
> org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:918)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2023)
> {code}
> Earlier in the log , the namespace region, 01a7f9ba9fffd691f261d3fbc620da06 , 
> was deemed OPEN on 01-07.hwx.site,16020,1527112194788 which was declared 
> not online:
> {code}
> 2018-05-23 21:54:34,786 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.RegionStateStore: Load hbase:meta entry
>  region=01a7f9ba9fffd691f261d3fbc620da06, regionState=OPEN, 
> lastHost=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112194788,
>  
> regionLocation=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112194788,
>  seqnum=43
> 2018-05-23 21:54:34,787 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.AssignmentManager: Number of RegionServers=1
> 2018-05-23 21:54:34,788 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.AssignmentManager: KILL 
> RegionServer=ctr-e138-1518143905142-329221-01-07.   
> hwx.site,16020,1527112194788 hosting regions but not online.
> {code}
> Later, even though a different instance on 007 registered with master:
> {code}
> 2018-05-23 21:55:13,541 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] 
> master.ServerManager: Registering 
> regionserver=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112506002
> ...
> 2018-05-23 21:55:43,881 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> client.RpcRetryingCallerImpl: Call exception, tries=12, retries=12, 
> started=69001 ms ago,cancelled=false, 
> msg=org.apache.hadoop.hbase.NotServingRegionException: 
> hbase:namespace,,1527099443383.01a7f9ba9fffd691f261d3fbc620da06. is not 
> online on ctr-e138-1518143905142-329221-  
> 01-07.hwx.site,16020,1527112506002
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3273)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3250)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1414)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2446)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
> {code}
> There was no OPEN request for 01a7f9ba9fffd691f261d3fbc620da06 sent to that 
> server instance.
> From 
> hbase-hbase-regionserver-ctr-e138-1518143905142-329221-01-07.hwx.site.log 
> :
> {code}
> 2018-05-23 21:52:27,414 INFO  
> [RS_CLOSE_REGION-regionserver/ctr-e138-1518143905142-329221-01-07:16020-1]
>  regionserver.HRegion: Closed hbase:namespace,,1527099443383.   
> 01a7f9ba9fffd691f261d3fbc620da06.
> {code}
> Then 

[jira] [Commented] (HBASE-20645) Fix security_available method in security.rb

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491071#comment-16491071
 ] 

Andrew Purtell commented on HBASE-20645:


Does this affect earlier versions than 2.0?

> Fix security_available method in security.rb 
> -
>
> Key: HBASE-20645
> URL: https://issues.apache.org/jira/browse/HBASE-20645
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-20645.patch
>
>
> "exists?" method expects parameter tableName to be String but ACL_TABLE_NAME 
> is of org.apache.hadoop.hbase.TableName form.
> {code}
> raise(ArgumentError, 'DISABLED: Security features are not available') unless \
>   
> exists?(org.apache.hadoop.hbase.security.access.AccessControlLists::ACL_TABLE_NAME.getNameAsString)
> {code}
> Impact of the bug:
> If a user runs any security-related command (revoke, user_permission) and there 
> is an exception (MasterNotRunning) while checking security capabilities, then 
> instead of seeing the underlying exception, the user sees
> {code}
> ERROR: no method 'valueOf' for arguments (org.apache.hadoop.hbase.TableName) 
> on Java::OrgApacheHadoopHbase::TableName
>   available overloads:
> (java.lang.String)
> (byte[])
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491066#comment-16491066
 ] 

Andrew Purtell edited comment on HBASE-19722 at 5/25/18 5:45 PM:
-

bq.  Lossy counting algorithm designed the sweeping happens every  "1 / 
errorRate" items arrived. For example, if e is 0.02, sweep() method will be 
called every 50 times. 

It's still done inline with the metrics update. How expensive is the sweep? If 
done from a chore in another thread the average work per update would be less, 
but I suppose there would be locking while the sweep happens, so never mind. 
Thanks.
Edit: You are using a concurrent map so no locking would be needed, maybe, but 
still, no problem to leave as is.

350 might be more than an operator would want. We don't have a precise 
definition for what would be a manageable number of counters because it would 
be a matter of opinion. 100 would be fine IMHO, 1000 not so much. Just some 
numbers I made up right now thinking about scraping the JMX directly. If an 
operator would query this information from an external metrics DB, they could 
sort and cut the list with the DB's query tools, so it doesn't really matter 
for those people. Need to consider others who might process the raw JMX data, 
though.


was (Author: apurtell):
bq.  Lossy counting algorithm designed the sweeping happens every  "1 / 
errorRate" items arrived. For example, if e is 0.02, sweep() method will be 
called every 50 times. 

It's still done inline with the metrics update. How expensive is the sweep? If 
done from a chore in another thread the average work per update would be less, 
but I suppose there would be locking while the sweep happens, so never mind. 
Thanks.

350 might be more than an operator would want. We don't have a precise 
definition for what would be a manageable number of counters because it would 
be a matter of opinion. 100 would be fine IMHO, 1000 not so much. Just some 
numbers I made up right now thinking about scraping the JMX directly. If an 
operator would query this information from an external metrics DB, they could 
sort and cut the list with the DB's query tools, so it doesn't really matter 
for those people. Need to consider others who might process the raw JMX data, 
though.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491066#comment-16491066
 ] 

Andrew Purtell edited comment on HBASE-19722 at 5/25/18 5:43 PM:
-

bq.  Lossy counting algorithm designed the sweeping happens every  "1 / 
errorRate" items arrived. For example, if e is 0.02, sweep() method will be 
called every 50 times. 

It's still done inline with the metrics update. How expensive is the sweep? If 
done from a chore in another thread the average work per update would be less, 
but I suppose there would be locking while the sweep happens, so never mind. 
Thanks.

350 might be more than an operator would want. We don't have a precise 
definition for what would be a manageable number of counters because it would 
be a matter of opinion. 100 would be fine IMHO, 1000 not so much. Just some 
numbers I made up right now thinking about scraping the JMX directly. If an 
operator would query this information from an external metrics DB, they could 
sort and cut the list with the DB's query tools, so it doesn't really matter 
for those people. Need to consider others who might process the raw JMX data, 
though.


was (Author: apurtell):
bq.  Lossy counting algorithm designed the sweeping happens every  "1 / 
errorRate" items arrived. For example, if e is 0.02, sweep() method will be 
called every 50 times. 

It's still done inline with the metrics update. How expensive is the sweep? Was 
thinking if done from a chore in another thread the average work per update 
would be less, but I suppose there would be locking while the sweep happens, so 
never mind. Thanks.

350 might be more than an operator would want. We don't have a precise 
definition for what would be a manageable number of counters because it would 
be a matter of opinion. 100 would be fine IMHO, 1000 not so much. Just some 
numbers I made up right now thinking about scraping the JMX directly. If an 
operator would query this information from an external metrics DB, they could 
sort and cut the list with the DB's query tools, so it doesn't really matter 
for those people. Need to consider others who might process the raw JMX data, 
though.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491066#comment-16491066
 ] 

Andrew Purtell commented on HBASE-19722:


bq.  Lossy counting algorithm designed the sweeping happens every  "1 / 
errorRate" items arrived. For example, if e is 0.02, sweep() method will be 
called every 50 times. 

It's still done inline with the metrics update. How expensive is the sweep? Was 
thinking if done from a chore in another thread the average work per update 
would be less, but I suppose there would be locking while the sweep happens, so 
never mind. Thanks.

350 might be more than an operator would want. We don't have a precise 
definition for what would be a manageable number of counters because it would 
be a matter of opinion. 100 would be fine IMHO, 1000 not so much. Just some 
numbers I made up right now thinking about scraping the JMX directly. If an 
operator would query this information from an external metrics DB, they could 
sort and cut the list with the DB's query tools, so it doesn't really matter 
for those people. Need to consider others who might process the raw JMX data, 
though.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491054#comment-16491054
 ] 

Andrew Purtell commented on HBASE-20597:


Attached as HBASE-20597-branch-1.addendum-v2.0.patch. It reverts the earlier 
change and just adds 'synchronized' to the endpoint methods that publish, use, or 
modify the 'zkw' instance and were not already synchronized. Running replication 
unit tests now. 
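
The shape of that change, sketched under the assumption of a shared watcher field guarded by the endpoint's monitor (illustrative only, not the actual addendum):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

// Hedged sketch: every method that publishes, reads, or replaces the shared
// watcher reference synchronizes on the holder, so a failed or concurrent
// (re)initialization cannot leak a second ZooKeeperWatcher instance.
abstract class SynchronizedZkwHolderSketch {
  private ZooKeeperWatcher zkw;  // shared reference, guarded by 'this'

  /** How a real endpoint builds its watcher is elided; hypothetical helper. */
  protected abstract ZooKeeperWatcher createNewWatcher() throws IOException;

  synchronized ZooKeeperWatcher getZkw() throws IOException {
    if (zkw == null) {
      zkw = createNewWatcher();
    }
    return zkw;
  }

  synchronized void reconnect() throws IOException {
    if (zkw != null) {
      zkw.close();  // close the stale watcher before replacing the reference
      zkw = null;
    }
    zkw = createNewWatcher();
  }
}
{code}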

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.addendum-v2.0.patch, 
> HBASE-20597-branch-1.patch, HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20597:
---
Attachment: HBASE-20597-branch-1.addendum-v2.0.patch

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.addendum-v2.0.patch, 
> HBASE-20597-branch-1.patch, HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20597:
---
Attachment: (was: HBASE-20597-branch-1.patch)

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491048#comment-16491048
 ] 

Xu Cang commented on HBASE-19722:
-

"at most 1/e meters will be kept"  — There is a typo here.  It should be '  7 / 
e'  according to the paper:  
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf]
    See paragraph above chapter 4.3)

This 7 / e space is still good to me. For example, if we use 0.02 as error 
rate, at most 350 meters will be kept.

 

Lossy counting algorithm designed the sweeping happens every  "1 / errorRate" 
items arrived. For example, if e is 0.02, sweep() method will be called every 
50 times. 

Yes, I can try to make 'e' configurable from site config.  

 

"Ideally the operator could be able to set the number of expected top-N, e.g. N 
= 100" – that's a good idea. Let me see if there is a good conversion from e to 
N in topN.

 

Yes, I will fix findbugs errors. And add ASF header. 

 

Thanks for the review, Andrew.
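
To make the space/accuracy trade-off concrete, here is a hedged, simplified sketch of the lossy counting scheme being discussed (an illustration of the algorithm, not the LossyCounting class in the patch):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of lossy counting with error rate e: counts are swept every
// 1/e arrivals, and keys whose counts have not kept pace with the current
// bucket are dropped, so only frequently seen keys survive and the map stays small.
class LossyCounterSketch {
  private final Map<String, Long> counts = new ConcurrentHashMap<>();
  private final long bucketSize;        // ceil(1 / e): sweep interval
  private long totalSeen = 0;
  private long currentBucket = 1;

  LossyCounterSketch(double errorRate) {
    this.bucketSize = (long) Math.ceil(1.0 / errorRate);
  }

  synchronized void add(String key) {
    counts.merge(key, 1L, Long::sum);
    totalSeen++;
    if (totalSeen % bucketSize == 0) {  // e.g. every 50 arrivals for e = 0.02
      sweep();
      currentBucket++;
    }
  }

  private void sweep() {
    // Drop keys whose count is below the current bucket number; surviving
    // counts undercount the true frequency by at most e * totalSeen.
    counts.entrySet().removeIf(entry -> entry.getValue() < currentBucket);
  }

  Map<String, Long> snapshot() {
    return counts;
  }
}
{code}

With e = 0.02 the sweep runs every 50 arrivals and the paper's bound keeps at most about 7/0.02 = 350 entries, so one rough conversion for the top-N question above is to pick e ≈ 7/N for a desired cap of N tracked meters (e.g. e = 0.07 for N = 100).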

 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20597:
---
Attachment: HBASE-20597-branch-1.patch

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491036#comment-16491036
 ] 

Andrew Purtell commented on HBASE-20597:


Let me make a quick patch that does what I suggested in my last comment above. 
Back in a sec...

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491026#comment-16491026
 ] 

Andrew Purtell edited comment on HBASE-19722 at 5/25/18 5:16 PM:
-

bq. By using lossy count to maintain meters, at most 1 / e meters will be kept  
(e is error rate) e.g. when e is 0.02 by default, at most 50 Clients request 
metrics will be kept

Oh, this is a nice solution. 

So we sweep the lossy counts at every update? Can this be done from a periodic 
chore instead to lessen the per-update cost? 

Can {{e}} be made configurable by site configuration? Ideally the operator 
could be able to set the number of expected top-N, e.g. N = 100

New files  TestLossyCounting.java and LossyCounting.java need an ASF header. 
Cut and paste from any file that already has one.

Please fix reported checkstyle nits. 
{noformat}
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MetaTableMetrics.java:146:
private void 
registerLossyCountingMeterIfNotPresent(ObserverContext
 e,: Line is longer than 100 characters (found 104). [LineLength]
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/LossyCounting.java:71:
   * @return: At-clause should have a non-empty description. 
[NonEmptyAtclauseDescription]
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestLossyCounting.java:9:import
 static org.junit.Assert.assertEquals;: Wrong order for 
'org.junit.Assert.assertEquals' import. [ImportOrder]
{noformat}

That test failure in precommit doesn't look related.


was (Author: apurtell):
bq. By using lossy count to maintain meters, at most 1 / e meters will be kept  
(e is error rate) e.g. when e is 0.02 by default, at most 50 Clients request 
metrics will be kept

Oh, this is a nice solution. 

So we sweep the lossy counts at every update? Can this be done from a periodic 
chore instead to lessen the per-update cost? 

Can {{e}} be made configurable by site configuration?

New files  TestLossyCounting.java and LossyCounting.java need an ASF header. 
Cut and paste from any file that already has one.

Please fix reported checkstyle nits. 
{noformat}
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MetaTableMetrics.java:146:
private void 
registerLossyCountingMeterIfNotPresent(ObserverContext
 e,: Line is longer than 100 characters (found 104). [LineLength]
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/LossyCounting.java:71:
   * @return: At-clause should have a non-empty description. 
[NonEmptyAtclauseDescription]
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestLossyCounting.java:9:import
 static org.junit.Assert.assertEquals;: Wrong order for 
'org.junit.Assert.assertEquals' import. [ImportOrder]
{noformat}

That test failure in precommit doesn't look related.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491026#comment-16491026
 ] 

Andrew Purtell commented on HBASE-19722:


bq. By using lossy count to maintain meters, at most 1 / e meters will be kept  
(e is error rate) e.g. when e is 0.02 by default, at most 50 Clients request 
metrics will be kept

Oh, this is a nice solution. 

So we sweep the lossy counts at every update? Can this be done from a periodic 
chore instead to lessen the per-update cost? 

Can {{e}} be made configurable by site configuration?

New files  TestLossyCounting.java and LossyCounting.java need an ASF header. 
Cut and paste from any file that already has one.

Please fix reported checkstyle nits. 
{noformat}
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MetaTableMetrics.java:146:
private void 
registerLossyCountingMeterIfNotPresent(ObserverContext
 e,: Line is longer than 100 characters (found 104). [LineLength]
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/LossyCounting.java:71:
   * @return: At-clause should have a non-empty description. 
[NonEmptyAtclauseDescription]
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestLossyCounting.java:9:import
 static org.junit.Assert.assertEquals;: Wrong order for 
'org.junit.Assert.assertEquals' import. [ImportOrder]
{noformat}

That test failure in precommit doesn't look related.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491015#comment-16491015
 ] 

Hadoop QA commented on HBASE-20597:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
19s{color} | {color:blue} hbase-server in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
11s{color} | {color:red} hbase-server: The patch generated 3 new + 2 unchanged 
- 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 2 fixed 
= 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 59s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleWAL |
|   | hadoop.hbase.replication.TestReplicationDroppedTables |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleAsyncWAL 
|
|   | hadoop.hbase.replication.TestReplicationDisableInactivePeer |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
|   | hadoop.hbase.replication.TestReplicationWithTags |
|   | hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL |
|   | hadoop.hbase.replication.TestReplicationSyncUpTool |
|   | hadoop.hbase.replication.TestReplicationEndpoint |
|   | hadoop.hbase.replication.TestNamespaceReplication |
|   | hadoop.hbase.replication.TestReplicationKillMasterRS |
|   | hadoop.hbase.replication.TestReplicationChangingPeerRegionservers |
|   | hadoop.hbase.replication.TestReplicationEmptyWALRecovery |
|   | 

[jira] [Commented] (HBASE-18948) HBase tags are server side only.

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491003#comment-16491003
 ] 

Hadoop QA commented on HBASE-18948:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m  
6s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  3m 
59s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-18948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925168/HBASE-18948_v1.patch |
| Optional Tests |  asflicense  refguide  |
| uname | Linux e0b53f3d3dd1 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b1089e8310 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964/artifact/patchprocess/branch-site/book.html
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 93 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> HBase tags are server side only.
> 
>
> Key: HBASE-18948
> URL: https://issues.apache.org/jira/browse/HBASE-18948
> Project: HBase
>  Issue Type: Improvement
>  Components: API, documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18948.patch, HBASE-18948_v1.patch
>
>
> HBase tags are server side only. In the Apache HBase documentation, in 
> section 62.1.1 http://hbase.apache.org/book.html#_implementation_details , I 
> am going to add a sentence to state explicitly that "Tags are not available 
> for get/set from client operations including coprocessors". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490987#comment-16490987
 ] 

stack edited comment on HBASE-20628 at 5/25/18 4:39 PM:


I tried it and it seems a bit slower. I set it to 13 (an old suggestion of 
yours). Left-hand side is run with current state. Right-hand side is run with 
config set to 13.

 !Screen Shot 2018-05-25 at 9.38.00 AM.png! 




was (Author: stack):
I tried it and it seems a bit slower. I set it to 13 (an old suggestion of 
yours).

 !Screen Shot 2018-05-25 at 9.38.00 AM.png! 



> SegmentScanner does over-comparing when one flushing
> 
>
> Key: HBASE-20628
> URL: https://issues.apache.org/jira/browse/HBASE-20628
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20628.branch-2.0.001 (1).patch, 
> HBASE-20628.branch-2.0.001.patch, HBASE-20628.branch-2.0.001.patch, Screen 
> Shot 2018-05-25 at 9.38.00 AM.png, hits-20628.png
>
>
> Flushing memstore is taking too long. It looks like we are doing a bunch of 
> comparing out of a new facility in hbase2, the Segment scanner at flush time.
> Below is a patch from [~anoop.hbase]. I had a similar more hacky version. 
> Both undo the extra comparing we were seeing in perf tests.
> [~anastas] and [~eshcar]. Need your help please.
> As I read it, we are trying to flush the memstore snapshot (default, no IMC 
> case). There is only ever going to be one Segment involved (even if IMC is 
> enabled); the snapshot Segment. But the getScanners is returning a list (of 
> one)  Scanners and the scan is via the generic SegmentScanner which is all 
> about a bunch of stuff we don't need when doing a flush so it seems to do 
> more work than is necessary. It also supports scanning backwards which is not 
> needed when trying to flush memstore.
> Do you see a problem doing a version of Anoops patch (whether IMC or not)? It 
> makes a big difference in general throughput when the below patch is in 
> place. Thanks.
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> index cbd60e5da3..c3dd972254 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> @@ -40,7 +40,8 @@ public class MemStoreSnapshot implements Closeable {
>  this.cellsCount = snapshot.getCellsCount();
>  this.memStoreSize = snapshot.getMemStoreSize();
>  this.timeRangeTracker = snapshot.getTimeRangeTracker();
> -this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +//this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +this.scanners = snapshot.getScannersForSnapshot();
>  this.tagsPresent = snapshot.isTagsPresent();
>}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> index 70074bf3b4..279c4e50c8 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.KeyValueUtil;
>  import org.apache.hadoop.hbase.io.TimeRange;
>  import org.apache.hadoop.hbase.util.Bytes;
>  import org.apache.hadoop.hbase.util.ClassSize;
> +import org.apache.hadoop.hbase.util.CollectionBackedScanner;
>  import org.apache.yetus.audience.InterfaceAudience;
>  import org.slf4j.Logger;
>  import 
> org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
> @@ -130,6 +131,10 @@ public abstract class Segment {
>  return Collections.singletonList(new SegmentScanner(this, readPoint, 
> order));
>}
> +  public List getScannersForSnapshot() {
> +return Collections.singletonList(new 
> CollectionBackedScanner(this.cellSet.get(), comparator));
> +  }
> +
>/**
> * @return whether the segment has any cells
> */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490987#comment-16490987
 ] 

stack commented on HBASE-20628:
---

I tried it and it seems a bit slower. I set it to 13 (an old suggestion of 
yours).

 !Screen Shot 2018-05-25 at 9.38.00 AM.png! 



> SegmentScanner does over-comparing when one flushing
> 
>
> Key: HBASE-20628
> URL: https://issues.apache.org/jira/browse/HBASE-20628
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20628.branch-2.0.001 (1).patch, 
> HBASE-20628.branch-2.0.001.patch, HBASE-20628.branch-2.0.001.patch, Screen 
> Shot 2018-05-25 at 9.38.00 AM.png, hits-20628.png
>
>
> Flushing memstore is taking too long. It looks like we are doing a bunch of 
> comparing out of a new facility in hbase2, the Segment scanner at flush time.
> Below is a patch from [~anoop.hbase]. I had a similar more hacky version. 
> Both undo the extra comparing we were seeing in perf tests.
> [~anastas] and [~eshcar]. Need your help please.
> As I read it, we are trying to flush the memstore snapshot (default, no IMC 
> case). There is only ever going to be one Segment involved (even if IMC is 
> enabled); the snapshot Segment. But the getScanners is returning a list (of 
> one)  Scanners and the scan is via the generic SegmentScanner which is all 
> about a bunch of stuff we don't need when doing a flush so it seems to do 
> more work than is necessary. It also supports scanning backwards which is not 
> needed when trying to flush memstore.
> Do you see a problem doing a version of Anoops patch (whether IMC or not)? It 
> makes a big difference in general throughput when the below patch is in 
> place. Thanks.
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> index cbd60e5da3..c3dd972254 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> @@ -40,7 +40,8 @@ public class MemStoreSnapshot implements Closeable {
>  this.cellsCount = snapshot.getCellsCount();
>  this.memStoreSize = snapshot.getMemStoreSize();
>  this.timeRangeTracker = snapshot.getTimeRangeTracker();
> -this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +//this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +this.scanners = snapshot.getScannersForSnapshot();
>  this.tagsPresent = snapshot.isTagsPresent();
>}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> index 70074bf3b4..279c4e50c8 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.KeyValueUtil;
>  import org.apache.hadoop.hbase.io.TimeRange;
>  import org.apache.hadoop.hbase.util.Bytes;
>  import org.apache.hadoop.hbase.util.ClassSize;
> +import org.apache.hadoop.hbase.util.CollectionBackedScanner;
>  import org.apache.yetus.audience.InterfaceAudience;
>  import org.slf4j.Logger;
>  import 
> org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
> @@ -130,6 +131,10 @@ public abstract class Segment {
>  return Collections.singletonList(new SegmentScanner(this, readPoint, 
> order));
>}
> +  public List getScannersForSnapshot() {
> +return Collections.singletonList(new 
> CollectionBackedScanner(this.cellSet.get(), comparator));
> +  }
> +
>/**
> * @return whether the segment has any cells
> */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20628:
--
Attachment: Screen Shot 2018-05-25 at 9.38.00 AM.png

> SegmentScanner does over-comparing when one flushing
> 
>
> Key: HBASE-20628
> URL: https://issues.apache.org/jira/browse/HBASE-20628
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20628.branch-2.0.001 (1).patch, 
> HBASE-20628.branch-2.0.001.patch, HBASE-20628.branch-2.0.001.patch, Screen 
> Shot 2018-05-25 at 9.38.00 AM.png, hits-20628.png
>
>
> Flushing memstore is taking too long. It looks like we are doing a bunch of 
> comparing out of a new facility in hbase2, the Segment scanner at flush time.
> Below is a patch from [~anoop.hbase]. I had a similar more hacky version. 
> Both undo the extra comparing we were seeing in perf tests.
> [~anastas] and [~eshcar]. Need your help please.
> As I read it, we are trying to flush the memstore snapshot (default, no IMC 
> case). There is only ever going to be one Segment involved (even if IMC is 
> enabled); the snapshot Segment. But the getScanners is returning a list (of 
> one)  Scanners and the scan is via the generic SegmentScanner which is all 
> about a bunch of stuff we don't need when doing a flush so it seems to do 
> more work than is necessary. It also supports scanning backwards which is not 
> needed when trying to flush memstore.
> Do you see a problem doing a version of Anoops patch (whether IMC or not)? It 
> makes a big difference in general throughput when the below patch is in 
> place. Thanks.
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> index cbd60e5da3..c3dd972254 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> @@ -40,7 +40,8 @@ public class MemStoreSnapshot implements Closeable {
>  this.cellsCount = snapshot.getCellsCount();
>  this.memStoreSize = snapshot.getMemStoreSize();
>  this.timeRangeTracker = snapshot.getTimeRangeTracker();
> -this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +//this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +this.scanners = snapshot.getScannersForSnapshot();
>  this.tagsPresent = snapshot.isTagsPresent();
>}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> index 70074bf3b4..279c4e50c8 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.KeyValueUtil;
>  import org.apache.hadoop.hbase.io.TimeRange;
>  import org.apache.hadoop.hbase.util.Bytes;
>  import org.apache.hadoop.hbase.util.ClassSize;
> +import org.apache.hadoop.hbase.util.CollectionBackedScanner;
>  import org.apache.yetus.audience.InterfaceAudience;
>  import org.slf4j.Logger;
>  import 
> org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
> @@ -130,6 +131,10 @@ public abstract class Segment {
>  return Collections.singletonList(new SegmentScanner(this, readPoint, 
> order));
>}
> +  public List getScannersForSnapshot() {
> +return Collections.singletonList(new 
> CollectionBackedScanner(this.cellSet.get(), comparator));
> +  }
> +
>/**
> * @return whether the segment has any cells
> */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18948) HBase tags are server side only.

2018-05-25 Thread Thiriguna Bharat Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiriguna Bharat Rao updated HBASE-18948:
-
Attachment: HBASE-18948_v1.patch

> HBase tags are server side only.
> 
>
> Key: HBASE-18948
> URL: https://issues.apache.org/jira/browse/HBASE-18948
> Project: HBase
>  Issue Type: Improvement
>  Components: API, documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18948.patch, HBASE-18948_v1.patch
>
>
> HBase tags are server side only. In the Apache HBase documentation, in 
> section 62.1.1 http://hbase.apache.org/book.html#_implementation_details , I 
> am going to add a sentence to state explicitly that "Tags are not available 
> for get/set from client operations including coprocessors". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18948) HBase tags are server side only.

2018-05-25 Thread Thiriguna Bharat Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490978#comment-16490978
 ] 

Thiriguna Bharat Rao commented on HBASE-18948:
--

Hi [~elserj]

Many thanks for the prompt feedback and review. I made the required changes in 
the Tag implementation details and added the following note in security.adoc: 

Coprocessors that run server-side on RegionServers can perform get and set 
operations on cell Tags.
Tags are stripped out at the RPC layer before the read response is sent back, so 
clients do not see these tags.
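
For illustration, a minimal sketch of what "server side only" means in practice. It assumes the HBase 2.x coprocessor API and the RawCell tag accessors; the observer class itself is hypothetical and is not part of this patch:

{code}
// Hypothetical example (not part of this patch): a RegionObserver that reads
// cell tags on the RegionServer. Assumes HBase 2.x APIs (RegionCoprocessor,
// RegionObserver, RawCell); verify against the version you run.
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.Optional;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.RawCell;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

public class TagReadingObserver implements RegionCoprocessor, RegionObserver {

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void postGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx, Get get,
      List<Cell> results) throws IOException {
    for (Cell cell : results) {
      if (cell instanceof RawCell) {
        // Tags are reachable here, inside the RegionServer...
        Iterator<Tag> tags = ((RawCell) cell).getTags();
        while (tags.hasNext()) {
          Tag tag = tags.next();
          // ...but the RPC layer strips them before the response is returned,
          // so the client that issued this Get never sees the tag or its type.
        }
      }
    }
  }
}
{code}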

Generated HBASE-18948_v1.patch, which erases the changes that were made last 
year for HBASE-18948.patch, so reviewers will now see a single commit with the new 
change. 

Appreciate your support and time.

 

Best,

Triguna

> HBase tags are server side only.
> 
>
> Key: HBASE-18948
> URL: https://issues.apache.org/jira/browse/HBASE-18948
> Project: HBase
>  Issue Type: Improvement
>  Components: API, documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18948.patch
>
>
> HBase tags are server side only. In the Apache HBase documentation, in 
> section 62.1.1 http://hbase.apache.org/book.html#_implementation_details , I 
> am going to add a sentence to state explicitly that "Tags are not available 
> for get/set from client operations including coprocessors". 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20592) Create a tool to verify tables do not have prefix tree encoding

2018-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490953#comment-16490953
 ] 

Hadoop QA commented on HBASE-20592:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m  
4s{color} | {color:blue} hbase-server in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
10s{color} | {color:red} hbase-server: The patch generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m  9s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20592 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925145/HBASE-20592.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d71b1e2b2b08 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 36f3d9432a |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 

[jira] [Created] (HBASE-20651) Master, prevents hbck or shell command to reassign the split parent region

2018-05-25 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-20651:


 Summary: Master, prevents hbck or shell command to reassign the 
split parent region
 Key: HBASE-20651
 URL: https://issues.apache.org/jira/browse/HBASE-20651
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 1.2.6
Reporter: huaxiang sun
Assignee: huaxiang sun


We are seeing that hbck brings back the split parent region, and this causes 
region inconsistency. More details will be filled in as the reproduction is still 
ongoing. We might need to do something in hbck or the master to prevent this from 
happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20628:
--
Attachment: HBASE-20628.branch-2.0.001 (1).patch

> SegmentScanner does over-comparing when one flushing
> 
>
> Key: HBASE-20628
> URL: https://issues.apache.org/jira/browse/HBASE-20628
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20628.branch-2.0.001 (1).patch, 
> HBASE-20628.branch-2.0.001.patch, HBASE-20628.branch-2.0.001.patch, 
> hits-20628.png
>
>
> Flushing memstore is taking too long. It looks like we are doing a bunch of 
> comparing out of a new facility in hbase2, the Segment scanner at flush time.
> Below is a patch from [~anoop.hbase]. I had a similar more hacky version. 
> Both undo the extra comparing we were seeing in perf tests.
> [~anastas] and [~eshcar]. Need your help please.
> As I read it, we are trying to flush the memstore snapshot (default, no IMC 
> case). There is only ever going to be one Segment involved (even if IMC is 
> enabled); the snapshot Segment. But the getScanners is returning a list (of 
> one)  Scanners and the scan is via the generic SegmentScanner which is all 
> about a bunch of stuff we don't need when doing a flush so it seems to do 
> more work than is necessary. It also supports scanning backwards which is not 
> needed when trying to flush memstore.
> Do you see a problem doing a version of Anoops patch (whether IMC or not)? It 
> makes a big difference in general throughput when the below patch is in 
> place. Thanks.
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> index cbd60e5da3..c3dd972254 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> @@ -40,7 +40,8 @@ public class MemStoreSnapshot implements Closeable {
>  this.cellsCount = snapshot.getCellsCount();
>  this.memStoreSize = snapshot.getMemStoreSize();
>  this.timeRangeTracker = snapshot.getTimeRangeTracker();
> -this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +//this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +this.scanners = snapshot.getScannersForSnapshot();
>  this.tagsPresent = snapshot.isTagsPresent();
>}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> index 70074bf3b4..279c4e50c8 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.KeyValueUtil;
>  import org.apache.hadoop.hbase.io.TimeRange;
>  import org.apache.hadoop.hbase.util.Bytes;
>  import org.apache.hadoop.hbase.util.ClassSize;
> +import org.apache.hadoop.hbase.util.CollectionBackedScanner;
>  import org.apache.yetus.audience.InterfaceAudience;
>  import org.slf4j.Logger;
>  import 
> org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
> @@ -130,6 +131,10 @@ public abstract class Segment {
>  return Collections.singletonList(new SegmentScanner(this, readPoint, 
> order));
>}
> +  public List getScannersForSnapshot() {
> +return Collections.singletonList(new 
> CollectionBackedScanner(this.cellSet.get(), comparator));
> +  }
> +
>/**
> * @return whether the segment has any cells
> */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490897#comment-16490897
 ] 

Andrew Purtell commented on HBASE-20597:


No test failures here at all. 

It would be fine, instead of using a lock object, to synchronize on the 
endpoint instance by making the relevant methods 'synchronized'. I thought 
using a lock named similarly to what it protects was clearer about intent, 
but I didn't mean to introduce the findbugs complaint in doing so. Either way 
achieves the same end. 
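
For readers following along, a minimal sketch of the two equivalent shapes being discussed; ZkWatcher and the field names are placeholders, not the actual HBaseReplicationEndpoint members, and real code would use one shape or the other, not both in one class:

{code}
// Illustrative only: two ways to serialize lazy creation of a shared watcher.
class EndpointLockingSketch {
  private final Object zkwLock = new Object();   // option 1: a dedicated, named lock object
  private ZkWatcher zkw;

  ZkWatcher getWatcherWithLockObject() {
    synchronized (zkwLock) {                     // the lock's name signals what it guards
      if (zkw == null) {
        zkw = ZkWatcher.connect();
      }
      return zkw;
    }
  }

  // option 2: lock the endpoint instance itself by marking the method synchronized
  synchronized ZkWatcher getWatcherSynchronizedMethod() {
    if (zkw == null) {
      zkw = ZkWatcher.connect();
    }
    return zkw;
  }

  static final class ZkWatcher {
    static ZkWatcher connect() { return new ZkWatcher(); }
  }
}
{code}

Either shape serializes access to the shared reference; the findbugs complaint typically comes from mixing guarded and unguarded accesses to the same field, not from the choice of lock.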

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490898#comment-16490898
 ] 

stack commented on HBASE-20628:
---

Thanks for the reminder...

> SegmentScanner does over-comparing when one flushing
> 
>
> Key: HBASE-20628
> URL: https://issues.apache.org/jira/browse/HBASE-20628
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20628.branch-2.0.001 (1).patch, 
> HBASE-20628.branch-2.0.001.patch, HBASE-20628.branch-2.0.001.patch, 
> hits-20628.png
>
>
> Flushing memstore is taking too long. It looks like we are doing a bunch of 
> comparing out of a new facility in hbase2, the Segment scanner at flush time.
> Below is a patch from [~anoop.hbase]. I had a similar more hacky version. 
> Both undo the extra comparing we were seeing in perf tests.
> [~anastas] and [~eshcar]. Need your help please.
> As I read it, we are trying to flush the memstore snapshot (default, no IMC 
> case). There is only ever going to be one Segment involved (even if IMC is 
> enabled); the snapshot Segment. But the getScanners is returning a list (of 
> one)  Scanners and the scan is via the generic SegmentScanner which is all 
> about a bunch of stuff we don't need when doing a flush so it seems to do 
> more work than is necessary. It also supports scanning backwards which is not 
> needed when trying to flush memstore.
> Do you see a problem doing a version of Anoops patch (whether IMC or not)? It 
> makes a big difference in general throughput when the below patch is in 
> place. Thanks.
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> index cbd60e5da3..c3dd972254 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
> @@ -40,7 +40,8 @@ public class MemStoreSnapshot implements Closeable {
>  this.cellsCount = snapshot.getCellsCount();
>  this.memStoreSize = snapshot.getMemStoreSize();
>  this.timeRangeTracker = snapshot.getTimeRangeTracker();
> -this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +//this.scanners = snapshot.getScanners(Long.MAX_VALUE, Long.MAX_VALUE);
> +this.scanners = snapshot.getScannersForSnapshot();
>  this.tagsPresent = snapshot.isTagsPresent();
>}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> index 70074bf3b4..279c4e50c8 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
> @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.KeyValueUtil;
>  import org.apache.hadoop.hbase.io.TimeRange;
>  import org.apache.hadoop.hbase.util.Bytes;
>  import org.apache.hadoop.hbase.util.ClassSize;
> +import org.apache.hadoop.hbase.util.CollectionBackedScanner;
>  import org.apache.yetus.audience.InterfaceAudience;
>  import org.slf4j.Logger;
>  import 
> org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
> @@ -130,6 +131,10 @@ public abstract class Segment {
>  return Collections.singletonList(new SegmentScanner(this, readPoint, 
> order));
>}
> +  public List getScannersForSnapshot() {
> +return Collections.singletonList(new 
> CollectionBackedScanner(this.cellSet.get(), comparator));
> +  }
> +
>/**
> * @return whether the segment has any cells
> */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20592) Create a tool to verify tables do not have prefix tree encoding

2018-05-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490893#comment-16490893
 ] 

Peter Somogyi commented on HBASE-20592:
---

{quote}Can we output anything else about the incompatible Data Block Encodings? 
An ID number or something?
{quote}
I'll check the possibilities. The exception message contains 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.PREFIX_TREE, so in the worst 
case we can get it from there.

> Create a tool to verify tables do not have prefix tree encoding
> ---
>
> Key: HBASE-20592
> URL: https://issues.apache.org/jira/browse/HBASE-20592
> Project: HBase
>  Issue Type: New Feature
>  Components: Operability, tooling
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HBASE-20592.master.001.patch
>
>
> HBase 2.0.0 removed PREFIX_TREE encoding so users need to modify data block 
> encoding to something else before upgrading to HBase 2.0+. A tool would help 
> users to verify that there are no tables left with PREFIX_TREE encoding.
> The tool needs to check the following:
>  * There are no tables where DATA_BLOCK_ENCODING => 'PREFIX_TREE'
>  * -Check existing hfiles that none of them have PREFIX_TREE encoding (in 
> case table description is changed but hfiles were not rewritten)-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20650) Revisit the HBaseTestingUtility related classes

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490890#comment-16490890
 ] 

Sean Busbey commented on HBASE-20650:
-

is it worth starting with a simple design doc to ensure we're all on the same 
page about what we want out of our testing utility?

> Revisit the HBaseTestingUtility related classes
> ---
>
> Key: HBASE-20650
> URL: https://issues.apache.org/jira/browse/HBASE-20650
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> They are marked as IA.Public because lots of users use them to write UTs. The 
> problem here is that they are classes, not interfaces, so it will be a bit 
> hard for us to keep compatibility. Also, MiniHBaseCluster is marked as 
> IA.Public, yet it is a class instead of an interface, and another strange 
> thing is that its parent class, HBaseCluster, is marked as IA.Private.
> We need to revisit the design here to make it clean...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20518) Need to serialize the enabled field for UpdatePeerConfigProcedure

2018-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490875#comment-16490875
 ] 

Hudson commented on HBASE-20518:


Results for branch branch-2
[build #781 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/781/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/781//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/781//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/781//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Need to serialize the enabled field for UpdatePeerConfigProcedure
> -
>
> Key: HBASE-20518
> URL: https://issues.apache.org/jira/browse/HBASE-20518
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 3.0.0, 2.1.0
>Reporter: Duo Zhang
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20518.v01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20644) Master shutdown due to service ClusterSchemaServiceImpl failing to start

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20644:
---
Priority: Critical  (was: Major)

> Master shutdown due to service ClusterSchemaServiceImpl failing to start
> 
>
> Key: HBASE-20644
> URL: https://issues.apache.org/jira/browse/HBASE-20644
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Romil Choksi
>Priority: Critical
> Attachments: 
> 101383-master-ctr-e138-1518143905142-329221-01-03.hwx.site.log, 
> 101383-regionserver-ctr-e138-1518143905142-329221-01-02.hwx.site.log, 
> 101383-regionserver-ctr-e138-1518143905142-329221-01-07.hwx.site.log
>
>
> From hbase-hbase-master-ctr-e138-1518143905142-329221-01-03.hwx.site.log :
> {code}
> 2018-05-23 22:14:29,750 ERROR 
> [master/ctr-e138-1518143905142-329221-01-03:2] master.HMaster: Failed 
> to become active master
> java.lang.IllegalStateException: Expected the service 
> ClusterSchemaServiceImpl [FAILED] to be RUNNING, but the service has FAILED
> at 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.checkCurrentState(AbstractService.java:345)
> at 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.awaitRunning(AbstractService.java:291)
> at 
> org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:918)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2023)
> {code}
> Earlier in the log , the namespace region, 01a7f9ba9fffd691f261d3fbc620da06 , 
> was deemed OPEN on 01-07.hwx.site,16020,1527112194788 which was declared 
> not online:
> {code}
> 2018-05-23 21:54:34,786 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.RegionStateStore: Load hbase:meta entry
>  region=01a7f9ba9fffd691f261d3fbc620da06, regionState=OPEN, 
> lastHost=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112194788,
>  
> regionLocation=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112194788,
>  seqnum=43
> 2018-05-23 21:54:34,787 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.AssignmentManager: Number of RegionServers=1
> 2018-05-23 21:54:34,788 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> assignment.AssignmentManager: KILL 
> RegionServer=ctr-e138-1518143905142-329221-01-07.   
> hwx.site,16020,1527112194788 hosting regions but not online.
> {code}
> Later, even though a different instance on 007 registered with master:
> {code}
> 2018-05-23 21:55:13,541 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] 
> master.ServerManager: Registering 
> regionserver=ctr-e138-1518143905142-329221-01-07.hwx.site,16020,1527112506002
> ...
> 2018-05-23 21:55:43,881 INFO  
> [master/ctr-e138-1518143905142-329221-01-03:2] 
> client.RpcRetryingCallerImpl: Call exception, tries=12, retries=12, 
> started=69001 ms ago,cancelled=false, 
> msg=org.apache.hadoop.hbase.NotServingRegionException: 
> hbase:namespace,,1527099443383.01a7f9ba9fffd691f261d3fbc620da06. is not 
> online on ctr-e138-1518143905142-329221-  
> 01-07.hwx.site,16020,1527112506002
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3273)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3250)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1414)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2446)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
> {code}
> There was no OPEN request for 01a7f9ba9fffd691f261d3fbc620da06 sent to that 
> server instance.
> From 
> hbase-hbase-regionserver-ctr-e138-1518143905142-329221-01-07.hwx.site.log 
> :
> {code}
> 2018-05-23 21:52:27,414 INFO  
> [RS_CLOSE_REGION-regionserver/ctr-e138-1518143905142-329221-01-07:16020-1]
>  regionserver.HRegion: Closed hbase:namespace,,1527099443383.   
> 01a7f9ba9fffd691f261d3fbc620da06.
> {code}
> Then region server 007 restarted:
> {code}
> Wed May 23 21:55:03 UTC 2018 Starting regionserver on 
> ctr-e138-1518143905142-329221-01-07.hwx.site
> {code}
> After which the region 01a7f9ba9fffd691f261d3fbc620da06 never showed up again 
> in log 007



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (HBASE-20648) HBASE-19364 "Truncate_preserve fails with table when replica region > 1" for master branch

2018-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20648:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch.

> HBASE-19364 "Truncate_preserve fails with table when replica region > 1" for 
> master branch
> --
>
> Key: HBASE-20648
> URL: https://issues.apache.org/jira/browse/HBASE-20648
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20648.master.001.patch
>
>
> It seems like the issue mentioned in HBASE-19364 exists in master branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20592) Create a tool to verify tables do not have prefix tree encoding

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490795#comment-16490795
 ] 

Sean Busbey commented on HBASE-20592:
-

Can we output anything else about the incompatible Data Block Encodings? An ID 
number or something?

Can we provide a link to upgrade guidance in the ref guide that explains how 
one goes about converting?

Please add a short name for the tool so that we don't need to tell people to 
use a fully qualified class name. 

> Create a tool to verify tables do not have prefix tree encoding
> ---
>
> Key: HBASE-20592
> URL: https://issues.apache.org/jira/browse/HBASE-20592
> Project: HBase
>  Issue Type: New Feature
>  Components: Operability, tooling
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HBASE-20592.master.001.patch
>
>
> HBase 2.0.0 removed PREFIX_TREE encoding so users need to modify data block 
> encoding to something else before upgrading to HBase 2.0+. A tool would help 
> users to verify that there are no tables left with PREFIX_TREE encoding.
> The tool needs to check the following:
>  * There are no tables where DATA_BLOCK_ENCODING => 'PREFIX_TREE'
>  * -Check existing hfiles that none of them have PREFIX_TREE encoding (in 
> case table description is changed but hfiles were not rewritten)-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490757#comment-16490757
 ] 

Sean Busbey commented on HBASE-20597:
-

in case the java/maven versions matter for the errors I'm getting before/after 
this addendum:

{code}
Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 
2017-10-18T00:58:13-07:00)
Maven home: /usr/share/apache-maven-3.5.2
Java version: 1.8.0_171, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-693.21.1.el7.x86_64", arch: "amd64", family: 
"unix"
{code}

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490751#comment-16490751
 ] 

Sean Busbey edited comment on HBASE-20597 at 5/25/18 1:58 PM:
--

here's a potential addendum that ensures safe retrieval / update via an 
AtomicReference and removes the {{synchronized}} from {{setRegionServers}}.

Some of the other locking in the {{HBaseReplicationEndpoint}} still looks 
confusing to me (e.g. getPeerUUID), but it doesn't look unsafe so I left it 
alone.

this patch clears findbugs locally ( {{mvn test-compile findbugs:check 
-DskipTests=true}} ). I'm getting some test failures when I use {{mvn 
-Psite-install-step install && mvn test -Dtest=\*Replication\*}}. Looking to 
chase them down, but I also get failures when I don't have this addendum in 
place so I'm not sure yet what's environment, flaky, or a problem with my 
approach.
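
For readers following along, a minimal sketch of the AtomicReference shape described here, with placeholder names (ZkWatcher, connect, close) rather than the actual addendum code:

{code}
// Illustrative only: compareAndSet makes the allocate-and-publish step atomic,
// and the losing thread closes its instance instead of leaking it.
import java.util.concurrent.atomic.AtomicReference;

class AtomicWatcherHolder {
  private final AtomicReference<ZkWatcher> zkw = new AtomicReference<>();

  ZkWatcher getWatcher() {
    ZkWatcher current = zkw.get();
    if (current != null) {
      return current;
    }
    ZkWatcher candidate = ZkWatcher.connect();
    if (zkw.compareAndSet(null, candidate)) {
      return candidate;               // this thread won the race; its instance is published
    }
    candidate.close();                // another thread won; close the loser so nothing leaks
    return zkw.get();
  }

  static final class ZkWatcher implements AutoCloseable {
    static ZkWatcher connect() { return new ZkWatcher(); }
    @Override
    public void close() { /* release the underlying connection */ }
  }
}
{code}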


was (Author: busbey):

here's a potential addendum that ensures safe retrieval / update via an 
AtomicReference and removes the {{synchronized}} from {{setRegionServers}}.

Some of the other locking in the {{HBaseReplicationEndpoint}} still looks 
confusing to me (e.g. getPeerUUID), but it doesn't look unsafe so I left it 
alone.

this patch clears findbugs locally ( {{mvn test-compile findbugs:check 
-DskipTests=true}} ). I'm getting some test failures when I use {{mvn 
-Psite-install-step install && mvn test -Dtest=*Replication*}}. Looking to 
chase them down, but I also get failures when I don't have this addendum in 
place so I'm not sure yet what's environment, flaky, or a problem with my 
approach.

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19718) Remove PREFIX_TREE from compression.adoc

2018-05-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490754#comment-16490754
 ] 

Peter Somogyi commented on HBASE-19718:
---

Thanks for reviewing, Ted. I pushed it to master only. Let me know if it needs 
to be cherry-picked to other branches.

> Remove PREFIX_TREE from compression.adoc
> 
>
> Key: HBASE-19718
> URL: https://issues.apache.org/jira/browse/HBASE-19718
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ted Yu
>Assignee: Peter Somogyi
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-19718.master.001.patch
>
>
> compression.adoc still refers to PREFIX_TREE though the encoding has been 
> removed:
> {code}
>  -data_block_encodingEncoding algorithm (e.g. prefix compression) to
>   use for data blocks in the test column family, 
> one
>   of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE].
> {code}
> ROW_INDEX_V1 should be put in its place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19718) Remove PREFIX_TREE from compression.adoc

2018-05-25 Thread Peter Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-19718:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Remove PREFIX_TREE from compression.adoc
> 
>
> Key: HBASE-19718
> URL: https://issues.apache.org/jira/browse/HBASE-19718
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Ted Yu
>Assignee: Peter Somogyi
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-19718.master.001.patch
>
>
> compression.adoc still refers to PREFIX_TREE though the encoding has been 
> removed:
> {code}
>  -data_block_encodingEncoding algorithm (e.g. prefix compression) to
>   use for data blocks in the test column family, 
> one
>   of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE].
> {code}
> ROW_INDEX_V1 should be put in its place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-25 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20597:

Status: Patch Available  (was: Reopened)

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.4.4, 1.3.2
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, 
> HBASE-20597.addendum.0.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

