[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Attachment: HBASE-18485-v4.patch

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.0-alpha-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Fix Version/s: 2.0.0-alpha-2
   3.0.0
Affects Version/s: 2.0.0-alpha-2
   3.0.0
   Status: Patch Available  (was: Open)

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.0-alpha-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Attachment: (was: HBASE-18485-v4.patch)

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.0-alpha-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Status: Open  (was: Patch Available)

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113957#comment-16113957
 ] 

Guanghao Zhang commented on HBASE-18485:


Ok, changed the type to Improvement.
bq. Should we enable the prefetch as default?
I checked HBASE-13071; it has been merged since HBase 2.0, so should it become the 
default in 3.0? But we will remove the blocking code and move to the async code 
base, and the async client scanner does the prefetching by default. So I don't 
think we need to enable it by default in this patch.
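For reference, a minimal client-side fragment (not from the patch; the table and 
family names are placeholders) showing how a scan opts into the prefetching 
scanner today, assuming the Scan#setAsyncPrefetch switch added by HBASE-13071:
{code}
// Illustrative fragment only: opt a single scan into ClientAsyncPrefetchScanner.
Configuration conf = HBaseConfiguration.create();
try (Connection conn = ConnectionFactory.createConnection(conf);
     Table table = conn.getTable(TableName.valueOf("t1"))) {
  Scan scan = new Scan().addFamily(Bytes.toBytes("cf"));
  scan.setAsyncPrefetch(true); // prefetching scanner instead of ClientSimpleScanner
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
      // consume rows; the next batch is prefetched in the background while we iterate
    }
  }
}
{code}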

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Issue Type: Improvement  (was: Bug)

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113938#comment-16113938
 ] 

Hadoop QA commented on HBASE-18516:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 57s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 48s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:bdc94b1 |
| JIRA Issue | HBASE-18516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880299/hbase-18516.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 15513e27a655 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 6266bb3 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7915/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7915/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7915/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> 

[jira] [Commented] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113934#comment-16113934
 ] 

Hudson commented on HBASE-18167:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #234 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/234/])
HBASE-18167 OfflineMetaRepair tool may cause HMaster abort always (tedyu: rev 
34ab9edde1a0b808416c2dd1d03c85a52f03f093)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java


> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In a production environment we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover, we rebuilt the meta table with the OfflineMetaRepair tool and 
> restarted the cluster, but HMaster could not finish its initialization. It 
> always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows,
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and will always 
> abort with the message below,
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup, the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding.
> 2. During SSH/SCP it retrieves the regions held by the server from meta/the 
> AM's in-memory state, but meta only has the "regioninfo" entry (it was just 
> rebuilt by OfflineMetaRepair). So an empty region list is returned and no 
> assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will 
> time out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113927#comment-16113927
 ] 

Chia-Ping Tsai commented on HBASE-18485:


Is any bug actually being fixed here? It seems to me this issue is about 
performance, so maybe we should change the type to Improvement. Should we enable 
the prefetch by default?

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Attachment: HBASE-18485-v4.patch

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch, 
> HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Attachment: HBASE-18502.v1.patch

v1 patch
# fix TestAccessController
# TestMasterFailover, which was recently enabled by HBASE-18231, is a flaky 
test. It is tracked by HBASE-18425.

> Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor
> ---
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch, HBASE-18502.v1.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Status: Patch Available  (was: Open)

> Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor
> ---
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch, HBASE-18502.v1.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18078) [C++] Harden RPC by handling various communication abnormalities

2017-08-03 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-18078:
--
Status: Patch Available  (was: Open)

> [C++] Harden RPC by handling various communication abnormalities
> 
>
> Key: HBASE-18078
> URL: https://issues.apache.org/jira/browse/HBASE-18078
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18078.000.patch, HBASE-18078.001.patch, 
> HBASE-18078.002.patch, HBASE-18078.003.patch, HBASE-18078.004.patch, 
> HBASE-18078.005.patch, HBASE-18078.006.patch, HBASE-18078.007.patch
>
>
> RPC layer should handle various communication abnormalities (e.g. connection 
> timeout, server aborted connection, and so on). Ideally, the corresponding 
> exceptions should be raised and propagated through handlers of pipeline in 
> client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Status: Open  (was: Patch Available)

> Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor
> ---
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18078) [C++] Harden RPC by handling various communication abnormalities

2017-08-03 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113903#comment-16113903
 ] 

Xiaobing Zhou commented on HBASE-18078:
---

v7:
# added RpcClient::GetFutureWithException
# fixed tests broken by v6

Extra tests will come with the next patch.

> [C++] Harden RPC by handling various communication abnormalities
> 
>
> Key: HBASE-18078
> URL: https://issues.apache.org/jira/browse/HBASE-18078
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18078.000.patch, HBASE-18078.001.patch, 
> HBASE-18078.002.patch, HBASE-18078.003.patch, HBASE-18078.004.patch, 
> HBASE-18078.005.patch, HBASE-18078.006.patch, HBASE-18078.007.patch
>
>
> RPC layer should handle various communication abnormalities (e.g. connection 
> timeout, server aborted connection, and so on). Ideally, the corresponding 
> exceptions should be raised and propagated through handlers of pipeline in 
> client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18078) [C++] Harden RPC by handling various communication abnormalities

2017-08-03 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-18078:
--
Attachment: HBASE-18078.007.patch

> [C++] Harden RPC by handling various communication abnormalities
> 
>
> Key: HBASE-18078
> URL: https://issues.apache.org/jira/browse/HBASE-18078
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18078.000.patch, HBASE-18078.001.patch, 
> HBASE-18078.002.patch, HBASE-18078.003.patch, HBASE-18078.004.patch, 
> HBASE-18078.005.patch, HBASE-18078.006.patch, HBASE-18078.007.patch
>
>
> RPC layer should handle various communication abnormalities (e.g. connection 
> timeout, server aborted connection, and so on). Ideally, the corresponding 
> exceptions should be raised and propagated through handlers of pipeline in 
> client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113883#comment-16113883
 ] 

Vikas Vishwakarma edited comment on HBASE-18517 at 8/4/17 3:48 AM:
---

Do let me know if the above looks ok and if there are any suggestions on what 
the max width limit for the log messages should be. I normally see 200-300 
bytes on average per log line. Maybe 100-200 times, with some more buffer, 
should be ok, so we could possibly consider 5K to 10K limits.


was (Author: vik.karma):
Do let me know if the above looks ok and if there are any suggestions on what 
should be the max width limit for the log messages. I see normally 200-300 
bytes on an average for log lines. Maybe 10-20 times should be ok so we can 
possibly consider 5K to 10K limits

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We had two cases now in our prod / pilot setups which is leading to humongous 
> log lines in RegionServer logs. 
> In first case, one of the phoenix user had constructed a query with a really 
> large list of Id filters (61 MB) that translated into HBase scan that was 
> running slow which lead to responseTooSlow messages in the logs with the 
> entire filter list being printed in the logs, example
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This was causing humongous log lines for flush and 
> compaction on these regions filling up the RS logs
> These large logs usually cause issues with disk I/O load, loading the splunk 
> servers, even machine perf degradations. With 61 MB log lines basic log 
> processing commands like vim, scrolling the logs, wc -l , etc were getting 
> stuck. High GC activity was also noted on this cluster although not 100% sure 
> if it was related to above issue. 
> We should consider limiting the message size in logs which can be easily done 
> by adding a maximum width format modifier on the message conversion character 
> in log4j.properties
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Description: 
We had two cases now in our prod / pilot setups which is leading to humongous 
log lines in RegionServer logs. 

In first case, one of the phoenix user had constructed a query with a really 
large list of Id filters (61 MB) that translated into HBase scan that was 
running slow which lead to responseTooSlow messages in the logs with the entire 
filter list being printed in the logs, example
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This was causing humongous log lines for flush and compaction 
on these regions filling up the RS logs

These large logs usually cause issues with disk I/O load, loading the splunk 
servers, even machine perf degradations. With 61 MB log lines basic log 
processing commands like vim, scrolling the logs, wc -l , etc were getting 
stuck. High GC activity was also noted on this cluster although not 100% sure 
if it was related to above issue. 

We should consider limiting the message size in logs which can be easily done 
by adding a maximum width format modifier on the message conversion character 
in log4j.properties
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n


  was:
We had two cases now in our prod / pilot setups which is leading to humongous 
log lines in RegionServer logs. 
In one case one of the phoenix user had constructed a query with a really large 
list of Id filters (61 MB) that translated into HBase scan that was running 
slow which lead to responseTooSlow messages in the logs with the entire filter 
list being printed in the logs, example
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This was causing humongous log lines for flush and compaction 
on these regions filling up the RS logs

These large logs usually cause issues with disk I/O load, loading the splunk 
servers, even machine perf degradations. With 61 MB log lines basic log 
processing commands like vim, scrolling the logs, wc -l , etc were getting 
stuck. High GC activity was also noted on this cluster although not 100% sure 
if it was related to above issue. 

We should consider limiting the message size in logs which can be easily done 
by adding a maximum width format modifier on the message conversion character 
in log4j.properties
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n



> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We had two cases now in our prod / pilot setups which is leading to humongous 
> log lines in RegionServer logs. 
> In first case, one of the phoenix user had constructed a query with a really 
> large list of Id filters (61 MB) that translated into HBase scan that was 
> running slow which lead to responseTooSlow messages in the logs with the 
> entire filter list being printed in the logs, example
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This was causing humongous log lines for flush and 
> compaction on these regions filling up the RS logs
> These large logs usually cause issues with disk I/O load, loading the splunk 
> servers, even machine perf degradations. With 61 MB log lines basic log 
> processing commands like vim, scrolling the logs, wc -l , etc were getting 
> stuck. High GC activity 

[jira] [Commented] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113883#comment-16113883
 ] 

Vikas Vishwakarma commented on HBASE-18517:
---

Do let me know if the above looks ok and if there are any suggestions on what 
should be the max width limit for the log messages. I see normally 200-300 
bytes on an average for log lines. Maybe 10-20 times should be ok so we can 
possibly consider 5K to 10K limits

> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We had two cases now in our prod / pilot setups which is leading to humongous 
> log lines in RegionServer logs. 
> In one case one of the phoenix user had constructed a query with a really 
> large list of Id filters (61 MB) that translated into HBase scan that was 
> running slow which lead to responseTooSlow messages in the logs with the 
> entire filter list being printed in the logs, example
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This was causing humongous log lines for flush and 
> compaction on these regions filling up the RS logs
> These large logs usually cause issues with disk I/O load, loading the splunk 
> servers, even machine perf degradations. With 61 MB log lines basic log 
> processing commands like vim, scrolling the logs, wc -l , etc were getting 
> stuck. High GC activity was also noted on this cluster although not 100% sure 
> if it was related to above issue. 
> We should consider limiting the message size in logs which can be easily done 
> by adding a maximum width format modifier on the message conversion character 
> in log4j.properties
> log4j.appender.console.layout.ConversionPattern=...: %m%n
> to 
> log4j.appender.console.layout.ConversionPattern=...: %.1m%n



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-18517:
--
Description: 
We had two cases now in our prod / pilot setups which is leading to humongous 
log lines in RegionServer logs. 
In one case one of the phoenix user had constructed a query with a really large 
list of Id filters (61 MB) that translated into HBase scan that was running 
slow which lead to responseTooSlow messages in the logs with the entire filter 
list being printed in the logs, example
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This was causing humongous log lines for flush and compaction 
on these regions filling up the RS logs

These large logs usually cause issues with disk I/O load, loading the splunk 
servers, even machine perf degradations. With 61 MB log lines basic log 
processing commands like vim, scrolling the logs, wc -l , etc were getting 
stuck. High GC activity was also noted on this cluster although not 100% sure 
if it was related to above issue. 

We should consider limiting the message size in logs which can be easily done 
by adding a maximum width format modifier on the message conversion character 
in log4j.properties
log4j.appender.console.layout.ConversionPattern=...: %m%n
to 
log4j.appender.console.layout.ConversionPattern=...: %.1m%n


  was:
We had two cases now in our prod / pilot setups which is leading to humongous 
log lines in RegionServer logs. 
In one case one of the phoenix user had constructed a query with a really large 
list of Id filters (61 MB) that translated into HBase scan that was running 
slow which lead to responseTooSlow messages in the logs with the entire filter 
list being printed in the logs, example
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

There was another case where a use case had created a table with really large 
key/region names. This was causing humongous log lines for flush and compaction 
on these regions filling up the RS logs

These large logs usually cause issues with disk I/O load, loading the splunk 
servers, even machine perf degradations. With 61 MB log lines basic log 
processing commands like vim, scrolling the logs, wc -l , etc were getting 
stuck. High GC activity was also noted on this cluster although not 100% sure 
if it was related to above issue. 

We should consider limiting the message size in logs which can be easily done 
by adding a maximum width format modifier on the message conversion character 
in log4j.properties
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: 
%m%n
to 
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: 
%.1m%n



> limit max log message width in log4j
> 
>
> Key: HBASE-18517
> URL: https://issues.apache.org/jira/browse/HBASE-18517
> Project: HBase
>  Issue Type: Bug
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>
> We had two cases now in our prod / pilot setups which is leading to humongous 
> log lines in RegionServer logs. 
> In one case one of the phoenix user had constructed a query with a really 
> large list of Id filters (61 MB) that translated into HBase scan that was 
> running slow which lead to responseTooSlow messages in the logs with the 
> entire filter list being printed in the logs, example
> ipc.RpcServer - (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
>  { type: REGION_NAME value:  . 
> org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
>  ...  ... 
> There was another case where a use case had created a table with really large 
> key/region names. This was causing humongous log lines for flush and 
> compaction on these regions filling up the RS logs
> These large logs usually cause issues with disk I/O load, loading the splunk 
> servers, even machine perf degradations. With 61 MB log lines basic log 
> processing commands like vim, scrolling the logs, wc -l , 

[jira] [Created] (HBASE-18517) limit max log message width in log4j

2017-08-03 Thread Vikas Vishwakarma (JIRA)
Vikas Vishwakarma created HBASE-18517:
-

 Summary: limit max log message width in log4j
 Key: HBASE-18517
 URL: https://issues.apache.org/jira/browse/HBASE-18517
 Project: HBase
  Issue Type: Bug
Reporter: Vikas Vishwakarma
Assignee: Vikas Vishwakarma


We have now had two cases in our prod / pilot setups that led to humongous 
log lines in the RegionServer logs. 
In one case a Phoenix user had constructed a query with a really large list of 
Id filters (61 MB) that translated into an HBase scan that ran slowly, which 
led to responseTooSlow messages in the logs with the entire filter list printed 
in them, for example:
ipc.RpcServer - (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","starttimems":1501457864417,"responsesize":11,"method":"Scan","param":"region
 { type: REGION_NAME value:  . 
org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter\\022\351\\200\\036\\n(org.apache.phoenix.filter.SkipScanFilter
 ...  ... 

In another case a use case had created a table with really large key/region 
names, which caused humongous log lines for flushes and compactions on those 
regions, filling up the RS logs.

These large logs usually cause issues with disk I/O load, load on the Splunk 
servers, and even machine perf degradation. With 61 MB log lines, basic 
log-processing commands like vim, scrolling the logs, wc -l, etc. were getting 
stuck. High GC activity was also noted on this cluster, although we are not 
100% sure it was related to the above issue.

We should consider limiting the message size in the logs, which can easily be 
done by adding a maximum width format modifier on the message conversion 
character in log4j.properties, changing
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: 
%m%n
to 
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: 
%.1m%n
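
Purely as an illustration (not part of this change; the class and logger names 
below are made up), here is a tiny standalone program that applies the same kind 
of precision modifier, with a deliberately small 20-character cap so the 
truncation is easy to see:
{code}
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class LogTruncationDemo {
  public static void main(String[] args) {
    Logger root = Logger.getRootLogger();
    root.removeAllAppenders();
    // Same pattern as above, but with a tiny %.20m cap for demonstration.
    root.addAppender(new ConsoleAppender(
        new PatternLayout("%d{ISO8601} %-5p [%t] %c{2}: %.20m%n")));
    root.setLevel(Level.INFO);
    Logger.getLogger("demo.ipc.RpcServer")
        .info("(responseTooSlow): a very long parameter dump that would normally flood the log");
  }
}
{code}
One thing worth double-checking when picking the limit: if I read the log4j 1.x 
PatternLayout semantics correctly, the precision modifier truncates from the 
beginning of the message (it keeps the tail), so a capped responseTooSlow line 
may lose its prefix rather than the parameter dump.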




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18500) Performance issue: Don't use BufferedMutator for HTable's put method

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113847#comment-16113847
 ] 

Chia-Ping Tsai commented on HBASE-18500:


bq. I thought we can remove the BufferedMutator from Table.
I agree completely because it brings four benefits.
# correct the metrics (see HBASE-18476)
# make HTable thread-safe (see HBASE-17368)
# reduce the latency
# get rid of some deprecated methods in Table

> Performance issue: Don't use BufferedMutator for HTable's put method
> 
>
> Key: HBASE-18500
> URL: https://issues.apache.org/jira/browse/HBASE-18500
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18500-v1.patch
>
>
> Copied the test result from HBASE-17994.
> Run start-hbase.sh in my local computer and use the default config to test 
> with PE tool.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --autoFlush=True randomWrite 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --autoFlush=True asyncRandomWrite 1
> {code}
> Mean latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 164.39 | 161.22 | 164.78 | 140.61 | 151.69 |
> | asyncRandomWrite | 122.29 | 125.58 | 122.23 | 113.18 | 123.02 |
> 50th latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 130.00 | 125.00 | 123.00 | 112.00 | 121.00 |
> | asyncRandomWrite | 95.00 | 97.00 | 95.00 | 88.00 | 95.00 |
> 99th latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 600.00 | 600.00 | 650.00 | 404.00 | 425.00 |
> | asyncRandomWrite | 339.00 | 327.00 | 297.00 | 311.00 | 318.00 |
> In our internal 0.98 branch, the PE test result shows the async write has 
> almost the same latency as the blocking write. But on the master branch, the 
> result shows the async write has better latency than the blocking client.
> Looking at the code, I think the difference is the BufferedMutator. On the 
> master branch HTable doesn't have a write buffer and every write request is 
> flushed directly; users can use BufferedMutator when they want client-side 
> buffering of writes. For the performance issue (autoFlush=True), I think we 
> can use the rpc caller directly in HTable's put method. Thanks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18500) Performance issue: Don't use BufferedMutator for HTable's put method

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113836#comment-16113836
 ] 

Guanghao Zhang commented on HBASE-18500:


I think we can remove the BufferedMutator from Table. Then Table would be used 
for unbuffered writes and BufferedMutator for buffered writes.
[~anoop.hbase] [~ram_krish] [~chia7712] [~stack] What do you think about this?
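
For illustration only (not from the patch; table and column names are 
placeholders), a minimal sketch of that split:
{code}
// Table#put as an unbuffered, per-call RPC; an explicit BufferedMutator for
// client-side buffering.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
  TableName tn = TableName.valueOf("t1");

  // Unbuffered path: each put goes to the server immediately.
  try (Table table = conn.getTable(tn)) {
    table.put(new Put(Bytes.toBytes("row1"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1")));
  }

  // Buffered path: mutations accumulate client-side and are flushed in batches.
  try (BufferedMutator mutator = conn.getBufferedMutator(tn)) {
    mutator.mutate(new Put(Bytes.toBytes("row2"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v2")));
  } // close() flushes anything still buffered
}
{code}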


> Performance issue: Don't use BufferedMutator for HTable's put method
> 
>
> Key: HBASE-18500
> URL: https://issues.apache.org/jira/browse/HBASE-18500
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18500-v1.patch
>
>
> Copied the test result from HBASE-17994.
> Run start-hbase.sh in my local computer and use the default config to test 
> with PE tool.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --autoFlush=True randomWrite 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --autoFlush=True asyncRandomWrite 1
> {code}
> Mean latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 164.39 | 161.22 | 164.78 | 140.61 | 151.69 |
> | asyncRandomWrite | 122.29 | 125.58 | 122.23 | 113.18 | 123.02 |
> 50th latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 130.00 | 125.00 | 123.00 | 112.00 | 121.00 |
> | asyncRandomWrite | 95.00 | 97.00 | 95.00 | 88.00 | 95.00 |
> 99th latency test result.
> || || Test1 || Test2 || Test3 || Test4 || Test5 ||
> | randomWrite | 600.00 | 600.00 | 650.00 | 404.00 | 425.00 |
> | asyncRandomWrite | 339.00 | 327.00 | 297.00 | 311.00 | 318.00 |
> In our internal 0.98 branch, the PE test result shows the async write has 
> almost the same latency as the blocking write. But on the master branch, the 
> result shows the async write has better latency than the blocking client.
> Looking at the code, I think the difference is the BufferedMutator. On the 
> master branch HTable doesn't have a write buffer and every write request is 
> flushed directly; users can use BufferedMutator when they want client-side 
> buffering of writes. For the performance issue (autoFlush=True), I think we 
> can use the rpc caller directly in HTable's put method. Thanks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: OldPatch-0001-Dump-Call-Queue-Summary.patch

Old patch.

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: 0001-Dump-call-queue-summaries.patch, 
> DebugDump_screenshot.png, OldPatch-0001-Dump-Call-Queue-Summary.patch, Sample 
> Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Sreeram Venkatasubramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113798#comment-16113798
 ] 

Sreeram Venkatasubramanian commented on HBASE-16290:


Sure [~chia7712]. I am attaching the old patch for your reference.

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: 0001-Dump-call-queue-summaries.patch, 
> DebugDump_screenshot.png, OldPatch-0001-Dump-Call-Queue-Summary.patch, Sample 
> Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113790#comment-16113790
 ] 

Guanghao Zhang commented on HBASE-18485:


Retrying for Hadoop QA. TestMasterFailover is tracked by HBASE-18425. 
[~tedyu] Thanks for your review.

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18485:
---
Attachment: HBASE-18485-v4.patch

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch, HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113788#comment-16113788
 ] 

Guanghao Zhang commented on HBASE-17125:


Ping [~anoop.hbase] [~Apache9] for review.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3 and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's view it is 
> gone. When the user queries with a filter, if the filter skips a newer 
> version, the oldest version will be seen again; but after the region is 
> compacted, the oldest version can never be seen. So it is weird for the user: 
> the query gets inconsistent results before and after region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the new version, the oldest version is seen again while it 
> has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, it first checks the version count, then 
> checks the cell against the filter, so the number of cells in the result may 
> be less than 3 even though there are 2 more versions that were never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.
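
For illustration only (not one of the attached tests or patches), a hypothetical 
reproduction sketch of the scenario above, assuming the HBase 2.x client API; 
the table, family and value names are invented:
{code}
// Family keeps at most 3 versions; we write 4 and filter out the newest one.
TableName tn = TableName.valueOf("t_filter_versions");
byte[] cf = Bytes.toBytes("cf");
byte[] q = Bytes.toBytes("q");
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin();
     Table table = conn.getTable(tn)) {
  admin.createTable(TableDescriptorBuilder.newBuilder(tn)
      .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(cf).setMaxVersions(3).build())
      .build());
  // Write 4 versions of the same cell; from the user's view only ts=2..4 should survive.
  for (long ts = 1; ts <= 4; ts++) {
    table.put(new Put(Bytes.toBytes("row")).addColumn(cf, q, ts, Bytes.toBytes("v" + ts)));
  }
  // A filter that skips the newest version ("v4").
  Scan scan = new Scan().setMaxVersions(3)
      .setFilter(new ValueFilter(CompareOperator.NOT_EQUAL,
          new BinaryComparator(Bytes.toBytes("v4"))));
  // Before a major compaction this scan can still return the ts=1 cell, i.e. the
  // version that should already have been dropped by max versions.
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result r : scanner) {
      System.out.println(r);
    }
  }
  // After admin.flush(tn) and admin.majorCompact(tn) the ts=1 cell is physically
  // gone, so the same scan returns a different set of versions.
}
{code}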



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113785#comment-16113785
 ] 

Guanghao Zhang commented on HBASE-17125:


bq.  testXXXWithFilterHint and testXXXWithFilter should be removed because they 
can't reproduce the bug of top (cell) change
If there are no objections about the 020 patch, I will remove them when I 
prepare the final patch. Thanks.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3, and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's point of 
> view it is gone. When the user queries with a filter, if the filter skips a 
> newer version, the oldest version becomes visible again; after the region is 
> compacted, the oldest version is never seen again. That is confusing for the 
> user: the query gets inconsistent results before and after region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the newer version, the oldest version becomes visible again 
> as long as it has not been removed yet.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, we first check the version count and then 
> check the cell against the filter, so the result may contain fewer than 3 
> cells even though there are 2 more stored versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113784#comment-16113784
 ] 

Guanghao Zhang commented on HBASE-17125:


[~tedyu] OK. I thought the slack idea was not easy to understand... Any 
concerns about the 020 patch?

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3, and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's point of 
> view it is gone. When the user queries with a filter, if the filter skips a 
> newer version, the oldest version becomes visible again; after the region is 
> compacted, the oldest version is never seen again. That is confusing for the 
> user: the query gets inconsistent results before and after region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the newer version, the oldest version becomes visible again 
> as long as it has not been removed yet.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, we first check the version count and then 
> check the cell against the filter, so the result may contain fewer than 3 
> cells even though there are 2 more stored versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113780#comment-16113780
 ] 

Guanghao Zhang commented on HBASE-17125:


The failed unit test is not related; it is tracked by HBASE-18425.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3, and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's point of 
> view it is gone. When the user queries with a filter, if the filter skips a 
> newer version, the oldest version becomes visible again; after the region is 
> compacted, the oldest version is never seen again. That is confusing for the 
> user: the query gets inconsistent results before and after region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the newer version, the oldest version becomes visible again 
> as long as it has not been removed yet.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, we first check the version count and then 
> check the cell against the filter, so the result may contain fewer than 3 
> cells even though there are 2 more stored versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113765#comment-16113765
 ] 

Hudson commented on HBASE-18167:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #156 (See 
[https://builds.apache.org/job/HBase-1.3-IT/156/])
HBASE-18167 OfflineMetaRepair tool may cause HMaster abort always (tedyu: rev 
34ab9edde1a0b808416c2dd1d03c85a52f03f093)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java


> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In the production environment, we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover the environment we tried to rebuild meta with the OfflineMetaRepair 
> tool and restart the cluster, but HMaster couldn't finish its initialization. 
> It always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows:
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and always aborts 
> with the message below:
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding. 
> 2. During SSH/SCP it retrieves the regions held by the server from meta / the 
> AM's in-memory state, but meta only had the "regioninfo" entry (since it was 
> already rebuilt by OfflineMetaRepair). So an empty region list is returned and 
> no assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will time 
> out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113762#comment-16113762
 ] 

Hudson commented on HBASE-18167:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #220 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/220/])
HBASE-18167 OfflineMetaRepair tool may cause HMaster abort always (tedyu: rev 
34ab9edde1a0b808416c2dd1d03c85a52f03f093)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java


> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In the production environment, we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover the environment we tried to rebuild meta with the OfflineMetaRepair 
> tool and restart the cluster, but HMaster couldn't finish its initialization. 
> It always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows:
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and always aborts 
> with the message below:
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding. 
> 2. During SSH/SCP it retrieves the regions held by the server from meta / the 
> AM's in-memory state, but meta only had the "regioninfo" entry (since it was 
> already rebuilt by OfflineMetaRepair). So an empty region list is returned and 
> no assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will time 
> out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18167:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In the production environment, we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover the environment we tried to rebuild meta with the OfflineMetaRepair 
> tool and restart the cluster, but HMaster couldn't finish its initialization. 
> It always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows:
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and always aborts 
> with the message below:
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding. 
> 2. During SSH/SCP it retrieves the regions held by the server from meta / the 
> AM's in-memory state, but meta only had the "regioninfo" entry (since it was 
> already rebuilt by OfflineMetaRepair). So an empty region list is returned and 
> no assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will time 
> out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113580#comment-16113580
 ] 

Ted Yu edited comment on HBASE-18167 at 8/4/17 1:04 AM:


{code}
Hunk #4 succeeded at 142 (offset -8 lines).
1 out of 4 hunks FAILED -- saving rejects to file 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java.rej
{code}
I resolved the above conflict and committed to branch-1.3


was (Author: yuzhih...@gmail.com):
{code}
Hunk #4 succeeded at 142 (offset -8 lines).
1 out of 4 hunks FAILED -- saving rejects to file 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java.rej
{code}
Mind updating the 1.3 patch ?

> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In the production environment, we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover the environment we tried to rebuild meta with the OfflineMetaRepair 
> tool and restart the cluster, but HMaster couldn't finish its initialization. 
> It always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows:
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and always aborts 
> with the message below:
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding. 
> 2. During SSH/SCP it retrieves the regions held by the server from meta / the 
> AM's in-memory state, but meta only had the "regioninfo" entry (since it was 
> already rebuilt by OfflineMetaRepair). So an empty region list is returned and 
> no assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will time 
> out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18248) Warn if monitored RPC task has been tied up beyond a configurable threshold

2017-08-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113674#comment-16113674
 ] 

Andrew Purtell commented on HBASE-18248:


[~stack] I didn't want to create a ChoreService just to get a chore, and IIRC 
it wasn't trivial to pass something into the constructor that already had one 
to borrow. Encapsulating the monitoring there also looks good to my eye. 
However, if you have a suggestion, I'm happy to take it up. 
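For context, a hedged sketch of the ChoreService/ScheduledChore pattern under discussion, not the actual patch: schedule a periodic chore that warns about monitored RPC tasks running longer than a threshold. The names, the 10s threshold, and the warnStuckTasks() helper are illustrative assumptions.

{code}
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class StuckTaskWarner {
  // Illustrative threshold; a real patch would read this from the Configuration.
  private static final long WARN_THRESHOLD_MS = 10_000L;

  public static ChoreService startWarnerChore(Stoppable stopper) {
    ChoreService choreService = new ChoreService("MonitoredTaskWarner");
    // Run every second and warn about tasks tied up beyond the threshold.
    choreService.scheduleChore(new ScheduledChore("WarnStuckRPCTasks", stopper, 1000) {
      @Override
      protected void chore() {
        warnStuckTasks(WARN_THRESHOLD_MS); // hypothetical helper, see below
      }
    });
    return choreService;
  }

  private static void warnStuckTasks(long thresholdMs) {
    // Placeholder: walk the monitored RPC tasks and LOG.warn() any whose running
    // time exceeds thresholdMs. Intentionally left abstract in this sketch.
  }
}
{code}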

> Warn if monitored RPC task has been tied up beyond a configurable threshold
> ---
>
> Key: HBASE-18248
> URL: https://issues.apache.org/jira/browse/HBASE-18248
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 3.0.0, 1.4.0
>
> Attachments: HBASE-18248-branch-1.patch, HBASE-18248-branch-1.patch, 
> HBASE-18248.patch, HBASE-18248.patch
>
>
> Warn if monitored task has been tied up beyond a configurable threshold. We 
> especially want to do this for RPC tasks. Use a separate threshold for 
> warning about stuck RPC tasks versus other types of tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop

2017-08-03 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113667#comment-16113667
 ] 

Dima Spivak commented on HBASE-18020:
-

Done. Sorry for the delay, the flu sucks.

> Update API Compliance Checker to Incorporate Improvements Done in Hadoop
> 
>
> Key: HBASE-18020
> URL: https://issues.apache.org/jira/browse/HBASE-18020
> Project: HBase
>  Issue Type: Improvement
>  Components: API, community
>Reporter: Alex Leblang
>Assignee: Alex Leblang
> Fix For: 2.0.0
>
> Attachments: HBASE-18020.0.patch, HBASE-18020.branch-1.2.001.patch, 
> HBASE-18020.branch-1.2.002.patch, HBASE-18020.branch-1.2.003.patch, 
> HBASE-18020.branch-1.2.004.patch
>
>
> Recently the Hadoop community has made a number of improvements in their api 
> compliance checker based on feedback from the hbase and kudu community. We 
> should adopt these changes ourselves.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113629#comment-16113629
 ] 

Appy commented on HBASE-18516:
--

+1.
Will commit once the QA run result is in.

> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> --
>
> Key: HBASE-18516
> URL: https://issues.apache.org/jira/browse/HBASE-18516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18516.master.001.patch
>
>
> * Calls to the methods sendRegionOpen(), isServerReachable(), 
> removeRequeuedDeadServers(), and getRequeuedDeadServers() were removed in 
> HBASE-14614
> * The call to ServerManager.sendFavoredNodes() was removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18167) OfflineMetaRepair tool may cause HMaster abort always

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113580#comment-16113580
 ] 

Ted Yu commented on HBASE-18167:


{code}
Hunk #4 succeeded at 142 (offset -8 lines).
1 out of 4 hunks FAILED -- saving rejects to file 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java.rej
{code}
Mind updating the 1.3 patch ?

> OfflineMetaRepair tool may cause HMaster abort always
> -
>
> Key: HBASE-18167
> URL: https://issues.apache.org/jira/browse/HBASE-18167
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.4.0, 1.3.1, 1.3.2
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
> Attachments: HBASE-18167-branch-1.3.v2.patch, 
> HBASE-18167-branch-1.patch, HBASE-18167-branch-1-V2.patch
>
>
> In the production environment, we hit a weird scenario where some meta table 
> HFile blocks were missing for some reason.
> To recover the environment we tried to rebuild meta with the OfflineMetaRepair 
> tool and restart the cluster, but HMaster couldn't finish its initialization. 
> It always timed out because the namespace table region was never assigned.
> Steps to reproduce
> ==
> 1. Assign the meta table region to HMaster (it can be on any RS, just to 
> reproduce the scenario)
> {noformat}
> <property>
>   <name>hbase.balancer.tablesOnMaster</name>
>   <value>hbase:meta</value>
> </property>
> {noformat}
> 2. Start HMaster and RegionServer
> 3. Create two namespaces, say "ns1" & "ns2"
> 4. Create two tables, "ns1:t1" & "ns2:t1"
> 5. flush 'hbase:meta'
> 6. Stop HMaster (graceful shutdown)
> 7. Kill -9 the RegionServer (abnormal shutdown)
> 8. Run OfflineMetaRepair as follows:
> {noformat}
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -fix
> {noformat}
> 9. Restart HMaster and RegionServer
> 10. HMaster will never be able to finish its initialization and always aborts 
> with the message below:
> {code}
> 2017-06-06 15:11:07,582 FATAL [Hostname:16000.activeMasterManager] 
> master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Timedout 12ms waiting for namespace table to be 
> assigned
> at 
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:98)
> at 
> org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1054)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:848)
> at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:199)
> at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1871)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Root cause
> ==
> 1. During HMaster startup the AM assumes a failover scenario based on the 
> existing old WAL files, so SSH/SCP will split the WAL files and assign the 
> regions the dead server was holding. 
> 2. During SSH/SCP it retrieves the regions held by the server from meta / the 
> AM's in-memory state, but meta only had the "regioninfo" entry (since it was 
> already rebuilt by OfflineMetaRepair). So an empty region list is returned and 
> no assignment is triggered.
> 3. HMaster, which is waiting for the namespace table to be assigned, will time 
> out and always abort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16893) Use Collection.removeIf instead of Iterator.remove in DependentColumnFilter

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113577#comment-16113577
 ] 

Hudson commented on HBASE-16893:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3482 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3482/])
HBASE-16893 Use Collection.removeIf instead of Iterator.remove in (chia7712: 
rev 855dd48f0a65e7db7263c076d7ed078bf1295ec5)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java


> Use Collection.removeIf instead of Iterator.remove in DependentColumnFilter
> ---
>
> Key: HBASE-16893
> URL: https://issues.apache.org/jira/browse/HBASE-16893
> Project: HBase
>  Issue Type: Improvement
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Minor
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-16893.master.001.patch, 
> HBASE-16893.master.002.patch, HBASE-16893.master.003.patch
>
>
> This is a performance improvement to use Iterables.removeIf in the 
> filterRowCells method of DependentColumnFilter as described here:
> https://rayokota.wordpress.com/2016/10/20/tips-on-writing-custom-hbase-filters/
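As a hedged illustration of the pattern change (not the exact patch), the cleanup moves from an explicit Iterator.remove loop to Collection.removeIf; the String list and the shouldDrop predicate stand in for the real cell list and dependent-column check.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveIfExample {
  // Stand-in for the real per-cell check done by the filter.
  static boolean shouldDrop(String cell) {
    return cell.isEmpty();
  }

  public static void main(String[] args) {
    List<String> kvs = new ArrayList<>(Arrays.asList("a", "", "b", ""));

    // Old style: explicit iterator with remove().
    for (Iterator<String> it = kvs.iterator(); it.hasNext();) {
      if (shouldDrop(it.next())) {
        it.remove();
      }
    }

    // New style: Collection.removeIf with a predicate (what the patch switches to).
    kvs.removeIf(RemoveIfExample::shouldDrop);

    System.out.println(kvs); // [a, b]
  }
}
{code}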



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113575#comment-16113575
 ] 

Hudson commented on HBASE-18231:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3482 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3482/])
HBASE-18231 Deprecate Admin#closeRegion*() commands in favor of (appy: rev 
de696cf6b653749c6bf105ef3d62d7a6c6923c57)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckTwoRS.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaObserverChoreRegionReports.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java


> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion, which we used to use in 
> branch-1, will cause damage in an AMv2 cluster. Instead you need to call 
> unassign -- i.e. all cluster ops must go via the Master; no more going 
> directly to the RegionServer and closing regions behind the Master's back.
> Review all Admin ops to see what else skirts the Master, and deprecate them 
> and throw unsupported operation if called.
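A hedged sketch of the client-side switch being asked for: go through the Master with Admin#unassign instead of closing the region directly on a RegionServer. The hard-coded region name is illustrative; in practice it would come from the master UI or from meta.

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class UnassignInsteadOfClose {
  public static void main(String[] args) throws Exception {
    byte[] regionName = Bytes.toBytes("1588230740"); // illustrative encoded region name
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The Master closes the region and re-assigns it; nothing bypasses the Master.
      admin.unassign(regionName, false);
    }
  }
}
{code}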



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18102) [SHELL] Purge close_region command that allows by-pass of Master

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113576#comment-16113576
 ] 

Hudson commented on HBASE-18102:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3482 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3482/])
HBASE-18102 Purge close_region command that allows by-pass of Master (appy: rev 
71151eb0e9951830bc6d8a12d0d1629457d3ec73)
* (edit) hbase-shell/src/test/ruby/hbase/admin_test.rb
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (edit) hbase-shell/src/main/ruby/shell/commands/close_region.rb
HBASE-18102 (addendum fixing shell tests) - Purge close_region command (appy: 
rev 504a1f14e39255c4bf398875d6d96578792547d2)
* (edit) hbase-shell/src/test/ruby/hbase/admin_test.rb


> [SHELL] Purge close_region command that allows by-pass of Master
> 
>
> Key: HBASE-18102
> URL: https://issues.apache.org/jira/browse/HBASE-18102
> Project: HBase
>  Issue Type: Sub-task
>  Components: Operability, shell
>Reporter: stack
>Assignee: Appy
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18102.master.001.patch, 
> HBASE-18102.master.002.patch
>
>
> In AMv2, if a RS is not aligned with the Master's notion of how the world is, 
> the Master will kill the deviant RS (a TODO is forcing compliance via less 
> radical means -- but that is how it works currently).
> The shell currently allows by-passing the Master to make cluster 
> modifications, such as being able to send a close directly to a RegionServer 
> for it to execute locally. This facility was used in the past to do fix-up 
> when the Master lost account of Region locations. In the new regime, such 
> mis-accounting should no longer happen and, should a user mistakenly do an 
> explicit close against a RS, the consequences will be more than the user 
> bargained for; the Master will shut down the RS as soon as it reports the 
> close of a Region the Master thinks should be open (no independence allowed!).
> This issue is to review the shell Region and Table manipulation commands to 
> purge those that by-pass the Master, or at least to add a big warning.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18470) Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113578#comment-16113578
 ] 

Hudson commented on HBASE-18470:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3482 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3482/])
HBASE-18470 Remove the redundant comma from (chia7712: rev 
fe890b70ace30f35cce947de26a64fb646290219)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java


> Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc
> 
>
> Key: HBASE-18470
> URL: https://issues.apache.org/jira/browse/HBASE-18470
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Minor
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18470.master.001.patch, 
> HBASE-18470.master.002.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> The description from `RetriesExhaustedWithDetailsException#getDesc` is `
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 
> actions: FailedServerException: 3 times, `; there is an unneeded ', ' at the 
> tail.
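A hedged sketch of the kind of fix involved (not the actual patch): build the per-exception summary with a joiner so the description never ends with a dangling ", ". The exception name and count are illustrative.

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class DescWithoutTrailingComma {
  public static void main(String[] args) {
    Map<String, Integer> exceptionCounts = new LinkedHashMap<>();
    exceptionCounts.put("FailedServerException", 3); // illustrative data

    StringJoiner joiner = new StringJoiner(", ");
    for (Map.Entry<String, Integer> e : exceptionCounts.entrySet()) {
      joiner.add(e.getKey() + ": " + e.getValue() + " times");
    }
    // Prints "Failed 3 actions: FailedServerException: 3 times" -- no trailing comma.
    System.out.println("Failed 3 actions: " + joiner);
  }
}
{code}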



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17104) Improve cryptic error message "Memstore size is" on region close

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113572#comment-16113572
 ] 

Ted Yu commented on HBASE-17104:


{code}
1695  LOG.error("Impossible! abort=false and memstore not flushed. 
Memstore size is " + memstoreDataSize.get());
{code}
In the above case, shouldn't an exception be thrown?

> Improve cryptic error message "Memstore size is" on region close
> 
>
> Key: HBASE-17104
> URL: https://issues.apache.org/jira/browse/HBASE-17104
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Matteo Bertozzi
>Assignee: Sahil Aggarwal
>Priority: Trivial
>  Labels: beginner, noob
> Fix For: 2.0.0
>
> Attachments: HBASE-17104.master.001 (1) (1).patch, 
> HBASE-17104.master.001 (1).patch, HBASE-17104.master.001.patch
>
>
> while grepping my RS log for ERROR I found a cryptic
> {noformat}
> ERROR [RS_CLOSE_REGION-u1604vm:35021-1] regionserver.HRegion(1601): Memstore 
> size is 33744
> {noformat}
> From the code it looks like we want to notify the user that on close the RS 
> was not able to flush and there was still data left in the memstore. 
> https://github.com/apache/hbase/blob/c3685760f004450667920144f926383eb307de53/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L1601
> {code}
> if (!canFlush) {
>   this.decrMemstoreSize(new MemstoreSize(memstoreDataSize.get(), 
> getMemstoreHeapOverhead()));
> } else if (memstoreDataSize.get() != 0) {
>   LOG.error("Memstore size is " + memstoreDataSize.get());
> }
> {code}
> This should probably not even be an error but a warn or even info: unless we 
> have puts that specifically asked not to be written to the WAL, the data in 
> the memstore should be safe in the WALs. 
> In any case it would be nice to have a message describing what is going on and 
> why we are notifying about the memstore size.
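A hedged sketch of the more descriptive message the report is asking for, in the same spot in the HRegion close path quoted above; the wording and the WARN level are illustrative assumptions, not the committed fix.

{code}
if (!canFlush) {
  this.decrMemstoreSize(new MemstoreSize(memstoreDataSize.get(), getMemstoreHeapOverhead()));
} else if (memstoreDataSize.get() != 0) {
  // Say what happened and what it implies instead of only printing the size.
  LOG.warn("Region " + getRegionInfo().getEncodedName() + " closed with "
      + memstoreDataSize.get() + " bytes of unflushed memstore data. Unless some writes "
      + "skipped the WAL, this data should still be recoverable from the WALs.");
}
{code}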



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17104) Improve cryptic error message "Memstore size is" on region close

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113569#comment-16113569
 ] 

Ted Yu commented on HBASE-17104:


Sahil:
The QA bot won't run the test suite if the JIRA status is "In Progress".

> Improve cryptic error message "Memstore size is" on region close
> 
>
> Key: HBASE-17104
> URL: https://issues.apache.org/jira/browse/HBASE-17104
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Matteo Bertozzi
>Assignee: Sahil Aggarwal
>Priority: Trivial
>  Labels: beginner, noob
> Fix For: 2.0.0
>
> Attachments: HBASE-17104.master.001 (1) (1).patch, 
> HBASE-17104.master.001 (1).patch, HBASE-17104.master.001.patch
>
>
> while grepping my RS log for ERROR I found a cryptic
> {noformat}
> ERROR [RS_CLOSE_REGION-u1604vm:35021-1] regionserver.HRegion(1601): Memstore 
> size is 33744
> {noformat}
> From the code it looks like we want to notify the user that on close the RS 
> was not able to flush and there was still data left in the memstore. 
> https://github.com/apache/hbase/blob/c3685760f004450667920144f926383eb307de53/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L1601
> {code}
> if (!canFlush) {
>   this.decrMemstoreSize(new MemstoreSize(memstoreDataSize.get(), 
> getMemstoreHeapOverhead()));
> } else if (memstoreDataSize.get() != 0) {
>   LOG.error("Memstore size is " + memstoreDataSize.get());
> }
> {code}
> This should probably not even be an error but a warn or even info: unless we 
> have puts that specifically asked not to be written to the WAL, the data in 
> the memstore should be safe in the WALs. 
> In any case it would be nice to have a message describing what is going on and 
> why we are notifying about the memstore size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113521#comment-16113521
 ] 

Appy commented on HBASE-18231:
--

Let's wait for a verdict on removing/not-removing the Admin#closeRegion 
functions. Then I'll put up a patch for everything together.

> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion, which we used to use in 
> branch-1, will cause damage in an AMv2 cluster. Instead you need to call 
> unassign -- i.e. all cluster ops must go via the Master; no more going 
> directly to the RegionServer and closing regions behind the Master's back.
> Review all Admin ops to see what else skirts the Master, and deprecate them 
> and throw unsupported operation if called.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18492) [AMv2] Embed code for selecting highest versioned region server for system table regions in AssignmentManager.processAssignQueue()

2017-08-03 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113466#comment-16113466
 ] 

Umesh Agashe commented on HBASE-18492:
--

The test hadoop.hbase.master.TestMasterFailover fails on master as well 
(without these changes). I checked with [~appy]; he recently enabled this test 
in the patch for HBASE-18231. The flaky build has not run on master since that 
commit. Let's wait for the flaky build to finish; I expect this test will show 
up in the flaky list. The patch can be committed once this is verified.

> [AMv2] Embed code for selecting highest versioned region server for system 
> table regions in AssignmentManager.processAssignQueue()
> --
>
> Key: HBASE-18492
> URL: https://issues.apache.org/jira/browse/HBASE-18492
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18492.master.001.patch, 
> hbase-18492.master.002.patch, hbase-18492.master.003.patch
>
>
> Embed the logic for selecting the highest-versioned region servers for system 
> table regions in AssignmentManager.processAssignQueue(). This way, whenever 
> system table regions are re/assigned from any part of the code, only the 
> highest-versioned RSs are candidates as target servers.
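A hedged, standalone sketch of just the selection step (the AssignmentManager plumbing is omitted): given the version each live region server reports, keep only the servers on the highest version as assignment candidates for system table regions. The String-keyed version map and the dotted-version comparison are illustrative assumptions.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class HighestVersionServers {
  /** Returns the servers whose reported version equals the highest version seen. */
  public static List<String> pickHighestVersioned(Map<String, String> serverVersions) {
    String max = null;
    for (String v : serverVersions.values()) {
      if (max == null || compareVersions(v, max) > 0) {
        max = v;
      }
    }
    List<String> candidates = new ArrayList<>();
    for (Map.Entry<String, String> e : serverVersions.entrySet()) {
      if (max != null && compareVersions(e.getValue(), max) == 0) {
        candidates.add(e.getKey());
      }
    }
    return candidates;
  }

  // Illustrative dotted-version comparison; HBase has its own version utilities.
  private static int compareVersions(String a, String b) {
    String[] pa = a.split("\\."), pb = b.split("\\.");
    for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
      int x = i < pa.length ? numericPrefix(pa[i]) : 0;
      int y = i < pb.length ? numericPrefix(pb[i]) : 0;
      if (x != y) {
        return Integer.compare(x, y);
      }
    }
    return 0;
  }

  private static int numericPrefix(String part) {
    String digits = part.replaceAll("\\D.*$", "");
    return digits.isEmpty() ? 0 : Integer.parseInt(digits);
  }
}
{code}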



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113458#comment-16113458
 ] 

Ted Yu commented on HBASE-17125:


Please see my comment on Jun 26th.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3, and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's point of 
> view it is gone. When the user queries with a filter, if the filter skips a 
> newer version, the oldest version becomes visible again; after the region is 
> compacted, the oldest version is never seen again. That is confusing for the 
> user: the query gets inconsistent results before and after region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the newer version, the oldest version becomes visible again 
> as long as it has not been removed yet.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, we first check the version count and then 
> check the cell against the filter, so the result may contain fewer than 3 
> cells even though there are 2 more stored versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15042) refactor so that site materials are in the Standard Maven Place

2017-08-03 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113447#comment-16113447
 ] 

Misty Stanley-Jones commented on HBASE-15042:
-

Unless I'm confused, you don't need to change anything, because the website 
gets built into the {{target/}} directory, and doesn't care about {{src/}} at 
all, assuming that Maven can find what it needs.

> refactor so that site materials are in the Standard Maven Place
> ---
>
> Key: HBASE-15042
> URL: https://issues.apache.org/jira/browse/HBASE-15042
> Project: HBase
>  Issue Type: Task
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15042.master.001.patch, 
> HBASE-15042.master.002.patch, HBASE-15042.master.002.patch, 
> HBASE-15042.master.003.patch
>
>
> for some reason we currently have our site materials in {{src/main/site}} 
> rather than the maven prescribed {{src/site}}. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18516:
-
Status: Patch Available  (was: In Progress)

> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> --
>
> Key: HBASE-18516
> URL: https://issues.apache.org/jira/browse/HBASE-18516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18516.master.001.patch
>
>
> * Calls to the methods sendRegionOpen(), isServerReachable(), 
> removeRequeuedDeadServers(), and getRequeuedDeadServers() were removed in 
> HBASE-14614
> * The call to ServerManager.sendFavoredNodes() was removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18516:
-
Attachment: hbase-18516.master.001.patch

> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> --
>
> Key: HBASE-18516
> URL: https://issues.apache.org/jira/browse/HBASE-18516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: hbase-18516.master.001.patch
>
>
> * Calls to the methods sendRegionOpen(), isServerReachable(), 
> removeRequeuedDeadServers(), and getRequeuedDeadServers() were removed in 
> HBASE-14614
> * The call to ServerManager.sendFavoredNodes() was removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113429#comment-16113429
 ] 

Hadoop QA commented on HBASE-14135:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
43m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m  8s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:bdc94b1 |
| JIRA Issue | HBASE-14135 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880102/HBASE-14135-v9.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2ebc0117b574 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / fe890b7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7912/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7912/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7912/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> HBase Backup/Restore Phase 3: Merge backup images
> -
>
> Key: HBASE-14135
> URL: https://issues.apache.org/jira/browse/HBASE-14135
> Project: HBase
>  Issue Type: New Feature
> 

[jira] [Commented] (HBASE-18469) Correct RegionServer metric of totalRequestCount

2017-08-03 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113420#comment-16113420
 ] 

Jerry He commented on HBASE-18469:
--

I had the same confusion previously, and I like the idea of making it clear.

On the detail side, it is tricky to get a good new name with a good meaning. It is 
a bit different for scan and multi actions.
For scan, one row is one row scanned.  For multi actions, the row actions can 
be on the same row.  For example, many put and delete actions can be on the 
same row, and these actions are counted separately.

Also, would it be possible to add/increment the request row counter at a higher 
level, in batch, rather than one row at a time? Just a small saving.
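
A minimal sketch of that batching idea, assuming a hypothetical RowRequestCounter class (this is not the actual RegionServer metrics code): count the rows of a multi/batch request locally and bump the shared counter once, instead of once per row.

{code}
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch only: illustrates batching counter updates per request.
public class RowRequestCounter {
  private final LongAdder rowRequests = new LongAdder();

  // Per-row increment: one contended update for every row in the batch.
  public void incrementPerRow() {
    rowRequests.increment();
  }

  // Batched increment: count locally, then do a single shared update per request.
  public void incrementForBatch(List<byte[]> rowKeys) {
    // In real code this would be the number of distinct rows the batch touches.
    rowRequests.add(rowKeys.size());
  }

  public long get() {
    return rowRequests.sum();
  }
}
{code}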


> Correct  RegionServer metric of  totalRequestCount
> --
>
> Key: HBASE-18469
> URL: https://issues.apache.org/jira/browse/HBASE-18469
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Affects Versions: 1.2.0
>Reporter: Shibin Zhang
>Assignee: Yu Li
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18469.patch
>
>
> When I retrieved the metrics, I found that these three metrics may have some 
> errors, as follows:
> "totalRequestCount" : 17541,
> "readRequestCount" : 17483,
> "writeRequestCount" : 1633,



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113410#comment-16113410
 ] 

Hadoop QA commented on HBASE-17125:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hbase-client generated 5 new + 0 unchanged - 0 fixed = 
5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
49s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 27s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:bdc94b1 |
| JIRA Issue | HBASE-17125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880203/HBASE-17125.master.020.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 088eafb945e5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / fe890b7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7913/artifact/patchprocess/diff-javadoc-javadoc-hbase-client.txt
 |
| unit | 

[jira] [Commented] (HBASE-18502) Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113408#comment-16113408
 ] 

Hadoop QA commented on HBASE-18502:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 44s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hbase-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestAccessController |
|   | hadoop.hbase.master.TestMasterFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.13.1 Server=1.13.1 Image:yetus/hbase:bdc94b1 |
| JIRA Issue | HBASE-18502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880235/HBASE-18502.v0.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 964c783dddbc 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / fe890b7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| unit | 

[jira] [Updated] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18516:
-
Description: 
* Call to methods sendRegionOpen(), isServerReachable(), 
removeRequeuedDeadServers(), getRequeuedDeadServers() got removed in HBASE-14614
* Call to method ServerManager.sendFavoredNodes() got removed in HBASE-17198

  was:
* Call to methods sendRegionOpen(), isServerReachable(), 
removeRequeuedDeadServers(), getRequeuedDeadServers() got removed in HBASE-14614
* ServerManager.sendFavoredNodes() got removed in HBASE-17198


> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> --
>
> Key: HBASE-18516
> URL: https://issues.apache.org/jira/browse/HBASE-18516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
>
> * Call to methods sendRegionOpen(), isServerReachable(), 
> removeRequeuedDeadServers(), getRequeuedDeadServers() got removed in 
> HBASE-14614
> * Call to method ServerManager.sendFavoredNodes() got removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Work started] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-18516 started by Umesh Agashe.

> [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring
> --
>
> Key: HBASE-18516
> URL: https://issues.apache.org/jira/browse/HBASE-18516
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
>
> * Call to methods sendRegionOpen(), isServerReachable(), 
> removeRequeuedDeadServers(), getRequeuedDeadServers() got removed in 
> HBASE-14614
> * ServerManager.sendFavoredNodes() got removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18516) [AMv2] Remove dead code in ServerManager resulted mostly from AMv2 refactoring

2017-08-03 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-18516:


 Summary: [AMv2] Remove dead code in ServerManager resulted mostly 
from AMv2 refactoring
 Key: HBASE-18516
 URL: https://issues.apache.org/jira/browse/HBASE-18516
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Umesh Agashe
Assignee: Umesh Agashe
 Fix For: 2.0.0


* Call to methods sendRegionOpen(), isServerReachable(), 
removeRequeuedDeadServers(), getRequeuedDeadServers() got removed in HBASE-14614
* ServerManager.sendFavoredNodes() got removed in HBASE-17198



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113380#comment-16113380
 ] 

Hadoop QA commented on HBASE-16290:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 27s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:bdc94b1 |
| JIRA Issue | HBASE-16290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880258/0001-Dump-call-queue-summaries.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6c84ca85d152 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / fe890b7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7911/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7911/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7911/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>   

[jira] [Comment Edited] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113259#comment-16113259
 ] 

Chia-Ping Tsai edited comment on HBASE-18231 at 8/3/17 6:37 PM:


The AsyncAdmin is a new feature introduced in 2.0+, so AsyncAdmin#closeRegion 
can be removed.


was (Author: chia7712):
The AsyncAdmin is an new feature introduced into 2.0+ only so 
AsyncAdmin#closeRegion can be removed.

> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion which we used to use in 
> branch-1 will cause damage in AMv2 cluster. Instead you need to call unassign 
> -- i.e. all cluster ops must go via the Master; no more going direct to 
> RegionServer closing regions behind the Master's back.
> Review all Admin ops to see what else skirts Master and deprecate and throw 
> unsupported if called.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113259#comment-16113259
 ] 

Chia-Ping Tsai edited comment on HBASE-18231 at 8/3/17 6:35 PM:


The AsyncAdmin is an new feature introduced into 2.0+ only so 
AsyncAdmin#closeRegion can be removed.


was (Author: chia7712):
The AsyncAdmin an new feature introduced into 2.0+ only so 
AsyncAdmin#closeRegion can be removed.

> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion which we used to use in 
> branch-1 will cause damage in AMv2 cluster. Instead you need to call unassign 
> -- i.e. all cluster ops must go via the Master; no more going direct to 
> RegionServer closing regions behind the Master's back.
> Review all Admin ops to see what else skirts Master and deprecate and throw 
> unsupported if called.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113259#comment-16113259
 ] 

Chia-Ping Tsai commented on HBASE-18231:


The AsyncAdmin an new feature introduced into 2.0+ only so 
AsyncAdmin#closeRegion can be removed.

> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion which we used to use in 
> branch-1 will cause damage in AMv2 cluster. Instead you need to call unassign 
> -- i.e. all cluster ops must go via the Master; no more going direct to 
> RegionServer closing regions behind the Master's back.
> Review all Admin ops to see what else skirts Master and deprecate and throw 
> unsupported if called.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18231) Deprecate and throw unsupported operation when Admin#closeRegion is called.

2017-08-03 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113234#comment-16113234
 ] 

Appy commented on HBASE-18231:
--

That'll break user clients.
From the "hbase 2.0 compatibility expectations" thread:
bq. An hbase1 client can run against an hbase2 cluster but it will only be able 
to do DML (Get/Put/Scan, etc.). We do not allow being able to do admin ops 
using an hbase1 Admin client against an hbase2 cluster. 

So we certainly don't have to worry about compat, but I am not sure what the 
accepted way to fail is. Is it fine to just remove the methods, given that 
clients might then crash?
[~stack]
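
One possible shape of the "deprecate and throw unsupported operation" option from the issue title, as a hypothetical sketch only (not the attached patches; the real Admin interface has several closeRegion overloads):

{code}
import java.io.IOException;

// Hypothetical sketch: fail loudly instead of letting clients close regions
// behind the Master's back in an AMv2 cluster.
public class CloseRegionSketch {

  @Deprecated
  public void closeRegion(byte[] regionName, String serverName) throws IOException {
    throw new UnsupportedOperationException(
        "closeRegion is not supported; call unassign() so the operation goes through the Master");
  }
}
{code}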

> Deprecate and throw unsupported operation when Admin#closeRegion is called.
> ---
>
> Key: HBASE-18231
> URL: https://issues.apache.org/jira/browse/HBASE-18231
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18231.master.001.patch, 
> HBASE-18231.master.002.patch, HBASE-18231.master.003.patch
>
>
> [~uagashe] tripped over this today. Admin#closeRegion which we used to use in 
> branch-1 will cause damage in AMv2 cluster. Instead you need to call unassign 
> -- i.e. all cluster ops must go via the Master; no more going direct to 
> RegionServer closing regions behind the Master's back.
> Review all Admin ops to see what else skirts Master and deprecate and throw 
> unsupported if called.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18515) Introduce Delete.add as a replacement for Delete#addDeleteMarker

2017-08-03 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-18515:
--

 Summary:  Introduce Delete.add as a replacement for 
Delete#addDeleteMarker
 Key: HBASE-18515
 URL: https://issues.apache.org/jira/browse/HBASE-18515
 Project: HBase
  Issue Type: Task
Reporter: Chia-Ping Tsai
 Fix For: 3.0.0, 2.0.0-alpha-2


{quote}
  public Delete addDeleteMarker(Cell kv) throws IOException {
// TODO: Deprecate and rename 'add' so it matches how we add KVs to Puts.
{quote}
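
A rough sketch of what the rename could look like, assuming the new method simply takes over the existing behavior (hypothetical class and method bodies, not the eventual patch):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;

// Hypothetical sketch of the proposed API shape; the real Delete class keeps
// the family map, timestamp handling and sanity checks.
public class DeleteSketch {

  /** New name, matching how cells are added to Put. */
  public DeleteSketch add(Cell cell) throws IOException {
    // ... append the delete marker to the family map, as addDeleteMarker does today ...
    return this;
  }

  /** Old name kept for compatibility, delegating to the new method. */
  @Deprecated
  public DeleteSketch addDeleteMarker(Cell cell) throws IOException {
    return add(cell);
  }
}
{code}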



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18515) Introduce Delete.add as a replacement for Delete#addDeleteMarker

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18515:
---
Labels: beginner  (was: )

>  Introduce Delete.add as a replacement for Delete#addDeleteMarker
> -
>
> Key: HBASE-18515
> URL: https://issues.apache.org/jira/browse/HBASE-18515
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>  Labels: beginner
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {quote}
>   public Delete addDeleteMarker(Cell kv) throws IOException {
> // TODO: Deprecate and rename 'add' so it matches how we add KVs to Puts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17980) Any HRegionInfo we give out should be immutable

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17980:
---
Labels: beginner  (was: )

> Any HRegionInfo we give out should be immutable
> ---
>
> Key: HBASE-17980
> URL: https://issues.apache.org/jira/browse/HBASE-17980
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>  Labels: beginner
> Fix For: 2.0.0
>
>
> This is similar to HBASE-15583.
> # Introduce RegionInfo class. HRegionInfo will extend RegionInfo.
> # Deprecate HRegionInfo to be removed in 3.0
> # RegionInfo contain all of the read-only methods of HRegionInfo
> # Add "RegionInfo Builder"
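
As a rough illustration of the read-only interface plus builder split described above (a hypothetical sketch with made-up names and only two fields; the real RegionInfo carries table name, region id, replica id and more):

{code}
// Hypothetical sketch only: immutable read-only view built via a builder.
interface RegionInfoSketch {
  byte[] getStartKey();
  byte[] getEndKey();
}

final class RegionInfoBuilderSketch {
  private byte[] startKey = new byte[0];
  private byte[] endKey = new byte[0];

  RegionInfoBuilderSketch setStartKey(byte[] startKey) { this.startKey = startKey; return this; }
  RegionInfoBuilderSketch setEndKey(byte[] endKey) { this.endKey = endKey; return this; }

  RegionInfoSketch build() {
    // Defensive copies keep the built instance immutable even if the builder is reused.
    final byte[] start = startKey.clone();
    final byte[] end = endKey.clone();
    return new RegionInfoSketch() {
      @Override public byte[] getStartKey() { return start.clone(); }
      @Override public byte[] getEndKey() { return end.clone(); }
    };
  }
}
{code}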



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113192#comment-16113192
 ] 

Chia-Ping Tsai edited comment on HBASE-8518 at 8/3/17 6:01 PM:
---

bq. may I take over this issue ?
Go ahead.

bq. The remaining is refactor the test code, right ?
Exactly


was (Author: chia7712):
bq. may I take over this issue ?
Go ahead.


> Get rid of hbase.hstore.compaction.complete setting
> ---
>
> Key: HBASE-8518
> URL: https://issues.apache.org/jira/browse/HBASE-8518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: brandboat
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-8518-1.patch
>
>
> hbase.hstore.compaction.complete is a strange setting that causes the 
> finished compaction to not complete (files are just left in tmp) in HStore. 
> It's used by one test.
> The setting with the same name is also used by CompactionTool, but that usage 
> is semi-unrelated and could probably be removed easily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113192#comment-16113192
 ] 

Chia-Ping Tsai commented on HBASE-8518:
---

bq. may I take over this issue ?
Go ahead.


> Get rid of hbase.hstore.compaction.complete setting
> ---
>
> Key: HBASE-8518
> URL: https://issues.apache.org/jira/browse/HBASE-8518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: brandboat
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-8518-1.patch
>
>
> hbase.hstore.compaction.complete is a strange setting that causes the 
> finished compaction to not complete (files are just left in tmp) in HStore. 
> It's used by one test.
> The setting with the same name is also used by CompactionTool, but that usage 
> is semi-unrelated and could probably be removed easily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned HBASE-8518:
-

Assignee: brandboat

> Get rid of hbase.hstore.compaction.complete setting
> ---
>
> Key: HBASE-8518
> URL: https://issues.apache.org/jira/browse/HBASE-8518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: brandboat
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-8518-1.patch
>
>
> hbase.hstore.compaction.complete is a strange setting that causes the 
> finished compaction to not complete (files are just left in tmp) in HStore. 
> It's used by one test.
> The setting with the same name is also used by CompactionTool, but that usage 
> is semi-unrelated and could probably be removed easily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18514) Backport space quota "phase2" work to branch-2

2017-08-03 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18514:
---
Attachment: HBASE-18514.001.branch-2.patch

> Backport space quota "phase2" work to branch-2
> --
>
> Key: HBASE-18514
> URL: https://issues.apache.org/jira/browse/HBASE-18514
> Project: HBase
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18514.001.branch-2.patch
>
>
> People generally seem to be in favor of backporting the phase 2 work 
> (includes the size of hbase snapshots in quota rules) for the hbase-2.0 
> release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18514) Backport space quota "phase2" work to branch-2

2017-08-03 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18514:
---
Status: Patch Available  (was: Open)

> Backport space quota "phase2" work to branch-2
> --
>
> Key: HBASE-18514
> URL: https://issues.apache.org/jira/browse/HBASE-18514
> Project: HBase
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18514.001.branch-2.patch
>
>
> People generally seem to be in favor of backporting the phase 2 work 
> (includes the size of hbase snapshots in quota rules) for the hbase-2.0 
> release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113174#comment-16113174
 ] 

Chia-Ping Tsai commented on HBASE-18142:


Since this breaks operational compatibility, you will need to fix the related unit tests. 
For example:
{code:title=table_test.rb}
define_test "delete should work without timestamp" do
  @test_table.delete("101", "x:a")
  res = @test_table._get_internal('101', 'x:a')
  assert_nil(res)
end

define_test "delete should work with timestamp" do
  @test_table.delete("102", "x:a", 1214)
  res = @test_table._get_internal('102', 'x:a')
  assert_nil(res)
end

define_test "delete should work with integer keys" do
  @test_table.delete(103, "x:a")
  res = @test_table._get_internal('103', 'x:a')
  assert_nil(res)
end
{code}

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 3.0.0
>Reporter: Karthick
>Assignee: ChunHao
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-18142.master.v0.patch, HBASE-18142.master.v1.patch
>
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> still retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18514) Backport space quota "phase2" work to branch-2

2017-08-03 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113117#comment-16113117
 ] 

Josh Elser commented on HBASE-18514:


This will encompass the issues: HBASE-17748, HBASE-17752, and HBASE-17840.

> Backport space quota "phase2" work to branch-2
> --
>
> Key: HBASE-18514
> URL: https://issues.apache.org/jira/browse/HBASE-18514
> Project: HBase
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> People generally seem to be in favor of backporting the phase 2 work 
> (includes the size of hbase snapshots in quota rules) for the hbase-2.0 
> release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18514) Backport space quota "phase2" work to branch-2

2017-08-03 Thread Josh Elser (JIRA)
Josh Elser created HBASE-18514:
--

 Summary: Backport space quota "phase2" work to branch-2
 Key: HBASE-18514
 URL: https://issues.apache.org/jira/browse/HBASE-18514
 Project: HBase
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Blocker
 Fix For: 2.0.0


People generally seem to be in favor of backporting the phase 2 work (includes 
the size of hbase snapshots in quota rules) for the hbase-2.0 release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-08-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18142:

Status: Patch Available  (was: Open)

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 3.0.0
>Reporter: Karthick
>Assignee: ChunHao
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-18142.master.v0.patch, HBASE-18142.master.v1.patch
>
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> still retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-08-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18142:

Attachment: HBASE-18142.master.v1.patch

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 3.0.0
>Reporter: Karthick
>Assignee: ChunHao
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-18142.master.v0.patch, HBASE-18142.master.v1.patch
>
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> still retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-08-03 Thread ChunHao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChunHao updated HBASE-18142:

Status: Open  (was: Patch Available)

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 3.0.0
>Reporter: Karthick
>Assignee: ChunHao
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-18142.master.v0.patch
>
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> still retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18304) Start enforcing upperbounds on dependencies

2017-08-03 Thread Tamas Penzes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Penzes updated HBASE-18304:
-
Attachment: HBASE-18304.master.003.patch

> Start enforcing upperbounds on dependencies
> ---
>
> Key: HBASE-18304
> URL: https://issues.apache.org/jira/browse/HBASE-18304
> Project: HBase
>  Issue Type: Task
>  Components: build, dependencies
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Tamas Penzes
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18304.master.001.patch, 
> HBASE-18304.master.002.patch, HBASE-18304.master.002.patch, 
> HBASE-18304.master.003.patch
>
>
> would be nice to get this going before our next major version.
> http://maven.apache.org/enforcer/enforcer-rules/requireUpperBoundDeps.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18485) Performance issue: ClientAsyncPrefetchScanner is slower than ClientSimpleScanner

2017-08-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113054#comment-16113054
 ] 

Ted Yu commented on HBASE-18485:


lgtm

Better check failed tests locally before committing.

> Performance issue: ClientAsyncPrefetchScanner is slower than 
> ClientSimpleScanner
> 
>
> Key: HBASE-18485
> URL: https://issues.apache.org/jira/browse/HBASE-18485
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18485-v1.patch, HBASE-18485-v2.patch, 
> HBASE-18485-v3.patch, HBASE-18485-v4.patch
>
>
> Copied the test result from HBASE-17994.
> {code}
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred scan 1
> ./bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 
> --nomapred --asyncPrefetch=True scan 1
> {code}
> Mean latency.
> || ||Test1|| Test2 || Test3 || Test4|| Test5||
> |scan| 12.21 | 14.32 | 13.25 | 13.07 | 11.83 |
> |scan with prefetch=True | 37.36 | 37.88 | 37.56 | 37.66 | 38.28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-8518) Get rid of hbase.hstore.compaction.complete setting

2017-08-03 Thread brandboat (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113045#comment-16113045
 ] 

brandboat commented on HBASE-8518:
--

[~sershe], [~chia7712], may I take over this issue?

It seems that samar has already removed hbase.hstore.compaction.complete from 
CompactionTool.java and HStore.java in the attached patch.

The remaining work is to refactor the test code, right?
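
For context, a hypothetical illustration of the kind of guard this flag creates (plain java.nio below, not the real HStore/HDFS code): when the flag is false, the compaction output is simply left in the tmp directory instead of being moved into the store.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration only of a "complete the compaction?" switch.
public class CompactionCompleteSketch {
  static final String KEY = "hbase.hstore.compaction.complete";

  static Path maybeCommit(Path tmpOutput, Path storeDir, boolean complete) throws IOException {
    if (!complete) {
      // Flag is false: skip the final move, so the file stays in tmp.
      return tmpOutput;
    }
    return Files.move(tmpOutput, storeDir.resolve(tmpOutput.getFileName()));
  }
}
{code}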

> Get rid of hbase.hstore.compaction.complete setting
> ---
>
> Key: HBASE-8518
> URL: https://issues.apache.org/jira/browse/HBASE-8518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-8518-1.patch
>
>
> hbase.hstore.compaction.complete is a strange setting that causes the 
> finished compaction to not complete (files are just left in tmp) in HStore. 
> It's used by one test.
> The setting with the same name is also used by CompactionTool, but that usage 
> is semi-unrelated and could probably be removed easily.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113032#comment-16113032
 ] 

Chia-Ping Tsai commented on HBASE-16290:


Would you please not delete the old patch? I need the old patch to refresh 
my memory...

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: 0001-Dump-call-queue-summaries.patch, 
> DebugDump_screenshot.png, Sample Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Sreeram Venkatasubramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113025#comment-16113025
 ] 

Sreeram Venkatasubramanian commented on HBASE-16290:


Hi [~chia7712], thank you for your comments. I have modified the code to use 
Collections.EMPTY_MAP. Call queue contents are also printed for the 
FifoRpcScheduler. I have attached the updated patch. Kindly let me know if the 
changes look OK.
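
A tiny, hypothetical illustration of the kind of summary being discussed (made-up PendingCall type, not the patch itself): group pending calls by method name and report a count and total size per method.

{code}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch only: summarize a backed-up call queue by method name.
public class CallQueueSummarySketch {

  static class PendingCall {
    final String method;
    final long sizeBytes;
    PendingCall(String method, long sizeBytes) { this.method = method; this.sizeBytes = sizeBytes; }
  }

  // method name -> {count, total bytes}
  static Map<String, long[]> summarize(List<PendingCall> queue) {
    Map<String, long[]> byMethod = new LinkedHashMap<>();
    for (PendingCall call : queue) {
      long[] agg = byMethod.computeIfAbsent(call.method, k -> new long[2]);
      agg[0]++;
      agg[1] += call.sizeBytes;
    }
    return byMethod;
  }

  public static void main(String[] args) {
    List<PendingCall> queue = new ArrayList<>();
    queue.add(new PendingCall("Scan", 512));
    queue.add(new PendingCall("Multi", 2048));
    queue.add(new PendingCall("Scan", 256));
    summarize(queue).forEach((m, agg) ->
        System.out.println(m + ": count=" + agg[0] + ", bytes=" + agg[1]));
  }
}
{code}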

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: 0001-Dump-call-queue-summaries.patch, 
> DebugDump_screenshot.png, Sample Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18462) HBase server fails to start when rootdir contains spaces

2017-08-03 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113022#comment-16113022
 ] 

Zach York commented on HBASE-18462:
---

So there are some issues here. As you can see from the stack trace below, it is 
the Hadoop FileSystem that actually throws this (by calling URI). If we handle 
this as you suggest, by encoding the space as %20 in HBase, it will work for 
some cases. However, what if you create a directory that actually contains a 
space character (for example, you can do this on S3)? Then HBase will create a 
root directory with %20 instead of the correct space character. This will be 
confusing to users, in my opinion.

Caused by: java.lang.IllegalArgumentException: Illegal character in path at index 89: file:/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Java_JDK_Versions_Test/jdk/JDK 1.7 (latest)/label/beam/sdks/java/io/hbase/target/test-data/b11a0828-4628-4fe9-885d-073fb641ddc9
at java.net.URI.create(URI.java:859)
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:175)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:80)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:613)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:564)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:412)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
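
To make the failure mode concrete, here is a small standalone reproduction of the java.net.URI behaviour involved (plain JDK, no Hadoop classes), including the %20 ambiguity mentioned above:

{code}
import java.net.URI;

// Plain-JDK reproduction of the two behaviours discussed above.
public class RootDirWithSpaces {
  public static void main(String[] args) {
    try {
      URI.create("file:/tmp/test data/hbase");            // raw space -> rejected
    } catch (IllegalArgumentException e) {
      System.out.println("raw space: " + e.getMessage()); // "Illegal character in path ..."
    }

    // Encoding the space lets the URI parse, but the raw (undecoded) path now
    // contains "%20" -- on a store that allows '%' in names this is exactly the
    // literal directory name you would end up with, hence the confusion above.
    URI encoded = URI.create("file:/tmp/test%20data/hbase");
    System.out.println("decoded path: " + encoded.getPath());     // /tmp/test data/hbase
    System.out.println("raw path:     " + encoded.getRawPath());  // /tmp/test%20data/hbase
  }
}
{code}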

> HBase server fails to start when rootdir contains spaces
> 
>
> Key: HBASE-18462
> URL: https://issues.apache.org/jira/browse/HBASE-18462
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, test
>Affects Versions: 1.3.1, 1.2.6
>Reporter: Ismaël Mejía
>Priority: Minor
>
> As part of the tests for the HBase connector for Beam I discovered that when 
> you start an HBase server instance from a directory that contains spaces 
> (rootdir) it does not start correctly. This happens both with the 
> HBaseTestingUtility server and with the binary distribution too.
> The concrete exception says:
> {quote}
> Caused by: java.net.URISyntaxException: Illegal character in path at index 
> 89: 
> file:/home/jenkins/jenkins-slave/workspace/beam_PostCommit_Java_JDK_Versions_Test/jdk/JDK
>  1.7 
> (latest)/label/beam/sdks/java/io/hbase/target/test-data/b11a0828-4628-4fe9-885d-073fb641ddc9
>   at java.net.URI$Parser.fail(URI.java:2829)
>   at java.net.URI$Parser.checkChars(URI.java:3002)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3086)
>   at java.net.URI$Parser.parse(URI.java:3034)
>   at java.net.URI.<init>(URI.java:595)
>   at java.net.URI.create(URI.java:857)
>   ... 37 more
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: 0001-Dump-call-queue-summaries.patch

Patch attached

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: 0001-Dump-call-queue-summaries.patch, 
> DebugDump_screenshot.png, Sample Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-08-03 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: (was: 0001-Dump-Call-Queue-Summary.patch)

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: DebugDump_screenshot.png, Sample Summary.txt
>
>
> Being able to get a clue what is in a backedup callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Summary: Change MasterObserver to use TableDescriptor and 
ColumnFamilyDescriptor  (was: Change MasterObserver to use TableDescriptor)

> Change MasterObserver to use TableDescriptor and ColumnFamilyDescriptor
> ---
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-08-03 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112993#comment-16112993
 ] 

Josh Elser commented on HBASE-14135:


bq. This is not only for backup, right, Josh Elser?

Yeah, it definitely applies to all aspects of HBase. WALs is probably the most 
prevalent area I can think of.

bq. Hard crash can happen during regular HBase operation, do we have any 
automation tools in Master to address potential issues?

For WALs, I'm sure you're well aware of all of the CleanerChore logic we have 
surrounding WAL archival/removal that runs in the Master. For these backup 
tools, it's a bit different since things are primarily being driven by the 
client instead of inside of HBase itself. I'm less asking the question "why 
wasn't server-side driven cleanup implemented" and more trying to ask the 
question "should we implement such cleanup?". I'd defer to you to say how easy 
such an automated (and safe) cleanup would be inside of the Master.

If it would be too difficult (which was my gut reaction), a tool/utility to 
summarize this (expected) transient data (files in HDFS and hbase:backup 
records) would be really nice to have. If/when we have to debug some kind of 
issue WRT backups or just HDFS use by HBase, such a tool could give us a 
definitive yes/no as to whether these transient files are to blame.

> HBase Backup/Restore Phase 3: Merge backup images
> -
>
> Key: HBASE-14135
> URL: https://issues.apache.org/jira/browse/HBASE-14135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-14135-v3.patch, HBASE-14135-v5.patch, 
> HBASE-14135-v6.patch, HBASE-14135-v7.patch, HBASE-14135-v8.patch, 
> HBASE-14135-v9.patch
>
>
> User can merge incremental backup images into single incremental backup image.
> # Merge supports only incremental images
> # Merge supports only images for the same backup destinations
> Command:
> {code}
> hbase backup merge image1,image2,..imageK
> {code}
> Example:
> {code}
> hbase backup merge backup_143126764557,backup_143126764456 
> {code}
> When operation is complete, only the most recent backup image will be kept 
> (in above example -  backup_143126764557) as a merged backup image, all other 
> images will be deleted from both: file system and backup system tables, 
> corresponding backup manifest for the merged backup image will be updated to 
> remove dependencies from deleted images. Merged backup image will contains 
> all the data from original image and from deleted images.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18304) Start enforcing upperbounds on dependencies

2017-08-03 Thread Tamas Penzes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112938#comment-16112938
 ] 

Tamas Penzes commented on HBASE-18304:
--

Hi [~busbey],

Please see my comments inline.

> 1042 com.google.protobuf:protobuf-java
> This is going to be a nightmare due to our purposeful handling of multiple 
> versions. But maybe I'm misunderstanding it, since shouldn't our internal use 
> of protobuf 3 be masked since we relocate it in third-party-deps?

We do only reference protobuf 3.3.0 in hbase-protocol-shaded now, but it is a 
dependency of hbase-client, hbase-procedure and hbase-server. Through the 
transitive dependencies it causes conflict in this three module.
If I exclude protobuf from the dependency hbase-protocol-shaded in these three 
modules, it looks okay. Is it?

> 1043  org.slf4j:slf4j-log4j12
> This one should be easy to just set to latest.

If I add org.slf4j:slf4j-log4j12:${slf4j.version} to hbase-client as dependency 
it solves the problem.

> 1044  com.google.guava:guava
> Maybe solved for us by our move to third-party-deps? Shouldn't only Hadoop's 
> show up? or is the conflict in spark or some such? (questions for the 
> eventual follow-on JIRA)

Almost solved. org.tachyonproject:tachyon-client uses guava 14.0.1, and is 
referenced directly and transitively from org.apache.spark:spark-core_2.10.
Otherwise we only use guava version 11.0.2. If I can exclude it from 
spark-core_2.10 transitive dependencies in hbase-spark and hbase-spark-it it 
works.

> 1045  com.thoughtworks.paranamer:paranamer
> 1046  commons-net:commons-net
> 1047  net.java.dev.jets3t:jets3t
> These should go okay.

Go okay as being excluded from the check or if I add them to hbase-spark and 
hbase-spark-it as direct dependency?

> 1048  org.scala-lang:scala-library
> 1049  org.scala-lang:scala-reflect
> These are probably just an error in our spark module. Best not to try to 
> address it until we close out HBASE-16179

Okay. They stay excluded from the check.

> 1050  io.netty:netty
> I think also solved by our move to third-party-deps on HBASE-18271

Just as with guava org.apache.spark:spark-core_2.10 causes the problem. It uses 
netty version 3.8.0.Final as transitive dependency while we use 3.6.2.Final 
everywhere else.
Should I exclude it from spark-core's dependencies manually?

Thanks, Tamaas

> Start enforcing upperbounds on dependencies
> ---
>
> Key: HBASE-18304
> URL: https://issues.apache.org/jira/browse/HBASE-18304
> Project: HBase
>  Issue Type: Task
>  Components: build, dependencies
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Tamas Penzes
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-18304.master.001.patch, 
> HBASE-18304.master.002.patch, HBASE-18304.master.002.patch
>
>
> would be nice to get this going before our next major version.
> http://maven.apache.org/enforcer/enforcer-rules/requireUpperBoundDeps.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18513) 1.3 release line API docs should be on website

2017-08-03 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-18513:
---

 Summary: 1.3 release line API docs should be on website
 Key: HBASE-18513
 URL: https://issues.apache.org/jira/browse/HBASE-18513
 Project: HBase
  Issue Type: Improvement
  Components: community, website
Affects Versions: 1.3.1, 1.3.0
Reporter: Sean Busbey
Priority: Blocker
 Fix For: 1.3.2


Docs should be whatever the current maintenance release is at the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112885#comment-16112885
 ] 

Chia-Ping Tsai commented on HBASE-17125:


As mentioned in HBASE-18295,
{quote}
The bugs caused by filter may be resolved by HBASE-17125 because the patch make 
matcher check the version before asking filter. If the SEEK_NEXT_COLUMN is 
returned, the filter.filterKeyValue isn't evaluated. Maybe we should push the 
HBASE-17125...FYI Guanghao Zhang
{quote}
>From my perspective, *testXXXWithFilterHint* and *testXXXWithFilter* should be 
>removed because they can't reproduce the bug of top (cell) change.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.018.patch, HBASE-17125.master.019.patch, 
> HBASE-17125.master.020.patch, HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a cloumn's max versions is 3, then we write 4 versions of this column. 
> The oldest version doesn't remove immediately. But from the user view, the 
> oldest version has gone. When user use a filter to query, if the filter skip 
> a new version, then the oldest version will be seen again. But after compact 
> the region, then the oldest version will never been seen. So it is weird for 
> user. The query will get inconsistent result before and after region 
> compaction.
> The reason is matchColumn method of UserScanQueryMatcher. It first check the 
> cell by filter, then check the number of versions needed. So if the filter 
> skip the new version, then the oldest version will be seen again when it is 
> not removed.
> Have a discussion offline with [~Apache9] and [~fenghh], now we have two 
> solution for this problem. The first idea is check the number of versions 
> first, then check the cell by filter. As the comment of setFilter, the filter 
> is called after all tests for ttl, column match, deletes and max versions 
> have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem, if a column's max version is 5 and the 
> user query only need 3 versions. It first check the version's number, then 
> check the cell by filter. So the cells number of the result may less than 3. 
> But there are 2 versions which don't read anymore.
> So the second idea has three steps.
> 1. check by the max versions of this column
> 2. check the kv by filter
> 3. check the versions which user need.
> But this will lead the ScanQueryMatcher more complicated. And this will break 
> the javadoc of Query.setFilter.
> Now we don't have a final solution for this problem. Suggestions are welcomed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18502) Change MasterObserver to use TableDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112729#comment-16112729
 ] 

Chia-Ping Tsai commented on HBASE-18502:


The deprecated methods are not modified so as to simplify the update from 
hbase1 to hbase2 for user.

> Change MasterObserver to use TableDescriptor
> 
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Status: Patch Available  (was: Open)

> Change MasterObserver to use TableDescriptor
> 
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Attachment: HBASE-18502.v0.patch

> Change MasterObserver to use TableDescriptor
> 
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18502.v0.patch
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18502) Change MasterObserver to use TableDescriptor

2017-08-03 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18502:
---
Fix Version/s: 2.0.0-alpha-2
   3.0.0

> Change MasterObserver to use TableDescriptor
> 
>
> Key: HBASE-18502
> URL: https://issues.apache.org/jira/browse/HBASE-18502
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> MasterObserver is IA.COPROC so we can make some Incompatible change for 3.0 
> and 2.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18211) Encryption of exisiting data in Stripe Compaction

2017-08-03 Thread Sahil Aggarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Aggarwal reassigned HBASE-18211:
--

Assignee: Sahil Aggarwal

> Encryption of exisiting data in Stripe Compaction
> -
>
> Key: HBASE-18211
> URL: https://issues.apache.org/jira/browse/HBASE-18211
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, encryption
>Reporter: Karthick
>Assignee: Sahil Aggarwal
>Priority: Critical
>  Labels: compaction, encryption
>
> We have a table which has time series data with Stripe Compaction enabled. 
> After encryption has been enabled for this table the newer entries are 
> encrypted and inserted. However to encrypt the existing data in the table, a 
> major compaction has to run. Since, stripe compaction doesn't allow a major 
> compaction to run, we are unable to encrypt the previous data. 
> see this 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18211) Encryption of exisiting data in Stripe Compaction

2017-08-03 Thread Sahil Aggarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112689#comment-16112689
 ] 

Sahil Aggarwal commented on HBASE-18211:


Would like to give it a shot. Assigning to myself.

> Encryption of exisiting data in Stripe Compaction
> -
>
> Key: HBASE-18211
> URL: https://issues.apache.org/jira/browse/HBASE-18211
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, encryption
>Reporter: Karthick
>Priority: Critical
>  Labels: compaction, encryption
>
> We have a table which has time series data with Stripe Compaction enabled. 
> After encryption has been enabled for this table the newer entries are 
> encrypted and inserted. However to encrypt the existing data in the table, a 
> major compaction has to run. Since, stripe compaction doesn't allow a major 
> compaction to run, we are unable to encrypt the previous data. 
> see this 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/StripeCompactionPolicy.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18470) Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112576#comment-16112576
 ] 

Hudson commented on HBASE-18470:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #232 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/232/])
HBASE-18470 Remove the redundant comma from (chia7712: rev 
5f617aae9494692c8978440f422dd659f56e68eb)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java


> Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc
> 
>
> Key: HBASE-18470
> URL: https://issues.apache.org/jira/browse/HBASE-18470
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Minor
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18470.master.001.patch, 
> HBASE-18470.master.002.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> The describe from `RetriesExhaustedWithDetailsException#getDesc` is `
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 
> actions: FailedServerException: 3 times, `, there is a not need ', ' in the 
> tail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18512) Region Server will abort with IllegalStateException if HDFS umask has limited scope

2017-08-03 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar reassigned HBASE-18512:


Assignee: Pankaj Kumar

> Region Server will abort with IllegalStateException if HDFS umask has limited 
> scope
> ---
>
> Key: HBASE-18512
> URL: https://issues.apache.org/jira/browse/HBASE-18512
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, security
>Affects Versions: 1.4.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>
> If HDFS umask (fs.permissions.umask-mode) has limited scope say 077 then 
> file/dir permission will not be wider than 700. HDFS client has to set 
> permission explicitly if required.
> During SecureBulkLoadEndpoint CP start, RegionServer creates (if not exist) 
> the staging directory with the specified permission and later throws 
> IllegalStateException if staging directory permission is not set to 711.
> After HBASE-17861, we are setting staging dir permission explicitly only when 
> it exist. In case of fresh cluster startup staging dir permission wont be 711 
> when umask defined as 077 which cause RS to abort.
> {noformat}
> 2017-07-30 14:26:33,350 | ERROR | 
> B.defaultRpcServer.handler=12,queue=2,port=21300 | Region server 
> HOSTNAME,PORT,X reported a fatal error:
> ABORTING region server HOSTNAME,PORT,X: The coprocessor 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Staging directory of 
> /user/HBase/hbase-staging already exists but permissions aren't set to 
> '-rwx--x--x' 
> Cause:
> java.lang.IllegalStateException: Staging directory of 
> /user/root/hbase-staging already exists but permissions aren't set to 
> '-rwx--x--x' 
> {noformat}
> We should set permission explicitly to 711 after staging directory creation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18512) Region Server will abort with IllegalStateException if HDFS umask has limited scope

2017-08-03 Thread Pankaj Kumar (JIRA)
Pankaj Kumar created HBASE-18512:


 Summary: Region Server will abort with IllegalStateException if 
HDFS umask has limited scope
 Key: HBASE-18512
 URL: https://issues.apache.org/jira/browse/HBASE-18512
 Project: HBase
  Issue Type: Bug
  Components: regionserver, security
Affects Versions: 1.4.0
Reporter: Pankaj Kumar


If HDFS umask (fs.permissions.umask-mode) has limited scope say 077 then 
file/dir permission will not be wider than 700. HDFS client has to set 
permission explicitly if required.


During SecureBulkLoadEndpoint CP start, RegionServer creates (if not exist) the 
staging directory with the specified permission and later throws 
IllegalStateException if staging directory permission is not set to 711.
After HBASE-17861, we are setting staging dir permission explicitly only when 
it exist. In case of fresh cluster startup staging dir permission wont be 711 
when umask defined as 077 which cause RS to abort.

{noformat}
2017-07-30 14:26:33,350 | ERROR | 
B.defaultRpcServer.handler=12,queue=2,port=21300 | Region server 
HOSTNAME,PORT,X reported a fatal error:
ABORTING region server HOSTNAME,PORT,X: The coprocessor 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
java.lang.IllegalStateException: Staging directory of /user/HBase/hbase-staging 
already exists but permissions aren't set to '-rwx--x--x' 
Cause:
java.lang.IllegalStateException: Staging directory of /user/root/hbase-staging 
already exists but permissions aren't set to '-rwx--x--x' 
{noformat}

We should set permission explicitly to 711 after staging directory creation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18470) Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112538#comment-16112538
 ] 

Hudson commented on HBASE-18470:


FAILURE: Integrated in Jenkins build HBase-1.4 #833 (See 
[https://builds.apache.org/job/HBase-1.4/833/])
HBASE-18470 Remove the redundant comma from (chia7712: rev 
318e712fdb0de927ced49893ab6f0181fc48)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java


> Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc
> 
>
> Key: HBASE-18470
> URL: https://issues.apache.org/jira/browse/HBASE-18470
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Minor
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18470.master.001.patch, 
> HBASE-18470.master.002.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> The describe from `RetriesExhaustedWithDetailsException#getDesc` is `
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 
> actions: FailedServerException: 3 times, `, there is a not need ', ' in the 
> tail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   >