[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23318:
--
Fix Version/s: 2.1.8

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution fails to start 
> with a CNFE (ClassNotFoundException). We are missing the tests jar from 
> hbase-zookeeper. 
> The client tarball includes it, but if one wants to launch the tool on a 
> server or a general purpose deploy (i.e. not the client tarball), the tests 
> jar has to be on the server classpath as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] sandeepvinayak commented on a change in pull request #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-18 Thread GitBox
sandeepvinayak commented on a change in pull request #837: HBASE-23309: Adding 
the flexibility to ChainWalEntryFilter to filter the whole entry if all cells 
get filtered
URL: https://github.com/apache/hbase/pull/837#discussion_r347715850
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ChainWALEntryFilter.java
 ##
 @@ -68,13 +82,17 @@ public void initCellFilters() {
 
   @Override
   public Entry filter(Entry entry) {
+
 for (WALEntryFilter filter : filters) {
   if (entry == null) {
 return null;
   }
   entry = filter.filter(entry);
 }
 filterCells(entry);
+if (shouldFilterEmptyEntry() && entry != null && entry.getEdit().isEmpty()) {
 
 Review comment:
   @apurtell This is the flexibility we want to provide to custom replication 
endpoints, so that every custom replication endpoint doesn't need to 
re-implement everything that `ChainWALEntryFilter` already does. 
   
   Here is the scenario we want to cover, let's take an example:
   ```java
   class CustomWALFilter implements WALEntryFilter, WALCellFilter {
     @Override
     public Entry filter(Entry entry) {
       return entry;
     }

     @Override
     public Cell filterCell(Entry entry, Cell cell) {
       return cell;
     }
   }
   ```
   A custom replication endpoint sets the filters with:
   ```java
   new ChainWALEntryFilter(filters); // new CustomWALFilter() is part of the filters
   ```
   If `filter` in the above `CustomWALFilter` returns an entry but `filterCell` 
filters out all of its cells, `ChainWALEntryFilter` will not return null in the 
current implementation. Since most custom WAL filters are used through 
`ChainWALEntryFilter`, isn't it better to provide this flexibility in 
`ChainWALEntryFilter` itself? Sure, it's possible to put all this logic in 
`CustomWALFilter`, but shouldn't we provide re-usability through 
`ChainWALEntryFilter` and expose the flexibility through a config? Let me know 
your thoughts. 
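The drop-empty-entry behavior being discussed can be sketched in a self-contained way. The classes below are simplified stand-ins for the HBase WAL types, not the real API; the real `ChainWALEntryFilter` also runs cell filters between the entry filters and the emptiness check.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a WAL entry; the list stands in for the WALEdit's cells.
class Entry {
    final List<String> cells = new ArrayList<>();
    boolean isEmpty() { return cells.isEmpty(); }
}

interface WALEntryFilter {
    Entry filter(Entry entry); // returning null drops the whole entry
}

// Sketch of the proposed chain behavior: after the chained filters run,
// an entry whose edit ended up empty is dropped when the flag is enabled.
class ChainWALEntryFilterSketch implements WALEntryFilter {
    private final List<WALEntryFilter> filters;
    private final boolean filterEmptyEntry;

    ChainWALEntryFilterSketch(List<WALEntryFilter> filters, boolean filterEmptyEntry) {
        this.filters = filters;
        this.filterEmptyEntry = filterEmptyEntry;
    }

    @Override
    public Entry filter(Entry entry) {
        for (WALEntryFilter f : filters) {
            if (entry == null) {
                return null;
            }
            entry = f.filter(entry);
        }
        // In the real class, filterCells(entry) runs here and may empty the edit.
        if (filterEmptyEntry && entry != null && entry.isEmpty()) {
            return null; // all cells were filtered out, so drop the entry
        }
        return entry;
    }
}
```

With the flag enabled, an entry whose cells were all filtered away is dropped by the chain itself, so each custom filter does not need to repeat that check.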


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23314:
--
Status: Patch Available  (was: Open)

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper of an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., 
> S3GuardTool expects the file system implementation to be S3A so it can 
> access the metadata store easily. A simple S3GuardTool run against HBOSS will 
> fail with a confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.
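The unwrap pattern that FilterFileSystem enables can be sketched with simplified stand-ins. These are illustrative classes only, not the real Hadoop or HBOSS types; in Hadoop, `FilterFileSystem#getRawFileSystem()` returns the wrapped FileSystem instance.

```java
// Illustrative stand-in for a base file system.
class FileSystemStub {
    String getScheme() { return "file"; }
}

// Stand-in for the concrete S3A implementation a tool may require.
class S3AFileSystemStub extends FileSystemStub {
    @Override
    String getScheme() { return "s3a"; }
}

// Analogous to Hadoop's FilterFileSystem: delegates to a wrapped file system
// and, crucially, can hand the wrapped instance back to callers.
class FilterFileSystemStub extends FileSystemStub {
    private final FileSystemStub raw;

    FilterFileSystemStub(FileSystemStub raw) { this.raw = raw; }

    FileSystemStub getRawFileSystem() { return raw; }

    @Override
    String getScheme() { return raw.getScheme(); }
}
```

A tool like S3GuardTool that needs the concrete S3A instance could then call `getRawFileSystem()` and check the unwrapped type, instead of failing because the wrapper is not itself an S3A file system.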



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23301) Generate CHANGES.md and RELEASENOTES.md for 2.1.8

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977116#comment-16977116
 ] 

Hudson commented on HBASE-23301:


Results for branch branch-2.1
[build #1714 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1714/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1714//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1714//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1714//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Generate CHANGES.md and RELEASENOTES.md for 2.1.8
> -
>
> Key: HBASE-23301
> URL: https://issues.apache.org/jira/browse/HBASE-23301
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.1.8
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-18 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977104#comment-16977104
 ] 

Anoop Sam John commented on HBASE-22969:


Can you add a release note?

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. For simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That could mean lots 
> of unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and pass it to the server's 
> 'Filter' subsystem:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably, it can be used with ValueFilter to filter 
> out KVs based on partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> This in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}
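The core idea above, comparing a fixed-width component at a given offset within the key, can be sketched in a self-contained way. This is a simplified illustration, not the real BinaryComponentComparator (which compares bytes as unsigned values); for the small positive integers used here, signed comparison behaves the same.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Minimal sketch of component comparison: compare only the bytes of a
// fixed-width component at a given offset within the row key.
class ComponentCompareSketch {
    static int compareComponent(byte[] key, byte[] component, int offset) {
        byte[] slice = Arrays.copyOfRange(key, offset, offset + component.length);
        // Lexicographic comparison of the component's bytes only.
        return Arrays.compare(slice, component);
    }

    // Helper mirroring Bytes.toBytes(int): big-endian 4-byte encoding.
    static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }
}
```

For a key a+b+c+d of 4-byte ints, comparing at offset 4 inspects only component b, which is exactly what the RowFilter examples above rely on.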



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-18 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977097#comment-16977097
 ] 

HBase QA commented on HBASE-23312:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
35s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 58s{color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
10s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.thrift.TestThriftSpnegoHttpServer |
|   | hadoop.hbase.thrift2.TestThrift2HttpServer |
|   | hadoop.hbase.thrift.TestThriftSpnegoHttpFallbackServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1030/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-23312 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986179/HBASE-23312.master.001.patch
 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 34bd64a38bf3 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 3ba71fe589 |
| Default Java | 1.8.0_181 |
| unit | 

[jira] [Commented] (HBASE-23296) Add CompositeBucketCache to support tiered BC

2019-11-18 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977089#comment-16977089
 ] 

Anoop Sam John commented on HBASE-23296:


This also needs a high-level design doc. Marking it as a new feature.

> Add CompositeBucketCache to support tiered BC
> -
>
> Key: HBASE-23296
> URL: https://issues.apache.org/jira/browse/HBASE-23296
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> LruBlockCache is not suitable in the following scenarios:
> (1) the cache size is too large (it takes too much heap memory, and 
> evictBlocksByHfileName is not so efficient, as HBASE-23277 mentioned)
> (2) blocks are evicted frequently, especially when cacheOnWrite & 
> prefetchOnOpen are enabled.
> Since a block's data is reclaimed by GC, this may affect GC performance.
> So how about enabling a Bucket-based L1 cache?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23296) Add CompositeBucketCache to support tiered BC

2019-11-18 Thread Anoop Sam John (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-23296:
---
Issue Type: New Feature  (was: Improvement)

> Add CompositeBucketCache to support tiered BC
> -
>
> Key: HBASE-23296
> URL: https://issues.apache.org/jira/browse/HBASE-23296
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> LruBlockCache is not suitable in the following scenarios:
> (1) the cache size is too large (it takes too much heap memory, and 
> evictBlocksByHfileName is not so efficient, as HBASE-23277 mentioned)
> (2) blocks are evicted frequently, especially when cacheOnWrite & 
> prefetchOnOpen are enabled.
> Since a block's data is reclaimed by GC, this may affect GC performance.
> So how about enabling a Bucket-based L1 cache?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hbase] Apache-HBase commented on issue #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-18 Thread GitBox
Apache-HBase commented on issue #847: HBASE-23315 Miscellaneous HBCK Report 
page cleanup
URL: https://github.com/apache/hbase/pull/847#issuecomment-555313105
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 10s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m  5s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 23s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 16s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  2s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 15s |  hbase-procedure: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  checkstyle  |   1m 17s |  hbase-server: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 29s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  the patch passed  |
   | -1 :x: |  findbugs  |   3m 30s |  hbase-server generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 34s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-http in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 32s |  hbase-procedure in the patch 
passed.  |
   | -1 :x: |  unit  | 162m 46s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   2m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 238m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Null pointer dereference of hri in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  Dereferenced at 
HbckChore.java:in org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  
Dereferenced at HbckChore.java:[line 267] |
   |  |  Load of known null value in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:in org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  
At HbckChore.java:[line 267] |
   | Failed junit tests | hadoop.hbase.master.assignment.TestHbckChore |
   |   | hadoop.hbase.client.TestScannersFromClientSide2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-847/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/847 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux e1eb5bdd5651 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-847/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 44c8b58cec |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-847/1/artifact/out/diff-checkstyle-hbase-procedure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-847/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | findbugs | 

[jira] [Assigned] (HBASE-23296) Add CompositeBucketCache to support tiered BC

2019-11-18 Thread chenxu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu reassigned HBASE-23296:
--

Assignee: chenxu

> Add CompositeBucketCache to support tiered BC
> -
>
> Key: HBASE-23296
> URL: https://issues.apache.org/jira/browse/HBASE-23296
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> LruBlockCache is not suitable in the following scenarios:
> (1) the cache size is too large (it takes too much heap memory, and 
> evictBlocksByHfileName is not so efficient, as HBASE-23277 mentioned)
> (2) blocks are evicted frequently, especially when cacheOnWrite & 
> prefetchOnOpen are enabled.
> Since a block's data is reclaimed by GC, this may affect GC performance.
> So how about enabling a Bucket-based L1 cache?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23296) Add CompositeBucketCache to support tiered BC

2019-11-18 Thread chenxu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-23296:
---
Summary: Add CompositeBucketCache to support tiered BC  (was: Support 
Bucket based L1 Cache)

> Add CompositeBucketCache to support tiered BC
> -
>
> Key: HBASE-23296
> URL: https://issues.apache.org/jira/browse/HBASE-23296
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Reporter: chenxu
>Priority: Major
>
> LruBlockCache is not suitable in the following scenarios:
> (1) the cache size is too large (it takes too much heap memory, and 
> evictBlocksByHfileName is not so efficient, as HBASE-23277 mentioned)
> (2) blocks are evicted frequently, especially when cacheOnWrite & 
> prefetchOnOpen are enabled.
> Since a block's data is reclaimed by GC, this may affect GC performance.
> So how about enabling a Bucket-based L1 cache?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-18 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977074#comment-16977074
 ] 

Kevin Risden commented on HBASE-23312:
--

Submitted a patch which falls back to the non-SPNEGO configs, as before 
HBASE-19852. It adds tests for the old configs and keeps the new config test.
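The fallback described here can be sketched as a simple two-key lookup: prefer the SPNEGO-specific key, and fall back to the older Kerberos key when it is absent. The property names below are illustrative, not the exact HBase configuration keys.

```java
import java.util.Map;

// Sketch of a backwards-compatible config lookup: a missing SPNEGO-specific
// principal falls back to the pre-HBASE-19852 Kerberos principal.
class SpnegoFallbackSketch {
    static String resolvePrincipal(Map<String, String> conf) {
        String spnego = conf.get("thrift.spnego.principal");     // new-style key
        String kerberos = conf.get("thrift.kerberos.principal"); // old-style key
        return spnego != null ? spnego : kerberos;
    }
}
```

With this shape, deployments that only set the old principal/keytab configs (e.g. with a merged keytab) keep working, while deployments that set the SPNEGO-specific configs get the new behavior.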

> HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible
> 
>
> Key: HBASE-23312
> URL: https://issues.apache.org/jira/browse/HBASE-23312
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 3.0.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.1.7
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-23312.master.001.patch
>
>
> HBASE-19852 is not backwards compatible since it now requires the SPNEGO 
> thrift configs. I haven't seen anything in Apache HBase about changing this 
> so that the older configs still work with a merged keytab (falling back to 
> the non-SPNEGO-specific principal/keytab configs).
> I wrote the original patch in HBASE-19852 and, with hindsight being 20/20, I 
> think this section of code could be extended to fall back to not requiring 
> the additional configs:
> https://github.com/apache/hbase/blame/master/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java#L78



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-18 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated HBASE-23312:
-
Attachment: HBASE-23312.master.001.patch

> HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible
> 
>
> Key: HBASE-23312
> URL: https://issues.apache.org/jira/browse/HBASE-23312
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 3.0.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.1.7
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-23312.master.001.patch
>
>
> HBASE-19852 is not backwards compatible since it now requires the SPNEGO 
> thrift configs. I haven't seen anything in Apache HBase about changing this 
> so that the older configs still work with a merged keytab. (fall back to the 
> non SPNEGO specific principal/keytab configs)
> I wrote the original patch in HBASE-19852 and with hindsight being 20/20, I 
> think this section of code could be extended to fall back to not requiring the 
> additional configs.
> https://github.com/apache/hbase/blame/master/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java#L78





[jira] [Updated] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-18 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated HBASE-23312:
-
Status: Patch Available  (was: In Progress)

> HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible
> 
>
> Key: HBASE-23312
> URL: https://issues.apache.org/jira/browse/HBASE-23312
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.1.7, 2.1.6, 2.1.5, 2.1.4, 2.1.3, 2.1.2, 2.1.1, 3.0.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-23312.master.001.patch
>
>
> HBASE-19852 is not backwards compatible since it now requires the SPNEGO 
> thrift configs. I haven't seen anything in Apache HBase about changing this 
> so that the older configs still work with a merged keytab. (fall back to the 
> non SPNEGO specific principal/keytab configs)
> I wrote the original patch in HBASE-19852 and with hindsight being 20/20, I 
> think this section of code could be extended to fall back to not requiring the 
> additional configs.
> https://github.com/apache/hbase/blame/master/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java#L78





[jira] [Commented] (HBASE-23251) Add Column Family and Table Names to HFileContext and use in HFileWriterImpl logging

2019-11-18 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977067#comment-16977067
 ] 

Lijin Bin commented on HBASE-23251:
---

Have not merged to branch-2.2 and branch-2.1 because of conflicts.

> Add Column Family and Table Names to HFileContext and use in HFileWriterImpl 
> logging
> 
>
> Key: HBASE-23251
> URL: https://issues.apache.org/jira/browse/HBASE-23251
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23251.v01.patch
>
>
> When something goes wrong in the Store / HFile write path, it would be very 
> useful to know which column family and table the error is coming from. 
> Currently the HFileWriterImpl gets an HFileContext object with some useful 
> state information, but the column family and table aren't among them. 
> For example, this would be very helpful diagnosing HBASE-23143 and similar 
> issues. 





[jira] [Comment Edited] (HBASE-21593) closing flags should be set false in HRegion

2019-11-18 Thread xiaolerzheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910311#comment-16910311
 ] 

xiaolerzheng edited comment on HBASE-21593 at 11/19/19 2:11 AM:


[Duo 
Zhang|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=zhangduo] [Xu 
Cang|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=xucang] 
[Michael 
Stack|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=stack], this 
PR does not fix the problem described above...


was (Author: xiaolerzheng):
Duo Zhang Xu Cang, this PR does not fix the problem described above...

> closing flags should be set false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21593.branch-1.001.patch, 
> image-2018-12-13-16-04-51-892.png, image-2018-12-13-16-05-09-246.png, 
> image-2018-12-13-16-05-36-404.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:#FF}  // should add here: {color}
>     {color:#FF}    this.closing.set(false); {color}
> 1558  lock.writeLock().unlock();
> 1559 }





[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977054#comment-16977054
 ] 

Hudson commented on HBASE-22969:


Results for branch branch-2.2
[build #697 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. For simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That could mean lots 
> of unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, to pass to the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, takes care of c > 90
> Bytes.putInt(startKey,dOffset,1); //d=1
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, takes care of c < 100
> Bytes.putInt(endKey,dOffset,1); //d=1
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
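As a self-contained sketch of the component comparison the {code} block above relies on (plain Java, no HBase dependencies; `ComponentCompareSketch`, `key`, and `compareComponent` are illustrative names, not HBase API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical stand-in for the idea behind BinaryComponentComparator:
// compare one fixed-width component of a composite row key at a byte offset.
public class ComponentCompareSketch {

    // Build a 16-byte row key from four 4-byte big-endian ints: a+b+c+d.
    static byte[] key(int a, int b, int c, int d) {
        return ByteBuffer.allocate(16).putInt(a).putInt(b).putInt(c).putInt(d).array();
    }

    // Unsigned lexicographic compare of the component at 'offset' against
    // 'value', mirroring HBase byte ordering: <0, 0, or >0 like a comparator.
    static int compareComponent(byte[] rowKey, byte[] value, int offset) {
        byte[] component = Arrays.copyOfRange(rowKey, offset, offset + value.length);
        return Arrays.compareUnsigned(component, value);
    }
}
```

For key(1, 15, 95, 1), comparing the b component (offset 4) against 10 is positive, so a GREATER RowFilter on that component would pass the row.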
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on 

[jira] [Commented] (HBASE-22982) Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977055#comment-16977055
 ] 

Hudson commented on HBASE-22982:


Results for branch branch-2.2
[build #697 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/697//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart
> -
>
> Key: HBASE-22982
> URL: https://issues.apache.org/jira/browse/HBASE-22982
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Affects Versions: 3.0.0
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> * Add a Chaos Monkey action that uses SIGSTOP and SIGCONT to hang and resume 
> a ratio of region servers.
>  * Add a Chaos Monkey action to simulate a rolling restart including 
> graceful_stop like functionality that unloads the regions from the server 
> before a restart and then places it under load again afterwards.
>  * Add these actions to the relevant monkeys





[jira] [Commented] (HBASE-18511) Default no regions on master

2019-11-18 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977047#comment-16977047
 ] 

Michael Stack commented on HBASE-18511:
---

List IIRC. This feature does not work though. Needs a start-up sequence rewrite.

> Default no regions on master
> 
>
> Key: HBASE-18511
> URL: https://issues.apache.org/jira/browse/HBASE-18511
> Project: HBase
>  Issue Type: Task
>  Components: master
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18511.master.001.patch, 
> HBASE-18511.master.002.patch, HBASE-18511.master.003.patch, 
> HBASE-18511.master.004.patch, HBASE-18511.master.005.patch, 
> HBASE-18511.master.006.patch, HBASE-18511.master.007.patch, 
> HBASE-18511.master.008.patch, HBASE-18511.master.009.patch, 
> HBASE-18511.master.010.patch, HBASE-18511.master.011.patch, 
> HBASE-18511.master.012.patch, HBASE-18511.master.013.patch, 
> HBASE-18511.master.014.patch, HBASE-18511.master.015.patch
>
>
> Let this be umbrella issue for no-regions-on-master as default deploy (as it 
> was in branch-1).
> Also need to make sure we can run WITH regions on master; in particular 
> system tables with RPC short-circuit as it is now in hbase master.
> Background is that master branch carried a change that allowed Master carry 
> regions. On top of this improvement on branch-1, Master defaulted to carry 
> system tables only. No release was made with this configuration. Now we are 
> going to cut the 2.0.0 release, the decision is that hbase-2 should have the 
> same layout as hbase-1 so this issue implements the undoing of Master 
> carrying system tables by default (though the capability remains).





[jira] [Resolved] (HBASE-23288) Backport HBASE-23251 (Add Column Family and Table Names to HFileContext) to branch-1

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell resolved HBASE-23288.
-
Fix Version/s: 1.6.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Backport HBASE-23251 (Add Column Family and Table Names to HFileContext) to 
> branch-1
> 
>
> Key: HBASE-23288
> URL: https://issues.apache.org/jira/browse/HBASE-23288
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 1.6.0
>
>






[GitHub] [hbase] apurtell merged pull request #822: HBASE-23288 - Backport HBASE-23251 (Add Column Family and Table Names…

2019-11-18 Thread GitBox
apurtell merged pull request #822: HBASE-23288 - Backport HBASE-23251 (Add 
Column Family and Table Names…
URL: https://github.com/apache/hbase/pull/822
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #848: HBASE-23318 LoadTestTool doesn't start

2019-11-18 Thread GitBox
Apache-HBase commented on issue #848: HBASE-23318 LoadTestTool doesn't start
URL: https://github.com/apache/hbase/pull/848#issuecomment-555280830
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 54s |  master passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 28s |  hbase-assembly in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  15m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-848/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/848 |
   | Optional Tests | dupname asflicense javac javadoc unit xml |
   | uname | Linux 23fc4e886ee2 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-848/out/precommit/personality/provided.sh
 |
   | git revision | master / 4b99816dd6 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-848/1/testReport/
 |
   | Max. process+thread count | 86 (vs. ulimit of 1) |
   | modules | C: hbase-assembly U: hbase-assembly |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-848/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23318:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> After unpacking a binary tarball distribution, ./bin/hbase ltt doesn't start; 
> it fails with a CNFE. We are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this, but if one wants to launch it on a server or 
> a general-purpose deploy (i.e. not the client tarball), the test jar has to be 
> in the server classpath as well. 





[jira] [Commented] (HBASE-23259) Ability to run mini cluster using pre-determined available random ports

2019-11-18 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977031#comment-16977031
 ] 

Bharath Vissapragada commented on HBASE-23259:
--

[~ndimiduk] sure. I've set the "Affects versions" accordingly. I don't see any 
comments I must address there (unless I'm missing something). Are you +1 on it? 

> Ability to run mini cluster using pre-determined available random ports
> ---
>
> Key: HBASE-23259
> URL: https://issues.apache.org/jira/browse/HBASE-23259
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0, 1.4.12, 2.2.3
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As noted in the code reviews for HBASE-18095, we need the ability to run the 
> mini-cluster using a pre-determined set of random (and available) port 
> numbers. When I say pre-determined, I mean the test knows these ports even 
> before starting the mini cluster. 
> In short, the workflow is something like,
> {noformat}
> List<Integer> ports = getRandomAvailablePorts();
> startMiniCluster(conf, ports);
> {noformat}
> The reason we need this is that certain configs introduced in HBASE-18095 
> depend on the ports on which the master is expected to serve the RPCs. While 
> that is known for regular deployments (like 16000 for master etc), it is 
> totally random in the mini cluster tests. So we need to know them beforehand 
> for templating out the configs. 
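A common way to implement a getRandomAvailablePorts() step like the one in the workflow above (a hedged, self-contained sketch, not the actual HBASE-23259 patch; `RandomPorts` is an illustrative name) is to bind ServerSockets to port 0, record the kernel-assigned ports, then close the sockets so the mini cluster can bind them. There is a small race window between closing and rebinding:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: ask the OS for n ephemeral ports by binding to port 0,
// then release them so the caller (e.g. a mini cluster) can bind them itself.
public class RandomPorts {
    static List<Integer> getRandomAvailablePorts(int n) throws IOException {
        List<ServerSocket> sockets = new ArrayList<>();
        List<Integer> ports = new ArrayList<>();
        try {
            for (int i = 0; i < n; i++) {
                ServerSocket s = new ServerSocket(0); // 0 = any free port
                s.setReuseAddress(true);
                sockets.add(s);
                ports.add(s.getLocalPort());
            }
        } finally {
            for (ServerSocket s : sockets) {
                s.close(); // release the port for the real server to use
            }
        }
        return ports;
    }
}
```

Because all n sockets are held open until the loop finishes, the returned ports are distinct.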





[jira] [Updated] (HBASE-23259) Ability to run mini cluster using pre-determined available random ports

2019-11-18 Thread Bharath Vissapragada (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Vissapragada updated HBASE-23259:
-
Affects Version/s: 2.2.3
   1.4.12

> Ability to run mini cluster using pre-determined available random ports
> ---
>
> Key: HBASE-23259
> URL: https://issues.apache.org/jira/browse/HBASE-23259
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0, 1.4.12, 2.2.3
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As noted in the code reviews for HBASE-18095, we need the ability to run the 
> mini-cluster using a pre-determined set of random (and available) port 
> numbers. When I say pre-determined, I mean the test knows these ports even 
> before starting the mini cluster. 
> In short, the workflow is something like,
> {noformat}
> List<Integer> ports = getRandomAvailablePorts();
> startMiniCluster(conf, ports);
> {noformat}
> The reason we need this is that certain configs introduced in HBASE-18095 
> depend on the ports on which the master is expected to serve the RPCs. While 
> that is known for regular deployments (like 16000 for master etc), it is 
> totally random in the mini cluster tests. So we need to know them beforehand 
> for templating out the configs. 





[GitHub] [hbase] apurtell merged pull request #848: HBASE-23318 LoadTestTool doesn't start

2019-11-18 Thread GitBox
apurtell merged pull request #848: HBASE-23318 LoadTestTool doesn't start
URL: https://github.com/apache/hbase/pull/848
 
 
   




[GitHub] [hbase] apurtell commented on a change in pull request #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-18 Thread GitBox
apurtell commented on a change in pull request #837: HBASE-23309: Adding the 
flexibility to ChainWalEntryFilter to filter the whole entry if all cells get 
filtered
URL: https://github.com/apache/hbase/pull/837#discussion_r347682215
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ChainWALEntryFilter.java
 ##
 @@ -68,13 +82,17 @@ public void initCellFilters() {
 
   @Override
   public Entry filter(Entry entry) {
+
 for (WALEntryFilter filter : filters) {
   if (entry == null) {
 return null;
   }
   entry = filter.filter(entry);
 }
 filterCells(entry);
+if (shouldFilterEmptyEntry() && entry != null && 
entry.getEdit().isEmpty()) {
 
 Review comment:
   Put this into your own WALEntryFilter. Or, fine if you want to add a 
WALEntryFilter impl to HBase code that does this. But don't do this here




[GitHub] [hbase] apurtell commented on a change in pull request #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-18 Thread GitBox
apurtell commented on a change in pull request #837: HBASE-23309: Adding the 
flexibility to ChainWalEntryFilter to filter the whole entry if all cells get 
filtered
URL: https://github.com/apache/hbase/pull/837#discussion_r347681958
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/BaseReplicationEndpoint.java
 ##
 @@ -91,7 +92,7 @@ public WALEntryFilter getWALEntryfilter() {
 }
   }
 }
-return filters.isEmpty() ? null : new ChainWALEntryFilter(filters);
+return filters.isEmpty() ? null : new ChainWALEntryFilter(filters, 
this.replicationPeer);
 
 Review comment:
   Is there some other way to do this that does not require adding 
ReplicationPeer to the method signature? 
   
   Why not create a WALEntryFilter that drops empty cells? Then you can add it 
to the chain if you want and no further changes are needed.


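The standalone-filter approach suggested in the review comment above can be sketched as follows. This is a hedged, self-contained illustration: `Entry`, `Edit`, and `EmptyEditFilter` are simplified stand-ins, not the real WAL.Entry/WALEdit/WALEntryFilter types.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for WAL.Entry / WALEdit, just enough to show the idea.
class Edit {
    final List<String> cells = new ArrayList<>();
    boolean isEmpty() { return cells.isEmpty(); }
}

class Entry {
    private final Edit edit = new Edit();
    Edit getEdit() { return edit; }
}

// A filter in the spirit of WALEntryFilter: placed at the end of a chain,
// it drops entries whose cells were all filtered out upstream.
class EmptyEditFilter {
    Entry filter(Entry entry) {
        if (entry != null && entry.getEdit().isEmpty()) {
            return null; // drop the now-empty entry entirely
        }
        return entry;
    }
}
```

Appended to the filter chain a custom replication endpoint builds, such a filter could drop fully-filtered entries without changing ChainWALEntryFilter's constructor.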


[GitHub] [hbase] apurtell commented on a change in pull request #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-18 Thread GitBox
apurtell commented on a change in pull request #837: HBASE-23309: Adding the 
flexibility to ChainWalEntryFilter to filter the whole entry if all cells get 
filtered
URL: https://github.com/apache/hbase/pull/837#discussion_r347681256
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -1295,6 +1295,9 @@
   /** Configuration key for SplitLog manager timeout */
   public static final String HBASE_SPLITLOG_MANAGER_TIMEOUT = 
"hbase.splitlog.manager.timeout";
 
+  /** To allow the empty entries to get filtered  which have no cells or all 
cells got filtered though WALCellFilter */
+  public static final String HBASE_REPLICATION_WAL_FILTER_EMPTY_ENTRY = 
"hbase.replication.wal.filteremptyentry";
 
 Review comment:
   This doesn't belong here. You are already changing BaseReplicationEndpoint, 
put it there?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] sandeepvinayak commented on issue #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-18 Thread GitBox
sandeepvinayak commented on issue #837: HBASE-23309: Adding the flexibility to 
ChainWalEntryFilter to filter the whole entry if all cells get filtered
URL: https://github.com/apache/hbase/pull/837#issuecomment-555277402
 
 
   @apurtell  FYI




[GitHub] [hbase] apurtell commented on issue #848: HBASE-23318 LoadTestTool doesn't start

2019-11-18 Thread GitBox
apurtell commented on issue #848: HBASE-23318 LoadTestTool doesn't start
URL: https://github.com/apache/hbase/pull/848#issuecomment-555276372
 
 
   HBaseZKTestingUtility




[GitHub] [hbase] Apache-HBase commented on issue #835: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-18 Thread GitBox
Apache-HBase commented on issue #835: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/835#issuecomment-555275869
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 12s |  branch-2.2 passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  branch-2.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  branch-2.2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m  1s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2.2 passed  |
   | +0 :ok: |  spotbugs  |   3m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  branch-2.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 21s |  hbase-server: The patch generated 2 
new + 148 unchanged - 0 fixed = 150 total (was 148)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | -1 :x: |  findbugs  |   3m 18s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 151m 18s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 203m 54s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Nullcheck of regionDir at line 266 of value previously dereferenced in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:266 of value previously dereferenced in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:[line 266] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/835 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 1fd54a95773e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-835/out/precommit/personality/provided.sh
 |
   | git revision | branch-2.2 / 1eceb24b67 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/4/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/4/artifact/out/new-findbugs-hbase-server.html
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/4/testReport/
 |
   | Max. process+thread count | 4154 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23318:

Status: Patch Available  (was: Open)

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
> with a CNFE. We are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this but if one wants to launch it on a server or 
> a general purpose deploy (i.e. not the client tarball) the test jar has to be 
> in the server classpath as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 commented on issue #848: HBASE-23318 LoadTestTool doesn't start

2019-11-18 Thread GitBox
Apache9 commented on issue #848: HBASE-23318 LoadTestTool doesn't start
URL: https://github.com/apache/hbase/pull/848#issuecomment-555275599
 
 
   What is the error message? Which class is missing?




[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23318:

Description: 
./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
with a CNFE. We are missing the tests jar from hbase-zookeeper. 

The client tarball includes this but if one wants to launch it on a server or a 
general purpose deploy (i.e. not the client tarball) the test jar has to be in 
the server classpath as well. 

  was:./bin/hbase ltt after unpacking a binary tarball distribution doesn't 
start with a CNFE. We are missing the tests jar from hbase-zookeeper. 


> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
> with a CNFE. We are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this but if one wants to launch it on a server or 
> a general purpose deploy (i.e. not the client tarball) the test jar has to be 
> in the server classpath as well. 





[jira] [Resolved] (HBASE-23182) The create-release scripts are broken

2019-11-18 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-23182.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Pushed to master.

Thanks [~stack], [~busbey] and [~bharathv] for reviewing.

> The create-release scripts are broken
> -
>
> Key: HBASE-23182
> URL: https://issues.apache.org/jira/browse/HBASE-23182
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> Only several small bugs but it does make the releasing fail...
> Mostly introduced by HBASE-23092.
> Will upload the patch after I successfully published 2.2.2RC0...





[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977024#comment-16977024
 ] 

Hudson commented on HBASE-22969:


Results for branch branch-2
[build #2358 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2358/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2358//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2358//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2358//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. And for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically same to following 
> sql:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client side filtering. That could be lots 
> of unwanted data going through various software components and network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on 
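Conceptually, the comparator described above compares only the bytes of the supplied value against the key slice starting at the supplied offset. A minimal stand-alone sketch of that idea in plain Java (`key()` and `compareComponent()` are hypothetical stand-ins, not the HBase API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ComponentCompareSketch {
  // Build the 16-byte composite key a+b+c+d of big-endian 4-byte ints,
  // mirroring the Bytes.putInt calls in the description above.
  static byte[] key(int a, int b, int c, int d) {
    return ByteBuffer.allocate(16).putInt(a).putInt(b).putInt(c).putInt(d).array();
  }

  // What the component comparator does conceptually: compare only
  // row[offset .. offset+value.length) against value, unsigned
  // lexicographically (the same ordering as HBase's Bytes.compareTo).
  static int compareComponent(byte[] row, byte[] value, int offset) {
    return Arrays.compareUnsigned(
        Arrays.copyOfRange(row, offset, offset + value.length), value);
  }

  public static void main(String[] args) {
    byte[] row = key(1, 15, 95, 1); // a=1, b=15, c=95, d=1
    byte[] b10 = ByteBuffer.allocate(4).putInt(10).array();
    // The b component (offset 4) is 15 > 10, so a GREATER filter matches.
    System.out.println(compareComponent(row, b10, 4) > 0); // prints true
  }
}
```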

[GitHub] [hbase] apurtell opened a new pull request #848: HBASE-23318 LoadTestTool doesn't start

2019-11-18 Thread GitBox
apurtell opened a new pull request #848: HBASE-23318 LoadTestTool doesn't start
URL: https://github.com/apache/hbase/pull/848
 
 
   * Package the test jar from hbase-zookeeper into lib/




[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23318:

Affects Version/s: 2.2.1

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Priority: Minor
>
> ./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
> with a CNFE. We are missing the tests jar from hbase-zookeeper. 





[jira] [Updated] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23318:

Fix Version/s: 2.2.3
   2.3.0
   3.0.0

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
> with a CNFE. We are missing the tests jar from hbase-zookeeper. 





[jira] [Assigned] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell reassigned HBASE-23318:
---

Assignee: Andrew Kyle Purtell

> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
> with a CNFE. We are missing the tests jar from hbase-zookeeper. 





[GitHub] [hbase] Apache9 merged pull request #736: HBASE-23182 The create-release scripts are broken

2019-11-18 Thread GitBox
Apache9 merged pull request #736: HBASE-23182 The create-release scripts are 
broken
URL: https://github.com/apache/hbase/pull/736
 
 
   




[jira] [Commented] (HBASE-23259) Ability to run mini cluster using pre-determined available random ports

2019-11-18 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977023#comment-16977023
 ] 

Nick Dimiduk commented on HBASE-23259:
--

Breaking this sub-task out to a top-level ticket since this improvement to testing 
is applicable outside the context of HBASE-18095.

[~bharathv] once your master patch is +1, would you mind checking that the 
patch back-ports to branch-2 and branch-1? Note that branch-1 requires jdk7.

> Ability to run mini cluster using pre-determined available random ports
> ---
>
> Key: HBASE-23259
> URL: https://issues.apache.org/jira/browse/HBASE-23259
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As noted in the code reviews for HBASE-18095, we need the ability to run the 
> mini-cluster using a pre-determined set of random (and available) port 
> numbers. When I say pre-determined, I mean the test knows these ports even 
> before starting the mini cluster. 
> In short, the workflow is something like,
> {noformat}
> List<Integer> ports = getRandomAvailablePorts();
> startMiniCluster(conf, ports);
> {noformat}
> The reason we need this is that certain configs introduced in HBASE-18095 
> depend on the ports on which the master is expected to serve the RPCs. While 
> that is known for regular deployments (like 16000 for master etc), it is 
> totally random in the mini cluster tests. So we need to know them before hand 
> for templating out the configs. 
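The `getRandomAvailablePorts()` helper in the workflow above is not spelled out. One common way to pre-determine free ports is to bind ephemeral sockets and release them just before starting the cluster; the sketch below assumes that approach (the method name and race-condition caveat are assumptions, not the HBASE-23259 implementation):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.List;

public class RandomPorts {
  // Reserve n distinct free ports by binding to port 0 (the OS picks an
  // ephemeral port) and holding all sockets open until every port is
  // chosen, which guarantees the ports are distinct. There is still an
  // inherent race: another process can grab a port between closing the
  // socket here and the mini cluster binding it.
  static List<Integer> getRandomAvailablePorts(int n) throws IOException {
    List<ServerSocket> sockets = new ArrayList<>();
    List<Integer> ports = new ArrayList<>();
    try {
      for (int i = 0; i < n; i++) {
        ServerSocket s = new ServerSocket(0); // 0 => ephemeral port
        s.setReuseAddress(true);
        sockets.add(s);
        ports.add(s.getLocalPort());
      }
    } finally {
      for (ServerSocket s : sockets) {
        s.close(); // release so the mini cluster can bind them
      }
    }
    return ports;
  }

  public static void main(String[] args) throws IOException {
    List<Integer> ports = getRandomAvailablePorts(3);
    System.out.println(ports.size()); // prints 3
  }
}
```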





[jira] [Created] (HBASE-23318) LoadTestTool doesn't start

2019-11-18 Thread Andrew Kyle Purtell (Jira)
Andrew Kyle Purtell created HBASE-23318:
---

 Summary: LoadTestTool doesn't start
 Key: HBASE-23318
 URL: https://issues.apache.org/jira/browse/HBASE-23318
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Kyle Purtell


./bin/hbase ltt after unpacking a binary tarball distribution doesn't start 
with a CNFE. We are missing the tests jar from hbase-zookeeper. 





[jira] [Commented] (HBASE-23317) Detect and sideline poison pill regions

2019-11-18 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977021#comment-16977021
 ] 

Andrew Kyle Purtell commented on HBASE-23317:
-

Nope. Repurposed

> Detect and sideline poison pill regions
> ---
>
> Key: HBASE-23317
> URL: https://issues.apache.org/jira/browse/HBASE-23317
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Kyle Purtell
>Priority: Minor
>
> The master can track that a region deploy has been repeatedly crashing 
> regionservers and rather than continue to pass around the poison pill put its 
> assignment into an administratively failed state.





[GitHub] [hbase] Apache9 commented on issue #736: HBASE-23182 The create-release scripts are broken

2019-11-18 Thread GitBox
Apache9 commented on issue #736: HBASE-23182 The create-release scripts are 
broken
URL: https://github.com/apache/hbase/pull/736#issuecomment-555273302
 
 
   Hi @bharathv, this PR is for fixing the problems of the scripts, and I've 
tested this twice when releasing 2.2.2 and 2.1.8, which means it works. So I 
think we could get this in first, and file a new issue to polish the scripts. 
If you have interest, the scripts also have other things which could be 
improved.




[jira] [Updated] (HBASE-23259) Ability to run mini cluster using pre-determined available random ports

2019-11-18 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23259:
-
Parent: (was: HBASE-18095)
Issue Type: Test  (was: Sub-task)

> Ability to run mini cluster using pre-determined available random ports
> ---
>
> Key: HBASE-23259
> URL: https://issues.apache.org/jira/browse/HBASE-23259
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As noted in the code reviews for HBASE-18095, we need the ability to run the 
> mini-cluster using a pre-determined set of random (and available) port 
> numbers. When I say pre-determined, I mean the test knows these ports even 
> before starting the mini cluster. 
> In short, the workflow is something like,
> {noformat}
> List<Integer> ports = getRandomAvailablePorts();
> startMiniCluster(conf, ports);
> {noformat}
> The reason we need this is that certain configs introduced in HBASE-18095 
> depend on the ports on which the master is expected to serve the RPCs. While 
> that is known for regular deployments (like 16000 for master etc), it is 
> totally random in the mini cluster tests. So we need to know them before hand 
> for templating out the configs. 





[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347666779
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347673418
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
 
 Review comment:
   `"none"`? This configuration point is read as a boolean... sometimes, or as a 
string other times. I asked over on HBASE-18511. Over in the book, 
https://hbase.apache.org/book.html#_changes_of_note > "Master hosting regions" 
feature broken and unsupported, I guess it's a boolean.
   
   Either way, I think you want the default behavior, which is to not carry 
tables on master, so you can just leave it unspecified?




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347658931
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
 
 Review comment:
   I think these are okay at `WARN`. Thoughts?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
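The copy-on-write tradeoff described in the quoted `cachedMetaLocations` comment can be illustrated with a minimal sketch (this is not HBase's `CopyOnWriteArrayMap`; it only demonstrates the semantics): writers pay to copy the whole structure, and in exchange readers always see an immutable, consistent snapshot without blocking.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal copy-on-write map: every write publishes a fresh immutable snapshot
// through a volatile field, so readers never block and never see a
// half-applied update.
public class CowMapSketch {
    private volatile Map<Integer, String> snapshot = Map.of();

    // Writers copy the map: O(n) per write, acceptable when the map is small
    // and mutations are rare (as with meta replica locations).
    public synchronized void put(int replicaId, String location) {
        Map<Integer, String> copy = new HashMap<>(snapshot);
        copy.put(replicaId, location);
        snapshot = Map.copyOf(copy); // publish an immutable snapshot
    }

    // Readers get a consistent snapshot with no locking.
    public Map<Integer, String> get() {
        return snapshot;
    }

    public static void main(String[] args) {
        CowMapSketch cache = new CowMapSketch();
        Map<Integer, String> before = cache.get();
        cache.put(0, "rs1:16020");
        // The earlier snapshot is unchanged; only the new one has the entry.
        System.out.println(before.size() + " " + cache.get().size()); // prints "0 1"
    }
}
```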


[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347663604
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data structure for every write,
+  // that should be OK since the size of the list is often small and mutations are not too often
+  // and we do not need to block client requests while mutations are in progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347674584
 
 

 ##
 File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, "none");
+TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+TEST_UTIL.startMiniCluster(3);
+REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+TEST_UTIL.getConfiguration(), REGISTRY, 3);
+TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+IOUtils.closeQuietly(REGISTRY);
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List<HRegionLocation> getCurrentMetaLocations(ZKWatcher zk) throws Exception {
+List<HRegionLocation> result = new ArrayList<>();
+for (String znode: zk.getMetaReplicaNodes()) {
+  String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+  int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+  RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+  result.add(new HRegionLocation(state.getRegion(), state.getServerName()));
+}
+return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+List<HRegionLocation> metaHRLs =
+master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+assertTrue(metaHRLs != null);
 
 Review comment:
   Because you used `Optional.get()` above, this can never be null. Also, `assertNotNull` exists.
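The review point can be demonstrated in isolation: `Optional.get()` either returns a non-null value or throws `NoSuchElementException`, so a null check after `get()` is dead code. (The class and values below are illustrative only.)

```java
import java.util.Optional;

// Demonstrates why "assertTrue(x != null)" after Optional.get() can never fire:
// get() never returns null; an empty Optional throws instead.
public class OptionalSketch {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("meta");
        // get() on a present Optional cannot return null.
        String v = present.get();
        assert v != null;

        Optional<String> empty = Optional.empty();
        boolean threw = false;
        try {
            empty.get(); // throws NoSuchElementException instead of returning null
        } catch (java.util.NoSuchElementException e) {
            threw = true;
        }
        System.out.println(v + " " + threw); // prints "meta true"
    }
}
```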




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347659731
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data structure for every write,
+  // that should be OK since the size of the list is often small and mutations are not too often
+  // and we do not need to block client requests while mutations are in progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
 
 Review comment:
   Instead of logging and swallowing the exception, I think an `InterruptedException` should be rethrown. I cannot think of a case where we actually want to swallow the interruption? In that case, no need for the log line either.
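The suggested handling can be sketched like this (a minimal stand-alone illustration, not the actual HBase code; the method name is hypothetical): restore the thread's interrupt status and rethrow, so callers up the stack can observe the interruption.

```java
// Sketch of the review suggestion: re-assert the interrupt flag and propagate
// the InterruptedException instead of logging and swallowing it.
public class InterruptSketch {
    static void populateWithBackoff() throws InterruptedException {
        try {
            Thread.sleep(1000); // stands in for retryCounter.sleepUntilNextRetry()
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // restore the flag for callers
            throw ie;                           // propagate instead of swallowing
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // simulate an interrupt arriving
        try {
            populateWithBackoff();
        } catch (InterruptedException ie) {
            // The flag survives because the catch block restored it before rethrowing.
            System.out.println("interrupted=" + Thread.currentThread().isInterrupted()); // prints "interrupted=true"
        }
    }
}
```

Note that `Thread.sleep` clears the interrupt flag when it throws, which is exactly why the catch block must call `Thread.currentThread().interrupt()` before rethrowing.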




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347659868
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap 
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries exhausted. Last error: ", ke);
 
 Review comment:
   Also, I think `DEBUG` is fine because presumably cache population can be accomplished at a later time.



[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347675910
 
 

 ##
 File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
+TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+TEST_UTIL.startMiniCluster(3);
+REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+TEST_UTIL.getConfiguration(), REGISTRY, 3);
+TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+IOUtils.closeQuietly(REGISTRY);
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List getCurrentMetaLocations(ZKWatcher zk) throws 
Exception {
+List result = new ArrayList<>();
+for (String znode: zk.getMetaReplicaNodes()) {
+  String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+  int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+  RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+  result.add(new HRegionLocation(state.getRegion(), 
state.getServerName()));
+}
+return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync 
with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+List metaHRLs =
+master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+assertTrue(metaHRLs != null);
+assertFalse(metaHRLs.isEmpty());
+ZKWatcher zk = master.getZooKeeper();
+List metaZnodes = zk.getMetaReplicaNodes();
+assertEquals(metaZnodes.size(), metaHRLs.size());
+List actualHRLs = getCurrentMetaLocations(zk);
+Collections.sort(metaHRLs);
+Collections.sort(actualHRLs);
+assertEquals(actualHRLs, metaHRLs);
+  }
+
+  @Test public void testInitialMetaLocations() throws Exception {
+verifyCachedMetaLocations(TEST_UTIL.getMiniHBaseCluster().getMaster());
+  }
+
+  @Test public void testStandByMetaLocations() throws Exception {
+HMaster standBy = TEST_UTIL.getMiniHBaseCluster().startMaster().getMaster();
+verifyCachedMetaLocations(standBy);
+  }
+
+  /*
+   * Shuffles the meta region replicas around the cluster and makes sure the cache is 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347656749
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap 
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
 
 Review comment:
   "initial"




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347675740
 
 

 ##
 File path: hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
+TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+TEST_UTIL.startMiniCluster(3);
+REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+TEST_UTIL.getConfiguration(), REGISTRY, 3);
+TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+IOUtils.closeQuietly(REGISTRY);
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List<HRegionLocation> getCurrentMetaLocations(ZKWatcher zk) throws Exception {
+List<HRegionLocation> result = new ArrayList<>();
+for (String znode: zk.getMetaReplicaNodes()) {
+  String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+  int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+  RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+  result.add(new HRegionLocation(state.getRegion(), 
state.getServerName()));
+}
+return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync 
with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+List<HRegionLocation> metaHRLs =
+master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+assertTrue(metaHRLs != null);
+assertFalse(metaHRLs.isEmpty());
+ZKWatcher zk = master.getZooKeeper();
+List<String> metaZnodes = zk.getMetaReplicaNodes();
+assertEquals(metaZnodes.size(), metaHRLs.size());
+List<HRegionLocation> actualHRLs = getCurrentMetaLocations(zk);
+Collections.sort(metaHRLs);
+Collections.sort(actualHRLs);
+assertEquals(actualHRLs, metaHRLs);
+  }
+
+  @Test public void testInitialMetaLocations() throws Exception {
+verifyCachedMetaLocations(TEST_UTIL.getMiniHBaseCluster().getMaster());
+  }
+
+  @Test public void testStandByMetaLocations() throws Exception {
+HMaster standBy = 
TEST_UTIL.getMiniHBaseCluster().startMaster().getMaster();
+verifyCachedMetaLocations(standBy);
+  }
+
+  /*
+   * Shuffles the meta region replicas around the cluster and makes sure the 
cache is 

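The `verifyCachedMetaLocations` helper quoted above compares the cached and ZK-derived location lists order-insensitively: sort both, then `assertEquals`. A minimal sketch of that comparison idiom (the `SortedCompareSketch` class and the `String` stand-ins for `HRegionLocation` are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedCompareSketch {
    // Order-insensitive equality: sort defensive copies, then compare element-wise.
    static boolean sameLocations(List<String> expected, List<String> actual) {
        List<String> a = new ArrayList<>(expected);
        List<String> b = new ArrayList<>(actual);
        Collections.sort(a);
        Collections.sort(b);
        return a.equals(b);
    }

    public static void main(String[] args) {
        System.out.println(sameLocations(List.of("rs2,16020,2", "rs1,16020,1"),
                                         List.of("rs1,16020,1", "rs2,16020,2")));
    }
}
```

This requires the element type to be `Comparable`, which is why the test can call `Collections.sort` on `HRegionLocation` lists directly.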
[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347656880
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
 
 Review comment:
   "initial"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

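The `populateInitialMetaLocations` loop quoted above is a bounded retry: attempt the ZK read, sleep between attempts, and give up once the `RetryCounter` is exhausted, leaving `znodes == null` so the method returns early. A stripped-down sketch of that shape (`BoundedRetrySketch` and its `Supplier`-based API are hypothetical; the real code uses HBase's `RetryCounter` and sleeps between attempts):

```java
import java.util.function.Supplier;

public class BoundedRetrySketch {
    // Attempt op up to maxAttempts times; null signals "retries exhausted",
    // mirroring the quoted loop that leaves znodes == null on failure.
    static <T> T fetchWithRetries(Supplier<T> op, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) {
                    return null; // exhausted: caller must cope with missing data
                }
                // real code: retryCounter.sleepUntilNextRetry() before looping
            }
        }
        return null;
    }
}
```

Returning a sentinel instead of rethrowing is what makes the "zombie cache" concern raised later in this thread possible: the caller silently proceeds without data.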

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347667370
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347650440
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int 
replicaId)
+  throws DeserializationException {
+RegionState.State state = RegionState.State.OPEN;
+ServerName serverName;
+if (data != null && data.length > 0 && ProtobufUtil.isPBMagicPrefix(data)) 
{
+  try {
+int prefixLen = ProtobufUtil.lengthOfPBMagic();
+ZooKeeperProtos.MetaRegionServer rl =
+ZooKeeperProtos.MetaRegionServer.parser().parseFrom(data, 
prefixLen,
+data.length - prefixLen);
+if (rl.hasState()) {
+  state = RegionState.State.convert(rl.getState());
+}
+HBaseProtos.ServerName sn = rl.getServer();
+serverName = ServerName.valueOf(
+sn.getHostName(), sn.getPort(), sn.getStartCode());
+  } catch (InvalidProtocolBufferException e) {
+throw new DeserializationException("Unable to parse meta region 
location");
+  }
+} else {
+  // old style of meta region location?
+  serverName = parseServerNameFrom(data);
+}
+if (serverName == null) {
+  state = RegionState.State.OFFLINE;
 
 Review comment:
  nit: let the default state value be `OFFLINE`; then this null-check and 
assignment are not necessary.


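The suggestion above is to initialize `state` to `OFFLINE` so the trailing null-check disappears: `OPEN` is only assigned once a server name is actually recovered. A toy sketch of that control flow (`MetaStateParseSketch` is hypothetical and uses a `String` in place of the protobuf-parsed `ServerName`):

```java
public class MetaStateParseSketch {
    enum State { OPEN, OFFLINE }

    // Default to OFFLINE; flip to OPEN only when a server name was parsed.
    static State stateFor(String serverName) {
        State state = State.OFFLINE;
        if (serverName != null) {
            state = State.OPEN;
        }
        return state;
    }

    public static void main(String[] args) {
        System.out.println(stateFor(null));          // no server known
        System.out.println(stateFor("rs1,16020,1")); // server known
    }
}
```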


[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347655680
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.RegionReplicaUtil;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  private final ZKWatcher watcher;
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+watcher = zkWatcher;
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
 
 Review comment:
   Thank you!




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347668152
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
 
 Review comment:
   Mind promoting all these juicy member comments to javadoc ?




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347655476
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
 
 Review comment:
   nit: "This class is thread-safe" is ambiguous to me. Maybe you mean "A 
single instance of this class can safely be shared across threads"?


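The thread-safety claim under discussion rests on the copy-on-write pattern described in the quoted member comment: writers copy the map under mutual exclusion and publish the copy atomically, so readers get a consistent snapshot without locking. A minimal sketch of the pattern (`CowMapSketch` is hypothetical; HBase's actual `CopyOnWriteArrayMap` is array-backed and more elaborate):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class CowMapSketch {
    // volatile publish: readers always see a fully-built, immutable snapshot.
    private volatile Map<Integer, String> snapshot = Collections.emptyMap();

    // Writers pay the O(n) copy; acceptable when the map is small (a few meta
    // replicas) and mutations are rare (ZK watch events).
    synchronized void put(int replicaId, String location) {
        Map<Integer, String> copy = new HashMap<>(snapshot);
        copy.put(replicaId, location);
        snapshot = Collections.unmodifiableMap(copy);
    }

    Map<Integer, String> get() {
        return snapshot; // lock-free read
    }
}
```

Under this pattern a single instance can safely be shared across threads, which is the sharper wording the reviewer asks for.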


[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347666141
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347669433
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
 
 Review comment:
   Yeah I think this class becomes a zombie if it's unable to locate the 
replica locations. It would be more resilient as a state machine that can 
transition back to repeat the work done in your `populateInitialMetaLocations` 
method.
   
   Alternatively, it looks like 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347658636
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
 
 Review comment:
   "... because stand-by masters can potentially start **before** the initial 
znode creation," right? If this code was executed after initial znode creation, 
there'd already be a znode to watch.
   
   Am I misunderstanding something?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

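The quoted comment above explains the copy-on-write tradeoff behind the meta location cache: writers pay for a full copy of the backing structure, while readers iterate over a consistent snapshot without locking. A minimal, self-contained sketch of that behavior, using the JDK's CopyOnWriteArrayList as a stand-in for HBase's CopyOnWriteArrayMap (class and replica names here are illustrative, not from the patch):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowSnapshotDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> locations = new CopyOnWriteArrayList<>();
        locations.add("replica-0@rs1");
        locations.add("replica-1@rs2");

        // The iterator is a snapshot taken at creation time:
        // concurrent writes are invisible to it.
        Iterator<String> snapshot = locations.iterator();
        locations.set(0, "replica-0@rs9"); // writer copies the backing array

        StringBuilder seen = new StringBuilder();
        while (snapshot.hasNext()) {
            seen.append(snapshot.next()).append(' ');
        }
        System.out.println(seen.toString().trim()); // prints "replica-0@rs1 replica-1@rs2"
        System.out.println(locations.get(0));       // prints "replica-0@rs9"
    }
}
```

The iterator created before the write never observes the mutation, which is why such a cache can serve client reads while a location update is in progress.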

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347635366
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int 
replicaId)
 
 Review comment:
   Is this a method that should exist in the non-shaded equivalent?




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347656602
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
 
 Review comment:
   What happens now? The znodes don't exist, we have no watchers established. 
Is this instance now a zombie? Should it abort the Master instance? Should it 
throw an exception to notify the caller that things didn't go as planned?
   
   If it's not a zombie and this is a perfectly fine state, why did we try to 
establish 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347657346
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
 
 Review comment:
   What thread are we blocking on this synchronous ZK lookup? I think it's the 
main thread for the master.



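The reviewers above question what should happen when the bounded retry loop in populateInitialMetaLocations exhausts its retries and the instance becomes a zombie. A fail-fast variant that surfaces the failure to the caller instead of returning silently can be sketched as follows (hypothetical helper names, not HBase's RetryCounter API):

```java
import java.util.concurrent.Callable;

public class BoundedRetry {
    // Retries the operation up to maxRetries extra times, sleeping between
    // attempts; on exhaustion it throws instead of swallowing the failure.
    static <T> T callWithRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(sleepMs);
            }
        }
        // Fail fast: the caller (e.g. master startup) decides whether to abort.
        throw new Exception("Retries exhausted after " + maxRetries, last);
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Succeeds on the third attempt.
        String v = callWithRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 1L);
        System.out.println(v + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```

Throwing on exhaustion lets the master either abort or re-enter the population step later, rather than leaving the cache silently unpopulated.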

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347676342
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java
 ##
 @@ -274,42 +269,17 @@ public static RegionState getMetaRegionState(ZKWatcher 
zkw) throws KeeperExcepti
* @throws KeeperException if a ZooKeeper operation fails
*/
   public static RegionState getMetaRegionState(ZKWatcher zkw, int replicaId)
-  throws KeeperException {
-RegionState.State state = RegionState.State.OPEN;
-ServerName serverName = null;
+  throws KeeperException {
+RegionState regionState = null;
 try {
   byte[] data = ZKUtil.getData(zkw, 
zkw.getZNodePaths().getZNodeForReplica(replicaId));
-  if (data != null && data.length > 0 && 
ProtobufUtil.isPBMagicPrefix(data)) {
-try {
-  int prefixLen = ProtobufUtil.lengthOfPBMagic();
-  ZooKeeperProtos.MetaRegionServer rl =
-ZooKeeperProtos.MetaRegionServer.parser().parseFrom(data, 
prefixLen,
-data.length - prefixLen);
-  if (rl.hasState()) {
-state = RegionState.State.convert(rl.getState());
-  }
-  HBaseProtos.ServerName sn = rl.getServer();
-  serverName = ServerName.valueOf(
-sn.getHostName(), sn.getPort(), sn.getStartCode());
-} catch (InvalidProtocolBufferException e) {
-  throw new DeserializationException("Unable to parse meta region 
location");
-}
-  } else {
-// old style of meta region location?
-serverName = ProtobufUtil.parseServerNameFrom(data);
-  }
+  regionState = ProtobufUtil.parseMetaRegionStateFrom(data, replicaId);
 
 Review comment:
   Ah, I see; the earlier code was moved.




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347656778
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
 
 Review comment:
   "initial"




[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347651220
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int 
replicaId)
 
 Review comment:
   Looks like there's no unit tests covering this new method, nor the 
`parseServerNameFrom` that you wrap. This parsing business seems pretty 
critical to correctness, so it would be nice to see direct unit test coverage.



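The parseMetaRegionStateFrom method under review must handle both old-style znode data (a bare server name) and new-style data carrying a protobuf magic prefix. A self-contained sketch of that dispatch, with a toy payload in place of the real protobuf message (the "PBUF" prefix follows HBase's convention, but the class and parse logic here are illustrative only):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MagicPrefixDemo {
    static final byte[] PB_MAGIC = "PBUF".getBytes(StandardCharsets.UTF_8);

    // True if the data begins with the protobuf magic prefix.
    static boolean isPBMagicPrefix(byte[] data) {
        return data != null && data.length >= PB_MAGIC.length
            && Arrays.equals(Arrays.copyOf(data, PB_MAGIC.length), PB_MAGIC);
    }

    // Dispatches on the prefix: new-style payload after the magic bytes,
    // otherwise treats the whole array as an old-style server name.
    static String parse(byte[] data) {
        if (isPBMagicPrefix(data)) {
            return "pb:" + new String(data, PB_MAGIC.length,
                data.length - PB_MAGIC.length, StandardCharsets.UTF_8);
        }
        return "legacy:" + new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(parse("PBUFhost,16000,1".getBytes(StandardCharsets.UTF_8))); // prints "pb:host,16000,1"
        System.out.println(parse("host:16000".getBytes(StandardCharsets.UTF_8)));       // prints "legacy:host:16000"
    }
}
```

Round-trip cases like these two (prefixed and bare) are the kind of direct unit coverage the reviewer asks for.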

[jira] [Commented] (HBASE-23317) Detect and sideline poison pill regions

2019-11-18 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977009#comment-16977009
 ] 

Sean Busbey commented on HBASE-23317:
-

Is this a dup of HBASE-23316?

> Detect and sideline poison pill regions
> ---
>
> Key: HBASE-23317
> URL: https://issues.apache.org/jira/browse/HBASE-23317
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Kyle Purtell
>Priority: Minor
>
> The master can track that a region deploy has been repeatedly crashing 
> regionservers and rather than continue to pass around the poison pill put its 
> assignment into an administratively failed state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-18511) Default no regions on master

2019-11-18 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977006#comment-16977006
 ] 

Nick Dimiduk commented on HBASE-18511:
--

Is {{hbase.balancer.tablesOnMaster}} supposed to be a boolean value, or a list 
of table names? We interpret it one way in {{ZNodeClearer}} and the other way in 
{{LoadBalancer}}...

> Default no regions on master
> 
>
> Key: HBASE-18511
> URL: https://issues.apache.org/jira/browse/HBASE-18511
> Project: HBase
>  Issue Type: Task
>  Components: master
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-18511.master.001.patch, 
> HBASE-18511.master.002.patch, HBASE-18511.master.003.patch, 
> HBASE-18511.master.004.patch, HBASE-18511.master.005.patch, 
> HBASE-18511.master.006.patch, HBASE-18511.master.007.patch, 
> HBASE-18511.master.008.patch, HBASE-18511.master.009.patch, 
> HBASE-18511.master.010.patch, HBASE-18511.master.011.patch, 
> HBASE-18511.master.012.patch, HBASE-18511.master.013.patch, 
> HBASE-18511.master.014.patch, HBASE-18511.master.015.patch
>
>
> Let this be umbrella issue for no-regions-on-master as default deploy (as it 
> was in branch-1).
> Also need to make sure we can run WITH regions on master; in particular 
> system tables with RPC short-circuit as it is now in hbase master.
> Background is that master branch carried a change that allowed Master carry 
> regions. On top of this improvement on branch-1, Master defaulted to carry 
> system tables only. No release was made with this configuration. Now we are 
> going to cut the 2.0.0 release, the decision is that hbase-2 should have the 
> same layout as hbase-1 so this issue implements the undoing of Master 
> carrying system tables by default (though the capability remains).





[GitHub] [hbase] saintstack opened a new pull request #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-18 Thread GitBox
saintstack opened a new pull request #847: HBASE-23315 Miscellaneous HBCK 
Report page cleanup
URL: https://github.com/apache/hbase/pull/847
 
 
* Add a bit of javadoc around SerialReplicationChecker.
* Minuscule edit to the profiler jsp page and then a bit of doc on how to 
make it work that might help.
* Add some detail if NPE getting BitSetNode to help w/ debug.
* Change HbckChore to log region names instead of encoded names; helps 
doing diagnostics; can take region name and query in shell to find out all 
about the region according to hbase:meta.
* Add some fix-it help inline in the HBCK Report page – how to fix.
* Add counts in procedures page so can see if making progress; move listing 
of WALs to end of the page.




[jira] [Updated] (HBASE-23317) Detect and sideline poison pill regions

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23317:

Description: The master can track that a region deploy has been repeatedly 
crashing regionservers and rather than continue to pass around the poison pill 
put its assignment into an administratively failed state.  (was: If a table 
coprocessor fails to load, rather than aborting, throw an exception which 
prevents the region from opening. This will lead to unresolvable regions in 
transition but in some circumstances this may be preferable to process aborts. 
On the other hand, there would be a new risk that the failure to load is a 
symptom of or a cause of regionserver global state corruption that eventually 
leads to other problems. Should at least be an option, though.  )

> Detect and sideline poison pill regions
> ---
>
> Key: HBASE-23317
> URL: https://issues.apache.org/jira/browse/HBASE-23317
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Kyle Purtell
>Priority: Minor
>
> The master can track that a region deploy has been repeatedly crashing 
> regionservers and rather than continue to pass around the poison pill put its 
> assignment into an administratively failed state.





[jira] [Updated] (HBASE-23317) Detect and sideline poison pill regions

2019-11-18 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-23317:

Summary: Detect and sideline poison pill regions  (was: An option to fail 
only the region open if a coprocessor fails to load)

> Detect and sideline poison pill regions
> ---
>
> Key: HBASE-23317
> URL: https://issues.apache.org/jira/browse/HBASE-23317
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Kyle Purtell
>Priority: Minor
>
> If a table coprocessor fails to load, rather than aborting, throw an 
> exception which prevents the region from opening. This will lead to 
> unresolvable regions in transition but in some circumstances this may be 
> preferable to process aborts. On the other hand, there would be a new risk 
> that the failure to load is a symptom of or a cause of regionserver global 
> state corruption that eventually leads to other problems. Should at least be 
> an option, though.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23317) An option to fail only the region open if a coprocessor fails to load

2019-11-18 Thread Andrew Kyle Purtell (Jira)
Andrew Kyle Purtell created HBASE-23317:
---

 Summary: An option to fail only the region open if a coprocessor 
fails to load
 Key: HBASE-23317
 URL: https://issues.apache.org/jira/browse/HBASE-23317
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Kyle Purtell


If a table coprocessor fails to load, rather than aborting, throw an exception 
which prevents the region from opening. This will lead to unresolvable regions 
in transition but in some circumstances this may be preferable to process 
aborts. On the other hand, there would be a new risk that the failure to load 
is a symptom of or a cause of regionserver global state corruption that 
eventually leads to other problems. Should at least be an option, though.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23316) RegionServers should refuse to load Regions with malformed coprocs, but not crash

2019-11-18 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created HBASE-23316:
---

 Summary: RegionServers should refuse to load Regions with 
malformed coprocs, but not crash
 Key: HBASE-23316
 URL: https://issues.apache.org/jira/browse/HBASE-23316
 Project: HBase
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


Currently, a region server will crash if it tries to load a region with a 
coprocessor that is malformed (such as not being on the RS's classpath). This 
can lead to a cascading "poison pill" in which the HMaster keeps reassigning 
the region to different region servers, bringing down server after server and 
endangering the whole cluster.

We definitely can't load the Region if the coproc is wrong, but neither should 
that harm other, correctly configured regions on the same server. 

In this JIRA, I'll change the behavior to fail to load the region, and 
increment a metric for region load failures. Future JIRAs can build on this, 
such as by having the HMaster stop trying to load a malformed region after 
some number of retries, absent user intervention. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23315) Miscellaneous HBCK Report page cleanup

2019-11-18 Thread Michael Stack (Jira)
Michael Stack created HBASE-23315:
-

 Summary: Miscellaneous HBCK Report page cleanup
 Key: HBASE-23315
 URL: https://issues.apache.org/jira/browse/HBASE-23315
 Project: HBase
  Issue Type: Improvement
Reporter: Michael Stack


A bunch of touch up on the hbck report page:

 * Add a bit of javadoc around SerialReplicationChecker.
 * Minuscule edit to the profiler jsp page and then a bit of doc on how to make 
it work that might help.
 * Add some detail if NPE getting BitSetNode, to help w/ debug.
 * Change HbckChore to log region names instead of encoded names; helps with 
diagnostics; can take the region name and query in the shell to find out all 
about the region according to hbase:meta.
 * Add some fix-it help inline in the HBCK Report page -- how to fix.
 * Add counts in procedures page so can see if making progress; move listing of 
WALs to end of the page.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-filesystem] liuml07 opened a new pull request #11: Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread GitBox
liuml07 opened a new pull request #11: Make HBaseObjectStoreSemantics 
FilterFileSystem
URL: https://github.com/apache/hbase-filesystem/pull/11
 
 
   https://issues.apache.org/jira/browse/HBASE-23314


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976958#comment-16976958
 ] 

Mingliang Liu commented on HBASE-23314:
---

[~wchevreuil], [~mackrorysd] and [~ste...@apache.org] Does this make sense? 
Thanks,

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate using the wrapped object store file system; e.g. 
> S3GuardTool expects the file system implementation to be S3A so it can 
> access the metadata store easily. A simple S3GuardTool run against HBOSS gets 
> a confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23314:
--
Issue Type: Improvement  (was: New Feature)

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate using the wrapped object store file system; e.g. 
> S3GuardTool expects the file system implementation to be S3A so it can 
> access the metadata store easily. A simple S3GuardTool run against HBOSS gets 
> a confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)
Mingliang Liu created HBASE-23314:
-

 Summary: Make HBaseObjectStoreSemantics FilterFileSystem
 Key: HBASE-23314
 URL: https://issues.apache.org/jira/browse/HBASE-23314
 Project: HBase
  Issue Type: New Feature
  Components: hboss
Reporter: Mingliang Liu
Assignee: Mingliang Liu


HBaseObjectStoreSemantics, as a wrapper around an object store file system 
implementation, currently extends FileSystem itself. There is no 
straightforward way to expose its wrapped file system. However, some tooling 
needs to operate using the wrapped object store file system; e.g. 
S3GuardTool expects the file system implementation to be S3A so it can access 
the metadata store easily. A simple S3GuardTool run against HBOSS gets a 
confusing error like "s3a://mybucket is not a S3A file system".

Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
system. Doing this should not break the HBOSS contract.
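The FilterFileSystem idea above is essentially the wrapper/delegation pattern with an escape hatch back to the wrapped instance. A minimal plain-Java sketch of that shape follows; the interface and class names here are stand-ins for illustration, not the real Hadoop FileSystem/FilterFileSystem API:

```java
// Stand-in types illustrating the delegation-with-unwrap pattern; these are
// NOT the actual Hadoop classes, just a sketch of the shape HBOSS could take.
public class RawFsDemo {
    interface FileSystemLike {
        String scheme();
    }

    // Stand-in for a concrete object store implementation such as S3A.
    static class S3ALike implements FileSystemLike {
        public String scheme() { return "s3a"; }
    }

    // Wrapper that delegates to an inner file system but can also expose it,
    // the way a FilterFileSystem-based HBOSS could via getRawFileSystem().
    static class FilteringFs implements FileSystemLike {
        private final FileSystemLike raw;
        FilteringFs(FileSystemLike raw) { this.raw = raw; }
        public String scheme() { return raw.scheme(); }
        // Tools like S3GuardTool could unwrap to reach S3A-specific features.
        FileSystemLike getRawFileSystem() { return raw; }
    }

    public static void main(String[] args) {
        FilteringFs fs = new FilteringFs(new S3ALike());
        System.out.println(fs.getRawFileSystem().scheme()); // prints "s3a"
    }
}
```

The point of the design is that callers holding the wrapper still see ordinary file system behavior, while tooling that genuinely needs the concrete implementation can unwrap instead of failing a type check.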



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23282) HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-18 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976952#comment-16976952
 ] 

Michael Stack commented on HBASE-23282:
---

Merged. Too many conflicts for branch-2.1.

> HBCKServerCrashProcedure for 'Unknown Servers'
> --
>
> Key: HBASE-23282
> URL: https://issues.apache.org/jira/browse/HBASE-23282
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, proc-v2
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> With an overdriving, sustained load, I can fairly easily manufacture an 
> hbase:meta table that references servers that are no longer in the live list 
> nor are members of deadservers; i.e. 'Unknown Servers'.  The new 'HBCK 
> Report' UI in Master has a section where it lists 'Unknown Servers' if any in 
> hbase:meta.
> Once in this state, the repair is awkward. Our assign/unassign Procedure is 
> particularly dogged about insisting that we confirm close/open of Regions 
> when it is going about its business, which is well and good if the server is 
> in the live/dead sets, but for an 'Unknown Server' we invariably end up 
> trying to confirm against a no-longer-present server (More on this in 
> follow-on issues).
> What is wanted is queuing of a ServerCrashProcedure for each 'Unknown 
> Server'. It would split any WALs (there shouldn't be any if the server was 
> restarted) and ideally it would cancel out any assigns and reassign regions 
> off the 'Unknown Server'. But the 'normal' SCP consults the in-memory 
> cluster state to figure what Regions were on the crashed server... and 
> 'Unknown Servers' have no state in the Master's in-memory maps of servers to 
> regions or in the DeadServers list, which works fine for the usual case.
> Suggestion here is that hbck2 be able to drive in a special SCP, one which 
> would get list of Regions by scanning hbase:meta rather than asking Master 
> memory; an HBCKSCP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23282) HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-18 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23282.
---
Fix Version/s: 2.2.3
   2.3.0
   3.0.0
 Release Note: hbck2 scheduleRecoveries will now run a SCP that also looks 
in hbase:meta for any references to the scheduled server -- not just consulting 
Master in-memory state -- just in case vestiges of the server are left over in 
hbase:meta.
 Assignee: Michael Stack
   Resolution: Fixed

> HBCKServerCrashProcedure for 'Unknown Servers'
> --
>
> Key: HBASE-23282
> URL: https://issues.apache.org/jira/browse/HBASE-23282
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, proc-v2
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> With an overdriving, sustained load, I can fairly easily manufacture an 
> hbase:meta table that references servers that are no longer in the live list 
> nor are members of deadservers; i.e. 'Unknown Servers'.  The new 'HBCK 
> Report' UI in Master has a section where it lists 'Unknown Servers' if any in 
> hbase:meta.
> Once in this state, the repair is awkward. Our assign/unassign Procedure is 
> particularly dogged about insisting that we confirm close/open of Regions 
> when it is going about its business, which is well and good if the server is 
> in the live/dead sets, but for an 'Unknown Server' we invariably end up 
> trying to confirm against a no-longer-present server (More on this in 
> follow-on issues).
> What is wanted is queuing of a ServerCrashProcedure for each 'Unknown 
> Server'. It would split any WALs (there shouldn't be any if the server was 
> restarted) and ideally it would cancel out any assigns and reassign regions 
> off the 'Unknown Server'. But the 'normal' SCP consults the in-memory 
> cluster state to figure what Regions were on the crashed server... and 
> 'Unknown Servers' have no state in the Master's in-memory maps of servers to 
> regions or in the DeadServers list, which works fine for the usual case.
> Suggestion here is that hbck2 be able to drive in a special SCP, one which 
> would get list of Regions by scanning hbase:meta rather than asking Master 
> memory; an HBCKSCP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack closed pull request #826: HBASE-23282 HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-18 Thread GitBox
saintstack closed pull request #826: HBASE-23282 HBCKServerCrashProcedure for 
'Unknown Servers'
URL: https://github.com/apache/hbase/pull/826
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on issue #826: HBASE-23282 HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-18 Thread GitBox
saintstack commented on issue #826: HBASE-23282 HBCKServerCrashProcedure for 
'Unknown Servers'
URL: https://github.com/apache/hbase/pull/826#issuecomment-555237978
 
 
   Closed. Merging by hand (with a FindBugs justification to address above 
complaint).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-18 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23313:
--
Component/s: hbck2

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master 
> and have another assume the Active Master role for a state-change to be 
> noticed. Better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-18 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23313:
--
Fix Version/s: 2.3.0
   3.0.0

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master 
> and have another assume the Active Master role for a state-change to be 
> noticed. Better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-18 Thread GitBox
Apache-HBase commented on issue #834: HBASE-23237 Negative sign in 
requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#issuecomment-555228202
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 51s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 30s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 34s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 32s |  master passed  |
   | -0 :warning: |  patch  |   4m 40s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 176m 45s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 237m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/834 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 55fd909c8d42 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-834/out/precommit/personality/provided.sh
 |
   | git revision | master / 8bfdfe1b85 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/3/testReport/
 |
   | Max. process+thread count | 4553 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #812: HBASE-23275: Track active master's address in ActiveMasterManager

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #812: HBASE-23275: Track active 
master's address in ActiveMasterManager
URL: https://github.com/apache/hbase/pull/812#discussion_r347628985
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
 ##
 @@ -57,12 +56,18 @@
   final AtomicBoolean clusterHasActiveMaster = new AtomicBoolean(false);
   final AtomicBoolean clusterShutDown = new AtomicBoolean(false);
 
+  // This server's information.
   private final ServerName sn;
   private int infoPort;
   private final Server master;
 
+  // Active master's server name. Invalidated anytime active master changes 
(based on ZK
+  // notifications) and lazily fetched on-demand.
+  // ServerName is immutable, so we don't need heavy synchronization around it.
 
 Review comment:
   No, this is a fair point. It would be nice to consolidate these concerns, 
but if we don't have the test coverage and you don't have the capacity for a 
manual test, better leave it as is.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-18 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976916#comment-16976916
 ] 

Michael Stack commented on HBASE-23313:
---

Link to issue where we did this for table state already.

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Major
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master 
> and have another assume the Active Master role for a state-change to be 
> noticed. Better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-18 Thread Michael Stack (Jira)
Michael Stack created HBASE-23313:
-

 Summary: [hbck2] setRegionState should update Master in-memory 
state too
 Key: HBASE-23313
 URL: https://issues.apache.org/jira/browse/HBASE-23313
 Project: HBase
  Issue Type: Bug
Reporter: Michael Stack


setRegionState changes the hbase:meta table info:state column. It does not 
alter the Master's in-memory state. This means you have to kill the Master and 
have another assume the Active Master role for a state-change to be noticed. 
Better if setRegionState just went via the Master and updated both the Master 
and hbase:meta.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv commented on issue #846: HBASE-23234 Provide .editorconfig based on checkstyle configuration

2019-11-18 Thread GitBox
bharathv commented on issue #846: HBASE-23234 Provide .editorconfig based on 
checkstyle configuration
URL: https://github.com/apache/hbase/pull/846#issuecomment-555225451
 
 
   Sean beat me to it. Looks like he put the same comment. I'm an intellij user 
but just out of curiosity I tried eclipse to see how the whole checkstyle thing 
works. Following are my observations fwiw.
   
   1. I tried eclipse-cs [1]. It fails with the following error, which 
apparently is a known problem: 
https://github.com/checkstyle/eclipse-cs/issues/107. Couldn't work around it.
   
   java.lang.NoClassDefFoundError: 
org/eclipse/jdt/internal/ui/preferences/PreferencesAccess
   at 
net.sf.eclipsecs.core.transformer.FormatterConfigWriter.writeCleanupSettings(FormatterConfigWriter.java:95)
   at 
net.sf.eclipsecs.core.transformer.FormatterConfigWriter.writeSettings(FormatterConfigWriter.java:89)
   at 
net.sf.eclipsecs.core.transformer.FormatterConfigWriter.<init>(FormatterConfigWriter.java:81)
   at 
net.sf.eclipsecs.core.transformer.CheckstyleTransformer.transformRules(CheckstyleTransformer.java:124)
   at 
net.sf.eclipsecs.core.jobs.TransformCheckstyleRulesJob.runInWorkspace(TransformCheckstyleRulesJob.java:119)
   at 
org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:42)
   at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
   
   2. This patch's editorconfig file doesn't work with my eclipse setup for 
whatever reason. I'm not able to figure out why.
   
   [1] 
https://stackoverflow.com/questions/984778/how-to-generate-an-eclipse-formatter-configuration-from-a-checkstyle-configurati


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bharathv commented on a change in pull request #807: HBASE-23259: Ability to start minicluster with pre-determined master ports

2019-11-18 Thread GitBox
bharathv commented on a change in pull request #807: HBASE-23259: Ability to 
start minicluster with pre-determined master ports
URL: https://github.com/apache/hbase/pull/807#discussion_r347624937
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -171,6 +171,11 @@
   /** Configuration key for master web API port */
   public static final String MASTER_INFO_PORT = "hbase.master.info.port";
 
+  /** Configuration key for the list of master host:ports **/
+  public static final String MASTER_ADDRS_KEY = "hbase.master.addrs";
 
 Review comment:
   Yep, I'll consolidate the parsing logic. It will come in the next patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #826: HBASE-23282 HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-18 Thread GitBox
Apache-HBase commented on issue #826: HBASE-23282 HBCKServerCrashProcedure for 
'Unknown Servers'
URL: https://github.com/apache/hbase/pull/826#issuecomment-555209116
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 20s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 37s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 19s |  hbase-server: The patch generated 1 
new + 48 unchanged - 0 fixed = 49 total (was 48)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 11s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 16s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  the patch passed  |
   | -1 :x: |  findbugs  |   3m 36s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 35s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 162m 15s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   1m  4s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 226m 37s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Possible null pointer dereference of ps in 
org.apache.hadoop.hbase.master.procedure.HBCKServerCrashProcedure.getRegionsOnCrashedServer(MasterProcedureEnv)
 on exception path  Dereferenced at HBCKServerCrashProcedure.java:ps in 
org.apache.hadoop.hbase.master.procedure.HBCKServerCrashProcedure.getRegionsOnCrashedServer(MasterProcedureEnv)
 on exception path  Dereferenced at HBCKServerCrashProcedure.java:[line 81] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-826/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/826 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b60a47b4ce4e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-826/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / ab63bde013 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-826/7/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-826/7/artifact/out/new-findbugs-hbase-server.html
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-826/7/testReport/
 |
   | Max. process+thread count | 4640 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-826/7/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log 
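The FindBugs complaint quoted in the report above ("Possible null pointer dereference of ps ... on exception path") describes a common shape: a local initialized to null, assigned inside a try, and then dereferenced after a catch that swallows the exception. A generic, hypothetical Java illustration of the buggy and fixed shapes (not the actual HBCKServerCrashProcedure code):

```java
// Hypothetical illustration of a FindBugs "null pointer dereference on
// exception path" finding; names and logic are stand-ins, not HBase code.
public class NullOnExceptionPath {
    static String load(boolean fail) throws Exception {
        if (fail) {
            throw new Exception("boom");
        }
        return "state";
    }

    // Buggy shape: if load() throws, ps stays null and the dereference
    // below throws NPE (the pattern FindBugs flags).
    static int buggy(boolean fail) {
        String ps = null;
        try {
            ps = load(fail);
        } catch (Exception e) {
            // swallowed; execution falls through with ps == null
        }
        return ps.length();
    }

    // Fixed shape: bail out (or rethrow) on the exception path so the
    // dereference is only reached with a non-null value.
    static int fixed(boolean fail) {
        String ps;
        try {
            ps = load(fail);
        } catch (Exception e) {
            return 0;
        }
        return ps.length();
    }

    public static void main(String[] args) {
        System.out.println(fixed(true));  // prints 0
        System.out.println(fixed(false)); // prints 5
    }
}
```

Adding an explicit early return (or a FindBugs justification annotation, as the merge comment above mentions) resolves the warning without changing the happy path.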

[GitHub] [hbase] ndimiduk commented on a change in pull request #807: HBASE-23259: Ability to start minicluster with pre-determined master ports

2019-11-18 Thread GitBox
ndimiduk commented on a change in pull request #807: HBASE-23259: Ability to 
start minicluster with pre-determined master ports
URL: https://github.com/apache/hbase/pull/807#discussion_r347605214
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -171,6 +171,11 @@
   /** Configuration key for master web API port */
   public static final String MASTER_INFO_PORT = "hbase.master.info.port";
 
+  /** Configuration key for the list of master host:ports **/
+  public static final String MASTER_ADDRS_KEY = "hbase.master.addrs";
 
 Review comment:
   Looking at our zookeeper connection string parsing code. We depend on the 
Hadoop `Configuration` object to handle parsing of the comma-delimited 
configuration value. We then manually check for `':'` characters to split out 
ports.
   
   If we're making a habit of doing this kind of parsing (lists of socket 
addresses), it's probably worth encapsulating the logic in a single place.
   
   
https://github.com/apache/hbase/blob/5e84997f2ffdbcf5f849d70c30ddbe2db4039ca4/hbase-common/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java#L97-L111
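   As a rough illustration of what such a consolidated helper might look like, here is a minimal, self-contained sketch. The class name, method signature, and default-port handling are assumptions for illustration only, not the actual ZKConfig or HBase API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for parsing a comma-delimited list of host[:port]
// entries, appending a caller-supplied default port where none is given.
public class HostPortList {
    static List<String> parse(String csv, int defaultPort) {
        List<String> out = new ArrayList<>();
        for (String part : csv.split(",")) {
            String hp = part.trim();
            if (hp.isEmpty()) {
                continue;
            }
            // Append the default port only when the entry has none.
            out.add(hp.contains(":") ? hp : hp + ":" + defaultPort);
        }
        return out;
    }

    public static void main(String[] args) {
        // prints [host1:16000, host2:16010, host3:16000]
        System.out.println(parse("host1,host2:16010, host3", 16000));
    }
}
```

   A single helper like this would let both the ZK quorum parsing and a new master-address list share one tested code path.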


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

