[jira] [Commented] (HBASE-23328) info:regioninfo goes wrong when region replicas enabled

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980659#comment-16980659
 ] 

Hudson commented on HBASE-23328:


Results for branch branch-2.1
[build #1718 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> info:regioninfo goes wrong when region replicas enabled
> ---
>
> Key: HBASE-23328
> URL: https://issues.apache.org/jira/browse/HBASE-23328
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
>
> Noticed that the info:regioninfo content in hbase:meta can become that of a 
> serialized replica. I think it is mostly harmless, but accounting and especially 
> debugging are frustrated because the hbase:meta row name does not match the 
> info:regioninfo.
> Here is an example:
> {code}
> t1,c6e977ef,1572669121340.0b455b2d57f91c153d5088533205c268. 
> column=info:regioninfo, timestamp=1574367093772, value={ENCODED => 
> 5199f7826c340ba944517e97c6ebaf04, NAME => 
> 't1,c6e977ef,1572669121340_0001.5199f7826c340ba944517e97c6ebaf04.', STARTKEY 
> => 'c6e977ef', ENDKEY => 'c72b0126', REPLICA_ID => 1}
> {code}
> Notice how the hbase:meta row name resembles that of the info:regioninfo content, 
> only the value lists a REPLICA_ID and the encoded name is different (as 
> it factors in the replica id).
> The original Region Replica design describes how the info:regioninfo is 
> supposed to have the default HRI serialized only. See comment on HRI changes 
> in 
> https://issues.apache.org/jira/secure/attachment/12627276/hbase-10347_redo_v8.patch
> -Going back over history, this may have been a bug since Region Replicas came 
> in.- <= No. Looking at an old cluster w/ region replicas, it doesn't have 
> this issue.
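To make this kind of mismatch easier to spot on a live cluster, one could scan hbase:meta and flag rows whose info:regioninfo deserializes to a non-default replica, or to an encoded name that is not part of the row itself. The following is only an illustrative sketch against the public client API (the class name is made up; it is not part of the fix):

{code}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

/** Sketch: flag hbase:meta rows whose info:regioninfo deserializes to a replica HRI. */
public class FindReplicaRegionInfosSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(
             new Scan().addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER))) {
      for (Result r : scanner) {
        byte[] value = r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
        if (value == null) {
          continue;
        }
        RegionInfo ri = RegionInfo.parseFrom(value);
        // A healthy row should carry the default replica and its own encoded name.
        if (ri.getReplicaId() != RegionInfo.DEFAULT_REPLICA_ID
            || !Bytes.toString(r.getRow()).contains(ri.getEncodedName())) {
          System.out.println("Suspect row " + Bytes.toStringBinary(r.getRow())
              + " -> " + ri.getRegionNameAsString());
        }
      }
    }
  }
}
{code}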



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23325) [UI]rsgoup average load keep two decimals

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980658#comment-16980658
 ] 

Hudson commented on HBASE-23325:


Results for branch branch-2.1
[build #1718 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1718//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [UI]rsgoup average load keep two decimals
> -
>
> Key: HBASE-23325
> URL: https://issues.apache.org/jira/browse/HBASE-23325
> Project: HBase
>  Issue Type: Improvement
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: 20191121165713.png
>
>
> In */master-status*, the rsgroup average load should keep two decimals.
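For illustration only, trimming the displayed value to two decimals is a one-line formatting change; a minimal standalone sketch (the real fix lives in the rsgroup section of the master-status UI template, and the value below is made up):

{code}
public class AverageLoadFormatSketch {
  public static void main(String[] args) {
    double averageLoad = 12.34567;                           // hypothetical average load
    System.out.println(String.format("%.2f", averageLoad));  // prints 12.35
  }
}
{code}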



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23085) Network and Data related Actions

2019-11-22 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-23085.
-
Resolution: Fixed

addendum pushed to needed branches.

> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Add additional actions to:
>  * manipulate network packets with tc (reorder, lose, ...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.
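As a rough illustration of one of the proposed network actions, here is a standalone sketch that shells out to tc/netem to inject latency and later revert it. It deliberately does not extend the chaos-monkey Action API; the class name, the sudo requirement, and the exact tc arguments are all assumptions:

{code}
import java.io.IOException;

/** Sketch of a network-delay action; assumes passwordless sudo and the tc tool on the host. */
public class AddNetworkDelaySketch {
  private final String networkInterface;
  private final String delay;

  public AddNetworkDelaySketch(String networkInterface, String delay) {
    this.networkInterface = networkInterface;
    this.delay = delay;
  }

  /** Adds an egress delay with tc/netem. */
  public void perform() throws IOException, InterruptedException {
    run("sudo", "tc", "qdisc", "add", "dev", networkInterface, "root", "netem", "delay", delay);
  }

  /** Reverts the change by deleting the root qdisc. */
  public void revert() throws IOException, InterruptedException {
    run("sudo", "tc", "qdisc", "del", "dev", networkInterface, "root");
  }

  private void run(String... cmd) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    if (p.waitFor() != 0) {
      throw new IOException("Command failed: " + String.join(" ", cmd));
    }
  }
}
{code}

Usage would be something like new AddNetworkDelaySketch("eth0", "100ms").perform(), followed by revert() to restore the interface.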



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23117) Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-22 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980655#comment-16980655
 ] 

HBase QA commented on HBASE-23117:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
45s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
25s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
35s{color} | {color:red} hbase-server: The patch generated 2 new + 223 
unchanged - 0 fixed = 225 total (was 223) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}261m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}328m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hbase/pull/867 |
| JIRA Issue | HBASE-23117 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux b24cd14b8338 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-867/out/precommit/personality/provided.sh
 |
| git revision | branch-2 / c8592f1fb7 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/artifact/out/diff-checkstyle-hbase-server.txt
 

[GitHub] [hbase] Apache-HBase commented on issue #867: HBASE-23117: Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-22 Thread GitBox
Apache-HBase commented on issue #867: HBASE-23117: Bad enum in hbase:meta 
info:state column can fail loadMeta and stop startup
URL: https://github.com/apache/hbase/pull/867#issuecomment-557765439
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m 37s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 51s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 45s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   4m 25s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 22s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 35s |  hbase-server: The patch generated 2 
new + 223 unchanged - 0 fixed = 225 total (was 223)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  16m 59s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 47s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 261m  5s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 328m 40s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/867 |
   | JIRA Issue | HBASE-23117 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b24cd14b8338 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-867/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / c8592f1fb7 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/testReport/
 |
   | Max. process+thread count | 4943 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey merged pull request #871: HBASE-23085 Network and Data related Actions; ADDENDUM

2019-11-22 Thread GitBox
busbey merged pull request #871: HBASE-23085 Network and Data related Actions; 
ADDENDUM
URL: https://github.com/apache/hbase/pull/871
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-22 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-22969.
-
Resolution: Fixed

pushed addendum to master, branch-2, branch-2.2

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. For simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That could mean lots 
> of unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes it to the 'Filter' 
> subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 4; // example offset of the component within the value
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> This in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (FilterList.Operator.MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-22 Thread GitBox
Apache-HBase commented on issue #837: HBASE-23309: Adding the flexibility to 
ChainWalEntryFilter to filter the whole entry if all cells get filtered
URL: https://github.com/apache/hbase/pull/837#issuecomment-557759985
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 53s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m  2s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 14s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 39s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 27s |  hbase-server: The patch generated 5 
new + 3 unchanged - 0 fixed = 8 total (was 3)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 19s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  | 163m  3s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 234m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-837/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/837 |
   | JIRA Issue | HBASE-23309 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux e31d5e039289 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-837/out/precommit/personality/provided.sh
 |
   | git revision | master / ee730c8c79 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-837/5/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-837/5/testReport/
 |
   | Max. process+thread count | 4257 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-837/5/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23309) Add support in ChainWalEntryFilter to filter Entry if all cells get filtered through WalCellFilter

2019-11-22 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980640#comment-16980640
 ] 

HBase QA commented on HBASE-23309:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
39s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
27s{color} | {color:red} hbase-server: The patch generated 5 new + 3 unchanged 
- 0 fixed = 8 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}163m  
3s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}234m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-837/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hbase/pull/837 |
| JIRA Issue | HBASE-23309 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux e31d5e039289 

[jira] [Commented] (HBASE-23328) info:regioninfo goes wrong when region replicas enabled

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980632#comment-16980632
 ] 

Hudson commented on HBASE-23328:


Results for branch branch-2
[build #2362 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> info:regioninfo goes wrong when region replicas enabled
> ---
>
> Key: HBASE-23328
> URL: https://issues.apache.org/jira/browse/HBASE-23328
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
>
> Noticed that the info:regioninfo content in hbase:meta can become that of a 
> serialized replica. I think it is mostly harmless, but accounting and especially 
> debugging are frustrated because the hbase:meta row name does not match the 
> info:regioninfo.
> Here is an example:
> {code}
> t1,c6e977ef,1572669121340.0b455b2d57f91c153d5088533205c268. 
> column=info:regioninfo, timestamp=1574367093772, value={ENCODED => 
> 5199f7826c340ba944517e97c6ebaf04, NAME => 
> 't1,c6e977ef,1572669121340_0001.5199f7826c340ba944517e97c6ebaf04.', STARTKEY 
> => 'c6e977ef', ENDKEY => 'c72b0126', REPLICA_ID => 1}
> {code}
> Notice how the hbase:meta row name resembles that of the info:regioninfo content, 
> only the value lists a REPLICA_ID and the encoded name is different (as 
> it factors in the replica id).
> The original Region Replica design describes how the info:regioninfo is 
> supposed to have the default HRI serialized only. See comment on HRI changes 
> in 
> https://issues.apache.org/jira/secure/attachment/12627276/hbase-10347_redo_v8.patch
> -Going back over history, this may have been a bug since Region Replicas came 
> in.- <= No. Looking at an old cluster w/ region replicas, it doesn't have 
> this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23325) [UI]rsgoup average load keep two decimals

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980631#comment-16980631
 ] 

Hudson commented on HBASE-23325:


Results for branch branch-2
[build #2362 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [UI]rsgoup average load keep two decimals
> -
>
> Key: HBASE-23325
> URL: https://issues.apache.org/jira/browse/HBASE-23325
> Project: HBase
>  Issue Type: Improvement
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: 20191121165713.png
>
>
> In */master-status*, the rsgroup average load should keep two decimals.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23307) Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980633#comment-16980633
 ] 

Hudson commented on HBASE-23307:


Results for branch branch-2
[build #2362 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2362//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation
> 
>
> Key: HBASE-23307
> URL: https://issues.apache.org/jira/browse/HBASE-23307
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Run the ReplicationBarrierCleaner chore when hbck2 invokes fixMeta. It will 
> clean up stale rep_barrier entries in hbase:meta, which can help when trying to 
> restore hbase:meta to a good state.
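For context, the stale entries in question live in the rep_barrier column family of hbase:meta. A read-only sketch that merely lists rows still carrying such cells might look like the following (illustrative only: the family name is taken from the issue text, the class name is made up, and deciding which entries are actually stale is the cleaner's job):

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

/** Sketch: list hbase:meta rows that still carry rep_barrier cells (read-only, no cleanup). */
public class ListReplicationBarriersSketch {
  public static void main(String[] args) throws Exception {
    byte[] repBarrierFamily = Bytes.toBytes("rep_barrier");
    try (Connection conn = ConnectionFactory.createConnection();
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(repBarrierFamily))) {
      for (Result r : scanner) {
        System.out.println("rep_barrier cells on row " + Bytes.toStringBinary(r.getRow()));
      }
    }
  }
}
{code}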



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22174) Remove error prone from our precommit javac check

2019-11-22 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980615#comment-16980615
 ] 

Duo Zhang commented on HBASE-22174:
---

Oh, seems no newer version yet...

> Remove error prone from our precommit javac check
> -
>
> Key: HBASE-22174
> URL: https://issues.apache.org/jira/browse/HBASE-22174
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-22174-HBASE-22174.patch, HBASE-22174.patch
>
>
> As the result is not stable. We can add it back as a separate check later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22174) Remove error prone from our precommit javac check

2019-11-22 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980613#comment-16980613
 ] 

Duo Zhang commented on HBASE-22174:
---

We can try a newer version to see if it gives a stable result?

> Remove error prone from our precommit javac check
> -
>
> Key: HBASE-22174
> URL: https://issues.apache.org/jira/browse/HBASE-22174
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-22174-HBASE-22174.patch, HBASE-22174.patch
>
>
> As the result is not stable. We can add it back as a separate check later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23321) [hbck2] fixHoles of fixMeta doesn't update in-memory state

2019-11-22 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980602#comment-16980602
 ] 

Michael Stack commented on HBASE-23321:
---

[~ndimiduk] makes sense. Make new issue sir?

> [hbck2] fixHoles of fixMeta doesn't update in-memory state
> --
>
> Key: HBASE-23321
> URL: https://issues.apache.org/jira/browse/HBASE-23321
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> If hbase:meta has holes, you can run fixMeta from hbck2. This will close the 
> holes, but you have to restart the Master for it to notice the new region 
> additions. Also, we were plugging holes by adding regions but with no state for 
> the region, which makes it awkward to subsequently assign. Fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-22 Thread GitBox
Apache-HBase commented on issue #872: HBASE-23333 Provide call context around 
timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872#issuecomment-557748567
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 25s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  hbase-client: The patch 
generated 0 new + 7 unchanged - 3 fixed = 7 total (was 10)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 45s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  49m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-872/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/872 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 435e5cb8da54 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-872/out/precommit/personality/provided.sh
 |
   | git revision | master / ee730c8c79 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-872/1/testReport/
 |
   | Max. process+thread count | 294 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-872/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on issue #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-22 Thread GitBox
ndimiduk commented on issue #872: HBASE-23333 Provide call context around 
timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872#issuecomment-557740423
 
 
   Initial commit simply includes the output of `Call#toString` where before we 
only had `call.id`. Looking to see if there's a sensible way to include a call 
stack that tracks up into the application code. Will be back.
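A rough standalone sketch of the general idea, for discussion only: capture the application-side stack when the call is created and attach it, together with the call description, to the eventual timeout. `CallInfo` is a hypothetical stand-in for the client's Call class; none of this is the actual HBase code.

```java
import java.util.concurrent.TimeoutException;

public final class CallContextSketch {

  /** Hypothetical stand-in for Call#toString(): id, method, and timeout settings. */
  static final class CallInfo {
    final int id;
    final String method;
    final long rpcTimeoutMs;

    CallInfo(int id, String method, long rpcTimeoutMs) {
      this.id = id;
      this.method = method;
      this.rpcTimeoutMs = rpcTimeoutMs;
    }

    @Override
    public String toString() {
      return "callId=" + id + ", method=" + method + ", rpcTimeout=" + rpcTimeoutMs;
    }
  }

  public static void main(String[] args) {
    // Capture the caller's stack on the application thread when the call is created.
    Throwable origin = new Throwable("call origin");
    CallInfo call = new CallInfo(508, "Get", 60000L);

    // Later, when the timeout timer fires, report the full call context and keep the
    // origin stack alongside the timer thread's own stack.
    TimeoutException te = new TimeoutException("Call timed out after 60006 ms: " + call);
    te.addSuppressed(origin);
    te.printStackTrace();
  }
}
```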


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk opened a new pull request #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-22 Thread GitBox
ndimiduk opened a new pull request #872: HBASE-23333 Provide call context 
around timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23333) Provide call context around timeouts and other failure scenarios

2019-11-22 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23333:
-
Summary: Provide call context around timeouts and other failure scenarios  
(was: Provide call context call timeouts and other failure scenarios)

> Provide call context around timeouts and other failure scenarios
> 
>
> Key: HBASE-23333
> URL: https://issues.apache.org/jira/browse/HBASE-23333
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Operability
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> Failure diagnosis isn't very straightforward with call stack traces like
> {noformat}
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to 
> c501d28b0dfa/172.17.0.2:45657 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, 
> waitTime=60006, rpcTimeout=6
>   at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:204)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:392)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:97)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
>   at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96)
>   at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:680)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:755)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:483)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, 
> waitTime=60006, rpcTimeout=6
>   at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)
>   ... 4 more{noformat}
> Probably the "affectsVersions" goes back farther than this.
> See if we can provide more calling context, even stack trace from the call 
> origin, in these exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23333) Provide call context call timeouts and other failure scenarios

2019-11-22 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-23333:


 Summary: Provide call context call timeouts and other failure 
scenarios
 Key: HBASE-23333
 URL: https://issues.apache.org/jira/browse/HBASE-23333
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 3.0.0, 2.3.0
Reporter: Nick Dimiduk


Failure diagnosis isn't very straightforward with call stack traces like
{noformat}
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to 
c501d28b0dfa/172.17.0.2:45657 failed on local exception: 
org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, waitTime=60006, 
rpcTimeout=6
at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:204)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:392)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:97)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:423)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96)
at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:680)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:755)
at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:483)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, 
waitTime=60006, rpcTimeout=6
at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)
... 4 more{noformat}
Probably the "affectsVersions" goes back farther than this.

See if we can provide more calling context, even stack trace from the call 
origin, in these exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23333) Provide call context call timeouts and other failure scenarios

2019-11-22 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23333:
-
Component/s: Operability

> Provide call context call timeouts and other failure scenarios
> --
>
> Key: HBASE-23333
> URL: https://issues.apache.org/jira/browse/HBASE-23333
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Operability
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> Failure diagnosis isn't very straightforward with call stack traces like
> {noformat}
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to 
> c501d28b0dfa/172.17.0.2:45657 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, 
> waitTime=60006, rpcTimeout=6
>   at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:204)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:392)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:97)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
>   at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96)
>   at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:680)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:755)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:483)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=508, 
> waitTime=60006, rpcTimeout=6
>   at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)
>   ... 4 more{noformat}
> Probably the "affectsVersions" goes back farther than this.
> See if we can provide more calling context, even stack trace from the call 
> origin, in these exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-22 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r349833173
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe (a single instance of this class can be shared by multiple 
threads without race
+ * conditions).
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  /**
+   * Maximum number of times we retry when ZK operation times out.
+   */
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  /**
+   * Sleep interval ms between ZK operation retries.
+   */
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  /**
+   * Cached meta region locations indexed by replica ID.
+   * CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+   * client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+   * that should be OK since the size of the list is often small and mutations 
are not too often
+   * and we do not need to block client requests while mutations are in 
progress.
+   */
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> 
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  /**
+   * Populates the current snapshot of meta locations from ZK. If no meta 
znodes exist, it registers
+   * a watcher on base znode to check for any CREATE/DELETE events on the 
children.
+   */
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+while (retryCounter.shouldRetry()) {
+  try {
+znodes = watcher.getMetaReplicaNodesAndWatch();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+if (!retryCounter.shouldRetry()) {
+  // Retries exhausted and watchers not set. This is not a desirable 
state since the cache
+  // could remain stale forever. Propagate the exception.
+  watcher.abort("Error populating meta locations", ke);
+  return;
+}
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-22 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r349833521
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe (a single instance of this class can be shared by multiple 
threads without race
+ * conditions).
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  /**
+   * Maximum number of times we retry when ZK operation times out.
+   */
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  /**
+   * Sleep interval ms between ZK operation retries.
+   */
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  /**
+   * Cached meta region locations indexed by replica ID.
+   * CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+   * client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+   * that should be OK since the size of the list is often small and mutations 
are not too often
+   * and we do not need to block client requests while mutations are in 
progress.
+   */
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  /**
+   * Populates the current snapshot of meta locations from ZK. If no meta 
znodes exist, it registers
+   * a watcher on base znode to check for any CREATE/DELETE events on the 
children.
+   */
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+while (retryCounter.shouldRetry()) {
+  try {
+znodes = watcher.getMetaReplicaNodesAndWatch();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+if (!retryCounter.shouldRetry()) {
+  // Retries exhausted and watchers not set. This is not a desirable 
state since the cache
+  // could remain stale forever. Propagate the exception.
+  watcher.abort("Error populating meta locations", ke);
+  return;
+}
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  

[GitHub] [hbase] Apache-HBase commented on issue #869: HBASE-22969 A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position; ADDENDUM

2019-11-22 Thread GitBox
Apache-HBase commented on issue #869: HBASE-22969 A new binary component 
comparator(BinaryComponentComparator) to perform comparison of arbitrary length 
and position; ADDENDUM
URL: https://github.com/apache/hbase/pull/869#issuecomment-557731977
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 21s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 51s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   6m 19s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  24m 11s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 325m 54s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 404m 17s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
   |   | hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController |
   |   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
   |   | hadoop.hbase.master.TestAssignmentManagerMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/869 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux fbb0afdde7b5 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-869/out/precommit/personality/provided.sh
 |
   | git revision | master / 8e52339cb8 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/2/testReport/
 |
   | Max. process+thread count | 4932 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23328) info:regioninfo goes wrong when region replicas enabled

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980569#comment-16980569
 ] 

Hudson commented on HBASE-23328:


Results for branch branch-2.2
[build #701 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> info:regioninfo goes wrong when region replicas enabled
> ---
>
> Key: HBASE-23328
> URL: https://issues.apache.org/jira/browse/HBASE-23328
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
>
> Noticed that the info:regioninfo content in hbase:meta can become that of a 
> serialized replica. I think it mostly harmless but accounting especially 
> debugging is frustrated because hbase:meta row name does not match the 
> info:regioninfo.
> Here is an example:
> {code}
> t1,c6e977ef,1572669121340.0b455b2d57f91c153d5088533205c268. 
> column=info:regioninfo, timestamp=1574367093772, value={ENCODED => 
> 5199f7826c340ba944517e97c6ebaf04, NAME => 
> 't1,c6e977ef,1572669121340_0001.5199f7826c340ba944517e97c6ebaf04.', STARTKEY 
> => 'c6e977ef', ENDKEY => 'c72b0126', REPLICA_ID => 1}
> {code}
> Notice how hbase:meta row name is like that of the info:regioninfo content 
> only we are listing REPLICA_ID content and the encoded name is different (as 
> it factors replicaid).
> The original Region Replica design describes how the info:regioninfo is 
> supposed to have the default HRI serialized only. See comment on HRI changes 
> in 
> https://issues.apache.org/jira/secure/attachment/12627276/hbase-10347_redo_v8.patch
> -Going back over history, this may have been a bug since Region Replicas came 
> in.- <= No. Looking at an old cluster w/ region replicas, it doesn't have 
> this issue.
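
As a hedged illustration of the mismatch described above (not the fix itself), the sketch 
below builds a hypothetical primary RegionInfo, derives its replica_id=1 sibling, and maps it 
back to the default replica whose serialized form is what info:regioninfo is supposed to 
carry; the table name and keys are placeholders:
{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class DefaultReplicaExample {
  public static void main(String[] args) {
    // A made-up primary region for table 't1'.
    RegionInfo primary = RegionInfoBuilder.newBuilder(TableName.valueOf("t1"))
        .setStartKey(Bytes.toBytes("c6e977ef"))
        .setEndKey(Bytes.toBytes("c72b0126"))
        .build();
    // Its replica_id=1 sibling has a different encoded name (replicaId is factored in).
    RegionInfo replicaOne = RegionReplicaUtil.getRegionInfoForReplica(primary, 1);
    // Mapping any replica back to the default replica recovers the HRI that
    // should be the one serialized into info:regioninfo.
    RegionInfo backToDefault = RegionReplicaUtil.getRegionInfoForDefaultReplica(replicaOne);

    System.out.println(primary.getEncodedName());
    System.out.println(replicaOne.getEncodedName());    // differs from primary
    System.out.println(backToDefault.getEncodedName()); // matches primary again
  }
}
{code}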



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23307) Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980570#comment-16980570
 ] 

Hudson commented on HBASE-23307:


Results for branch branch-2.2
[build #701 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation
> 
>
> Key: HBASE-23307
> URL: https://issues.apache.org/jira/browse/HBASE-23307
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Run the ReplicationBarrierCleaner chore when hbck2 invokes fixMeta. It will 
> clean up stale rep_barrier entries in hbase:meta which can help if trying to 
> do a restore of hbase:meta to good state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23325) [UI]rsgoup average load keep two decimals

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980568#comment-16980568
 ] 

Hudson commented on HBASE-23325:


Results for branch branch-2.2
[build #701 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/701//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [UI]rsgoup average load keep two decimals
> -
>
> Key: HBASE-23325
> URL: https://issues.apache.org/jira/browse/HBASE-23325
> Project: HBase
>  Issue Type: Improvement
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: 20191121165713.png
>
>
> In */master-status*, the rsgroup average load should keep two decimals. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] sandeepvinayak commented on issue #837: HBASE-23309: Adding the flexibility to ChainWalEntryFilter to filter the whole entry if all cells get filtered

2019-11-22 Thread GitBox
sandeepvinayak commented on issue #837: HBASE-23309: Adding the flexibility to 
ChainWalEntryFilter to filter the whole entry if all cells get filtered
URL: https://github.com/apache/hbase/pull/837#issuecomment-557727366
 
 
   @wchevreuil Removed setting the flag through ReplicationPeerConfig and instead 
made a subclass CustomChainWalEntryFilter that an endpoint can use if it wants 
this feature in its WAL filter. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #859: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #859: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/859#discussion_r349826991
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterRestartFailover.java
 ##
 @@ -116,7 +116,7 @@ public void test() throws Exception {
 .filter(p -> (p instanceof ServerCrashProcedure) &&
 ((ServerCrashProcedure) 
p).getServerName().equals(SERVER_FOR_TEST)).findAny();
 assertTrue("Should have one SCP for " + SERVER_FOR_TEST, 
procedure.isPresent());
-assertFalse("Submit the SCP for the same serverName " + SERVER_FOR_TEST + 
" which should fail",
+assertTrue("Submit the SCP for the same serverName " + SERVER_FOR_TEST + " 
which should fail",
 
 Review comment:
   Huh? why does this assertion flip?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #740: HBASE-23197 'IllegalArgumentException: Wrong FS' on edits replay when…

2019-11-22 Thread GitBox
Apache-HBase commented on issue #740: HBASE-23197 'IllegalArgumentException: 
Wrong FS' on edits replay when…
URL: https://github.com/apache/hbase/pull/740#issuecomment-557724328
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 28s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  master passed  |
   | +0 :ok: |  spotbugs  |   3m 34s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 33s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 44s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 318m 22s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 375m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.master.procedure.TestSCPWithReplicas |
   |   | hadoop.hbase.master.TestSplitWALManager |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.master.TestAssignmentManagerMetrics |
   |   | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleAsyncWAL 
|
   |   | hadoop.hbase.replication.TestReplicationStatusAfterLagging |
   |   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
   |   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-740/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/740 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux ff8ee0ad499e 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-740/out/precommit/personality/provided.sh
 |
   | git revision | master / 8e52339cb8 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-740/7/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-740/7/testReport/
 |
   | Max. process+thread count | 5086 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-740/7/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #855: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #855: HBASE-23322 [hbck2] 
Simplification on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/855#discussion_r349824911
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
 ##
 @@ -1484,15 +1485,21 @@ public int getNumRegionsOpened() {
 return 0;
   }
 
-  public long submitServerCrash(ServerName serverName, boolean shouldSplitWal) 
{
-boolean carryingMeta;
-long pid;
+  /**
+   * Usually run by the Master in reaction to server crash during normal 
processing.
+   * Can also be invoked via external RPC to effect repair; in the latter case,
+   * the 'force' flag is set so we push through the SCP though context may 
indicate
+   * already-running-SCP (An old SCP may have exited abnormally, or damaged 
cluster
+   * may still have references in hbase:meta to 'Unknown Servers' -- servers 
that
+   * are not online or in dead servers list, etc.)
+   * @param force Set if the request came in externally over RPC (via hbck2). 
Force means
+   *  run the SCP even if it seems as though there might be an 
outstanding
+   *  SCP running.
+   * @return pid of scheduled SCP or {@link Procedure#NO_PROC_ID} if none 
scheduled.
+   */
+  public long submitServerCrash(ServerName serverName, boolean shouldSplitWal, 
boolean force) {
 
 Review comment:
   I don't love this `force` flag. I get that all we have to look for are 
side-effects, but it seems like there should be a way of accounting for the actively 
running procedures and at least waiting for the current one to finish before 
starting the next. Or maybe the procedure implementations can negotiate the 
mutual exclusion lock between themselves? This code would unconditionally 
schedule the action, and the action itself would refuse to run as long as 
another one is in flight. And then, of course, the second action might wake up 
and find that it has no work to do.
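
A rough sketch of that alternative (schedule unconditionally and let the action itself back 
off while a peer is in flight), using a plain AtomicBoolean as the mutual-exclusion token; 
this is only the shape of the idea, not the ProcedureV2 API:
{code}
import java.util.concurrent.atomic.AtomicBoolean;

public class CrashRecoveryAction {
  // One token per server name in a real system; a single flag keeps the sketch short.
  private static final AtomicBoolean IN_FLIGHT = new AtomicBoolean(false);

  /** Always scheduled; decides for itself whether there is work to do. */
  public void run(String serverName) {
    if (!IN_FLIGHT.compareAndSet(false, true)) {
      // A peer action is already handling this server; wake up later and re-check.
      System.out.println("Another recovery is in flight for " + serverName + "; skipping.");
      return;
    }
    try {
      // ... do the recovery work (split WALs, reassign regions) ...
      System.out.println("Recovered " + serverName);
    } finally {
      IN_FLIGHT.set(false);
    }
  }
}
{code}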


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23321) [hbck2] fixHoles of fixMeta doesn't update in-memory state

2019-11-22 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980551#comment-16980551
 ] 

Nick Dimiduk commented on HBASE-23321:
--

{noformat}
@@ -99,11 +98,13 @@ class MetaFixer {
   HRegion.createRegionDir(configuration, ri, 
FSUtils.getRootDir(configuration));
   // If an error here, then we'll have a region in the filesystem but not
   // in hbase:meta (if the below fails). Should be able to rerun the fix.
-  // The second call to createRegionDir will just go through. Idempotent.
-  Put put = MetaTableAccessor.makePutFromRegionInfo(ri, 
HConstants.LATEST_TIMESTAMP);
-  MetaTableAccessor.putsToMetaTable(this.masterServices.getConnection(),
-  Collections.singletonList(put));
-  LOG.info("Fixed hole by adding {}; region is NOT assigned (assign to 
online).", ri);
+  // Add to hbase:meta and then update in-memory state so it knows of new
+  // Region; addRegionToMeta adds region and adds a state column set to 
CLOSED.
+  MetaTableAccessor.addRegionToMeta(this.masterServices.getConnection(), 
ri);
+  this.masterServices.getAssignmentManager().getRegionStates().
+  updateRegionState(ri, RegionState.State.CLOSED);
+  LOG.info("Fixed hole by adding {} in CLOSED state; region NOT assigned 
(assign to ONLINE).",
+  ri);
 }
   }{noformat}
 

This is good. I wonder – after looping through all the holes, is there a way to 
fast-track getting these new regions assigned? Some "need assignment run" flag 
we can tickle to get things going all the sooner?

+1

> [hbck2] fixHoles of fixMeta doesn't update in-memory state
> --
>
> Key: HBASE-23321
> URL: https://issues.apache.org/jira/browse/HBASE-23321
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> If hbase:meta has holes, you can run fixMeta from hbck2. This will close the 
> holes but you have to restart the Master for it to notice the new region 
> additions. Also, we were plugging holes by adding regions but no state for 
> the region which makes it awkward to subsequently assign. Fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on issue #851: Hbase 23321

2019-11-22 Thread GitBox
ndimiduk commented on issue #851: Hbase 23321
URL: https://github.com/apache/hbase/pull/851#issuecomment-557717436
 
 
   It's hard to review this one boss...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on issue #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-22 Thread GitBox
ndimiduk commented on issue #847: HBASE-23315 Miscellaneous HBCK Report page 
cleanup
URL: https://github.com/apache/hbase/pull/847#issuecomment-557715913
 
 
   Belated +1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous 
HBCK Report page cleanup
URL: https://github.com/apache/hbase/pull/847#discussion_r349817668
 
 

 ##
 File path: hbase-server/src/main/resources/hbase-webapps/master/procedures.jsp
 ##
 @@ -81,11 +81,14 @@
 Errors
 Parameters
 
-<% for (Procedure proc : procedures) { 
+<%
 
 Review comment:
   I can't believe we're still actively maintaining JSP in 2020.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous 
HBCK Report page cleanup
URL: https://github.com/apache/hbase/pull/847#discussion_r349817483
 
 

 ##
 File path: hbase-server/src/main/resources/hbase-webapps/master/hbck.jsp
 ##
 @@ -142,10 +142,18 @@
   Orphan Regions on RegionServer
 
   
+  
+
+  The below are Regions we've lost account of. To be safe, run bulk 
load of any data found in these Region orphan directories back into the HBase 
cluster.
+  First make sure hbase:meta is in healthy state; run 'hbkc2 fixMeta' 
to be sure. Once this is done, per Region below, run a bulk
+  load -- '$ hbase completebulkload REGION_DIR_PATH TABLE_NAME' -- and 
then delete the desiccated directory content (HFiles are removed upon 
successful load; all that is left are empty directories
 
 Review comment:
   Yuck. In a table with lots of busted regions, this `completeBulkLoad` would 
be tedious. Maybe we need a new `hbck2 bulkloadOrphanedRegions` command that 
can identify the orphans, bulk load them, and clean up the husks.
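
For many busted regions, a programmatic loop over the orphan directories might look like the 
hedged sketch below; it assumes the BulkLoadHFiles tool API (HBase 2.2+), and the parent path 
and table name are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.tool.BulkLoadHFiles;

public class LoadOrphanRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // Placeholder: the parent directory holding the orphan region dirs reported by hbck.
    Path orphanParent = new Path("/hbase-orphans/t1");
    TableName table = TableName.valueOf("t1");
    BulkLoadHFiles loader = BulkLoadHFiles.create(conf);
    for (FileStatus regionDir : fs.listStatus(orphanParent)) {
      if (!regionDir.isDirectory()) {
        continue;
      }
      // Each region dir contains family subdirectories of HFiles, which is the
      // layout bulkLoad expects; HFiles are moved out on successful load.
      loader.bulkLoad(table, regionDir.getPath());
    }
  }
}
{code}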


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous HBCK Report page cleanup

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #847: HBASE-23315 Miscellaneous 
HBCK Report page cleanup
URL: https://github.com/apache/hbase/pull/847#discussion_r349816994
 
 

 ##
 File path: hbase-server/src/main/resources/hbase-webapps/master/hbck.jsp
 ##
 @@ -142,10 +142,18 @@
   Orphan Regions on RegionServer
 
   
+  
+
+  The below are Regions we've lost account of. To be safe, run bulk 
load of any data found in these Region orphan directories back into the HBase 
cluster.
+  First make sure hbase:meta is in healthy state; run 'hbkc2 fixMeta' 
to be sure. Once this is done, per Region below, run a bulk
 
 Review comment:
   "`hbck2 fixMeta`"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-22 Thread GitBox
Apache-HBase commented on issue #868: HBASE-23296 Add CompositeBucketCache to 
support tiered BC
URL: https://github.com/apache/hbase/pull/868#issuecomment-557711774
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 48s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 54s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 53s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 29s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  hbase-server: The patch 
generated 0 new + 57 unchanged - 2 fixed = 57 total (was 59)  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  The patch passed checkstyle 
in hbase-external-blockcache  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   6m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  24m 58s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 329m 45s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m 48s |  hbase-external-blockcache in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 419m 26s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestConnection |
   |   | hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
   |   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
   |   | hadoop.hbase.master.TestSplitWALManager |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.master.TestAssignmentManagerMetrics |
   |   | hadoop.hbase.client.TestFromClientSide3 |
   |   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
   |   | hadoop.hbase.master.TestMasterShutdown |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/868 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 3a0b48a060e2 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-868/out/precommit/personality/provided.sh
 |
   | git revision | master / 3b0c276aa3 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/2/testReport/
 |
   | Max. process+thread count | 4870 (vs. ulimit of 1) |
   | modules | C: hbase-server hbase-external-blockcache U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Resolved] (HBASE-23332) [HBCKReport] Split Regions shown as Overlaps in 'Overlap' section

2019-11-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23332.
---
Resolution: Cannot Reproduce

Resolving. Lost logs. Seems like root cause is corrupt procedure. Spent time 
verifying we don't drop 'split/offline' flags when serializing to hbase:meta 
and that seems fine. Resolving because unable to debug.

> [HBCKReport] Split Regions shown as Overlaps in 'Overlap' section
> -
>
> Key: HBASE-23332
> URL: https://issues.apache.org/jira/browse/HBASE-23332
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, UI
>Reporter: Michael Stack
>Priority: Major
>
> The new 'HBCK Report' page has to be exacting else makes for wild goose chase 
> or worse, operator damage of running cluster.
> I just came across instances where split parents are reported as overlapping 
> their daughters:
> {code}
>  {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
> 't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
> '2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
> 8cbe15b2f59d69974357e8800a0bfbbc, NAME => 
> 't1,2ac082e1,1574362260851.8cbe15b2f59d69974357e8800a0bfbbc.', STARTKEY => 
> '2ac082e1', ENDKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f'}
>  {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
> 't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
> '2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
> bd062ce8e9c99a6988f0a8223168e028, NAME => 
> 't1,2ae3529d-1d72-4250-9bd8-4e9b9959284f,1574362260851.bd062ce8e9c99a6988f0a8223168e028.',
>  STARTKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f', ENDKEY => 
> '2b020c18'}
> {code}
> Need to fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23332) [HBCKReport] Split Regions shown as Overlaps in 'Overlap' section

2019-11-22 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980518#comment-16980518
 ] 

Michael Stack commented on HBASE-23332:
---

Something else is going on here. Somehow we dropped the parent split/offline 
flag.

> [HBCKReport] Split Regions shown as Overlaps in 'Overlap' section
> -
>
> Key: HBASE-23332
> URL: https://issues.apache.org/jira/browse/HBASE-23332
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, UI
>Reporter: Michael Stack
>Priority: Major
>
> The new 'HBCK Report' page has to be exacting else makes for wild goose chase 
> or worse, operator damage of running cluster.
> I just came across instances where split parents are reported as overlapping 
> their daughters:
> {code}
>  {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
> 't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
> '2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
> 8cbe15b2f59d69974357e8800a0bfbbc, NAME => 
> 't1,2ac082e1,1574362260851.8cbe15b2f59d69974357e8800a0bfbbc.', STARTKEY => 
> '2ac082e1', ENDKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f'}
>  {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
> 't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
> '2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
> bd062ce8e9c99a6988f0a8223168e028, NAME => 
> 't1,2ae3529d-1d72-4250-9bd8-4e9b9959284f,1574362260851.bd062ce8e9c99a6988f0a8223168e028.',
>  STARTKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f', ENDKEY => 
> '2b020c18'}
> {code}
> Need to fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
Apache-HBase commented on issue #864: HBASE-23313 [hbck2] setRegionState should 
update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#issuecomment-557684756
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  4s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m  8s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 49s |  master passed  |
   | -0 :warning: |  patch  |   4m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  4s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 39s |  hbase-client: The patch generated 3 
new + 289 unchanged - 2 fixed = 292 total (was 291)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 33s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   8m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 41s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 50s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 164m 29s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   1m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 240m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/864 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 12aa3a90d1f0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-864/out/precommit/personality/provided.sh
 |
   | git revision | master / 8e52339cb8 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/3/artifact/out/diff-checkstyle-hbase-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/3/testReport/
 |
   | Max. process+thread count | 4939 (vs. ulimit of 1) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With 

[jira] [Created] (HBASE-23332) [HBCKReport] Split Regions shown as Overlaps in 'Overlap' section

2019-11-22 Thread Michael Stack (Jira)
Michael Stack created HBASE-23332:
-

 Summary: [HBCKReport] Split Regions shown as Overlaps in 'Overlap' 
section
 Key: HBASE-23332
 URL: https://issues.apache.org/jira/browse/HBASE-23332
 Project: HBase
  Issue Type: Bug
  Components: hbck2, UI
Reporter: Michael Stack


The new 'HBCK Report' page has to be exacting else makes for wild goose chase 
or worse, operator damage of running cluster.

I just came across instances where split parents are reported as overlapping 
their daughters:
{code}
 {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
'2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
8cbe15b2f59d69974357e8800a0bfbbc, NAME => 
't1,2ac082e1,1574362260851.8cbe15b2f59d69974357e8800a0bfbbc.', STARTKEY => 
'2ac082e1', ENDKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f'}
 {ENCODED => 22776817918e40d0ba93eb48314d65a1, NAME => 
't1,2ac082e1,1572669261019.22776817918e40d0ba93eb48314d65a1.', STARTKEY => 
'2ac082e1', ENDKEY => '2b020c18'}  {ENCODED => 
bd062ce8e9c99a6988f0a8223168e028, NAME => 
't1,2ae3529d-1d72-4250-9bd8-4e9b9959284f,1574362260851.bd062ce8e9c99a6988f0a8223168e028.',
 STARTKEY => '2ae3529d-1d72-4250-9bd8-4e9b9959284f', ENDKEY => 
'2b020c18'}
{code}

Need to fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-22 Thread Clay B. (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980482#comment-16980482
 ] 

Clay B. commented on HBASE-22969:
-

It seems with [~psomogyi]'s addendum this is hopefully resolved now?

I was able to pull master and run:
{code}
mvn install -DskipTests
cd hbase-client
mvn -PerrorProne clean test-compile -DskipTests=true
cd ../hbase-server
mvn -PerrorProne clean test-compile -DskipTests=true
{code}

And get a successful Maven run all times. Though I do see many {{[WARNING]}} 
entries from Error Prone, I do not see any for 
{{BinaryComponentComparator.java}} now.

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Lets say you have composite key: a+b+c+d. And for simplicity assume that 
> a,b,c, and d all are 4 byte integers.
> Now, if you want to execute a query which is semantically same to following 
> sql:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client side filtering. That could be lots 
> of unwanted data going through various software components and network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> 

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r349774266
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe (a single instance of this class can be shared by multiple 
threads without race
+ * conditions).
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  /**
+   * Maximum number of times we retry when ZK operation times out.
+   */
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  /**
+   * Sleep interval ms between ZK operation retries.
+   */
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  /**
+   * Cached meta region locations indexed by replica ID.
+   * CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+   * client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+   * that should be OK since the size of the list is often small and mutations 
are not too often
+   * and we do not need to block client requests while mutations are in 
progress.
+   */
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  /**
+   * Populates the current snapshot of meta locations from ZK. If no meta 
znodes exist, it registers
+   * a watcher on base znode to check for any CREATE/DELETE events on the 
children.
+   */
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+while (retryCounter.shouldRetry()) {
+  try {
+znodes = watcher.getMetaReplicaNodesAndWatch();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+if (!retryCounter.shouldRetry()) {
+  // Retries exhausted and watchers not set. This is not a desirable 
state since the cache
+  // could remain stale forever. Propagate the exception.
+  watcher.abort("Error populating meta locations", ke);
+  return;
+}
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  

[GitHub] [hbase] ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-22 Thread GitBox
ndimiduk commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r349775033
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe (a single instance of this class can be shared by multiple 
threads without race
+ * conditions).
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  /**
+   * Maximum number of times we retry when ZK operation times out.
+   */
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  /**
+   * Sleep interval ms between ZK operation retries.
+   */
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  /**
+   * Cached meta region locations indexed by replica ID.
+   * CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+   * client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+   * that should be OK since the size of the list is often small and mutations 
are not too often
+   * and we do not need to block client requests while mutations are in 
progress.
+   */
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  /**
+   * Populates the current snapshot of meta locations from ZK. If no meta 
znodes exist, it registers
+   * a watcher on base znode to check for any CREATE/DELETE events on the 
children.
+   */
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+while (retryCounter.shouldRetry()) {
+  try {
+znodes = watcher.getMetaReplicaNodesAndWatch();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+if (!retryCounter.shouldRetry()) {
+  // Retries exhausted and watchers not set. This is not a desirable 
state since the cache
+  // could remain stale forever. Propagate the exception.
+  watcher.abort("Error populating meta locations", ke);
+  return;
+}
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  

[jira] [Commented] (HBASE-22174) Remove error prone from our precommit javac check

2019-11-22 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980457#comment-16980457
 ] 

Nick Dimiduk commented on HBASE-22174:
--

What can we do to get this enabled again in pre-commit? It's still enabled in 
nightly, where it's a little late. I would prefer that it's enabled everywhere 
or disabled everywhere.

> Remove error prone from our precommit javac check
> -
>
> Key: HBASE-22174
> URL: https://issues.apache.org/jira/browse/HBASE-22174
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-22174-HBASE-22174.patch, HBASE-22174.patch
>
>
> As the result is not stable. We can add it back as a separate check later.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2019-11-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980454#comment-16980454
 ] 

Bharath Vissapragada commented on HBASE-23330:
--

{quote} If we leave it in ZK {quote}

We will still have it in ZK, but clients aren't expected to have access to it 
(or to go ZK-less, like you said). So in a way that breaks the objective. I don't 
think we'd have much performance overhead because the information looked up is 
very small, is cached on the client side, and not every client does it (there is 
a special subset). Thoughts?

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] busbey merged pull request #869: HBASE-22969 A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position; ADDENDUM

2019-11-22 Thread GitBox
busbey merged pull request #869: HBASE-22969 A new binary component 
comparator(BinaryComponentComparator) to perform comparison of arbitrary length 
and position; ADDENDUM
URL: https://github.com/apache/hbase/pull/869
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on issue #869: HBASE-22969 A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position; ADDENDUM

2019-11-22 Thread GitBox
busbey commented on issue #869: HBASE-22969 A new binary component 
comparator(BinaryComponentComparator) to perform comparison of arbitrary length 
and position; ADDENDUM
URL: https://github.com/apache/hbase/pull/869#issuecomment-557667194
 
 
   sorry, missed that it was test code. also the update looks good.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-22 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980450#comment-16980450
 ] 

Nick Dimiduk commented on HBASE-22969:
--

Looks like this patch introduced some error prone issues that are failing 
nightly builds on master and branch-2. [~udaikashyap] mind fixing with an 
addendum?

 
{noformat}
$ mvn -PerrorProne clean test-compile -DskipTests=true
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure: 
[ERROR] 
/Users/ndimiduk/repos/apache/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFiltersWithBinaryComponentComparator.java:[197,51]
 error: [ArrayToString] Calling toString on an array does not provide useful 
information
[ERROR] (see https://errorprone.info/bugpattern/ArrayToString)
[ERROR]   Did you mean 'LOG.info("added row:" + 
Arrays.toString(Hex.encodeHex(key)) + "with value 'abc'");'?
[ERROR] 
/Users/ndimiduk/repos/apache/hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFiltersWithBinaryComponentComparator.java:[201,51]
 error: [ArrayToString] Calling toString on an array does not provide useful 
information
[ERROR] (see https://errorprone.info/bugpattern/ArrayToString)
[ERROR]   Did you mean 'LOG.info("added row:" + 
Arrays.toString(Hex.encodeHex(key)) + "with value 'xyz'");'?
[ERROR] -> [Help 1] {noformat}
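
For reference, the fix error-prone suggests boils down to wrapping the char[] 
returned by Hex.encodeHex in Arrays.toString. A minimal, self-contained sketch 
(the class name and key value below are made up for illustration only):

{code}
import java.util.Arrays;
import org.apache.commons.codec.binary.Hex;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ArrayToStringFixExample {
  private static final Logger LOG = LoggerFactory.getLogger(ArrayToStringFixExample.class);

  public static void main(String[] args) {
    byte[] key = new byte[] { 0x0a, 0x0b, 0x0c };
    // Arrays.toString renders the hex characters instead of the array's
    // identity hash, which is what the ArrayToString bug pattern flags.
    LOG.info("added row:" + Arrays.toString(Hex.encodeHex(key)) + " with value 'abc'");
  }
}
{code}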

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. And for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That could be lots 
> of unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass to the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new 

[jira] [Commented] (HBASE-23189) Finalize I/O optimized MOB compaction

2019-11-22 Thread Vladimir Rodionov (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980442#comment-16980442
 ] 

Vladimir Rodionov commented on HBASE-23189:
---

Closing, passes stress tests up to 6M (above 6M HBase fails with 
NotServingRegionExceptions, which is not related to the feature but a master 
branch stability issue). Will mark this feature as *experimental* in the 
release notes. 

> Finalize I/O optimized MOB compaction
> -
>
> Key: HBASE-23189
> URL: https://issues.apache.org/jira/browse/HBASE-23189
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> +corresponding test cases
> The current code for I/O optimized compaction has not been tested and 
> verified yet. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23189) Finalize I/O optimized MOB compaction

2019-11-22 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov resolved HBASE-23189.
---
Resolution: Fixed

> Finalize I/O optimized MOB compaction
> -
>
> Key: HBASE-23189
> URL: https://issues.apache.org/jira/browse/HBASE-23189
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> +corresponding test cases
> The current code for I/O optimized compaction has not been tested and 
> verified yet. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-557655734
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 40s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 49s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   6m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 20s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 47s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 30s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 039f0ebcaa4c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh
 |
   | git revision | master / 8e52339cb8 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/7/testReport/
 |
   | Max. process+thread count | 1636 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/7/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2019-11-22 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980385#comment-16980385
 ] 

Wellington Chevreuil commented on HBASE-23330:
--

If we leave it in ZK, would it completely break the objective we are trying to 
achieve here? Are we trying to go totally ZK-less, or just trying to minimise 
the overhead of requests on it? Because requiring clients to do HTTP requests on 
a master web interface does not seem lighter or faster than looking at a ZK znode.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] risdenk commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
risdenk commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-557633285
 
 
   Latest push adds comment about deprecation handling and why not use Hadoop 
`Configuration` deprecation handling. Also fixes the license header to remove 
copyright.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349723969
 
 

 ##
 File path: 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
 ##
 @@ -249,6 +263,27 @@ protected void setupParamters() throws IOException {
 pauseMonitor.start();
   }
 
+  private String getSpengoPrincipal(Configuration conf, String host) throws 
IOException {
+String principal = conf.get(THRIFT_SPNEGO_PRINCIPAL_KEY);
 
 Review comment:
   I added a comment specifically for this case.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349722573
 
 

 ##
 File path: 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
 ##
 @@ -249,6 +263,27 @@ protected void setupParamters() throws IOException {
 pauseMonitor.start();
   }
 
+  private String getSpengoPrincipal(Configuration conf, String host) throws 
IOException {
+String principal = conf.get(THRIFT_SPNEGO_PRINCIPAL_KEY);
 
 Review comment:
   So I don't think using the Hadoop `Configuration` deprecations will work 
here. The existing config key `hbase.thrift.kerberos.principal` is still valid 
and needs to be used for Kerberos communication between the HBase Thrift Server 
and the backend HBase master/RS. The new config key `hbase.thrift.spnego.principal` 
should be used for SPNEGO only (handling SPNEGO over HTTP, not backend 
communication). 
   
   So it's not really a deprecation per se, but more a configuration split of 
the principal/keytab used for SPNEGO versus backend communication. In an ideal 
world, the config `hbase.thrift.kerberos.principal` would never have done 
double duty.
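
   A rough sketch of the backward-compatible lookup being described -- purely 
illustrative, with the helper class and constant names assumed; only the two 
config key strings come from the discussion above:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;

public final class SpnegoPrincipalResolver {
  // New SPNEGO-only key, plus the old key that historically did double duty.
  private static final String THRIFT_SPNEGO_PRINCIPAL_KEY = "hbase.thrift.spnego.principal";
  private static final String THRIFT_KERBEROS_PRINCIPAL_KEY = "hbase.thrift.kerberos.principal";

  private SpnegoPrincipalResolver() {
  }

  /**
   * Prefer the dedicated SPNEGO principal; fall back to the old Kerberos
   * principal key so existing configurations keep working unchanged.
   */
  public static String getSpnegoPrincipal(Configuration conf, String host) throws IOException {
    String principal = conf.get(THRIFT_SPNEGO_PRINCIPAL_KEY);
    if (principal == null || principal.isEmpty()) {
      principal = conf.get(THRIFT_KERBEROS_PRINCIPAL_KEY);
    }
    // Expand a _HOST placeholder, if present, to the server's hostname.
    return SecurityUtil.getServerPrincipal(principal, host);
  }
}
{code}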


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349715610
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHbck.java
 ##
 @@ -182,6 +183,23 @@ public void testSetTableStateInMeta() throws Exception {
   prevState.isDisabled());
   }
 
+  @Test
+  public void testSetRegionStateInMEta() throws Exception {
 
 Review comment:
   Well spotted!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349715407
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 ##
 @@ -2465,6 +2466,39 @@ public GetTableStateResponse 
setTableStateInMeta(RpcController controller,
 }
   }
 
+  /**
+   * Update state of the region in meta only. This is required by hbck in some situations to clean up
+   * stuck assign/unassign region procedures for the table.
+   *
+   * @return previous state of the region
+   */
+  @Override
+  public MasterProtos.GetRegionStateResponse 
setRegionStateInMeta(RpcController controller,
+MasterProtos.SetRegionStateInMetaRequest request) throws ServiceException {
+try {
+  RegionInfo info = this.master.getAssignmentManager().
+loadRegionFromMeta(request.getRegionInfo().getRegionEncodedName());
+  LOG.trace("region info loaded from meta table: {}", info);
+  RegionState prevState = 
this.master.getAssignmentManager().getRegionStates().
+getRegionState(info);
+  RegionState newState = RegionState.convert(request.getRegionState());
+  LOG.info("{} set region={} state from {} to {}", 
master.getClientIdAuditPrefix(),
+info, prevState.getState(), newState.getState());
+  Put metaPut = MetaTableAccessor.makePutFromRegionInfo(info, 
System.currentTimeMillis());
+  metaPut.addColumn(HConstants.CATALOG_FAMILY,
+HConstants.STATE_QUALIFIER, Bytes.toBytes(newState.getState().name()));
+  List<Put> putList = new ArrayList<>();
+  putList.add(metaPut);
+  MetaTableAccessor.putsToMetaTable(this.master.getConnection(), putList);
+  //Loads from meta again to refresh AM cache with the new region state
+  
this.master.getAssignmentManager().loadRegionFromMeta(info.getEncodedName());
+  return MasterProtos.GetRegionStateResponse.newBuilder().
+setRegionState(prevState.convert()).build();
+} catch (Exception e) {
+  throw new ServiceException(e);
+}
 
 Review comment:
   Yeah, had thought about that after this commit. Let me refactor it to 
receive list of region encoded names.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
wchevreuil commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349714891
 
 

 ##
 File path: hbase-protocol-shaded/src/main/protobuf/Master.proto
 ##
 @@ -1152,6 +1161,10 @@ service HbckService {
   rpc SetTableStateInMeta(SetTableStateInMetaRequest)
 returns(GetTableStateResponse);
 
+  /** Update state of the table in meta only*/
+  rpc SetRegionStateInMeta(SetRegionStateInMetaRequest)
+returns(GetRegionStateResponse);
 
 Review comment:
   Ah, yeah, I actually just used `SetTableStateInMeta` as my template here. I 
can fix it to be consistent with the method name. Also just noticed a 
copy mistake in the comment. Will correct that as well on the next commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23331) Document HBASE-18095

2019-11-22 Thread Bharath Vissapragada (Jira)
Bharath Vissapragada created HBASE-23331:


 Summary: Document HBASE-18095
 Key: HBASE-23331
 URL: https://issues.apache.org/jira/browse/HBASE-23331
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Bharath Vissapragada
Assignee: Bharath Vissapragada


Just a placeholder for documenting the parent jira. We should talk about the 
new configurations added, how to use them, and the intention behind the design. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2019-11-22 Thread Bharath Vissapragada (Jira)
Bharath Vissapragada created HBASE-23330:


 Summary:   Expose cluster ID for clients using it for delegation 
token based auth
 Key: HBASE-23330
 URL: https://issues.apache.org/jira/browse/HBASE-23330
 Project: HBase
  Issue Type: Sub-task
  Components: Client, master
Affects Versions: 3.0.0
Reporter: Bharath Vissapragada
Assignee: Bharath Vissapragada


As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
delegation-based auth. 

{quote}
There is an additional complication here for token-based authentication. When a 
delegation token is used for SASL authentication, the client uses the cluster 
ID obtained from Zookeeper to select the token identifier to use. So there 
would also need to be some Zookeeper-less, unauthenticated way to obtain the 
cluster ID as well.
{quote}

Once we move ZK out of the picture, cluster ID sits behind an end point that 
needs to be authenticated. Figure out a way to expose this to clients.

One suggestion in the comments (from Andrew)

{quote}
 Cluster ID lookup is most easily accomplished with a new servlet on the 
HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It can't 
share the RPC server endpoint when SASL is enabled because any interaction with 
that endpoint must be authenticated. This is ugly but alternatives seem worse. 
One alternative would be a second RPC port for APIs that do not / cannot 
require prior authentication.
{quote}

There could be implications if SPNEGO is enabled on these http(s) end points. 
We need to make sure that it is handled.
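
As a rough, purely illustrative sketch of the servlet suggestion quoted above 
(the class name, constructor injection, and wiring are assumptions, not the 
actual HBase implementation), something like the following could serve the 
cluster ID as plain text from the masters' HTTP(S) endpoint, left 
unauthenticated so that delegation-token clients can bootstrap:

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical servlet: returns the cluster ID as text/plain. */
public class ClusterIdServlet extends HttpServlet {
  private final String clusterId; // assumed to be handed in by the master at startup

  public ClusterIdServlet(String clusterId) {
    this.clusterId = clusterId;
  }

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // Serve the cluster ID without authentication so token-based clients can
    // pick the right token identifier without ZooKeeper access.
    resp.setContentType("text/plain");
    resp.getWriter().write(clusterId);
  }
}
{code}

SPNEGO handling on the HTTP(S) endpoints would still need an exemption for this 
path, per the note above.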



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349697648
 
 

 ##
 File path: hbase-protocol-shaded/src/main/protobuf/Master.proto
 ##
 @@ -1152,6 +1161,10 @@ service HbckService {
   rpc SetTableStateInMeta(SetTableStateInMetaRequest)
 returns(GetTableStateResponse);
 
+  /** Update state of the table in meta only*/
+  rpc SetRegionStateInMeta(SetRegionStateInMetaRequest)
+returns(GetRegionStateResponse);
 
 Review comment:
   I see this pattern in setting table state, where we reused a 
GetTableStateResponse as the return from the SetTableState method. Usually the 
response has the same prefix as the request -- i.e. the name of the method. I 
suppose this is ok. Maybe one day we'll have a method that just queries the 
region state, and when that is added, we'll need this GetRegionStateResponse again.
   
   Just noting that this is breaking the general pattern.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349698814
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 ##
 @@ -2465,6 +2466,39 @@ public GetTableStateResponse 
setTableStateInMeta(RpcController controller,
 }
   }
 
+  /**
+   * Update state of the region in meta only. This is required by hbck in some situations to clean up
+   * stuck assign/unassign region procedures for the table.
+   *
+   * @return previous state of the region
+   */
+  @Override
+  public MasterProtos.GetRegionStateResponse 
setRegionStateInMeta(RpcController controller,
+MasterProtos.SetRegionStateInMetaRequest request) throws ServiceException {
+try {
+  RegionInfo info = this.master.getAssignmentManager().
+loadRegionFromMeta(request.getRegionInfo().getRegionEncodedName());
+  LOG.trace("region info loaded from meta table: {}", info);
+  RegionState prevState = 
this.master.getAssignmentManager().getRegionStates().
+getRegionState(info);
+  RegionState newState = RegionState.convert(request.getRegionState());
+  LOG.info("{} set region={} state from {} to {}", 
master.getClientIdAuditPrefix(),
+info, prevState.getState(), newState.getState());
+  Put metaPut = MetaTableAccessor.makePutFromRegionInfo(info, 
System.currentTimeMillis());
+  metaPut.addColumn(HConstants.CATALOG_FAMILY,
+HConstants.STATE_QUALIFIER, Bytes.toBytes(newState.getState().name()));
+  List<Put> putList = new ArrayList<>();
+  putList.add(metaPut);
+  MetaTableAccessor.putsToMetaTable(this.master.getConnection(), putList);
+  //Loads from meta again to refresh AM cache with the new region state
+  
this.master.getAssignmentManager().loadRegionFromMeta(info.getEncodedName());
+  return MasterProtos.GetRegionStateResponse.newBuilder().
+setRegionState(prevState.convert()).build();
+} catch (Exception e) {
+  throw new ServiceException(e);
+}
 
 Review comment:
   This is great, but what do you think about doing more than one Region per RPC?
   
   I'm thinking of the case where you have a cluster with thousands of Regions 
and perhaps a few hundred need their state set. If we only do a single Region 
at a time, it will take a long time setting state on hundreds of Regions. We'd 
have to change the hbck2 command too so it took more than one Region in the 
list.
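
   As a minimal sketch of the batching idea (the helper class and its signature 
are assumptions; the proto/RPC change to carry multiple regions per request 
would still be needed), the meta writes themselves batch naturally, since 
putsToMetaTable already takes a list:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.master.RegionState;
import org.apache.hadoop.hbase.util.Bytes;

public final class BatchedRegionStateUpdate {
  private BatchedRegionStateUpdate() {
  }

  /**
   * Build one Put per region and write them all to hbase:meta in a single
   * batch, instead of one round trip per region.
   */
  public static void setStatesInMeta(Connection connection,
      Map<RegionInfo, RegionState.State> newStates) throws IOException {
    List<Put> puts = new ArrayList<>(newStates.size());
    long now = System.currentTimeMillis();
    for (Map.Entry<RegionInfo, RegionState.State> entry : newStates.entrySet()) {
      Put put = MetaTableAccessor.makePutFromRegionInfo(entry.getKey(), now);
      put.addColumn(HConstants.CATALOG_FAMILY, HConstants.STATE_QUALIFIER,
          Bytes.toBytes(entry.getValue().name()));
      puts.add(put);
    }
    MetaTableAccessor.putsToMetaTable(connection, puts);
  }
}
{code}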


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
saintstack commented on a change in pull request #864: HBASE-23313 [hbck2] 
setRegionState should update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#discussion_r349698921
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHbck.java
 ##
 @@ -182,6 +183,23 @@ public void testSetTableStateInMeta() throws Exception {
   prevState.isDisabled());
   }
 
+  @Test
+  public void testSetRegionStateInMEta() throws Exception {
 
 Review comment:
   s/MEta/Meta/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-11-22 Thread Udai Bhan Kashyap (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980315#comment-16980315
 ] 

Udai Bhan Kashyap commented on HBASE-22969:
---

Thanks [~psomogyi] . Would you be kind enough to describe the issues or provide 
some hints?

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Assignee: Udai Bhan Kashyap
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.0006.patch, HBASE-22969.0007.patch, 
> HBASE-22969.0008.patch, HBASE-22969.0009.patch, HBASE-22969.0010.patch, 
> HBASE-22969.0011.patch, HBASE-22969.0012.patch, HBASE-22969.0013.patch, 
> HBASE-22969.0014.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. And for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That could be lots 
> of unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass to the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> This in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"), 1));
> FilterList fl = new FilterList(MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23280) Purge rep_barrier:seqnumDuringOpen on delete of Region

2019-11-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23280.
---
Resolution: Not A Problem

Resolving as 'Not a problem' anymore, after the subtask that runs the 
ReplicationBarrierCleaner when hbck2 fixMeta is invoked, and because 
HBASE-23294 fixed a bug in RBC.

> Purge rep_barrier:seqnumDuringOpen on delete of Region
> --
>
> Key: HBASE-23280
> URL: https://issues.apache.org/jira/browse/HBASE-23280
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Michael Stack
>Priority: Major
>
> The Region GC Procedure only cleans the 'info' column family.  We also write 
> a rep_barrier column family as of HBASE-20115. HBASE-20117 adds a chore to 
> clean them up after the fact.  I've not studied how rep_barrier works (there 
> is a comment in MetaTableAccessor to add an explanation).
> This issue is about adding the deletion of the rep_barrier content on region 
> delete ([~zhangduo] will this mess up serial replication?).
> I want to clean out these rows. They can occasionally be misinterpreted, for 
> example in the hbck report as 'Orphan Regions', or in simple loading tools 
> we'll find the rep_barrier row and then fail because there is no accompanying 
> info:regioninfo.
> Perhaps removing the rep_barrier column family promptly is the wrong thing to 
> do... we need the lag for replication to catch up. Let me know [~zhangduo].
> Here is what they look like:
> {code}
> hbase(main):050:0> get 'hbase:meta', 
> ',22d0e538,1572669183985.6aa8710020b8a4f9ea290539fc254a76.'
> COLUMN
>   CELL
>  rep_barrier:seqnumDuringOpen 
>   timestamp=1573272944262, value=\x00\x00\x00\x00\x00\x00\x00\x02
> {code}
> They get updated on split and when the location moves. I don't seem to be able to 
> disable this facility -- it is always on. It is also called 'unused' in the title of 
> HBASE-20117. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23307) Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation

2019-11-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23307.
---
Fix Version/s: 2.2.3
   2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged to branch-2.2+. Thanks for review [~binlijin]. Confirmed this works out 
on loaded cluster.

> Add running of ReplicationBarrierCleaner to hbck2 fixMeta invocation
> 
>
> Key: HBASE-23307
> URL: https://issues.apache.org/jira/browse/HBASE-23307
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Run the ReplicationBarrierCleaner chore when hbck2 invokes fixMeta. It will 
> clean up stale rep_barrier entries in hbase:meta which can help if trying to 
> do a restore of hbase:meta to good state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #869: HBASE-22969 A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position; ADDENDUM

2019-11-22 Thread GitBox
Apache-HBase commented on issue #869: HBASE-22969 A new binary component 
comparator(BinaryComponentComparator) to perform comparison of arbitrary length 
and position; ADDENDUM
URL: https://github.com/apache/hbase/pull/869#issuecomment-557604832
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 44s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m 26s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  18m  5s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 276m 39s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 342m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSideWithCoprocessor 
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/869 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux d96176d577cf 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-869/out/precommit/personality/provided.sh
 |
   | git revision | master / 54ad797abb |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/1/testReport/
 |
   | Max. process+thread count | 4704 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-869/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack merged pull request #859: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-22 Thread GitBox
saintstack merged pull request #859: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/859
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #867: HBASE-23117: Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-22 Thread GitBox
saintstack commented on a change in pull request #867: HBASE-23117: Bad enum in 
hbase:meta info:state column can fail loadMeta and stop startup
URL: https://github.com/apache/hbase/pull/867#discussion_r349680625
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStateStore.java
 ##
 @@ -352,8 +352,16 @@ public static State getRegionState(final Result r, int 
replicaId) {
 if (cell == null || cell.getValueLength() == 0) {
   return null;
 }
-return State.valueOf(Bytes.toString(cell.getValueArray(), 
cell.getValueOffset(),
-cell.getValueLength()));
+
+String state = Bytes.toString(cell.getValueArray(), cell.getValueOffset(),
+cell.getValueLength());
+try {
+  return State.valueOf(state);
+}
+catch (IllegalArgumentException e) {
+  LOG.debug("BAD value {} in hbase:meta info:state column", state);
 
 Review comment:
   What @wchevreuil  said


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #871: HBASE-23085 Network and Data related Actions; ADDENDUM

2019-11-22 Thread GitBox
Apache-HBase commented on issue #871: HBASE-23085 Network and Data related 
Actions; ADDENDUM
URL: https://github.com/apache/hbase/pull/871#issuecomment-557597468
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 12s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 40s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 38s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 51s |  hbase-it in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  46m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-871/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/871 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux c80a92d09182 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-871/out/precommit/personality/provided.sh
 |
   | git revision | master / 3b0c276aa3 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-871/1/testReport/
 |
   | Max. process+thread count | 402 (vs. ulimit of 1) |
   | modules | C: hbase-it U: hbase-it |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-871/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-22 Thread GitBox
Apache-HBase commented on issue #864: HBASE-23313 [hbck2] setRegionState should 
update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864#issuecomment-557575504
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 27s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  master passed  |
   | +0 :ok: |  spotbugs  |   3m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 55s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 28s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 40s |  hbase-client: The patch generated 1 
new + 305 unchanged - 0 fixed = 306 total (was 305)  |
   | -1 :x: |  checkstyle  |   0m 15s |  hbase-examples: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 38s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   8m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 42s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 51s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 162m 27s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 59s |  hbase-examples in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   2m  8s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 244m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/864 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 5835eee19aa7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-864/out/precommit/personality/provided.sh
 |
   | git revision | master / 54ad797abb |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/2/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/2/artifact/out/diff-checkstyle-hbase-examples.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/2/testReport/
 |
   | Max. process+thread count | 4389 (vs. ulimit of 1) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server 
hbase-examples U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-864/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hbase] petersomogyi opened a new pull request #871: HBASE-23085 Network and Data related Actions; ADDENDUM

2019-11-22 Thread GitBox
petersomogyi opened a new pull request #871: HBASE-23085 Network and Data 
related Actions; ADDENDUM
URL: https://github.com/apache/hbase/pull/871
 
 
   Fix percentage in String.format


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] chenxu14 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-22 Thread GitBox
chenxu14 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r349637124
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
 ##
 @@ -110,29 +113,35 @@ public static BlockCache createBlockCache(Configuration conf) {
           + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
         BLOCKCACHE_BLOCKSIZE_KEY);
     }
-    FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
+    BlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;
     }
-    boolean useExternal = conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, EXTERNAL_BLOCKCACHE_DEFAULT);
-    if (useExternal) {
-      BlockCache l2CacheInstance = createExternalBlockcache(conf);
-      return l2CacheInstance == null ?
-          l1Cache :
-          new InclusiveCombinedBlockCache(l1Cache, l2CacheInstance);
+    if (conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, EXTERNAL_BLOCKCACHE_DEFAULT)) {
+      BlockCache l2Cache = createExternalBlockcache(conf);
+      return l2Cache == null ? l1Cache : new InclusiveCombinedBlockCache(
+          (FirstLevelBlockCache)l1Cache, l2Cache);
     } else {
       // otherwise use the bucket cache.
-      BucketCache bucketCache = createBucketCache(conf);
-      if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) {
-        // Non combined mode is off from 2.0
-        LOG.warn(
-            "From HBase 2.0 onwards only combined mode of LRU cache and bucket cache is available");
+      BucketCache l2Cache = createBucketCache(conf, CacheLevel.L2);
+      if (conf.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+        return l2Cache == null ? l1Cache : new CompositeBucketCache((BucketCache)l1Cache, l2Cache);
+      } else {
+        if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) {
+          // Non combined mode is off from 2.0
+          LOG.warn("From HBase 2.0 onwards only combined mode of LRU cache and bucket"
+              + " cache is available");
+        }
+        return l2Cache == null ? l1Cache : new CombinedBlockCache(
+            (FirstLevelBlockCache)l1Cache, l2Cache);
       }
-      return bucketCache == null ? l1Cache : new CombinedBlockCache(l1Cache, bucketCache);
     }
   }
 
-  private static FirstLevelBlockCache createFirstLevelCache(final Configuration c) {
+  private static BlockCache createFirstLevelCache(final Configuration c) {
+    if (c.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+      return createBucketCache(c, CacheLevel.L1);
 
 Review comment:
   We have exposed some conf keys for each level's BucketCache (see 
CompositeBucketCache), such as the ioengine and cacheSize.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23117) Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-22 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980201#comment-16980201
 ] 

HBase QA commented on HBASE-23117:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  6m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
36s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}292m 47s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}364m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hbase/pull/867 |
| JIRA Issue | HBASE-23117 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 9159de90a3c4 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-867/out/precommit/personality/provided.sh
 |
| 

[GitHub] [hbase] Apache-HBase commented on issue #867: HBASE-23117: Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-22 Thread GitBox
Apache-HBase commented on issue #867: HBASE-23117: Bad enum in hbase:meta 
info:state column can fail loadMeta and stop startup
URL: https://github.com/apache/hbase/pull/867#issuecomment-557557923
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m 12s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 48s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   4m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 33s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 16s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 292m 47s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 364m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/867 |
   | JIRA Issue | HBASE-23117 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 9159de90a3c4 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-867/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 12b2e1fd52 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/1/testReport/
 |
   | Max. process+thread count | 4847 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-867/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349626630
 
 

 ##
 File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,238 @@
+/*
+ * Copyright The Apache Software Foundation
 
 Review comment:
   Hmmm ok, this was copied from another file nearby; I'll remove it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349626455
 
 

 ##
 File path: hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
 ##
 @@ -249,6 +263,27 @@ protected void setupParamters() throws IOException {
     pauseMonitor.start();
   }
 
+  private String getSpengoPrincipal(Configuration conf, String host) throws IOException {
+    String principal = conf.get(THRIFT_SPNEGO_PRINCIPAL_KEY);
 
 Review comment:
   Hmm, let me take a look at what Hadoop Configuration's deprecation options 
are - I didn't know that was a thing. The old config is still technically valid, 
just not for SPNEGO. I'll see if that is an option here to fall back with a 
message.
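   
   For reference, a minimal sketch of the two options being weighed here; the key 
names below are placeholders for illustration, not the actual HBase constants:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class SpnegoFallbackSketch {
     // Placeholder key names, not the real HBase Thrift configuration keys.
     private static final String OLD_KEY = "example.thrift.kerberos.principal";
     private static final String NEW_KEY = "example.thrift.spnego.principal";
   
     public static void main(String[] args) {
       // Option 1: Hadoop Configuration's deprecation mapping. A value set under
       // OLD_KEY becomes readable under NEW_KEY, and Configuration logs a
       // deprecation warning when the old key is used.
       Configuration.addDeprecation(OLD_KEY, NEW_KEY);
       Configuration conf = new Configuration();
   
       // Option 2: an explicit fallback, useful when the old key stays valid for
       // its original purpose and only doubles as a default for the new one.
       String principal = conf.get(NEW_KEY);
       if (principal == null) {
         principal = conf.get(OLD_KEY);
       }
       System.out.println("SPNEGO principal: " + principal);
     }
   }
   ```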


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-23329) Remove unused methods from RequestConverter

2019-11-22 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-23329.
--
Resolution: Fixed

> Remove unused methods from RequestConverter
> ---
>
> Key: HBASE-23329
> URL: https://issues.apache.org/jira/browse/HBASE-23329
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
> Fix For: 3.0.0
>
>
> Noticed some unused methods on the *RequestConverter* class, probably 
> leftovers from previous refactorings. Since this class is targeted for private 
> use, it should be fine to just remove those extra unused methods on the master branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23329) Remove unused methods from RequestConverter

2019-11-22 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-23329:
-
Affects Version/s: 3.0.0

> Remove unused methods from RequestConverter
> ---
>
> Key: HBASE-23329
> URL: https://issues.apache.org/jira/browse/HBASE-23329
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
>
> Noticed some unused methods on the *RequestConverter* class, probably 
> leftovers from previous refactorings. Since this class is targeted for private 
> use, it should be fine to just remove those extra unused methods on the master branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23329) Remove unused methods from RequestConverter

2019-11-22 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-23329:
-
Fix Version/s: 3.0.0

> Remove unused methods from RequestConverter
> ---
>
> Key: HBASE-23329
> URL: https://issues.apache.org/jira/browse/HBASE-23329
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
> Fix For: 3.0.0
>
>
> Noticed some unused methods on the *RequestConverter* class, probably 
> leftovers from previous refactorings. Since this class is targeted for private 
> use, it should be fine to just remove those extra unused methods on the master branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] wchevreuil merged pull request #870: HBASE-23329 Remove unused methods from RequestConverter

2019-11-22 Thread GitBox
wchevreuil merged pull request #870: HBASE-23329 Remove unused methods from 
RequestConverter
URL: https://github.com/apache/hbase/pull/870
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #843: HBASE-23303 Add security headers to REST server/info page

2019-11-22 Thread GitBox
Apache-HBase commented on issue #843: HBASE-23303 Add security headers to REST 
server/info page
URL: https://github.com/apache/hbase/pull/843#issuecomment-557549281
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 56s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 28s |  master passed  |
   | -0 :warning: |  patch  |   1m  7s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 51s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 13s |  hbase-http: The patch generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  hbase-rest: The patch 
generated 0 new + 15 unchanged - 1 fixed = 15 total (was 16)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 19s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hbase-http in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m  6s |  hbase-rest in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 22s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-843/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/843 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux cab7b5ff5719 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-843/out/precommit/personality/provided.sh
 |
   | git revision | master / 54ad797abb |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-843/4/artifact/out/diff-checkstyle-hbase-http.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-843/4/testReport/
 |
   | Max. process+thread count | 2001 (vs. ulimit of 1) |
   | modules | C: hbase-http hbase-rest U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-843/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] petersomogyi commented on issue #869: HBASE-22969 A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position; ADDENDUM

2019-11-22 Thread GitBox
petersomogyi commented on issue #869: HBASE-22969 A new binary component 
comparator(BinaryComponentComparator) to perform comparison of arbitrary length 
and position; ADDENDUM
URL: https://github.com/apache/hbase/pull/869#issuecomment-557548097
 
 
   I couldn't find a way to pass a lambda expression to the logger, so I just 
added an `isInfoEnabled` block. This is only in test code, where we generally run 
all the tests with DEBUG logging.
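   
   A minimal sketch of the pattern, assuming SLF4J 1.x (which has no 
lambda/Supplier overloads) and a hypothetical `expensiveDump()` helper standing in 
for the costly log argument:
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class GuardedLoggingSketch {
     private static final Logger LOG = LoggerFactory.getLogger(GuardedLoggingSketch.class);
   
     // Hypothetical stand-in for an expensive-to-build log argument.
     private static String expensiveDump() {
       return "...";
     }
   
     public static void main(String[] args) {
       // SLF4J 1.x info() takes plain Objects rather than Suppliers, so without
       // the guard expensiveDump() would run even when INFO logging is disabled.
       if (LOG.isInfoEnabled()) {
         LOG.info("current state: {}", expensiveDump());
       }
     }
   }
   ```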


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
busbey commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349609518
 
 

 ##
 File path: hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,238 @@
+/*
+ * Copyright The Apache Software Foundation
 
 Review comment:
   No copyright statements in file headers please.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-22 Thread GitBox
busbey commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r349614411
 
 

 ##
 File path: hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
 ##
 @@ -249,6 +263,27 @@ protected void setupParamters() throws IOException {
     pauseMonitor.start();
   }
 
+  private String getSpengoPrincipal(Configuration conf, String host) throws IOException {
+    String principal = conf.get(THRIFT_SPNEGO_PRINCIPAL_KEY);
 
 Review comment:
   We're doing this ourselves instead of using Hadoop Configuration's 
deprecation mechanisms because we want a different fall back order? Or we can't 
set deprecation soon enough for some reason?
   
   We should have comments proactively letting future folks know the reasoning.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980169#comment-16980169
 ] 

Hudson commented on HBASE-23237:


Results for branch master
[build #1544 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1544/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23328) info:regioninfo goes wrong when region replicas enabled

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980171#comment-16980171
 ] 

Hudson commented on HBASE-23328:


Results for branch master
[build #1544 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1544/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> info:regioninfo goes wrong when region replicas enabled
> ---
>
> Key: HBASE-23328
> URL: https://issues.apache.org/jira/browse/HBASE-23328
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
>
> Noticed that the info:regioninfo content in hbase:meta can become that of a 
> serialized replica. I think it mostly harmless but accounting especially 
> debugging is frustrated because hbase:meta row name does not match the 
> info:regioninfo.
> Here is an example:
> {code}
> t1,c6e977ef,1572669121340.0b455b2d57f91c153d5088533205c268. 
> column=info:regioninfo, timestamp=1574367093772, value={ENCODED => 
> 5199f7826c340ba944517e97c6ebaf04, NAME => 
> 't1,c6e977ef,1572669121340_0001.5199f7826c340ba944517e97c6ebaf04.', STARTKEY 
> => 'c6e977ef', ENDKEY => 'c72b0126', REPLICA_ID => 1}
> {code}
> Notice how hbase:meta row name is like that of the info:regioninfo content 
> only we are listing REPLICA_ID content and the encoded name is different (as 
> it factors replicaid).
> The original Region Replica design describes how the info:regioninfo is 
> supposed to have the default HRI serialized only. See comment on HRI changes 
> in 
> https://issues.apache.org/jira/secure/attachment/12627276/hbase-10347_redo_v8.patch
> -Going back over history, this may have been a bug since Region Replicas came 
> in.- <= No. Looking at an old cluster w/ region replicas, it doesn't have 
> this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23325) [UI]rsgoup average load keep two decimals

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980170#comment-16980170
 ] 

Hudson commented on HBASE-23325:


Results for branch master
[build #1544 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1544/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [UI]rsgoup average load keep two decimals
> -
>
> Key: HBASE-23325
> URL: https://issues.apache.org/jira/browse/HBASE-23325
> Project: HBase
>  Issue Type: Improvement
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: 20191121165713.png
>
>
> In */master-status*, the rsgroup average load should keep two decimals. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23234) Provide .editorconfig based on checkstyle configuration

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980168#comment-16980168
 ] 

Hudson commented on HBASE-23234:


Results for branch master
[build #1544 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1544/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1544//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Provide .editorconfig based on checkstyle configuration
> ---
>
> Key: HBASE-23234
> URL: https://issues.apache.org/jira/browse/HBASE-23234
> Project: HBase
>  Issue Type: Task
>  Components: build, tooling
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> I don't have an exhaustive analysis of the issue, but there's at least one 
> case where check style plugin configuration disagrees with our settings in 
> {{dev-support/hbase_eclipse_formatter.xml}}.
> Formatter settings produce this code chunk
> {noformat}
>   uncaughtExceptionHandler =
>   (t, e) -> abort("Uncaught exception in executorService thread " + 
> t.getName(), e);
> {noformat}
> but check style wants
> {noformat}
>   uncaughtExceptionHandler =
> (t, e) -> abort("Uncaught exception in executorService thread " + 
> t.getName(), e);
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HBASE-23303) Add security headers to REST server/info page

2019-11-22 Thread Andor Molnar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23303 started by Andor Molnar.

> Add security headers to REST server/info page
> -
>
> Key: HBASE-23303
> URL: https://issues.apache.org/jira/browse/HBASE-23303
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 3.0.0, 2.0.6, 2.1.7, 2.2.2
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Vulnerability scanners suggest that the following extra headers should be 
> added to both Info/Rest server endpoints which are exposed by {{hbase-rest}} 
> project.
>  * X-Frame-Options: SAMEORIGIN
>  * X-Xss-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  * Strict-Transport-Security: “max-age=63072000;includeSubDomains;preload”
>  * Content-Security-Policy: default-src https: data: 'unsafe-inline' 
> 'unsafe-eval'
> Info server already has "X-Frame-Options: DENY" which is more restrictive 
> than "SAMEORIGIN", so it's probably fine. All of three headers are missing 
> from REST responses.
> I'll put together a patch to resolve this. 
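
For illustration only (this is a generic sketch, not the actual HBASE-23303 patch), 
a plain servlet filter shows how such headers can be attached to every response; 
the header values simply mirror the list above:

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilterSketch implements Filter {
  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
      throws IOException, ServletException {
    HttpServletResponse resp = (HttpServletResponse) response;
    // Values copied from the ticket text above; HSTS and CSP only make sense over TLS.
    resp.setHeader("X-Frame-Options", "SAMEORIGIN");
    resp.setHeader("X-Xss-Protection", "1; mode=block");
    resp.setHeader("X-Content-Type-Options", "nosniff");
    resp.setHeader("Strict-Transport-Security", "max-age=63072000;includeSubDomains;preload");
    resp.setHeader("Content-Security-Policy", "default-src https: data: 'unsafe-inline' 'unsafe-eval'");
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}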



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23303) Add security headers to REST server/info page

2019-11-22 Thread Andor Molnar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andor Molnar updated HBASE-23303:
-
Description: 
Vulnerability scanners suggest that the following extra headers should be added 
to both Info/Rest server endpoints which are exposed by {{hbase-rest}} project.
 * X-Frame-Options: SAMEORIGIN
 * X-Xss-Protection: 1; mode=block
 * X-Content-Type-Options: nosniff
 * Strict-Transport-Security: “max-age=63072000;includeSubDomains;preload”
 * Content-Security-Policy: default-src https: data: 'unsafe-inline' 
'unsafe-eval'

Info server already has "X-Frame-Options: DENY" which is more restrictive than 
"SAMEORIGIN", so it's probably fine. All of three headers are missing from REST 
responses.

I'll put together a patch to resolve this. 

  was:
Vulnerability scanners suggest that the following extra headers should be added 
to both Info/Rest server endpoints which are exposed by {{hbase-rest}} project.
 * X-Content-Type-Options: nosniff
 * X-XSS-Protection: 1; mode=block
 * X-Frame-Options: SAMEORIGIN

Info server already has "X-Frame-Options: DENY" which is more restrictive than 
"SAMEORIGIN", so it's probably fine. All of three headers are missing from REST 
responses.

I'll put together a patch to resolve this.

Let's add HSTS header too:
 * Strict-Transport-Security: max-age=31536000

 


> Add security headers to REST server/info page
> -
>
> Key: HBASE-23303
> URL: https://issues.apache.org/jira/browse/HBASE-23303
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 3.0.0, 2.0.6, 2.1.7, 2.2.2
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> Vulnerability scanners suggest that the following extra headers should be 
> added to both Info/Rest server endpoints which are exposed by {{hbase-rest}} 
> project.
>  * X-Frame-Options: SAMEORIGIN
>  * X-Xss-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  * Strict-Transport-Security: “max-age=63072000;includeSubDomains;preload”
>  * Content-Security-Policy: default-src https: data: 'unsafe-inline' 
> 'unsafe-eval'
> Info server already has "X-Frame-Options: DENY" which is more restrictive 
> than "SAMEORIGIN", so it's probably fine. All of three headers are missing 
> from REST responses.
> I'll put together a patch to resolve this. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] anmolnar commented on issue #843: HBASE-23303 Add security headers to REST server/info page

2019-11-22 Thread GitBox
anmolnar commented on issue #843: HBASE-23303 Add security headers to REST 
server/info page
URL: https://github.com/apache/hbase/pull/843#issuecomment-557524431
 
 
   @petersomogyi @brfrn169 Sorry for messing things up. I added one more 
important security header to the patch and also did a small refactoring to set up 
parameters in a common place.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #870: HBASE-23329 Remove unused methods from RequestConverter

2019-11-22 Thread GitBox
Apache-HBase commented on issue #870: HBASE-23329 Remove unused methods from 
RequestConverter
URL: https://github.com/apache/hbase/pull/870#issuecomment-557518008
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 48s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  5s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 17s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  hbase-client: The patch 
generated 0 new + 100 unchanged - 14 fixed = 100 total (was 114)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  16m 17s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 52s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  53m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-870/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/870 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 191f6efd8a12 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-870/out/precommit/personality/provided.sh
 |
   | git revision | master / 54ad797abb |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-870/1/testReport/
 |
   | Max. process+thread count | 296 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-870/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23085) Network and Data related Actions

2019-11-22 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980092#comment-16980092
 ] 

Peter Somogyi commented on HBASE-23085:
---

[~bszabolcs] can you create an addendum to fix the format string? It should 
have 50%% at the end to escape the percentage sign.

[https://github.com/apache/hbase/blob/d2142a8ebb00eafb69e00147afa51fff4331014c/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ReorderPackagesCommandAction.java#L73]
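
A minimal illustration of the escape (generic message text, not the actual 
ReorderPackagesCommandAction format string):

{code:java}
public class FormatEscapeDemo {
  public static void main(String[] args) {
    // A trailing bare '%' is parsed as the start of a format specifier and makes
    // String.format throw an IllegalFormatException at runtime:
    // String.format("reorder packets on %s, correlation 50%", "eth0");   // fails

    // '%%' is the escape for a literal percent sign:
    String msg = String.format("reorder packets on %s, correlation 50%%", "eth0");
    System.out.println(msg); // reorder packets on eth0, correlation 50%
  }
}
{code}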

> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Add additional actions to:
>  * manipulate network packets with tc (reorder, lose, ...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-22 Thread GitBox
Apache-HBase commented on issue #868: HBASE-23296 Add CompositeBucketCache to 
support tiered BC
URL: https://github.com/apache/hbase/pull/868#issuecomment-557509056
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 39s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  4s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 49s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 33s |  hbase-server: The patch generated 3 
new + 58 unchanged - 1 fixed = 61 total (was 59)  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m 14s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  18m  1s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  33m 15s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m 23s |  hbase-external-blockcache in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 100m 54s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.io.hfile.TestCompositeBucketCache |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/868 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 346ffe75a9fc 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-868/out/precommit/personality/provided.sh
 |
   | git revision | master / 54ad797abb |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/1/testReport/
 |
   | Max. process+thread count | 669 (vs. ulimit of 1) |
   | modules | C: hbase-server hbase-external-blockcache U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-868/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Reopened] (HBASE-23085) Network and Data related Actions

2019-11-22 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi reopened HBASE-23085:
---

> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Add additional actions to:
>  * manipulate network packets with tc (reorder, lose, ...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23259) Ability to run mini cluster using pre-determined available random ports

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980084#comment-16980084
 ] 

Hudson commented on HBASE-23259:


Results for branch branch-1
[build #1145 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//JDK7_Nightly_Build_Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Ability to run mini cluster using pre-determined available random ports
> ---
>
> Key: HBASE-23259
> URL: https://issues.apache.org/jira/browse/HBASE-23259
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0, 1.4.12, 2.2.3
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> As noted in the code reviews for HBASE-18095, we need the ability to run the 
> mini-cluster using a pre-determined set of random (and available) port 
> numbers. When I say pre-determined, I mean the test knows these ports even 
> before starting the mini cluster. 
> In short, the workflow is something like,
> {noformat}
> List ports = getRandomAvailablePorts();
> startMiniCluster(conf, ports);
> {noformat}
> The reason we need this is that certain configs introduced in HBASE-18095 
> depend on the ports on which the master is expected to serve the RPCs. While 
> that is known for regular deployments (like 16000 for master etc), it is 
> totally random in the mini cluster tests. So we need to know them beforehand 
> for templating out the configs. 
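
A common way to implement a {{getRandomAvailablePorts()}} helper like the one 
sketched above (this is only an illustrative sketch, not the HBASE-23259 code) is 
to bind to port 0 and record what the OS hands out; note this is inherently racy 
between releasing a port and reusing it, which is usually acceptable for tests:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.List;

public class RandomPortsSketch {
  // Ask the OS for ephemeral ports by binding to port 0, then release them all.
  static List<Integer> getRandomAvailablePorts(int count) throws IOException {
    List<ServerSocket> sockets = new ArrayList<>();
    List<Integer> ports = new ArrayList<>();
    try {
      for (int i = 0; i < count; i++) {
        ServerSocket socket = new ServerSocket(0);
        sockets.add(socket);
        ports.add(socket.getLocalPort());
      }
    } finally {
      for (ServerSocket socket : sockets) {
        socket.close();
      }
    }
    return ports;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(getRandomAvailablePorts(3));
  }
}
{code}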



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23234) Provide .editorconfig based on checkstyle configuration

2019-11-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980085#comment-16980085
 ] 

Hudson commented on HBASE-23234:


Results for branch branch-1
[build #1145 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//JDK7_Nightly_Build_Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1145//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Provide .editorconfig based on checkstyle configuration
> ---
>
> Key: HBASE-23234
> URL: https://issues.apache.org/jira/browse/HBASE-23234
> Project: HBase
>  Issue Type: Task
>  Components: build, tooling
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> I don't have an exhaustive analysis of the issue, but there's at least one 
> case where check style plugin configuration disagrees with our settings in 
> {{dev-support/hbase_eclipse_formatter.xml}}.
> Formatter settings produce this code chunk
> {noformat}
>   uncaughtExceptionHandler =
>   (t, e) -> abort("Uncaught exception in executorService thread " + 
> t.getName(), e);
> {noformat}
> but check style wants
> {noformat}
>   uncaughtExceptionHandler =
> (t, e) -> abort("Uncaught exception in executorService thread " + 
> t.getName(), e);
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

