[GitHub] [hbase] sandeepvinayak opened a new pull request #856: Optimizing calls to setStoragePolicy

2019-11-19 Thread GitBox
sandeepvinayak opened a new pull request #856: Optimizing calls to 
setStoragePolicy
URL: https://github.com/apache/hbase/pull/856
 
 
   In CommonFSUtils's `invokeSetStoragePolicy(final FileSystem fs, final Path path,
 final String storagePolicy)`, we invoke setStoragePolicy via reflection:
   
   ```java
   m = fs.getClass().getDeclaredMethod("setStoragePolicy",
       new Class[] { Path.class, String.class });
   ...
   m.invoke(fs, path, storagePolicy);
   ```
   
   When the call is initiated by HRegionFileSystem and `fs` is an HFileSystem, 
   `m.invoke(fs, path, storagePolicy)` re-enters the same 
   `setStoragePolicy(final FileSystem fs, final Path path, final String storagePolicy)` 
   logic with the underlying file system. 
   
   We can avoid this duplicated call by checking in advance whether the file system 
   used by HRegionFileSystem is an HFileSystem and, if so, invoking setStoragePolicy 
   directly on the underlying fs. 
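   The duplicate-call pattern and the proposed short-circuit can be modeled with a
   small self-contained sketch. `BackingFs` and `WrapperFs` are illustrative
   stand-ins for the backing file system and HFileSystem, not the real
   Hadoop/HBase classes, and `getBackingFs` is used here as a hypothetical
   unwrapping accessor:

   ```java
   // Toy model of the duplicate call and the proposed fix. BackingFs and
   // WrapperFs are illustrative stand-ins, NOT the real Hadoop/HBase classes.
   import java.lang.reflect.Method;

   class BackingFs {
     static int calls = 0;  // counts every setStoragePolicy invocation

     public void setStoragePolicy(String path, String policy) {
       calls++;  // the "real" policy change happens here
     }
   }

   class WrapperFs extends BackingFs {
     private final BackingFs backing = new BackingFs();

     public BackingFs getBackingFs() {
       return backing;
     }

     @Override
     public void setStoragePolicy(String path, String policy) {
       calls++;  // without unwrapping, the wrapper adds a second, redundant hop
       backing.setStoragePolicy(path, policy);
     }
   }

   public class StoragePolicyDemo {
     // Reflective invocation as in CommonFSUtils, but unwrapping the wrapper
     // up front so the reflective call hits the backing fs directly.
     static void invokeSetStoragePolicy(BackingFs fs, String path, String policy)
         throws Exception {
       BackingFs target = (fs instanceof WrapperFs) ? ((WrapperFs) fs).getBackingFs() : fs;
       Method m = target.getClass().getDeclaredMethod("setStoragePolicy",
           String.class, String.class);
       m.invoke(target, path, policy);
     }

     public static void main(String[] args) throws Exception {
       invokeSetStoragePolicy(new WrapperFs(), "/hbase/data", "ALL_SSD");
       System.out.println(BackingFs.calls);  // one call instead of two
     }
   }
   ```

   Passing the wrapper through unchanged would execute setStoragePolicy twice
   (once in the wrapper, once in the backing fs); unwrapping first reduces this
   to a single call.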


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #854: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-19 Thread GitBox
Apache-HBase commented on issue #854: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/854#issuecomment-555859178
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 14s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m 34s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 11s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m  9s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | -1 :x: |  findbugs  |   3m 29s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 286m 47s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 341m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Nullcheck of regionDir at line 268 of value previously dereferenced in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:[line 268] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-854/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/854 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux d63aa78665c4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-854/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 70771b603e |
   | Default Java | 1.8.0_181 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-854/1/artifact/out/new-findbugs-hbase-server.html
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-854/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-854/1/testReport/
 |
   | Max. process+thread count | 4863 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-854/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
Apache-HBase commented on issue #830: HBASE-23281: Track meta region locations 
in masters
URL: https://github.com/apache/hbase/pull/830#issuecomment-555857685
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   2m 18s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 45s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   7m 26s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 21s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 52s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  master passed  |
   | +0 :ok: |  spotbugs  |   5m 24s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 15s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 45s |  hbase-server: The patch generated 1 
new + 111 unchanged - 0 fixed = 112 total (was 111)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   6m 10s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  23m  4s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   7m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 13s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  0s |  hbase-zookeeper in the patch 
passed.  |
   | -1 :x: |  unit  | 288m 23s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 384m 52s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.master.TestMasterShutdown |
   |   | hadoop.hbase.master.TestAssignmentManagerMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-830/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/830 |
   | JIRA Issue | HBASE-23281 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux d3e4516f6e52 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-830/out/precommit/personality/provided.sh
 |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-830/4/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-830/4/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-830/4/testReport/
 |
   | Max. process+thread count | 4806 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-830/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23281) Track meta region changes on masters

2019-11-19 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978100#comment-16978100
 ] 

HBase QA commented on HBASE-23281:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
24s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
45s{color} | {color:red} hbase-server: The patch generated 1 new + 111 
unchanged - 0 fixed = 112 total (was 111) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m  4s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hbase-zookeeper in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}288m 23s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}384m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterShutdown |
|   | hadoop.hbase.master.TestAssignmentManagerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: 

[GitHub] [hbase] Apache-HBase commented on issue #855: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache-HBase commented on issue #855: HBASE-23322 [hbck2] Simplification on 
HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/855#issuecomment-555853969
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   0m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 16s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 40s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 22s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 54s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 38s |  hbase-server: The patch generated 2 
new + 40 unchanged - 0 fixed = 42 total (was 40)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m 20s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 19s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 166m 38s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 234m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-855/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/855 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux c26cbc5c7ead 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-855/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 70771b603e |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-855/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-855/1/testReport/
 |
   | Max. process+thread count | 4639 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-855/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-19 Thread GitBox
Apache-HBase commented on issue #834: HBASE-23237 Negative sign in 
requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#issuecomment-555851559
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 49s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 28s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 27s |  master passed  |
   | -0 :warning: |  patch  |   4m 35s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m  1s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m  8s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 43s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 163m 18s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 225m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/834 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 78cdb4c5b0f1 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-834/out/precommit/personality/provided.sh
 |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/4/testReport/
 |
   | Max. process+thread count | 4324 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-834/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache-HBase commented on issue #852: HBASE-23322 [hbck2] Simplification on 
HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852#issuecomment-555843517
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 49s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 43s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 32s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 20s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 47s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 32s |  hbase-server: The patch generated 4 
new + 297 unchanged - 9 fixed = 301 total (was 306)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 54s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   3m 29s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 258m 12s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 338m 22s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-852/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/852 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux e3308e74e948 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-852/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 70771b603e |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-852/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-852/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-852/1/testReport/
 |
   | Max. process+thread count | 4436 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-852/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #851: Hbase 23321

2019-11-19 Thread GitBox
Apache-HBase commented on issue #851: Hbase 23321
URL: https://github.com/apache/hbase/pull/851#issuecomment-555830453
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  3s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 20s |  hbase-server: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 29s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 169m 16s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 225m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/851 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8b98e81b6f84 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-851/out/precommit/personality/provided.sh
 |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/2/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/2/testReport/
 |
   | Max. process+thread count | 4500 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #849: HBASE-23320 Upgrade surefire plugin to 3.0.0-M4

2019-11-19 Thread GitBox
Apache-HBase commented on issue #849: HBASE-23320 Upgrade surefire plugin to 
3.0.0-M4
URL: https://github.com/apache/hbase/pull/849#issuecomment-555824700
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m  1s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 27s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 294m 23s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 354m  9s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSideWithCoprocessor 
|
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-849/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/849 |
   | Optional Tests | dupname asflicense javac javadoc unit shadedjars 
hadoopcheck xml compile |
   | uname | Linux e63be24df775 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-849/out/precommit/personality/provided.sh
 |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-849/2/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-849/2/testReport/
 |
   | Max. process+thread count | 4633 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-849/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23318) LoadTestTool doesn't start

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978039#comment-16978039
 ] 

Hudson commented on HBASE-23318:


Results for branch branch-2.1
[build #1715 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution fails to start, 
> throwing a CNFE. We are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this jar, but if one wants to launch the tool on a server 
> or a general-purpose deploy (i.e. not the client tarball), the tests jar has to be 
> in the server classpath as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23301) Generate CHANGES.md and RELEASENOTES.md for 2.1.8

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978041#comment-16978041
 ] 

Hudson commented on HBASE-23301:


Results for branch branch-2.1
[build #1715 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Generate CHANGES.md and RELEASENOTES.md for 2.1.8
> -
>
> Key: HBASE-23301
> URL: https://issues.apache.org/jira/browse/HBASE-23301
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.1.8
>
> Attachments: HBASE-23301-branch-2.1-addendum.patch
>
>






[jira] [Commented] (HBASE-23192) CatalogJanitor consistencyCheck does not log problematic row on exception

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978040#comment-16978040
 ] 

Hudson commented on HBASE-23192:


Results for branch branch-2.1
[build #1715 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1715//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CatalogJanitor consistencyCheck does not log problematic row on exception
> -
>
> Key: HBASE-23192
> URL: https://issues.apache.org/jira/browse/HBASE-23192
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> Small stuff. Trying to fix a cluster, I cleared an info:server field, which damaged 
> hbase:meta for CatalogJanitor; where it should have just logged and skipped 
> the bad entity when doing its consistency check, CJ instead crashed. It also doesn't 
> log the bad row, which would help debugging.





[jira] [Commented] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978012#comment-16978012
 ] 

HBase QA commented on HBASE-23279:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
15s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
42s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} hbase-client: The patch generated 0 new + 50 
unchanged - 1 fixed = 50 total (was 51) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} The patch passed checkstyle in hbase-server {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}302m 29s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}384m 

[GitHub] [hbase] saintstack opened a new pull request #855: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
saintstack opened a new pull request #855: HBASE-23322 [hbck2] Simplification 
on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/855
 
 
   




[GitHub] [hbase] karthikhw commented on issue #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-19 Thread GitBox
karthikhw commented on issue #834: HBASE-23237 Negative sign in 
requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#issuecomment-555800436
 
 
   @joshelser @guangxuCheng I have applied all of your suggested changes. 




[jira] [Commented] (HBASE-23278) Add a table-level compaction progress display on the UI

2019-11-19 Thread Baiqiang Zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977989#comment-16977989
 ] 

Baiqiang Zhao commented on HBASE-23278:
---

Thanks [~gxcheng] !

>  Add a table-level compaction progress display on the UI
> 
>
> Key: HBASE-23278
> URL: https://issues.apache.org/jira/browse/HBASE-23278
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 1.6.0, master
>Reporter: Baiqiang Zhao
>Assignee: Baiqiang Zhao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3
>
> Attachments: HBase-23278-v2.png, HBase-23278.png, 
> image-2019-11-11-20-35-56-103.png, image-2019-11-11-20-37-53-367.png, 
> image-2019-11-11-20-44-04-050.png
>
>
> We have a regionserver-level compaction progress display in the UI. However, we often 
> compact a whole table, so why is there no table-level compaction progress? Multiple 
> tabs are used to show per-table compaction progress.
> !HBase-23278-v2.png!





[GitHub] [hbase] Apache-HBase commented on issue #835: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-19 Thread GitBox
Apache-HBase commented on issue #835: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/835#issuecomment-555794001
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 10s |  branch-2.2 passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  branch-2.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  branch-2.2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2.2 passed  |
   | +0 :ok: |  spotbugs  |   3m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  branch-2.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 21s |  hbase-server: The patch generated 2 
new + 148 unchanged - 0 fixed = 150 total (was 148)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  14m 54s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | -1 :x: |  findbugs  |   2m 58s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 144m  2s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 196m 11s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Nullcheck of regionDir at line 268 of value previously dereferenced in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:268 of value previously dereferenced in 
org.apache.hadoop.hbase.master.HbckChore.loadRegionsFromFS()  At 
HbckChore.java:[line 268] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/835 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 85b17fa335a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-835/out/precommit/personality/provided.sh
 |
   | git revision | branch-2.2 / 0b23be9ea2 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/5/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/5/artifact/out/new-findbugs-hbase-server.html
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/5/testReport/
 |
   | Max. process+thread count | 4824 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-835/5/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] 
Simplification on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852#discussion_r348251347
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
 ##
 @@ -582,10 +562,7 @@ synchronized long expireServer(final ServerName 
serverName,
   return Procedure.NO_PROC_ID;
 }
 LOG.info("Processing expiration of " + serverName + " on " + 
this.master.getServerName());
-long pid = function.apply(serverName);
-if (pid <= 0) {
 
 Review comment:
   Do we not need this check any more? When it triggered, it skipped the later listener calls.
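To make the behavioral question concrete, here is a minimal, self-contained sketch (names are hypothetical, not the real ServerManager API): with the early-return guard in place, a failed schedule skips the later listener calls; with the guard removed, the listeners always run.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; names do not match the real ServerManager API.
public class ExpireDemo {
    static final long NO_PROC_ID = -1;
    static final List<String> notified = new ArrayList<>();

    // Simulates expireServer: schedule a crash procedure (pid), then run
    // the "later listener calls". With the guard, a failed schedule
    // (pid <= 0) returns early and the listeners are skipped.
    static long expire(String server, long pid, boolean guard) {
        if (guard && pid <= 0) {
            return NO_PROC_ID; // early return: listeners below never fire
        }
        notified.add(server); // stand-in for the later listener calls
        return pid;
    }
}
```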




[GitHub] [hbase] Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] 
Simplification on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852#discussion_r348251753
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
 ##
 @@ -1502,26 +1503,38 @@ public long submitServerCrash(ServerName serverName, 
boolean shouldSplitWal) {
 // server state to CRASHED, we will no longer accept the 
reportRegionStateTransition call from
 // this server. This is used to simplify the implementation for TRSP and 
SCP, where we can make
 // sure that, the region list fetched by SCP will not be changed any more.
-serverNode.writeLock().lock();
+if (serverNode != null) {
 
 Review comment:
   I think we'd better extract a dedicated method here for handling the unknown-server 
case. It would make the code much cleaner; right now there are lots of serverNode != null 
checks in the code base, which makes the code hard to understand...
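A minimal, self-contained sketch of the suggested refactor (class and method names are hypothetical, not the actual AssignmentManager API): the unknown-server branch moves into a dedicated method so the main path stays free of scattered serverNode != null checks.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; names do not match the real AssignmentManager API.
public class CrashScheduler {
    static final long NO_PROC_ID = -1;
    final Map<String, Object> serverNodes = new HashMap<>();
    private long nextPid = 1;

    // Dedicated handler for the "unknown server" case: there is no
    // server node to lock, so scheduling is simpler and self-documenting.
    long submitUnknownServerCrash(String serverName) {
        return nextPid++;
    }

    long submitServerCrash(String serverName) {
        Object serverNode = serverNodes.get(serverName);
        if (serverNode == null) {
            return submitUnknownServerCrash(serverName);
        }
        // Normal path: lock the server node, mark it CRASHED, schedule SCP.
        return nextPid++;
    }
}
```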




[GitHub] [hbase] Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] 
Simplification on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852#discussion_r348250931
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AESDecryptor.java
 ##
 @@ -83,10 +83,8 @@ public void reset() {
   }
 
   protected void init() {
+Preconditions.checkState(iv != null, "IV is null");
 
 Review comment:
   IllegalState instead of NPE?
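For illustration, a minimal sketch of the exception-type distinction the comment raises, using java.util.Objects.requireNonNull in place of Guava's Preconditions.checkNotNull (assumed to have the same semantics): a state check signals IllegalStateException, while a plain null-reference check signals NullPointerException.

```java
// Sketch of the distinction raised in the review: Preconditions.checkState
// throws IllegalStateException, while a checkNotNull-style guard throws
// NullPointerException. Names here are illustrative, not the HBase code.
public class PreconditionDemo {

    // checkState-style guard: "the object is in the wrong state to proceed"
    static String initWithStateCheck(byte[] iv) {
        try {
            if (iv == null) {
                throw new IllegalStateException("IV is null");
            }
            return "ok";
        } catch (IllegalStateException e) {
            return e.getClass().getSimpleName();
        }
    }

    // checkNotNull-style guard: "a required reference is null"
    static String initWithNullCheck(byte[] iv) {
        try {
            java.util.Objects.requireNonNull(iv, "IV is null");
            return "ok";
        } catch (NullPointerException e) {
            return e.getClass().getSimpleName();
        }
    }
}
```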




[GitHub] [hbase] Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
Apache9 commented on a change in pull request #852: HBASE-23322 [hbck2] 
Simplification on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852#discussion_r348250654
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
 ##
 @@ -231,13 +232,13 @@ public static void fullScanTables(Connection connection, 
final Visitor visitor)
* Callers should call close on the returned {@link Table} instance.
* @param connection connection we're using to access Meta
* @return An {@link Table} for hbase:meta
+   * @throws NullPointerException if {@code connection} is {@code null}
 
 Review comment:
   I think the intention here should be 'do not pass null connection'?




[jira] [Commented] (HBASE-23315) Miscellaneous HBCK Report page cleanup

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977975#comment-16977975
 ] 

Hudson commented on HBASE-23315:


Results for branch branch-2
[build #2359 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Miscellaneous HBCK Report page cleanup
> --
>
> Key: HBASE-23315
> URL: https://issues.apache.org/jira/browse/HBASE-23315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> A bunch of touch-ups on the hbck report page:
>  * Add a bit of javadoc around SerialReplicationChecker.
>  * Minuscule edit to the profiler jsp page, plus a bit of doc on how to 
> make it work that might help.
>  * Add some detail if an NPE occurs getting BitSetNode, to help w/ debugging.
>  * Change HbckChore to log region names instead of encoded names; this helps with 
> diagnostics; one can take a region name and query it in the shell to find out all about 
> the region according to hbase:meta.
>  * Add some fix-it help inline in the HBCK Report page -- how to fix problems.
>  * Add counts to the procedures page so one can see whether progress is being made; move 
> the listing of WALs to the end of the page.





[jira] [Commented] (HBASE-23318) LoadTestTool doesn't start

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977972#comment-16977972
 ] 

Hudson commented on HBASE-23318:


Results for branch branch-2
[build #2359 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> ./bin/hbase ltt after unpacking a binary tarball distribution fails to start, 
> throwing a CNFE. We are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this jar, but if one wants to launch the tool on a server 
> or a general-purpose deploy (i.e. not the client tarball), the tests jar has to be 
> in the server classpath as well. 





[jira] [Commented] (HBASE-23282) HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977971#comment-16977971
 ] 

Hudson commented on HBASE-23282:


Results for branch branch-2
[build #2359 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBCKServerCrashProcedure for 'Unknown Servers'
> --
>
> Key: HBASE-23282
> URL: https://issues.apache.org/jira/browse/HBASE-23282
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, proc-v2
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> With an overdriving, sustained load, I can fairly easily manufacture an 
> hbase:meta table that references servers that are no longer in the live list 
> nor are members of deadservers; i.e. 'Unknown Servers'.  The new 'HBCK 
> Report' UI in Master has a section where it lists 'Unknown Servers' if any in 
> hbase:meta.
> Once in this state, the repair is awkward. Our assign/unassign Procedure is 
> particularly dogged about insisting that we confirm close/open of Regions 
> as it goes about its business, which is well and good when the server is in 
> the live/dead sets, but for an 'Unknown Server' we invariably end up trying 
> to confirm against a no-longer-present server (more on this in follow-on 
> issues).
> What is wanted is the queuing of a ServerCrashProcedure for each 'Unknown 
> Server'. It would split any WALs (there shouldn't be any if the server was 
> restarted) and ideally it would cancel any outstanding assigns and reassign 
> regions off the 'Unknown Server'. But the 'normal' SCP consults the in-memory 
> cluster state to figure out which Regions were on the crashed server, and 
> 'Unknown Servers' have no state in the in-master-memory Maps of Servers to 
> Regions or in the DeadServers list, which works fine for the usual case but 
> not here.
> The suggestion here is that hbck2 be able to drive a special SCP, one which 
> would get its list of Regions by scanning hbase:meta rather than asking 
> Master memory: an HBCKSCP.





[jira] [Commented] (HBASE-23278) Add a table-level compaction progress display on the UI

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977974#comment-16977974
 ] 

Hudson commented on HBASE-23278:


Results for branch branch-2
[build #2359 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


>  Add a table-level compaction progress display on the UI
> 
>
> Key: HBASE-23278
> URL: https://issues.apache.org/jira/browse/HBASE-23278
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 1.6.0, master
>Reporter: Baiqiang Zhao
>Assignee: Baiqiang Zhao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3
>
> Attachments: HBase-23278-v2.png, HBase-23278.png, 
> image-2019-11-11-20-35-56-103.png, image-2019-11-11-20-37-53-367.png, 
> image-2019-11-11-20-44-04-050.png
>
>
> We have regionserver-level compaction progress in the UI. However, since we 
> often compact whole tables, why is there no table-level compaction progress? 
> Use multiple tabs to show compaction progress.
> !HBase-23278-v2.png!





[jira] [Commented] (HBASE-23085) Network and Data related Actions

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977973#comment-16977973
 ] 

Hudson commented on HBASE-23085:


Results for branch branch-2
[build #2359 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2359//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Add additional actions to:
>  * manipulate network packets with tc (reorder, lose, ...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.
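As a sketch of the first bullet, a packet-manipulation action might shell out to tc/netem along these lines (device name and percentages are illustrative, not from this issue; DRY_RUN echoes the commands instead of applying them, since the real thing needs root on a cluster node):

```shell
#!/bin/sh
# Illustrative tc/netem commands a network-manipulation action might run.
# DRY_RUN=echo prints the commands; clear it to actually apply (requires root).
DEV=eth0
DRY_RUN=echo

# Delay packets by 10ms and reorder 25% of them (with 50% correlation)
CMD_REORDER="tc qdisc add dev $DEV root netem delay 10ms reorder 25% 50%"
# Drop ("lose") 5% of packets
CMD_LOSS="tc qdisc change dev $DEV root netem loss 5%"
# Restore normal behavior when the action ends
CMD_RESTORE="tc qdisc del dev $DEV root"

$DRY_RUN $CMD_REORDER
$DRY_RUN $CMD_LOSS
$DRY_RUN $CMD_RESTORE
```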





[jira] [Comment Edited] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2019-11-19 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977953#comment-16977953
 ] 

Andrew Kyle Purtell edited comment on HBASE-18095 at 11/20/19 12:57 AM:


We want this in our operations, so that would be a vote for a branch-1 backport; 
it means we won't have to maintain a local patch. The upgrade story is clean. 
This is an additive change. Masters and zookeeper will both be able to provide 
service for a client configured to use either, until such time as a site is 
completely migrated. Then, as an operator, you'd probably want to restrict 
cluster clients from accessing zookeeper service ports, but this would be up to 
the site operator. These separate means of discovery can coexist for as long as 
they need to.


was (Author: apurtell):
We want this in our operations so that would be a vote for a branch-1 backport. 
Means we won't have to maintain a local patch. The upgrade story is clean. This 
is an additive change. Masters and zookeeper will both be able to provide 
service for a client configured to use either, until such time a site is 
completely migrated. 

> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. The ZK service need not be exposed to clients (and 
> their potential abuse), yet none of the benefit ZK provides the HBase server 
> cluster is compromised. Normalizing HBase client and ZK client timeout 
> settings and retry behavior - in some cases impossible, e.g. for fail-fast - 
> is no longer necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 
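For illustration only, the client-side toggle described above might look something like this in hbase-site.xml. The property names here are hypothetical sketches, not the keys defined by the attached patches; they only show the shape of the proposal (a default-off switch plus a well-known master list replacing the zookeeper quorum as the bootstrap set):

```xml
<!-- Sketch of the proposed master-based meta discovery, default off.
     Property names are illustrative, not the ones in the patch. -->
<property>
  <name>hbase.client.master.registry.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Well-known active and backup master locations, replacing the
       zookeeper quorum peer list as the client's bootstrap set. -->
  <name>hbase.masters</name>
  <value>master1.example.com:16000,master2.example.com:16000</value>
</property>
```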





[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2019-11-19 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977956#comment-16977956
 ] 

Andrew Kyle Purtell commented on HBASE-18095:
-

bq. why limit many operations to just the active master or even just the 
collection of masters

I missed this question, sorry.

A number of similar distributed systems have a notion of a bootstrap set: a 
list of well-known addresses from which a client can discover all the other 
service roles and locations necessary for operation. In today's HBase 
operations the zookeeper quorum peer list serves this role. When proposing 
this, I thought zk quorum list -> hbase master list would be a nice, simple 
lateral change: easy to understand and familiar. 

> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. The ZK service need not be exposed to clients (and 
> their potential abuse), yet none of the benefit ZK provides the HBase server 
> cluster is compromised. Normalizing HBase client and ZK client timeout 
> settings and retry behavior - in some cases impossible, e.g. for fail-fast - 
> is no longer necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 





[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2019-11-19 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977953#comment-16977953
 ] 

Andrew Kyle Purtell commented on HBASE-18095:
-

We want this in our operations, so that would be a vote for a branch-1 backport; 
it means we won't have to maintain a local patch. The upgrade story is clean. 
This is an additive change. Masters and zookeeper will both be able to provide 
service for a client configured to use either, until such time as a site is 
completely migrated. 

> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. The ZK service need not be exposed to clients (and 
> their potential abuse), yet none of the benefit ZK provides the HBase server 
> cluster is compromised. Normalizing HBase client and ZK client timeout 
> settings and retry behavior - in some cases impossible, e.g. for fail-fast - 
> is no longer necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 





[GitHub] [hbase] saintstack opened a new pull request #854: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-19 Thread GitBox
saintstack opened a new pull request #854: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/854
 
 
   … invocation


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack opened a new pull request #853: Hbase 23322

2019-11-19 Thread GitBox
saintstack opened a new pull request #853: Hbase 23322
URL: https://github.com/apache/hbase/pull/853
 
 
   




[GitHub] [hbase] saintstack closed pull request #853: Hbase 23322

2019-11-19 Thread GitBox
saintstack closed pull request #853: Hbase 23322
URL: https://github.com/apache/hbase/pull/853
 
 
   




[GitHub] [hbase] saintstack closed pull request #835: HBASE-23307 Add running of ReplicationBarrierCleaner to hbck2 fixMeta…

2019-11-19 Thread GitBox
saintstack closed pull request #835: HBASE-23307 Add running of 
ReplicationBarrierCleaner to hbck2 fixMeta…
URL: https://github.com/apache/hbase/pull/835
 
 
   




[GitHub] [hbase] Apache-HBase commented on issue #851: Hbase 23321

2019-11-19 Thread GitBox
Apache-HBase commented on issue #851: Hbase 23321
URL: https://github.com/apache/hbase/pull/851#issuecomment-555779882
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 51s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 28s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 19s |  hbase-server: The patch generated 3 
new + 0 unchanged - 0 fixed = 3 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 41s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 193m 15s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 251m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.master.replication.TestTransitPeerSyncReplicationStateProcedureRetry
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/851 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 9b9e9750262a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-851/out/precommit/personality/provided.sh
 |
   | git revision | master / ca6e67a6de |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/1/testReport/
 |
   | Max. process+thread count | 4984 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-851/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23315) Miscellaneous HBCK Report page cleanup

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977945#comment-16977945
 ] 

Hudson commented on HBASE-23315:


Results for branch branch-2.2
[build #698 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Miscellaneous HBCK Report page cleanup
> --
>
> Key: HBASE-23315
> URL: https://issues.apache.org/jira/browse/HBASE-23315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> A bunch of touch up on the hbck report page:
>  * Add a bit of javadoc around SerialReplicationChecker.
>  * Miniscule edit to the profiler jsp page and then a bit of doc on how to 
> make it work that might help.
>  * Add some detail if NPE getting BitSetNode to help w/ debug.
>  * Change HbckChore to log region names instead of encoded names; helps doing 
> diagnostics; can take region name and query in shell to find out all about 
> the region according to hbase:meta.
>  * Add some fix-it help inline in the HBCK Report page -- how to fix.
>  * Add counts in procedures page so can see if making progress; move listing 
> of WALs to end of the page.





[jira] [Commented] (HBASE-23085) Network and Data related Actions

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977943#comment-16977943
 ] 

Hudson commented on HBASE-23085:


Results for branch branch-2.2
[build #698 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Add additional actions to:
>  * manipulate network packets with tc (reorder, lose, ...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.





[jira] [Commented] (HBASE-23318) LoadTestTool doesn't start

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977942#comment-16977942
 ] 

Hudson commented on HBASE-23318:


Results for branch branch-2.2
[build #698 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> LoadTestTool doesn't start
> --
>
> Key: HBASE-23318
> URL: https://issues.apache.org/jira/browse/HBASE-23318
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> Running ./bin/hbase ltt after unpacking a binary tarball distribution fails at 
> startup with a CNFE: we are missing the tests jar from hbase-zookeeper. 
> The client tarball includes this jar, but if one wants to launch the tool on a 
> server or a general-purpose deploy (i.e. not the client tarball), the tests jar 
> has to be on the server classpath as well. 





[jira] [Commented] (HBASE-23282) HBCKServerCrashProcedure for 'Unknown Servers'

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977941#comment-16977941
 ] 

Hudson commented on HBASE-23282:


Results for branch branch-2.2
[build #698 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HBCKServerCrashProcedure for 'Unknown Servers'
> --
>
> Key: HBASE-23282
> URL: https://issues.apache.org/jira/browse/HBASE-23282
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, proc-v2
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> With an overdriving, sustained load, I can fairly easily manufacture an 
> hbase:meta table that references servers that are no longer in the live list 
> nor are members of deadservers; i.e. 'Unknown Servers'.  The new 'HBCK 
> Report' UI in Master has a section where it lists 'Unknown Servers' if any in 
> hbase:meta.
> Once in this state, the repair is awkward. Our assign/unassign Procedure is 
> particularly dogged about insisting that we confirm close/open of Regions 
> when it is going about its business, which is well and good if the server is 
> in the live/dead sets; but with an 'Unknown Server', we invariably end up 
> trying to confirm against a no-longer-present server (more on this in 
> follow-on issues).
> What is wanted is queuing of a ServerCrashProcedure for each 'Unknown 
> Server'. It would split any WALs (there shouldn't be any if server was 
> restarted) and ideally it would cancel out any assigns and reassign regions 
> off the 'Unknown Server'. But the 'normal' SCP consults the in-memory 
> cluster state to figure what Regions were on the crashed server... And 
> 'Unknown Servers' have no state in the in-master-memory Maps of Servers to 
> Regions or in the DeadServers list, which works fine for the usual case.
> Suggestion here is that hbck2 be able to drive in a special SCP, one which 
> would get list of Regions by scanning hbase:meta rather than asking Master 
> memory; an HBCKSCP.
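The core idea, deriving the crashed server's region list from hbase:meta itself rather than from Master memory, can be shown with a toy sketch. Plain Java collections stand in for a real meta scan; `regionsOnServer`, the class name, and the row contents are illustrative only, not HBase API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HbckScpSketch {
    // Toy stand-in for hbase:meta rows: region name -> hosting server name.
    // In a real HBCKSCP the assignments would come from a scan of the
    // hbase:meta table, not from the Master's in-memory maps.
    public static List<String> regionsOnServer(Map<String, String> metaRows, String unknownServer) {
        List<String> regions = new ArrayList<>();
        for (Map.Entry<String, String> e : metaRows.entrySet()) {
            if (unknownServer.equals(e.getValue())) {
                regions.add(e.getKey());
            }
        }
        return regions;
    }

    public static void main(String[] args) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("t1,,aaa", "rs1.example.com,16020,1");
        meta.put("t1,,bbb", "ghost.example.com,16020,9"); // an 'Unknown Server'
        meta.put("t1,,ccc", "rs2.example.com,16020,2");
        // Regions an HBCKSCP would reassign off the unknown server:
        System.out.println(regionsOnServer(meta, "ghost.example.com,16020,9")); // [t1,,bbb]
    }
}
```

The point of the sketch is only the data source: selecting by hosting server from meta rows works even when the server is absent from both the live and dead server sets.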





[jira] [Commented] (HBASE-23278) Add a table-level compaction progress display on the UI

2019-11-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977944#comment-16977944
 ] 

Hudson commented on HBASE-23278:


Results for branch branch-2.2
[build #698 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/698//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


>  Add a table-level compaction progress display on the UI
> 
>
> Key: HBASE-23278
> URL: https://issues.apache.org/jira/browse/HBASE-23278
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 1.6.0, master
>Reporter: Baiqiang Zhao
>Assignee: Baiqiang Zhao
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3
>
> Attachments: HBase-23278-v2.png, HBase-23278.png, 
> image-2019-11-11-20-35-56-103.png, image-2019-11-11-20-37-53-367.png, 
> image-2019-11-11-20-44-04-050.png
>
>
> We have regionserver-level compaction progress in the UI. However, we often 
> compact a whole table, so there should also be a table-level compaction 
> progress display. Use multiple tabs to show the compaction progress.
> !HBase-23278-v2.png!





[jira] [Commented] (HBASE-23322) [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977935#comment-16977935
 ] 

HBase QA commented on HBASE-23322:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-23322 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-23322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986279/0001-HBASE-23322-hbck2-Simplification-on-HBCKSCP-scheduli.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1033/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |


This message was automatically generated.



> [hbck2] Simplification on HBCKSCP scheduling
> 
>
> Key: HBASE-23322
> URL: https://issues.apache.org/jira/browse/HBASE-23322
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Attachments: 
> 0001-HBASE-23322-hbck2-Simplification-on-HBCKSCP-scheduli.patch
>
>
> I can make the scheduling of HBCKSCP simpler. I can also fix a bug in the 
> parent issue that I noticed after exercising it a bunch on a cluster.
> The bug is that 'Unknown Servers' seem to be retained in the Map of reporting 
> servers. They are usually cleared just before an SCP is scheduled, but 
> scheduling an HBCKSCP doesn't go the usual route.
> The patch here forces HBCKSCP through the usual SCP scheduling route; at 
> scheduling time, context dictates whether a plain SCP or the meta-scouring 
> HBCKSCP runs.
> Let me put up a patch and test in the meantime.





[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555769956
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 24s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 28s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m 12s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 47s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 41s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  53m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 296c7b368965 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh
 |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/6/testReport/
 |
   | Max. process+thread count | 1738 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/6/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Updated] (HBASE-23322) [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23322:
--
Status: Patch Available  (was: Open)

> [hbck2] Simplification on HBCKSCP scheduling
> 
>
> Key: HBASE-23322
> URL: https://issues.apache.org/jira/browse/HBASE-23322
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Attachments: 
> 0001-HBASE-23322-hbck2-Simplification-on-HBCKSCP-scheduli.patch
>
>
> I can make the scheduling of HBCKSCP simpler. I can also fix a bug in the 
> parent issue that I noticed after exercising it a bunch on a cluster.
> The bug is that 'Unknown Servers' seem to be retained in the Map of reporting 
> servers. They are usually cleared just before an SCP is scheduled, but 
> scheduling an HBCKSCP doesn't go the usual route.
> The patch here forces HBCKSCP through the usual SCP scheduling route; at 
> scheduling time, context dictates whether a plain SCP or the meta-scouring 
> HBCKSCP runs.
> Let me put up a patch and test in the meantime.





[jira] [Updated] (HBASE-23322) [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23322:
--
Attachment: 0001-HBASE-23322-hbck2-Simplification-on-HBCKSCP-scheduli.patch

> [hbck2] Simplification on HBCKSCP scheduling
> 
>
> Key: HBASE-23322
> URL: https://issues.apache.org/jira/browse/HBASE-23322
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Attachments: 
> 0001-HBASE-23322-hbck2-Simplification-on-HBCKSCP-scheduli.patch
>
>
> I can make the scheduling of HBCKSCP simpler. I can also fix a bug in the 
> parent issue that I noticed after exercising it a bunch on a cluster.
> The bug is that 'Unknown Servers' seem to be retained in the Map of reporting 
> servers. They are usually cleared just before an SCP is scheduled, but 
> scheduling an HBCKSCP doesn't go the usual route.
> The patch here forces HBCKSCP through the usual SCP scheduling route; at 
> scheduling time, context dictates whether a plain SCP or the meta-scouring 
> HBCKSCP runs.
> Let me put up a patch and test in the meantime.





[GitHub] [hbase] bharathv commented on a change in pull request #807: HBASE-23259: Ability to start minicluster with pre-determined master ports

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #807: HBASE-23259: Ability to 
start minicluster with pre-determined master ports
URL: https://github.com/apache/hbase/pull/807#discussion_r348228916
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -171,6 +171,11 @@
   /** Configuration key for master web API port */
   public static final String MASTER_INFO_PORT = "hbase.master.info.port";
 
+  /** Configuration key for the list of master host:ports **/
+  public static final String MASTER_ADDRS_KEY = "hbase.master.addrs";
 
 Review comment:
   @ndimiduk The parsing logic will come as a part of PR for HBASE-23305. There 
is nothing in this patch that consumes the content of "hbase.master.addrs". 




[GitHub] [hbase] saintstack closed pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
saintstack closed pull request #852: HBASE-23322 [hbck2] Simplification on 
HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852
 
 
   




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348089363
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+    INIT,
+    CREATED,
+    CHANGED,
+    DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+    super(zkWatcher);
+    cachedMetaLocations = new CopyOnWriteArrayMap<>();
+    watcher.registerListener(this);
+    // Populate the initial snapshot of data from meta znodes.
+    // This is needed because stand-by masters can potentially start after the
+    // initial znode creation.
+    populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+    RetryCounter retryCounter = retryCounterFactory.create();
+    List<String> znodes = null;
+    do {
+      try {
+        znodes = watcher.getMetaReplicaNodes();
+        break;
+      } catch (KeeperException ke) {
+        LOG.debug("Error populating initial meta locations", ke);
+        try {
+          retryCounter.sleepUntilNextRetry();
+        } catch (InterruptedException ie) {
+          LOG.error("Interrupted while populating initial meta locations", ie);
+          return;
+        }
+        if (!retryCounter.shouldRetry()) {
+          LOG.error("Error populating initial meta locations. Retries exhausted. Last error: ", ke);
+          break;
+        }
+      }
+    } while (retryCounter.shouldRetry());
+    if (znodes == null) {
+      return;
+    }
+    for (String znode: znodes) {
+      String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, znode);
+      updateMetaLocation(path, ZNodeOpType.INIT);
+    }
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348082109
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int replicaId)
+      throws DeserializationException {
+    RegionState.State state = RegionState.State.OPEN;
+    ServerName serverName;
+    if (data != null && data.length > 0 && ProtobufUtil.isPBMagicPrefix(data)) {
+      try {
+        int prefixLen = ProtobufUtil.lengthOfPBMagic();
+        ZooKeeperProtos.MetaRegionServer rl =
+            ZooKeeperProtos.MetaRegionServer.parser().parseFrom(data, prefixLen,
+                data.length - prefixLen);
+        if (rl.hasState()) {
+          state = RegionState.State.convert(rl.getState());
+        }
+        HBaseProtos.ServerName sn = rl.getServer();
+        serverName = ServerName.valueOf(
+            sn.getHostName(), sn.getPort(), sn.getStartCode());
+      } catch (InvalidProtocolBufferException e) {
+        throw new DeserializationException("Unable to parse meta region location");
+      }
+    } else {
+      // old style of meta region location?
+      serverName = parseServerNameFrom(data);
+    }
+    if (serverName == null) {
+      state = RegionState.State.OFFLINE;
 
 Review comment:
   I think we still need some sorta null check, especially for old style meta 
regions. Something like,
   
   ```
   state = OFFLINE;
   if (new style) {
     servername, state = parse_from_protobuf();
   } else {
     servername = parse_from_protobuf();
     if (servername != null) {
       state = OPEN;
     }
   }
   ```
   
   Am I missing something?
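For illustration only, here is a self-contained toy version of that control flow: default to OFFLINE, and only promote the state when a server name was actually recovered. The class, `stateFor`, the boolean flag, and the `pbState` argument are all made up; real code would parse the znode's protobuf bytes instead:

```java
public class MetaStateParse {
    public enum State { OPEN, OFFLINE }

    // Toy model of the suggested null check: default to OFFLINE and only
    // mark OPEN (or trust the protobuf-carried state) when a server name
    // was actually recovered. 'newStyle' and 'pbState' stand in for real
    // protobuf parsing of the meta-location znode bytes.
    public static State stateFor(String serverName, boolean newStyle, State pbState) {
        State state = State.OFFLINE;
        if (newStyle) {
            if (serverName != null) {
                state = pbState; // new-style znodes carry an explicit state
            }
        } else if (serverName != null) {
            state = State.OPEN; // old-style znodes have no state field
        }
        return state;
    }

    public static void main(String[] args) {
        System.out.println(stateFor(null, false, null));               // OFFLINE
        System.out.println(stateFor("rs1,16020,1", false, null));      // OPEN
        System.out.println(stateFor("rs1,16020,1", true, State.OPEN)); // OPEN
    }
}
```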




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348089694
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, "none");
+    TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+    TEST_UTIL.startMiniCluster(3);
+    REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+    RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+        TEST_UTIL.getConfiguration(), REGISTRY, 3);
+    TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+    IOUtils.closeQuietly(REGISTRY);
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List<HRegionLocation> getCurrentMetaLocations(ZKWatcher zk) throws Exception {
+    List<HRegionLocation> result = new ArrayList<>();
+    for (String znode: zk.getMetaReplicaNodes()) {
+      String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+      int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+      RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+      result.add(new HRegionLocation(state.getRegion(), state.getServerName()));
+    }
+    return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync
+  // with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+    List<HRegionLocation> metaHRLs =
+        master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+    assertTrue(metaHRLs != null);
 
 Review comment:
   Done.




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348078125
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+    INIT,
+    CREATED,
+    CHANGED,
+    DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+    super(zkWatcher);
+    cachedMetaLocations = new CopyOnWriteArrayMap<>();
+    watcher.registerListener(this);
+    // Populate the initial snapshot of data from meta znodes.
+    // This is needed because stand-by masters can potentially start after the
+    // initial znode creation.
+    populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+    RetryCounter retryCounter = retryCounterFactory.create();
+    List<String> znodes = null;
+    do {
+      try {
+        znodes = watcher.getMetaReplicaNodes();
+        break;
+      } catch (KeeperException ke) {
+        LOG.debug("Error populating initial meta locations", ke);
+        try {
+          retryCounter.sleepUntilNextRetry();
+        } catch (InterruptedException ie) {
+          LOG.error("Interrupted while populating initial meta locations", ie);
+          return;
 
 Review comment:
   You are right, done.




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348104994
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
 
 Review comment:
   So there are two cases.
   
   1. If standby master(s) start *before* meta znodes creation, they get 
CREATED notifications due to the registered listener and they update their 
cache automatically.
   2. If standby master(s) start *after* meta znode creation, there is a problem: they only get a notification on the next change to the meta znodes, which may theoretically never happen. In that window the cache would be stale, and populateInitialMetaLocations() fixes it.
   
   What do you think?
   
   Also, I refactored the code here a bit; let me know if it makes things clearer.
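   The two start-up cases above can be sketched with a minimal in-memory model (hypothetical illustration, not HBase code): a cache that relies only on change notifications misses any znode written *before* it registered its listener, so it must also take an initial snapshot after registering.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotAfterRegister {
  // Stand-in for the znode contents held by the ZK ensemble.
  static final List<String> znodeStore = new ArrayList<>();
  // Registered listeners; they see only writes made after registration.
  static final List<List<String>> listeners = new CopyOnWriteArrayList<>();

  static void writeZnode(String value) {
    znodeStore.add(value);
    for (List<String> cache : listeners) {
      cache.add(value); // CREATED/CHANGED notification
    }
  }

  // Case 2 fix: register the listener first, then copy the existing state,
  // so znodes created before start-up are not missed.
  static List<String> startStandbyMaster() {
    List<String> cache = new CopyOnWriteArrayList<>();
    listeners.add(cache);     // future updates arrive via notifications
    cache.addAll(znodeStore); // initial snapshot covers past writes
    return cache;
  }

  static List<String> runScenario() {
    znodeStore.clear();
    listeners.clear();
    writeZnode("meta-replica-0"); // created before the standby starts (case 2)
    List<String> cache = startStandbyMaster();
    writeZnode("meta-replica-1"); // created after the standby starts (case 1)
    return cache;
  }

  public static void main(String[] args) {
    System.out.println(runScenario()); // prints [meta-replica-0, meta-replica-1]
  }
}
```

   In the real cache the update is keyed by replica ID, so a write that races with the initial snapshot is simply overwritten rather than duplicated.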




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348163548
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
+TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+TEST_UTIL.startMiniCluster(3);
+REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+TEST_UTIL.getConfiguration(), REGISTRY, 3);
+TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+IOUtils.closeQuietly(REGISTRY);
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List<HRegionLocation> getCurrentMetaLocations(ZKWatcher zk) throws Exception {
+List<HRegionLocation> result = new ArrayList<>();
+for (String znode: zk.getMetaReplicaNodes()) {
+  String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+  int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+  RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+  result.add(new HRegionLocation(state.getRegion(), 
state.getServerName()));
+}
+return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync 
with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+List<HRegionLocation> metaHRLs =
+master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+assertTrue(metaHRLs != null);
+assertFalse(metaHRLs.isEmpty());
+ZKWatcher zk = master.getZooKeeper();
+List<String> metaZnodes = zk.getMetaReplicaNodes();
+assertEquals(metaZnodes.size(), metaHRLs.size());
+List<HRegionLocation> actualHRLs = getCurrentMetaLocations(zk);
+Collections.sort(metaHRLs);
+Collections.sort(actualHRLs);
+assertEquals(actualHRLs, metaHRLs);
+  }
+
+  @Test public void testInitialMetaLocations() throws Exception {
+verifyCachedMetaLocations(TEST_UTIL.getMiniHBaseCluster().getMaster());
+  }
+
+  @Test public void testStandByMetaLocations() throws Exception {
+HMaster standBy = 
TEST_UTIL.getMiniHBaseCluster().startMaster().getMaster();
+verifyCachedMetaLocations(standBy);
+  }
+
+  /*
+   * Shuffles the meta region replicas around the cluster and makes sure the 
cache is 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348081194
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating initial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating initial meta locations. Retries exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348214839
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int 
replicaId)
 
 Review comment:
   It is not fully clear to me (based on reading the code comments) whether 
this should exist in a non-shaded version. Can you please give me more context? 
Do you think this should exist in the non-shaded version for some specific 
reason?




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348092214
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
+TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
+TEST_UTIL.startMiniCluster(3);
+REGISTRY = AsyncRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
+RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+TEST_UTIL.getConfiguration(), REGISTRY, 3);
+TEST_UTIL.getAdmin().balancerSwitch(false, true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+IOUtils.closeQuietly(REGISTRY);
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private List<HRegionLocation> getCurrentMetaLocations(ZKWatcher zk) throws Exception {
+List<HRegionLocation> result = new ArrayList<>();
+for (String znode: zk.getMetaReplicaNodes()) {
+  String path = ZNodePaths.joinZNode(zk.getZNodePaths().baseZNode, znode);
+  int replicaId = zk.getZNodePaths().getMetaReplicaIdFromPath(path);
+  RegionState state = MetaTableLocator.getMetaRegionState(zk, replicaId);
+  result.add(new HRegionLocation(state.getRegion(), 
state.getServerName()));
+}
+return result;
+  }
+
+  // Verifies that the cached meta locations in the given master are in sync 
with what is in ZK.
+  private void verifyCachedMetaLocations(HMaster master) throws Exception {
+List<HRegionLocation> metaHRLs =
+master.getMetaRegionLocationCache().getMetaRegionLocations().get();
+assertTrue(metaHRLs != null);
+assertFalse(metaHRLs.isEmpty());
+ZKWatcher zk = master.getZooKeeper();
+List<String> metaZnodes = zk.getMetaReplicaNodes();
+assertEquals(metaZnodes.size(), metaHRLs.size());
+List<HRegionLocation> actualHRLs = getCurrentMetaLocations(zk);
+Collections.sort(metaHRLs);
+Collections.sort(actualHRLs);
+assertEquals(actualHRLs, metaHRLs);
+  }
+
+  @Test public void testInitialMetaLocations() throws Exception {
+verifyCachedMetaLocations(TEST_UTIL.getMiniHBaseCluster().getMaster());
+  }
+
+  @Test public void testStandByMetaLocations() throws Exception {
+HMaster standBy = 
TEST_UTIL.getMiniHBaseCluster().startMaster().getMaster();
+verifyCachedMetaLocations(standBy);
+  }
+
+  /*
+   * Shuffles the meta region replicas around the cluster and makes sure the 
cache is 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348216525
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
 ##
 @@ -3051,6 +3053,44 @@ public static ProcedureDescription 
buildProcedureDescription(String signature, S
 return builder.build();
   }
 
+  /**
+   * Get the Meta region state from the passed data bytes. Can handle both old 
and new style
+   * server names.
+   * @param data protobuf serialized data with meta server name.
+   * @param replicaId replica ID for this region
+   * @return RegionState instance corresponding to the serialized data.
+   * @throws DeserializationException if the data is invalid.
+   */
+  public static RegionState parseMetaRegionStateFrom(final byte[] data, int 
replicaId)
 
 Review comment:
   Done.




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348072187
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
 
 Review comment:
   Done.




[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348135650
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating initial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating initial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating initial meta locations. Retries exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
 
 Review comment:
   That's a good point. I think we should throw the exception back to the HMaster init and abort. Updated the code.
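   The behavior agreed above can be sketched as a small bounded-retry helper (a hypothetical illustration, not the actual patch): retry a transient operation a fixed number of times with a sleep between attempts, and rethrow the last error on exhaustion so the caller, e.g. HMaster init, can abort rather than silently start with an empty cache.

```java
import java.util.concurrent.Callable;

public class BoundedRetry {
  // Runs op up to maxRetries + 1 times, sleeping between attempts.
  // Rethrows the last failure once retries are exhausted.
  static <T> T callWithRetries(Callable<T> op, int maxRetries, long sleepMs)
      throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return op.call();
      } catch (Exception e) {
        last = e;                  // remember the most recent failure
        if (attempt < maxRetries) {
          Thread.sleep(sleepMs);   // back off before the next attempt
        }
      }
    }
    throw last;                    // retries exhausted: propagate to caller
  }

  public static void main(String[] args) throws Exception {
    // Succeeds on the third attempt; a real caller would pass a ZK read here.
    final int[] calls = {0};
    String v = callWithRetries(() -> {
      if (++calls[0] < 3) {
        throw new IllegalStateException("transient ZK error");
      }
      return "meta-znodes";
    }, 5, 1L);
    System.out.println(v + " after " + calls[0] + " attempts");
  }
}
```

   A failing `Callable` that never recovers makes `callWithRetries` throw, which is exactly the signal the constructor needs in order to abort startup.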



[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348092461
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.RegionState;
+import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({SmallTests.class, MasterTests.class })
+public class TestMetaRegionLocationCache {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestMetaRegionLocationCache.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+  private static AsyncRegistry REGISTRY;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+TEST_UTIL.getConfiguration().set(BaseLoadBalancer.TABLES_ON_MASTER, 
"none");
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348078804
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation>
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
 
 Review comment:
  I feel WARN is better since it shows up in the logs with the default configs, 
and it is still not an error that users could get confused by. Let me know if 
you disagree.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347685040
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation>
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348083449
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation>
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
 
 Review comment:
   Ya, HMaster c'tor.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
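
[Editor's note] The populateInitialMetaLocations loop quoted in the messages above is a standard bounded-retry-with-backoff pattern: try the ZK fetch, sleep between failures, and give up once retries are exhausted. A minimal self-contained sketch of that pattern, using hypothetical names rather than HBase's RetryCounter/RetryCounterFactory:

```java
import java.util.concurrent.Callable;

// Bounded retry-with-sleep sketch (hypothetical helper, not HBase code).
class RetrySketch {
    // Runs op until it succeeds or maxRetries attempts have failed.
    // Returns null when retries are exhausted, mirroring how the cache
    // logs and bails out instead of throwing.
    static <T> T retry(Callable<T> op, int maxRetries, long sleepMs)
            throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();          // success: break out with the result
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    return null;           // retries exhausted: give up
                }
                Thread.sleep(sleepMs);     // back off before the next attempt
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Simulate a ZK fetch that fails twice with a transient timeout,
        // then succeeds on the third attempt.
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("transient ZK timeout");
            }
            return "znodes-fetched";
        }, 10, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The quoted code additionally re-checks `retryCounter.shouldRetry()` after sleeping so that the last failure is logged before breaking out of the loop.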


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348076537
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
 
 Review comment:
   Ya. I meant the same. I thought the usage is pretty common (at least I saw 
it in the other Apache projects).
   
   https://en.wikipedia.org/wiki/Thread_safety#Levels_of_thread_safety
   
   "Thread safe: Implementation is guaranteed to be free of race conditions 
when accessed by multiple threads simultaneously."
   
   I clarified it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
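
[Editor's note] The thread-safety discussed in the message above comes from copy-on-write publication: each mutation copies the backing map and swaps it in atomically, so readers always see an immutable, consistent snapshot without taking a lock. A minimal sketch of that idea with hypothetical names (not HBase's CopyOnWriteArrayMap, which also keeps entries sorted):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Copy-on-write map sketch: writers clone-and-swap, readers are lock-free.
class CowMapSketch {
    private static final AtomicReference<Map<Integer, String>> LOCATIONS =
        new AtomicReference<>(new TreeMap<>());

    // Publish one entry by copying the current snapshot, mutating the copy,
    // and installing it with a compare-and-set; retry if a concurrent writer won.
    static void put(int replicaId, String location) {
        Map<Integer, String> current;
        Map<Integer, String> next;
        do {
            current = LOCATIONS.get();
            next = new TreeMap<>(current);
            next.put(replicaId, location);
        } while (!LOCATIONS.compareAndSet(current, next));
    }

    // Readers dereference the current snapshot; no locking needed.
    static String get(int replicaId) {
        return LOCATIONS.get().get(replicaId);
    }

    public static void main(String[] args) {
        put(0, "rs1:16020");
        put(1, "rs2:16020");
        System.out.println(get(0));
        System.out.println(get(1));
    }
}
```

As the quoted comment notes, copying on every write is acceptable here because the map (one entry per meta replica) is tiny and mutations are rare relative to reads.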


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348081119
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation>
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348079117
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java
 ##
 @@ -274,42 +269,17 @@ public static RegionState getMetaRegionState(ZKWatcher 
zkw) throws KeeperExcepti
* @throws KeeperException if a ZooKeeper operation fails
*/
   public static RegionState getMetaRegionState(ZKWatcher zkw, int replicaId)
-  throws KeeperException {
-RegionState.State state = RegionState.State.OPEN;
-ServerName serverName = null;
+  throws KeeperException {
+RegionState regionState = null;
 try {
   byte[] data = ZKUtil.getData(zkw, 
zkw.getZNodePaths().getZNodeForReplica(replicaId));
-  if (data != null && data.length > 0 && 
ProtobufUtil.isPBMagicPrefix(data)) {
-try {
-  int prefixLen = ProtobufUtil.lengthOfPBMagic();
-  ZooKeeperProtos.MetaRegionServer rl =
-ZooKeeperProtos.MetaRegionServer.parser().parseFrom(data, 
prefixLen,
-data.length - prefixLen);
-  if (rl.hasState()) {
-state = RegionState.State.convert(rl.getState());
-  }
-  HBaseProtos.ServerName sn = rl.getServer();
-  serverName = ServerName.valueOf(
-sn.getHostName(), sn.getPort(), sn.getStartCode());
-} catch (InvalidProtocolBufferException e) {
-  throw new DeserializationException("Unable to parse meta region 
location");
-}
-  } else {
-// old style of meta region location?
-serverName = ProtobufUtil.parseServerNameFrom(data);
-  }
+  regionState = ProtobufUtil.parseMetaRegionStateFrom(data, replicaId);
 
 Review comment:
   Yes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r348087663
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+  new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, 
SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a 
consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data 
structure for every write,
+  // that should be OK since the size of the list is often small and mutations 
are not too often
+  // and we do not need to block client requests while mutations are in 
progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation>
cachedMetaLocations;
+
+  private enum ZNodeOpType {
+INIT,
+CREATED,
+CHANGED,
+DELETED
+  };
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+super(zkWatcher);
+cachedMetaLocations = new CopyOnWriteArrayMap<>();
+watcher.registerListener(this);
+// Populate the initial snapshot of data from meta znodes.
+// This is needed because stand-by masters can potentially start after the 
initial znode
+// creation.
+populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+RetryCounter retryCounter = retryCounterFactory.create();
+List<String> znodes = null;
+do {
+  try {
+znodes = watcher.getMetaReplicaNodes();
+break;
+  } catch (KeeperException ke) {
+LOG.debug("Error populating intial meta locations", ke);
+try {
+  retryCounter.sleepUntilNextRetry();
+} catch (InterruptedException ie) {
+  LOG.error("Interrupted while populating intial meta locations", ie);
+  return;
+}
+if (!retryCounter.shouldRetry()) {
+  LOG.error("Error populating intial meta locations. Retries 
exhausted. Last error: ", ke);
+  break;
+}
+  }
+} while (retryCounter.shouldRetry());
+if (znodes == null) {
+  return;
+}
+for (String znode: znodes) {
+  String path = ZNodePaths.joinZNode(watcher.getZNodePaths().baseZNode, 
znode);
+  updateMetaLocation(path, ZNodeOpType.INIT);
+}
+  }
+
+  /**
+   * Gets the HRegionLocation for a given meta replica ID. Renews the watch on 
the znode for
+   * future updates.
+   * @param replicaId 

[GitHub] [hbase] bharathv commented on a change in pull request #830: HBASE-23281: Track meta region locations in masters

2019-11-19 Thread GitBox
bharathv commented on a change in pull request #830: HBASE-23281: Track meta 
region locations in masters
URL: https://github.com/apache/hbase/pull/830#discussion_r347684959
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetaRegionLocationCache.java
 ##
 @@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentNavigableMap;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.types.CopyOnWriteArrayMap;
+import org.apache.hadoop.hbase.util.RetryCounter;
+import org.apache.hadoop.hbase.util.RetryCounterFactory;
+import org.apache.hadoop.hbase.zookeeper.ZKListener;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZNodePaths;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+
+/**
+ * A cache of meta region location metadata. Registers a listener on ZK to 
track changes to the
+ * meta table znodes. Clients are expected to retry if the meta information is 
stale. This class
+ * is thread-safe.
+ */
+@InterfaceAudience.Private
+public class MetaRegionLocationCache extends ZKListener {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MetaRegionLocationCache.class);
+
+  // Maximum number of times we retry when ZK operation times out.
+  private static final int MAX_ZK_META_FETCH_RETRIES = 10;
+  // Sleep interval ms between ZK operation retries.
+  private static final int SLEEP_INTERVAL_MS_BETWEEN_RETRIES = 1000;
+  private final RetryCounterFactory retryCounterFactory =
+      new RetryCounterFactory(MAX_ZK_META_FETCH_RETRIES, SLEEP_INTERVAL_MS_BETWEEN_RETRIES);
+
+  // Cached meta region locations indexed by replica ID.
+  // CopyOnWriteArrayMap ensures synchronization during updates and a consistent snapshot during
+  // client requests. Even though CopyOnWriteArrayMap copies the data structure for every write,
+  // that should be OK since the size of the list is often small, mutations are infrequent, and
+  // we do not need to block client requests while mutations are in progress.
+  private final CopyOnWriteArrayMap<Integer, HRegionLocation> cachedMetaLocations;
+
+  private enum ZNodeOpType {
+    INIT,
+    CREATED,
+    CHANGED,
+    DELETED
+  }
+
+  MetaRegionLocationCache(ZKWatcher zkWatcher) {
+    super(zkWatcher);
+    cachedMetaLocations = new CopyOnWriteArrayMap<>();
+    watcher.registerListener(this);
+    // Populate the initial snapshot of data from meta znodes. This is needed because
+    // stand-by masters can potentially start after the initial znode creation.
+    populateInitialMetaLocations();
+  }
+
+  private void populateInitialMetaLocations() {
+    RetryCounter retryCounter = retryCounterFactory.create();
+    List<String> znodes = null;
+    do {
+      try {
+        znodes = watcher.getMetaReplicaNodes();
+        break;
+      } catch (KeeperException ke) {
+        LOG.debug("Error populating initial meta locations", ke);
 
 Review comment:
   oops, done.
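The quoted populateInitialMetaLocations above is cut off mid-loop, but its shape is a standard bounded-retry pattern: attempt the ZK read, sleep and retry on failure, give up after a fixed number of attempts. A minimal stand-alone sketch of that pattern follows; the fetcher and class name here are hypothetical illustrations, not HBase's actual RetryCounter API.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class BoundedRetry {
  // Retry a fetch up to maxRetries times, sleeping between attempts;
  // returns null if every attempt fails (mirrors the znode-listing loop above,
  // where the caught exception would be a KeeperException).
  static <T> T fetchWithRetries(Supplier<T> fetch, int maxRetries, long sleepMs)
      throws InterruptedException {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      try {
        return fetch.get();
      } catch (RuntimeException e) {
        Thread.sleep(sleepMs);
      }
    }
    return null;
  }

  public static void main(String[] args) throws InterruptedException {
    AtomicInteger calls = new AtomicInteger();
    // Fails twice, then succeeds on the third attempt.
    List<String> znodes = fetchWithRetries(() -> {
      if (calls.incrementAndGet() < 3) {
        throw new RuntimeException("transient ZK error");
      }
      return List.of("meta-region-server");
    }, 10, 1L);
    System.out.println(znodes);  // [meta-region-server]
  }
}
```

The break-on-success inside the try is what bounds total work: the loop only spins while the fetch keeps throwing.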




[GitHub] [hbase] saintstack opened a new pull request #852: HBASE-23322 [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread GitBox
saintstack opened a new pull request #852: HBASE-23322 [hbck2] Simplification 
on HBCKSCP scheduling
URL: https://github.com/apache/hbase/pull/852
 
 
   




[jira] [Updated] (HBASE-23322) [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread Michael Stack (Jira)


 [ https://issues.apache.org/jira/browse/HBASE-23322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23322:
--
Priority: Minor  (was: Major)

> [hbck2] Simplification on HBCKSCP scheduling
> 
>
> Key: HBASE-23322
> URL: https://issues.apache.org/jira/browse/HBASE-23322
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
>
> I can make the scheduling of HBCKSCP simpler.  I can also fix a bug in parent 
> issue that I notice after exercising it a bunch on a cluster.
> The bug is that 'Unknown Servers' seem to be retained in the Map of reporting 
> servers. They are usually cleared just before an SCP is scheduled but 
> scheduling HBCKSCP doesn't go the usual route.
> The patch here forces HBCKSCP via the usual SCP route; only at scheduling 
> time does context dictate whether it is a plain SCP or the scouring HBCKSCP.
> Let me put up a patch and will test in the meantime.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2019-11-19 Thread Nick Dimiduk (Jira)


[ https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977931#comment-16977931 ] 

Nick Dimiduk commented on HBASE-18095:
--

[~bharathv] is making nice progress here. I'd like to have a discussion about 
the viability of back-porting this patch and rolling upgrades. If we think we 
have a palatable rolling upgrade story, I think we should push it at least as 
far back as branch-1. If the upgrade path isn't clear, I think it stays on 
master.

> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. ZK service need not be exposed to clients, and 
> their potential abuse, yet no benefit ZK provides the HBase server cluster is 
> compromised. Normalizing HBase client and ZK client timeout settings and 
> retry behavior - in some cases, impossible, i.e. for fail-fast - is no longer 
> necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 



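The lookup proposed in the issue above, ask each configured master for its cached meta location and pass a refresh flag when the cached answer proves stale, can be sketched stand-alone. All names here (MasterEndpoint, locateMeta, findMeta) are hypothetical illustrations of the design, not actual HBase APIs.

```java
import java.util.List;
import java.util.Optional;

public class MasterMetaLookup {
  // Stand-in for an RPC stub to one master (active or backup).
  interface MasterEndpoint {
    // Returns the master's cached meta location; refresh=true asks it to
    // re-read before answering (the stale-location path described above).
    Optional<String> locateMeta(boolean refresh);
  }

  // Try each configured master in order; any master, active or passive,
  // may answer from its cache. Empty means no master could be reached.
  static Optional<String> findMeta(List<MasterEndpoint> masters, boolean refresh) {
    for (MasterEndpoint m : masters) {
      Optional<String> loc = m.locateMeta(refresh);
      if (loc.isPresent()) {
        return loc;
      }
    }
    return Optional.empty();
  }

  public static void main(String[] args) {
    MasterEndpoint down = refresh -> Optional.empty();           // unreachable master
    MasterEndpoint active = refresh -> Optional.of("rs1:16020"); // cached meta location
    System.out.println(findMeta(List.of(down, active), false).orElse("none"));
  }
}
```

The client-side retry-with-refresh then composes naturally: call findMeta(masters, false) first, and findMeta(masters, true) only after a stale hit, so every interaction stays on HBase RPC with HBase timeout settings rather than an embedded ZK client.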


[GitHub] [hbase] ndimiduk commented on a change in pull request #807: HBASE-23259: Ability to start minicluster with pre-determined master ports

2019-11-19 Thread GitBox
ndimiduk commented on a change in pull request #807: HBASE-23259: Ability to 
start minicluster with pre-determined master ports
URL: https://github.com/apache/hbase/pull/807#discussion_r348224297
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
 ##
 @@ -171,6 +171,11 @@
   /** Configuration key for master web API port */
   public static final String MASTER_INFO_PORT = "hbase.master.info.port";
 
+  /** Configuration key for the list of master host:ports **/
+  public static final String MASTER_ADDRS_KEY = "hbase.master.addrs";
 
 Review comment:
   You plan to address this last nit? I'm +1 with this bit handled.




[jira] [Created] (HBASE-23322) [hbck2] Simplification on HBCKSCP scheduling

2019-11-19 Thread Michael Stack (Jira)
Michael Stack created HBASE-23322:
-

 Summary: [hbck2] Simplification on HBCKSCP scheduling
 Key: HBASE-23322
 URL: https://issues.apache.org/jira/browse/HBASE-23322
 Project: HBase
  Issue Type: Sub-task
  Components: hbck2
Reporter: Michael Stack
Assignee: Michael Stack


I can make the scheduling of HBCKSCP simpler.  I can also fix a bug in parent 
issue that I notice after exercising it a bunch on a cluster.

The bug is that 'Unknown Servers' seem to be retained in the Map of reporting 
servers. They are usually cleared just before an SCP is scheduled but 
scheduling HBCKSCP doesn't go the usual route.

The patch here forces HBCKSCP via the usual SCP route; only at scheduling time 
does context dictate whether it is a plain SCP or the scouring HBCKSCP.

Let me put up a patch and will test in the meantime.





[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555755070
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 30s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  16m  2s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   3m 14s |  hbase-thrift in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate ASF License warnings.  |
   |  |   |  52m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.thrift.TestThriftSpnegoHttpFallbackServer |
   |   | hadoop.hbase.thrift.TestThriftSpnegoHttpServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b807b8892542 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh |
   | git revision | master / 33bedf8d4d |
   | Default Java | 1.8.0_181 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/5/artifact/out/patch-unit-hbase-thrift.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/5/testReport/ |
   | Max. process+thread count | 1837 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/5/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread Michael Stack (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977882#comment-16977882 ] 

Michael Stack commented on HBASE-23279:
---

On patch, this 

n -> DataBlockEncoding.valueOf(n.toUpperCase()), DataBlockEncoding.ROW_INDEX_V1);

should be...

n -> DataBlockEncoding.valueOf(n.toUpperCase()), DataBlockEncoding.DEFAULT_DATA_BLOCK_ENCODING); ?

Same here...

  return setValue(DATA_BLOCK_ENCODING_BYTES, type == null ?
      DataBlockEncoding.ROW_INDEX_V1.name() : type.name());

Same here...

private DataBlockEncoding encoding = DataBlockEncoding.ROW_INDEX_V1;

... and so on.



> Switch default block encoding to ROW_INDEX_V1
> -
>
> Key: HBASE-23279
> URL: https://issues.apache.org/jira/browse/HBASE-23279
> Project: HBase
>  Issue Type: Wish
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Lars Hofhansl
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23279.master.000.patch, 
> HBASE-23279.master.001.patch
>
>
> Currently we set both block encoding and compression to NONE.
> ROW_INDEX_V1 has many advantages and (almost) no disadvantages (the hfiles 
> are slightly larger, about 3% or so). I think that would be a better default 
> than NONE.



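Stack's review comment above boils down to routing every fallback through a single DEFAULT constant rather than repeating ROW_INDEX_V1 at each call site, so flipping the default later means changing one line. A minimal stand-alone sketch of that pattern follows; the toy enum merely mirrors a few DataBlockEncoding names and is not HBase's actual class.

```java
public class EncodingDefault {
  enum Encoding { NONE, PREFIX, DIFF, FAST_DIFF, ROW_INDEX_V1 }

  // Single source of truth for the default encoding; every fallback below
  // references this constant instead of naming ROW_INDEX_V1 directly.
  static final Encoding DEFAULT_DATA_BLOCK_ENCODING = Encoding.ROW_INDEX_V1;

  // Parse a configured encoding name, falling back to the shared default
  // when the setting is absent.
  static Encoding parseOrDefault(String name) {
    if (name == null || name.isEmpty()) {
      return DEFAULT_DATA_BLOCK_ENCODING;
    }
    return Encoding.valueOf(name.toUpperCase());
  }

  public static void main(String[] args) {
    System.out.println(parseOrDefault(null));        // ROW_INDEX_V1
    System.out.println(parseOrDefault("fast_diff")); // FAST_DIFF
  }
}
```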


[GitHub] [hbase] saintstack commented on a change in pull request #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-19 Thread GitBox
saintstack commented on a change in pull request #834: HBASE-23237 Negative 
sign in requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#discussion_r348195867
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRequestsPerSecondMetric.java
 ##
 @@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.RegionServerTests;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+/**
+ * Validate no negative value in requestsPerSecond metric.
+ */
+@Category({ RegionServerTests.class, MediumTests.class })
+public class TestRequestsPerSecondMetric {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+      HBaseClassTestRule.forClass(TestRequestsPerSecondMetric.class);
+
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final long METRICS_PERIOD = 2000L;
+  private static Configuration conf;
+
+  @BeforeClass
+  public static void setup() throws Exception {
+    conf = UTIL.getConfiguration();
+    conf.setLong(HConstants.REGIONSERVER_METRICS_PERIOD, METRICS_PERIOD);
+    UTIL.startMiniCluster(1);
+  }
+
+  @AfterClass
+  public static void teardown() throws Exception {
+    UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testNoNegativeSignAtRequestsPerSecond() throws IOException, InterruptedException {
+    final TableName TABLENAME = TableName.valueOf("t");
+    final String FAMILY = "f";
+    Admin admin = UTIL.getAdmin();
+    UTIL.createMultiRegionTable(TABLENAME, FAMILY.getBytes(), 25);
+    Table table = admin.getConnection().getTable(TABLENAME);
+    ServerName serverName = admin.getRegionServers().iterator().next();
+    HRegionServer regionServer = UTIL.getMiniHBaseCluster().getRegionServer(serverName);
+    MetricsRegionServerWrapperImpl metricsWrapper =
+        new MetricsRegionServerWrapperImpl(regionServer);
+    MetricsRegionServerWrapperImpl.RegionServerMetricsWrapperRunnable metricsServer =
+        metricsWrapper.new RegionServerMetricsWrapperRunnable();
+    metricsServer.run();
+    UTIL.loadRandomRows(table, FAMILY.getBytes(), 1, 2000);
+    Thread.sleep(METRICS_PERIOD);
+    metricsServer.run();
+    admin.disableTable(TABLENAME);
+    Thread.sleep(METRICS_PERIOD);
+    metricsServer.run();
+    Assert.assertTrue(metricsWrapper.getRequestsPerSecond() > -1);
+  }
+}
 
 Review comment:
   Nice. Add above as comment on test if not there already if you cut a new 
patch? I'm good w/ this patch.
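The negative requestsPerSecond the test above guards against presumably arises when the aggregate request counter decreases between metric periods (e.g. regions close when a table is disabled), so (current - previous) goes negative. A hedged sketch of the clamp such a metric needs follows; the class and field names are illustrative, not HBase's actual implementation.

```java
public class RequestsPerSecond {
  private long previousCount = 0;

  // Returns a non-negative rate. If the counter went backwards (regions
  // removed, counter reset), report 0 for this period and rebase so the
  // next period is computed against the new, lower count.
  double update(long currentCount, double periodSeconds) {
    long delta = currentCount - previousCount;
    previousCount = currentCount;
    return delta < 0 ? 0.0 : delta / periodSeconds;
  }

  public static void main(String[] args) {
    RequestsPerSecond rps = new RequestsPerSecond();
    System.out.println(rps.update(2000, 2.0)); // 1000.0
    System.out.println(rps.update(500, 2.0));  // 0.0 (counter dropped; clamped)
    System.out.println(rps.update(700, 2.0));  // 100.0 (rebased to 500)
  }
}
```

Rebasing previousCount even on a drop is the important detail: without it, one table disable would suppress the metric until the counter climbed back past its old value.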




[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348195340
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import static 
org.apache.hadoop.hbase.thrift.Constants.THRIFT_SUPPORT_PROXYUSER_KEY;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.security.Principal;
+import java.security.PrivilegedExceptionAction;
+import java.util.Set;
+
+import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosTicket;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.util.TableDescriptorChecker;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.http.HttpHeaders;
+import org.apache.http.auth.AuthSchemeProvider;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.KerberosCredentials;
+import org.apache.http.client.config.AuthSchemes;
+import org.apache.http.config.Lookup;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.impl.auth.SPNegoSchemeFactory;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.kerby.kerberos.kerb.client.JaasKrbUtil;
+import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.THttpClient;
+import org.ietf.jgss.GSSCredential;
+import org.ietf.jgss.GSSManager;
+import org.ietf.jgss.GSSName;
+import org.ietf.jgss.Oid;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Start the HBase Thrift HTTP server on a random port through the command-line
+ * interface and talk to it from client side with SPNEGO security enabled.
+ */
+@Category({ClientTests.class, LargeTests.class})
+public class TestThriftSpnegoHttpFallbackServer extends TestThriftHttpServer {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+      HBaseClassTestRule.forClass(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static SimpleKdcServer kdc;
+  private static File serverKeytab;
+  private static File spnegoServerKeytab;
+  private static File clientKeytab;
+
+  private static String clientPrincipal;
+  private static String serverPrincipal;
+  private static String spnegoServerPrincipal;
+
+  private static SimpleKdcServer buildMiniKdc() throws Exception {
+    SimpleKdcServer kdc = new SimpleKdcServer();
+
+    final File target = new File(System.getProperty("user.dir"), "target");
+    File kdcDir = new File(target, TestThriftSpnegoHttpFallbackServer.class.getSimpleName());
+    if (kdcDir.exists()) {
+      FileUtils.deleteDirectory(kdcDir);
+    }
+    kdcDir.mkdirs();
+    kdc.setWorkDir(kdcDir);
+
+    kdc.setKdcHost(HConstants.LOCALHOST);
+    int kdcPort = HBaseTestingUtility.randomFreePort();
+    kdc.setAllowTcp(true);
+    kdc.setAllowUdp(false);
+    kdc.setKdcTcpPort(kdcPort);
+
+    LOG.info("Starting KDC server at " + HConstants.LOCALHOST + ":" + 

[jira] [Resolved] (HBASE-23308) Review of NullPointerExceptions

2019-11-19 Thread Michael Stack (Jira)


 [ https://issues.apache.org/jira/browse/HBASE-23308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack resolved HBASE-23308.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Merged to branch-2 and master branch. Thanks for the patch [~belugabehr]

> Review of NullPointerExceptions
> ---
>
> Key: HBASE-23308
> URL: https://issues.apache.org/jira/browse/HBASE-23308
> Project: HBase
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>






[jira] [Updated] (HBASE-23308) Review of NullPointerExceptions

2019-11-19 Thread Michael Stack (Jira)


 [ https://issues.apache.org/jira/browse/HBASE-23308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23308:
--
Fix Version/s: 2.3.0
   3.0.0

> Review of NullPointerExceptions
> ---
>
> Key: HBASE-23308
> URL: https://issues.apache.org/jira/browse/HBASE-23308
> Project: HBase
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
>






[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348192855
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import static 
org.apache.hadoop.hbase.thrift.Constants.THRIFT_SUPPORT_PROXYUSER_KEY;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.security.Principal;
+import java.security.PrivilegedExceptionAction;
+import java.util.Set;
+
+import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosTicket;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.util.TableDescriptorChecker;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.http.HttpHeaders;
+import org.apache.http.auth.AuthSchemeProvider;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.KerberosCredentials;
+import org.apache.http.client.config.AuthSchemes;
+import org.apache.http.config.Lookup;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.impl.auth.SPNegoSchemeFactory;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.kerby.kerberos.kerb.client.JaasKrbUtil;
+import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.THttpClient;
+import org.ietf.jgss.GSSCredential;
+import org.ietf.jgss.GSSManager;
+import org.ietf.jgss.GSSName;
+import org.ietf.jgss.Oid;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Start the HBase Thrift HTTP server on a random port through the command-line
+ * interface and talk to it from client side with SPNEGO security enabled.
+ */
+@Category({ClientTests.class, LargeTests.class})
+public class TestThriftSpnegoHttpFallbackServer extends TestThriftHttpServer {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+      HBaseClassTestRule.forClass(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static SimpleKdcServer kdc;
+  private static File serverKeytab;
+  private static File spnegoServerKeytab;
+  private static File clientKeytab;
+
+  private static String clientPrincipal;
+  private static String serverPrincipal;
+  private static String spnegoServerPrincipal;
+
+  private static SimpleKdcServer buildMiniKdc() throws Exception {
+    SimpleKdcServer kdc = new SimpleKdcServer();
+
+    final File target = new File(System.getProperty("user.dir"), "target");
+    File kdcDir = new File(target, TestThriftSpnegoHttpFallbackServer.class.getSimpleName());
+    if (kdcDir.exists()) {
+      FileUtils.deleteDirectory(kdcDir);
+    }
+    kdcDir.mkdirs();
+    kdc.setWorkDir(kdcDir);
+
+    kdc.setKdcHost(HConstants.LOCALHOST);
+    int kdcPort = HBaseTestingUtility.randomFreePort();
+    kdc.setAllowTcp(true);
+    kdc.setAllowUdp(false);
+    kdc.setKdcTcpPort(kdcPort);
+
+    LOG.info("Starting KDC server at " + HConstants.LOCALHOST + ":" + 

[GitHub] [hbase] risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
risdenk commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348190479
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import static org.apache.hadoop.hbase.thrift.Constants.THRIFT_SUPPORT_PROXYUSER_KEY;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.security.Principal;
+import java.security.PrivilegedExceptionAction;
+import java.util.Set;
+
+import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosTicket;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.util.TableDescriptorChecker;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.http.HttpHeaders;
+import org.apache.http.auth.AuthSchemeProvider;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.KerberosCredentials;
+import org.apache.http.client.config.AuthSchemes;
+import org.apache.http.config.Lookup;
+import org.apache.http.config.RegistryBuilder;
+import org.apache.http.impl.auth.SPNegoSchemeFactory;
+import org.apache.http.impl.client.BasicCredentialsProvider;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.kerby.kerberos.kerb.client.JaasKrbUtil;
+import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.THttpClient;
+import org.ietf.jgss.GSSCredential;
+import org.ietf.jgss.GSSManager;
+import org.ietf.jgss.GSSName;
+import org.ietf.jgss.Oid;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Start the HBase Thrift HTTP server on a random port through the command-line
+ * interface and talk to it from client side with SPNEGO security enabled.
+ */
+@Category({ClientTests.class, LargeTests.class})
+public class TestThriftSpnegoHttpFallbackServer extends TestThriftHttpServer {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+    HBaseClassTestRule.forClass(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static final Logger LOG =
+    LoggerFactory.getLogger(TestThriftSpnegoHttpFallbackServer.class);
+
+  private static SimpleKdcServer kdc;
+  private static File serverKeytab;
+  private static File spnegoServerKeytab;
+  private static File clientKeytab;
+
+  private static String clientPrincipal;
+  private static String serverPrincipal;
+  private static String spnegoServerPrincipal;
+
+  private static SimpleKdcServer buildMiniKdc() throws Exception {
+    SimpleKdcServer kdc = new SimpleKdcServer();
+
+    final File target = new File(System.getProperty("user.dir"), "target");
+    File kdcDir = new File(target, TestThriftSpnegoHttpFallbackServer.class.getSimpleName());
+    if (kdcDir.exists()) {
+      FileUtils.deleteDirectory(kdcDir);
+    }
+    kdcDir.mkdirs();
+    kdc.setWorkDir(kdcDir);
+
+    kdc.setKdcHost(HConstants.LOCALHOST);
+    int kdcPort = HBaseTestingUtility.randomFreePort();
+    kdc.setAllowTcp(true);
+    kdc.setAllowUdp(false);
+    kdc.setKdcTcpPort(kdcPort);
+
+    LOG.info("Starting KDC server at " + HConstants.LOCALHOST + ":" + 

[GitHub] [hbase] saintstack merged pull request #836: HBASE-23308: Review of NullPointerExceptions

2019-11-19 Thread GitBox
saintstack merged pull request #836: HBASE-23308: Review of 
NullPointerExceptions
URL: https://github.com/apache/hbase/pull/836
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on issue #849: HBASE-23320 Upgrade surefire plugin to 3.0.0-M4

2019-11-19 Thread GitBox
saintstack commented on issue #849: HBASE-23320 Upgrade surefire plugin to 
3.0.0-M4
URL: https://github.com/apache/hbase/pull/849#issuecomment-555729663
 
 
   I tried rerunning the build... Not sure if that helps.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] HorizonNet commented on issue #849: HBASE-23320 Upgrade surefire plugin to 3.0.0-M4

2019-11-19 Thread GitBox
HorizonNet commented on issue #849: HBASE-23320 Upgrade surefire plugin to 
3.0.0-M4
URL: https://github.com/apache/hbase/pull/849#issuecomment-555706080
 
 
   @ravowlga123 It seems that some tests are failing due to a crash of the VM. 
Can you please make sure that this is not related to your changes?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348155061
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@

[GitHub] [hbase] joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348154200
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@

[GitHub] [hbase] joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
joshelser commented on a change in pull request #850: HBASE-23312 HBase Thrift 
SPNEGO configs (HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#discussion_r348154057
 
 

 ##
 File path: 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftSpnegoHttpFallbackServer.java
 ##
 @@ -0,0 +1,248 @@

[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555702984
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 26s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 43s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 39s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 7fb54f63b7c2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh
 |
   | git revision | master / ca6e67a6de |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/4/testReport/
 |
   | Max. process+thread count | 1613 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack opened a new pull request #851: Hbase 23321

2019-11-19 Thread GitBox
saintstack opened a new pull request #851: Hbase 23321
URL: https://github.com/apache/hbase/pull/851
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23321) [hbck2] fixHoles of fixMeta doesn't update in-memory state

2019-11-19 Thread Michael Stack (Jira)
Michael Stack created HBASE-23321:
-

 Summary: [hbck2] fixHoles of fixMeta doesn't update in-memory state
 Key: HBASE-23321
 URL: https://issues.apache.org/jira/browse/HBASE-23321
 Project: HBase
  Issue Type: Improvement
  Components: hbck2
Reporter: Michael Stack
Assignee: Michael Stack


If hbase:meta has holes, you can run fixMeta from hbck2. This will close the 
holes, but you have to restart the Master for it to notice the new region 
additions. Also, we were plugging holes by adding regions but writing no state 
for them, which made the regions awkward to subsequently assign. Fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555678576
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 25s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 28s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 34s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 6a92efd8ed32 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh
 |
   | git revision | master / ca6e67a6de |
   | Default Java | 1.8.0_181 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/3/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/3/testReport/
 |
   | Max. process+thread count | 1729 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23279:
-
Attachment: HBASE-23279.master.001.patch

> Switch default block encoding to ROW_INDEX_V1
> -
>
> Key: HBASE-23279
> URL: https://issues.apache.org/jira/browse/HBASE-23279
> Project: HBase
>  Issue Type: Wish
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Lars Hofhansl
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23279.master.000.patch, 
> HBASE-23279.master.001.patch
>
>
> Currently we set both block encoding and compression to NONE.
> ROW_INDEX_V1 has many advantages and (almost) no disadvantages (the hfiles
> are slightly larger, about 3% or so). I think that would be a better default
> than NONE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1697#comment-1697
 ] 

Viraj Jasani edited comment on HBASE-23279 at 11/19/19 7:34 PM:


ROW_INDEX_V1 indeed seems to be taking more space for BucketCache.

Tried running these commands separately on fresh local cluster:
{code:java}
bin/hbase ltt -init_only -data_block_encoding ROW_INDEX_V1
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 
20{code}
{code:java}
bin/hbase ltt -init_only -data_block_encoding NONE
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 20
{code}
L1 Cache Size Limit: 805.9 MB

L2 Cache Size Limit: 4.1 GB

Stats for 1st case(ROW_INDEX_V1):
{code:java}
L2 Block Count: 432
Size of Blocks: 38.1 MB (DataBlocks Size: 28.58 MB)
L1+L2 Combined Size: 38.8 MB{code}
Stats for 2nd case(NONE):
{code:java}
L2 Block Count: 432
Size of Blocks: 27.3 MB (DataBlocks Size: 26.86 MB)
L1+L2 Combined Size: 28 MB
{code}
 


was (Author: vjasani):
ROW_INDEX_V1 indeed seems to be taking more space for BucketCache.

Tried running these commands separately on fresh local cluster:
{code:java}
bin/hbase ltt -init_only -data_block_encoding ROW_INDEX_V1
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 
20{code}
{code:java}
bin/hbase ltt -init_only -data_block_encoding NONE
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 20
{code}
L1 Cache Size Limit: 805.9 MB

L2 Cache Size Limit: 4.1 GB

Stats for 1st case(ROW_INDEX_V1):
{code:java}
L2 Block Count: 432
Size of Blocks: 38.1 MB (DataBlocks Size: 28.58 MB)
L1+L2 Combined Size: 38.8 MB{code}
 

Stats for 2nd case(NONE):
{code:java}
L2 Block Count: 432
Size of Blocks: 27.3 MB (DataBlocks Size: 26.86 MB)
L1+L2 Combined Size: 28 MB
{code}
 

> Switch default block encoding to ROW_INDEX_V1
> -
>
> Key: HBASE-23279
> URL: https://issues.apache.org/jira/browse/HBASE-23279
> Project: HBase
>  Issue Type: Wish
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Lars Hofhansl
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23279.master.000.patch
>
>
> Currently we set both block encoding and compression to NONE.
> ROW_INDEX_V1 has many advantages and (almost) no disadvantages (the hfiles
> are slightly larger, about 3% or so). I think that would be a better default
> than NONE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1697#comment-1697
 ] 

Viraj Jasani edited comment on HBASE-23279 at 11/19/19 7:33 PM:


ROW_INDEX_V1 indeed seems to be taking more space for BucketCache.

Tried running these commands separately on fresh local cluster:
{code:java}
bin/hbase ltt -init_only -data_block_encoding ROW_INDEX_V1
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 
20{code}
{code:java}
bin/hbase ltt -init_only -data_block_encoding NONE
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 20
{code}
L1 Cache Size Limit: 805.9 MB

L2 Cache Size Limit: 4.1 GB

Stats for 1st case(ROW_INDEX_V1):
{code:java}
L2 Block Count: 432
Size of Blocks: 38.1 MB (DataBlocks Size: 28.58 MB)
L1+L2 Combined Size: 38.8 MB{code}
 

Stats for 2nd case(NONE):
{code:java}
L2 Block Count: 432
Size of Blocks: 27.3 MB (DataBlocks Size: 26.86 MB)
L1+L2 Combined Size: 28 MB
{code}
 


was (Author: vjasani):
ROW_INDEX_V1 indeed seems to be taking more space for BucketCache.

Tried running these commands separately on fresh local cluster:
{code:java}
bin/hbase ltt -init_only -data_block_encoding ROW_INDEX_V1
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 
20{code}
 
{code:java}
bin/hbase ltt -init_only -data_block_encoding NONE
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 20
{code}
L1 Cache Size Limit: 805.9 MB

L2 Cache Size Limit: 4.1 GB

Stats for 1st case (ROW_INDEX_V1):
{code:java}
L2 Block Count: 432
Size of Blocks: 38.1 MB (DataBlocks Size: 28.58 MB)
L1+L2 Combined Size: 38.8 MB{code}
 

Stats for 2nd case (NONE):
{code:java}
L2 Block Count: 432
Size of Blocks: 27.3 MB (DataBlocks Size: 26.86 MB)
L1+L2 Combined Size: 28 MB
{code}

> Switch default block encoding to ROW_INDEX_V1
> -
>
> Key: HBASE-23279
> URL: https://issues.apache.org/jira/browse/HBASE-23279
> Project: HBase
>  Issue Type: Wish
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Lars Hofhansl
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23279.master.000.patch
>
>
> Currently we set both block encoding and compression to NONE.
> ROW_INDEX_V1 has many advantages and (almost) no disadvantages (the hfiles
> are slightly larger, about 3% or so). I think that would be a better default
> than NONE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23279) Switch default block encoding to ROW_INDEX_V1

2019-11-19 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1697#comment-1697
 ] 

Viraj Jasani commented on HBASE-23279:
--

ROW_INDEX_V1 indeed seems to be taking more space for BucketCache.

Tried running these commands separately on fresh local cluster:
{code:java}
bin/hbase ltt -init_only -data_block_encoding ROW_INDEX_V1
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 
20{code}
 
{code:java}
bin/hbase ltt -init_only -data_block_encoding NONE
bin/hbase ltt -skip_init -write 1:10:300 -read 100:200 -num_keys 10 
-multiput -multiget_batchsize 20
echo "flush 'cluster_test'" | bin/hbase shell
bin/hbase ltt -skip_init -read 100:200 -num_keys 10 -multiget_batchsize 20
{code}
L1 Cache Size Limit: 805.9 MB

L2 Cache Size Limit: 4.1 GB

Stats for 1st case (ROW_INDEX_V1):
{code:java}
L2 Block Count: 432
Size of Blocks: 38.1 MB (DataBlocks Size: 28.58 MB)
L1+L2 Combined Size: 38.8 MB{code}
 

Stats for 2nd case (NONE):
{code:java}
L2 Block Count: 432
Size of Blocks: 27.3 MB (DataBlocks Size: 26.86 MB)
L1+L2 Combined Size: 28 MB
{code}

> Switch default block encoding to ROW_INDEX_V1
> -
>
> Key: HBASE-23279
> URL: https://issues.apache.org/jira/browse/HBASE-23279
> Project: HBase
>  Issue Type: Wish
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Lars Hofhansl
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-23279.master.000.patch
>
>
> Currently we set both block encoding and compression to NONE.
> ROW_INDEX_V1 has many advantages and (almost) no disadvantages (the hfiles
> are slightly larger, about 3% or so). I think that would be a better default
> than NONE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] joshelser commented on a change in pull request #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-19 Thread GitBox
joshelser commented on a change in pull request #834: HBASE-23237 Negative sign 
in requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#discussion_r348124897
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
 ##
 @@ -115,6 +118,8 @@
   private volatile long mobFileCacheCount = 0;
   private volatile long blockedRequestsCount = 0L;
   private volatile long averageRegionSize = 0L;
+  protected volatile Map> requestsCountCache = new
 
 Review comment:
   You're not changing the object that `requestsCountCache` points to, are you? 
I think you can drop `volatile` and make this `final`.
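The reviewer's point can be illustrated with a minimal, self-contained sketch (the class and field names below are illustrative, not the actual patch code): `volatile` is only needed when the field's *reference* is reassigned and must be visible across threads; a map that is created once and only mutated afterwards can be `final`, with a concurrent map type guarding its contents.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not the HBase patch: the reference is assigned exactly
// once, so `final` (not `volatile`) is the right modifier. ConcurrentHashMap
// makes the map's *contents* safe to mutate from one thread while other
// threads read them.
class RequestsCountCacheSketch {
  private final Map<String, Long> requestsCountCache = new ConcurrentHashMap<>();

  void update(String encodedRegionName, long count) {
    requestsCountCache.put(encodedRegionName, count);
  }

  long get(String encodedRegionName) {
    return requestsCountCache.getOrDefault(encodedRegionName, 0L);
  }
}
```

A `final` field also gets a safe-publication guarantee from the Java Memory Model once the constructor completes, which is exactly what a one-time-assigned cache needs.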




[GitHub] [hbase] joshelser commented on a change in pull request #834: HBASE-23237 Negative sign in requestsPerSecond

2019-11-19 Thread GitBox
joshelser commented on a change in pull request #834: HBASE-23237 Negative sign 
in requestsPerSecond
URL: https://github.com/apache/hbase/pull/834#discussion_r348122519
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
 ##
 @@ -709,13 +710,50 @@ synchronized public void run() {
 long tempMobScanCellsSize = 0;
 long tempBlockedRequestsCount = 0;
 int regionCount = 0;
+
+long tempReadRequestsCount = 0;
+long tempWriteRequestsCount = 0;
+long currentReadRequestsCount = 0;
+long currentWriteRequestsCount = 0;
+long lastReadRequestsCount = 0;
+long lastWriteRequestsCount = 0;
+long readRequestsDelta = 0;
+long writeRequestsDelta = 0;
+long totalReadRequestsDelta = 0;
+long totalWriteRequestsDelta = 0;
+String encodedRegionName;
+ArrayList hregions = new ArrayList();
 
 Review comment:
   Should be a HashSet to avoid paying a linear cost on `List.contains(Object)` 
down below.
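A minimal illustration of the cost difference being flagged (all names below are made up, not from the patch): repeated `List.contains` calls rescan the list on every lookup, while a `HashSet` membership check is an average O(1) hash probe.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: collecting region names into a Set turns later
// membership checks from O(n) list scans into O(1) hash probes, avoiding a
// quadratic cost when checking many cached names against many live regions.
class RegionMembershipSketch {
  static Set<String> asLookupSet(List<String> encodedRegionNames) {
    // One linear pass to build the set; every later contains() is O(1).
    return new HashSet<>(encodedRegionNames);
  }

  static boolean isStale(Set<String> liveRegions, String cachedRegionName) {
    // Constant-time on average, versus List.contains walking the whole list.
    return !liveRegions.contains(cachedRegionName);
  }
}
```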




[GitHub] [hbase] risdenk commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
risdenk commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555656319
 
 
   Pushed whitespace fix.




[GitHub] [hbase] Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-19 Thread GitBox
Apache-HBase commented on issue #850: HBASE-23312 HBase Thrift SPNEGO configs 
(HBASE-19852) should be backwards compatible
URL: https://github.com/apache/hbase/pull/850#issuecomment-555653930
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  master passed  |
   | +0 :ok: |  spotbugs  |   1m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 27s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 37s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/850 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 5c2abc96d8d5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-850/out/precommit/personality/provided.sh
 |
   | git revision | master / ca6e67a6de |
   | Default Java | 1.8.0_181 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/2/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/2/testReport/
 |
   | Max. process+thread count | 1721 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-850/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly update the position of WALs currently being replicated

2019-11-19 Thread GitBox
wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly 
update the position of WALs currently being replicated
URL: https://github.com/apache/hbase/pull/749#discussion_r348096334
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java
 ##
 @@ -135,59 +127,46 @@ public void run() {
   try (WALEntryStream entryStream =
   new WALEntryStream(logQueue, fs, conf, currentPosition, metrics)) {
 while (isReaderRunning()) { // loop here to keep reusing stream while 
we can
-  if (!checkQuota()) {
+  if (manager.isBufferQuotaReached()) {
+Threads.sleep(sleepForRetries);
 continue;
   }
-  WALEntryBatch batch = null;
-  while (entryStream.hasNext()) {
-if (batch == null) {
-  batch = new WALEntryBatch(replicationBatchCountCapacity, 
entryStream.getCurrentPath());
-}
+  WALEntryBatch batch =
+  new WALEntryBatch(replicationBatchCountCapacity, 
replicationBatchSizeCapacity);
+  boolean hasNext;
+  while ((hasNext = entryStream.hasNext()) == true) {
 Entry entry = entryStream.next();
 entry = filterEntry(entry);
 if (entry != null) {
   WALEdit edit = entry.getEdit();
   if (edit != null && !edit.isEmpty()) {
-long entrySize = getEntrySizeIncludeBulkLoad(entry);
-long entrySizeExlucdeBulkLoad = 
getEntrySizeExcludeBulkLoad(entry);
-batch.addEntry(entry);
-replicationSourceManager.setPendingShipment(true);
-updateBatchStats(batch, entry, entryStream.getPosition(), 
entrySize);
-boolean totalBufferTooLarge = 
acquireBufferQuota(entrySizeExlucdeBulkLoad);
+long entrySizeExcludeBulkLoad = batch.addEntry(entry);
+boolean totalBufferTooLarge = 
manager.acquireBufferQuota(entrySizeExcludeBulkLoad);
 // Stop if too many entries or too big
-if (totalBufferTooLarge || batch.getHeapSize() >= 
replicationBatchSizeCapacity
-|| batch.getNbEntries() >= replicationBatchCountCapacity) {
+if (totalBufferTooLarge || batch.isLimitReached()) {
   break;
 }
   }
-} else {
 
 Review comment:
I think the answer to my question above is in the _resetStream()_ that gets called at the end of the second while loop, which will update the _lastReadPosition_ variable that is now used for reading here.




[GitHub] [hbase] wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly update the position of WALs currently being replicated

2019-11-19 Thread GitBox
wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly 
update the position of WALs currently being replicated
URL: https://github.com/apache/hbase/pull/749#discussion_r348092931
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java
 ##
 @@ -135,59 +127,46 @@ public void run() {
   try (WALEntryStream entryStream =
   new WALEntryStream(logQueue, fs, conf, currentPosition, metrics)) {
 while (isReaderRunning()) { // loop here to keep reusing stream while 
we can
-  if (!checkQuota()) {
+  if (manager.isBufferQuotaReached()) {
+Threads.sleep(sleepForRetries);
 continue;
   }
-  WALEntryBatch batch = null;
-  while (entryStream.hasNext()) {
-if (batch == null) {
-  batch = new WALEntryBatch(replicationBatchCountCapacity, 
entryStream.getCurrentPath());
-}
+  WALEntryBatch batch =
+  new WALEntryBatch(replicationBatchCountCapacity, 
replicationBatchSizeCapacity);
+  boolean hasNext;
+  while ((hasNext = entryStream.hasNext()) == true) {
 Entry entry = entryStream.next();
 entry = filterEntry(entry);
 if (entry != null) {
   WALEdit edit = entry.getEdit();
   if (edit != null && !edit.isEmpty()) {
-long entrySize = getEntrySizeIncludeBulkLoad(entry);
-long entrySizeExlucdeBulkLoad = 
getEntrySizeExcludeBulkLoad(entry);
-batch.addEntry(entry);
-replicationSourceManager.setPendingShipment(true);
-updateBatchStats(batch, entry, entryStream.getPosition(), 
entrySize);
-boolean totalBufferTooLarge = 
acquireBufferQuota(entrySizeExlucdeBulkLoad);
+long entrySizeExcludeBulkLoad = batch.addEntry(entry);
+boolean totalBufferTooLarge = 
manager.acquireBufferQuota(entrySizeExcludeBulkLoad);
 // Stop if too many entries or too big
-if (totalBufferTooLarge || batch.getHeapSize() >= 
replicationBatchSizeCapacity
-|| batch.getNbEntries() >= replicationBatchCountCapacity) {
+if (totalBufferTooLarge || batch.isLimitReached()) {
   break;
 }
   }
-} else {
-  
replicationSourceManager.logPositionAndCleanOldLogs(entryStream.getCurrentPath(),
-this.replicationQueueInfo.getPeerClusterZnode(),
-entryStream.getPosition(),
-this.replicationQueueInfo.isQueueRecovered(), false);
 }
   }
-  if (batch != null && (!batch.getLastSeqIds().isEmpty() || 
batch.getNbEntries() > 0)) {
-if (LOG.isTraceEnabled()) {
-  LOG.trace(String.format("Read %s WAL entries eligible for 
replication",
-batch.getNbEntries()));
-}
-entryBatchQueue.put(batch);
+
+  if (LOG.isTraceEnabled()) {
+LOG.trace(String.format("Read %s WAL entries eligible for 
replication",
+batch.getNbEntries()));
+  }
+
+  updateBatch(entryStream, batch, hasNext);
+  if (isShippable(batch)) {
 sleepMultiplier = 1;
-  } else { // got no entries and didn't advance position in WAL
-LOG.trace("Didn't read any new entries from WAL");
-if (replicationQueueInfo.isQueueRecovered()) {
-  // we're done with queue recovery, shut ourself down
+entryBatchQueue.put(batch);
+if (!batch.hasMoreEntries()) {
+  // we're done with queue recovery, shut ourselves down
   setReaderRunning(false);
-  // shuts down shipper thread immediately
-  entryBatchQueue.put(batch != null ? batch
-  : new WALEntryBatch(replicationBatchCountCapacity, 
entryStream.getCurrentPath()));
-} else {
-  Thread.sleep(sleepForRetries);
 }
+  } else {
 
 Review comment:
Ok, so _batch.hasMoreEntries()_ returns true if this isn't a recovered queue.




[GitHub] [hbase] wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly update the position of WALs currently being replicated

2019-11-19 Thread GitBox
wchevreuil commented on a change in pull request #749: HBASE-23205 Correctly 
update the position of WALs currently being replicated
URL: https://github.com/apache/hbase/pull/749#discussion_r348087671
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java
 ##
 @@ -135,59 +127,46 @@ public void run() {
   try (WALEntryStream entryStream =
   new WALEntryStream(logQueue, fs, conf, currentPosition, metrics)) {
 while (isReaderRunning()) { // loop here to keep reusing stream while 
we can
-  if (!checkQuota()) {
+  if (manager.isBufferQuotaReached()) {
+Threads.sleep(sleepForRetries);
 continue;
   }
-  WALEntryBatch batch = null;
-  while (entryStream.hasNext()) {
-if (batch == null) {
-  batch = new WALEntryBatch(replicationBatchCountCapacity, 
entryStream.getCurrentPath());
-}
+  WALEntryBatch batch =
+  new WALEntryBatch(replicationBatchCountCapacity, 
replicationBatchSizeCapacity);
+  boolean hasNext;
+  while ((hasNext = entryStream.hasNext()) == true) {
 Entry entry = entryStream.next();
 entry = filterEntry(entry);
 if (entry != null) {
   WALEdit edit = entry.getEdit();
   if (edit != null && !edit.isEmpty()) {
-long entrySize = getEntrySizeIncludeBulkLoad(entry);
-long entrySizeExlucdeBulkLoad = 
getEntrySizeExcludeBulkLoad(entry);
-batch.addEntry(entry);
-replicationSourceManager.setPendingShipment(true);
-updateBatchStats(batch, entry, entryStream.getPosition(), 
entrySize);
-boolean totalBufferTooLarge = 
acquireBufferQuota(entrySizeExlucdeBulkLoad);
+long entrySizeExcludeBulkLoad = batch.addEntry(entry);
+boolean totalBufferTooLarge = 
manager.acquireBufferQuota(entrySizeExcludeBulkLoad);
 // Stop if too many entries or too big
-if (totalBufferTooLarge || batch.getHeapSize() >= 
replicationBatchSizeCapacity
-|| batch.getNbEntries() >= replicationBatchCountCapacity) {
+if (totalBufferTooLarge || batch.isLimitReached()) {
   break;
 }
   }
-} else {
 
 Review comment:
   > If some use cases means "no mutations come for a long time, but a batch 
has entries"
   
What if the whole WAL section read got no entries for replication? In that case the batch would be empty, so _ReplicationSourceManager.logPositionAndCleanOldLogs_ never gets called (at least, I guess, until the log is rolled).
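The failure mode being discussed can be sketched in isolation (all names below are hypothetical, not the actual ReplicationSourceWALReaderThread code): if the stored replication position only advances when a batch ships entries, a WAL section whose edits were all filtered out is never marked as read.

```java
// Hypothetical sketch of the concern above, not HBase code: the reader should
// persist its stream position even for an empty batch, otherwise a WAL whose
// entries were all filtered out stays "unread" until the log is rolled.
class WalPositionSketch {
  long persistedPosition = 0;

  // Called after each read pass; returns true when the position was persisted.
  boolean maybePersist(long streamPosition, int shippableEntries) {
    if (shippableEntries > 0 || streamPosition > persistedPosition) {
      // Advance even when nothing was shippable but the stream moved forward,
      // which is what a logPositionAndCleanOldLogs-style call would enable:
      // fully-read, fully-filtered WALs become eligible for cleanup.
      persistedPosition = Math.max(persistedPosition, streamPosition);
      return true;
    }
    return false;
  }
}
```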



