[jira] [Commented] (HBASE-22291) Fix recovery of recovered.edits files under root dir

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826657#comment-16826657
 ] 

Hudson commented on HBASE-22291:


Results for branch branch-1.4
[build #766 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/766/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/766//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/766//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/766//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix recovery of recovered.edits files under root dir
> 
>
> Key: HBASE-22291
> URL: https://issues.apache.org/jira/browse/HBASE-22291
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.9
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22291.branch-1.001.patch, 
> HBASE-22291.master.001.patch, HBASE-22291.master.002.patch
>
>
> It looks like a few places are using incorrect FS instances in the 
> replayRecoveredEditsForPath method that was introduced in HBASE-20734.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #191: HBASE-22036 Rewrite TestScannerHeartbeatMessages

2019-04-25 Thread GitBox
Apache-HBase commented on issue #191: HBASE-22036 Rewrite 
TestScannerHeartbeatMessages
URL: https://github.com/apache/hbase/pull/191#issuecomment-486931961
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 21 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ HBASE-21512 Compile Tests _ |
   | +1 | mvninstall | 257 | HBASE-21512 passed |
   | +1 | compile | 57 | HBASE-21512 passed |
   | +1 | checkstyle | 75 | HBASE-21512 passed |
   | +1 | shadedjars | 275 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 274 | HBASE-21512 passed |
   | +1 | javadoc | 40 | HBASE-21512 passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 266 | the patch passed |
   | -1 | compile | 32 | hbase-server in the patch failed. |
   | -1 | javac | 32 | hbase-server in the patch failed. |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 279 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 546 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | findbugs | 257 | the patch passed |
   | -1 | javadoc | 33 | hbase-server generated 4 new + 0 unchanged - 0 fixed = 
4 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 36 | hbase-server in the patch failed. |
   | +1 | asflicense | 11 | The patch does not generate ASF License warnings. |
   | | | 2606 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/191 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux e43c2fdbab7c 4.4.0-131-generic #157~14.04.1-Ubuntu SMP Fri 
Jul 13 08:53:17 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | HBASE-21512 / dfa4f47b59 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/artifact/out/patch-compile-hbase-server.txt
 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/artifact/out/patch-compile-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/artifact/out/diff-javadoc-javadoc-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/testReport/
 |
   | Max. process+thread count | 86 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-191/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826656#comment-16826656
 ] 

Andrew Purtell commented on HBASE-22301:


To be more precise, what I'm thinking is: if a long time has elapsed and we 
are then finally pushed over the threshold, we should set the counter to 1 and 
return false. We start over rather than trigger a roll. Seems reasonable to me. 
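
For illustration, a minimal sketch of one reading of that refinement, layered 
on the checkSlowSync() snippet quoted in David Manning's comment further down 
this digest (the names come from that snippet; this is not the committed patch):

{code:java}
// Sketch only: if the threshold is finally crossed long after the last check
// window, treat the current slow sync as the start of a fresh window
// (counter = 1) and do not roll; otherwise behave as in the quoted snippet.
private boolean checkSlowSync() {
  boolean result = false;
  long now = EnvironmentEdgeManager.currentTime();
  long elapsed = now - lastTimeCheckSlowSync;
  if (elapsed >= slowSyncCheckInterval) {
    if (slowSyncCount.get() >= slowSyncRollThreshold) {
      if (elapsed > 2 * slowSyncCheckInterval) {
        // Stale count from a problem that presumably corrected itself:
        // start over rather than trigger a roll.
        slowSyncCount.set(1);
      } else {
        LOG.warn("Requesting log roll because we exceeded slow sync threshold; count=" +
          slowSyncCount.get() + ", threshold=" + slowSyncRollThreshold +
          ", current pipeline: " + Arrays.toString(getPipeLine()));
        result = true;
        slowSyncCount.set(0);
      }
    } else {
      slowSyncCount.set(0);
    }
    lastTimeCheckSlowSync = now;
  }
  return result;
}
{code}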

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.
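
One way to read that last guard, as a hedged sketch with hypothetical names 
(the actual patch may fold this into the slow sync check itself):

{code:java}
// Sketch only (hypothetical names): throttle slow-sync-triggered roll
// requests so we do not roll more than once per configured interval
// (default 5 minutes).
private long lastSlowSyncRollRequestMs = 0;
private final long minIntervalBetweenSlowSyncRollsMs = 5 * 60 * 1000;

private boolean mayRequestRollForSlowSync(long nowMs) {
  if (nowMs - lastSlowSyncRollRequestMs < minIntervalBetweenSlowSyncRollsMs) {
    return false; // rolled recently for this reason; suppress this request
  }
  lastSlowSyncRollRequestMs = nowMs;
  return true;
}
{code}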



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826655#comment-16826655
 ] 

Lars Hofhansl commented on HBASE-17884:
---

This is probably another case where Phoenix reaches too deep into HBase.
This is where it fails:
{code}
private static abstract class CoprocessorOperation<T extends CoprocessorEnvironment>
    extends ObserverContext<T> {
    abstract void call(MetaDataEndpointObserver oserver, ObserverContext<T> ctx)
        throws IOException;

    public void postEnvCall(T env) {}
}
{code}
Could probably add a no-argument constructor there.
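
For illustration, a sketch of that suggestion on the HBase side (the 
User-taking constructor is what the HBASE-16217 backport introduced; the rest 
of the class is elided, and this is not a committed change):

{code:java}
// Sketch only: restore a no-argument constructor so coprocessors compiled
// against 1.4.x, which link against ObserverContext.<init>()V, keep working.
public class ObserverContext<E extends CoprocessorEnvironment> {
  private User caller;

  public ObserverContext() {
    this(null); // no calling user available; pre-HBASE-16217 behavior
  }

  public ObserverContext(User caller) {
    this.caller = caller;
  }

  // ... rest of the class unchanged ...
}
{code}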

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826654#comment-16826654
 ] 

Andrew Purtell commented on HBASE-22301:


I like your first suggestion better too [~dmanning] and will put up a new patch 
with it incorporated tomorrow. Thanks for the idea. 

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826653#comment-16826653
 ] 

Andrew Purtell commented on HBASE-17884:


This kind of change is allowed for a minor release, I believe. 

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826652#comment-16826652
 ] 

Andrew Purtell commented on HBASE-17884:


It seems you've confirmed it is a problem, so pushing that revert to branch-1.4 
would be the right thing to do, I think. At the next 1.5.0 RC the compat report 
will surface this and it can be discussed then if need be. (Or now, if anyone 
has an objection, although I claim the change is worth it.)

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826651#comment-16826651
 ] 

Lars Hofhansl edited comment on HBASE-17884 at 4/26/19 5:25 AM:


Locally reverted for now. This is for Phoenix which deploys a coprocessor that 
was built against a 1.4.x version of HBase.

Edit: I agree it's a good change. Not sure how we can do that without breaking 
Phoenix (in this case; there are possibly other things broken).


was (Author: lhofhansl):
Locally reverted for now. This is for Phoenix which deploys a coprocessor that 
was built against a 1.4.x version of HBase.

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826651#comment-16826651
 ] 

Lars Hofhansl commented on HBASE-17884:
---

Locally reverted for now. This is for Phoenix which deploys a coprocessor that 
was built against a 1.4.x version of HBase.

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826650#comment-16826650
 ] 

Andrew Purtell commented on HBASE-17884:


Does it? I haven't attempted a 1.4 RC for a while. If so, we can revert it on 
branch-1.4. For branch-1 and 1.5.0, as a new minor, I believe this kind of CP 
change is allowed, and it is certainly worth it (IMHO).

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread David Manning (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826649#comment-16826649
 ] 

David Manning commented on HBASE-22301:
---

 
{code:java}
private boolean checkSlowSync() {
  boolean result = false;
  long now = EnvironmentEdgeManager.currentTime();
  if (now - lastTimeCheckSlowSync >= slowSyncCheckInterval) {
if (slowSyncCount.get() >= slowSyncRollThreshold) {
  LOG.warn("Requesting log roll because we exceeded slow sync threshold; 
count=" +
slowSyncCount.get() + ", threshold=" + slowSyncRollThreshold +
", current pipeline: " + Arrays.toString(getPipeLine()));
  result = true;
}
lastTimeCheckSlowSync = now;
slowSyncCount.set(0);
  }
  return result;
}
{code}
Assuming {{slowSyncCheckInterval}} is 6, and {{slowSyncRollThreshold}} is 
10, what about the scenario where we get 20 slow syncs in 50 seconds, and then 
we don't get any more slow syncs for an hour? On the next slow sync an hour 
later, it looks like we will roll the WAL on that first slow sync.

Can we check that it's not too long after the interval period? If it is, we 
can assume the previous situation corrected itself and just reset the 
counters. It gets a little messy, but perhaps something like:
{code:java}
  if (now - lastTimeCheckSlowSync >= slowSyncCheckInterval) {
if (now - lastTimeCheckSlowSync <= 2 * slowSyncCheckInterval && 
slowSyncCount.get() >= slowSyncRollThreshold) {
{code}
Alternatively, resetting {{lastTimeCheckSlowSync}} and {{slowSyncCount}} could 
also be done in {{requestLogRoll}}. I like that approach less, but it also 
would make it less likely we would request a WAL roll from one rogue slow sync 
much later.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> 

[jira] [Commented] (HBASE-17884) Backport HBASE-16217 to branch-1

2019-04-25 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826645#comment-16826645
 ] 

Lars Hofhansl commented on HBASE-17884:
---

Does this break binary compatibility?!
{code:java}
19/04/25 22:15:04 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: TEST: 
org.apache.hadoop.hbase.coprocessor.ObserverContext: method <init>()V not found
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:121)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:656)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17038)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8466)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2276)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2258)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36617)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2380)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.coprocessor.ObserverContext: method <init>()V not found
at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$CoprocessorOperation.<init>(PhoenixMetaDataCoprocessorHost.java:63)
at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$CoprocessorOperation.<init>(PhoenixMetaDataCoprocessorHost.java:63)
at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.<init>(PhoenixMetaDataCoprocessorHost.java:157)
at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:157)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:621)
... 9 more
{code}

[~apurtell]

> Backport HBASE-16217 to branch-1
> 
>
> Key: HBASE-17884
> URL: https://issues.apache.org/jira/browse/HBASE-17884
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 1.5.0, 1.4.10
>
> Attachments: HBASE-17884-branch-1.patch, HBASE-17884-branch-1.patch, 
> HBASE-17884.branch-1.001.patch
>
>
> The change to add calling user to ObserverContext in HBASE-16217 should also 
> be applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
> control checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22291) Fix recovery of recovered.edits files under root dir

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826643#comment-16826643
 ] 

Hudson commented on HBASE-22291:


Results for branch branch-1.3
[build #741 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/741/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/741//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/741//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/741//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix recovery of recovered.edits files under root dir
> 
>
> Key: HBASE-22291
> URL: https://issues.apache.org/jira/browse/HBASE-22291
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.9
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22291.branch-1.001.patch, 
> HBASE-22291.master.001.patch, HBASE-22291.master.002.patch
>
>
> It looks like a few places are using incorrect FS instances in the 
> replayRecoveredEditsForPath method that was introduced in HBASE-20734.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22313) Add a method to FsDelegationToken to accept token kind

2019-04-25 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826641#comment-16826641
 ] 

Wei-Chiu Chuang commented on HBASE-22313:
-

Is the purpose to support a file system other than HDFS?

> Add a method to FsDelegationToken to accept token kind
> --
>
> Key: HBASE-22313
> URL: https://issues.apache.org/jira/browse/HBASE-22313
> Project: HBase
>  Issue Type: New Feature
>Reporter: Venkatesh Sridharan
>Priority: Minor
>
> The acquireDelegationToken method [1] defaults to checking for a delegation 
> token of kind "HDFS_DELEGATION_TOKEN" before fetching it from the FileSystem. 
> It would be helpful to have a method that accepts the token kind and fetches 
> a delegation token from the UserProvider for that token kind.
> [1] - 
> [https://github.com/apache/hbase/blob/rel/2.1.4/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java#L67]
>  
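
For illustration, a sketch of what such an overload could look like (the field 
names userProvider, userToken, hasForwardedToken, and renewer follow the 
linked class; this is an assumption about the shape of the change, not the 
actual patch):

{code:java}
// Sketch only: same flow as acquireDelegationToken(FileSystem), but with the
// token kind supplied by the caller instead of the hardcoded
// "HDFS_DELEGATION_TOKEN".
public void acquireDelegationToken(final String tokenKind, final FileSystem fs)
    throws IOException {
  if (userProvider.isHadoopSecurityEnabled()) {
    this.fs = fs;
    userToken = userProvider.getCurrent().getToken(tokenKind,
      fs.getCanonicalServiceName());
    if (userToken == null) {
      hasForwardedToken = false;
      userToken = fs.getDelegationToken(renewer);
    } else {
      hasForwardedToken = true;
      LOG.info("Use the existing token: " + userToken);
    }
  }
}
{code}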



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826624#comment-16826624
 ] 

Hudson commented on HBASE-22020:


Results for branch branch-2.0
[build #1543 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1543/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1543//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1543//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1543//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references DTDs that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> for, among other reasons, a complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which, in case it's rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22310) checkAndMutate used an incorrect row to check the condition

2019-04-25 Thread Adonis Ling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adonis Ling updated HBASE-22310:

Attachment: HBASE-22310.branch-1.4.002.patch
Status: Patch Available  (was: Open)

> checkAndMutate used an incorrect row to check the condition
> ---
>
> Key: HBASE-22310
> URL: https://issues.apache.org/jira/browse/HBASE-22310
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.4.9
>Reporter: Adonis Ling
>Assignee: Adonis Ling
>Priority: Major
> Attachments: HBASE-22310.branch-1.4.001.patch, 
> HBASE-22310.branch-1.4.002.patch
>
>
> In branch-1.4, checkAndMutate used the row of the RowMutations to check the 
> condition, which is incorrect. It will fail in the case of checking one row 
> and mutating a different row.
> The issue doesn't happen in the master branch.
>  
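
For illustration, a hedged sketch of the reported scenario against the 
branch-1 client API (the table, family, qualifier, and values here are made 
up):

{code:java}
// Sketch only: the condition should be evaluated against rowA (the first
// argument), but with this bug the server checked it against the row of the
// RowMutations (rowB) instead.
byte[] rowA = Bytes.toBytes("rowA");
byte[] rowB = Bytes.toBytes("rowB");
RowMutations mutations = new RowMutations(rowB);
mutations.add(new Put(rowB).addColumn(FAMILY, QUALIFIER, Bytes.toBytes("v1")));
boolean ok = table.checkAndMutate(rowA, FAMILY, QUALIFIER,
  CompareOp.EQUAL, Bytes.toBytes("expected"), mutations);
{code}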



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22310) checkAndMutate used an incorrect row to check the condition

2019-04-25 Thread Adonis Ling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adonis Ling updated HBASE-22310:

Attachment: (was: HBASE-22310.branch-1.4.002.patch)

> checkAndMutate used an incorrect row to check the condition
> ---
>
> Key: HBASE-22310
> URL: https://issues.apache.org/jira/browse/HBASE-22310
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.4.9
>Reporter: Adonis Ling
>Assignee: Adonis Ling
>Priority: Major
> Attachments: HBASE-22310.branch-1.4.001.patch
>
>
> In branch-1.4, checkAndMutate used the row of the RowMutations to check the 
> condition, which is incorrect. It will fail in the case of checking one row 
> and mutating a different row.
> The issue doesn't happen in the master branch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22310) checkAndMutate used an incorrect row to check the condition

2019-04-25 Thread Adonis Ling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adonis Ling updated HBASE-22310:

Status: Open  (was: Patch Available)

> checkAndMutate used an incorrect row to check the condition
> ---
>
> Key: HBASE-22310
> URL: https://issues.apache.org/jira/browse/HBASE-22310
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.4.9
>Reporter: Adonis Ling
>Assignee: Adonis Ling
>Priority: Major
> Attachments: HBASE-22310.branch-1.4.001.patch
>
>
> In branch-1.4, checkAndMutate used the row of the RowMutations to check the 
> condition, which is incorrect. It will fail in the case of checking one row 
> and mutating a different row.
> The issue doesn't happen in the master branch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826614#comment-16826614
 ] 

Sean Busbey commented on HBASE-22301:
-

I'm +1 on the current patch, either as-is or with adjustments to the sync 
thread count handling and to the defaults, based on feedback from David.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826609#comment-16826609
 ] 

HBase QA commented on HBASE-22301:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch passed checkstyle in hbase-hadoop-compat 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch passed checkstyle in hbase-hadoop2-compat 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} hbase-server: The patch generated 0 new + 94 
unchanged - 6 fixed = 94 total (was 100) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} 

[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826607#comment-16826607
 ] 

HBase QA commented on HBASE-22086:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
51s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
51s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 51s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  3m 
37s{color} | {color:red} patch has 28 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  2m  
1s{color} | {color:red} The patch causes 28 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m  
8s{color} | {color:red} The patch causes 28 errors with Hadoop v3.0.0. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 53s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/192/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967087/hbase-22086.addendum.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cb708b5eb603 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 0db0491a9e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HBASE-Build/192/artifact/patchprocess/patch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HBASE-Build/192/artifact/patchprocess/patch-compile-hbase-server.txt
 |
| javac | 

[jira] [Updated] (HBASE-22310) checkAndMutate used an incorrect row to check the condition

2019-04-25 Thread Adonis Ling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adonis Ling updated HBASE-22310:

Attachment: HBASE-22310.branch-1.4.002.patch

> checkAndMutate used an incorrect row to check the condition
> ---
>
> Key: HBASE-22310
> URL: https://issues.apache.org/jira/browse/HBASE-22310
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.4.9
>Reporter: Adonis Ling
>Assignee: Adonis Ling
>Priority: Major
> Attachments: HBASE-22310.branch-1.4.001.patch, 
> HBASE-22310.branch-1.4.002.patch
>
>
> In branch-1.4, checkAndMutate used the row of the RowMutations to check the 
> condition, which is incorrect. It will fail in the case of checking one row 
> and mutating a different row.
> The issue doesn't happen in the master branch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22291) Fix recovery of recovered.edits files under root dir

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826603#comment-16826603
 ] 

Hudson commented on HBASE-22291:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #546 (See 
[https://builds.apache.org/job/HBase-1.3-IT/546/])
HBASE-22291 Fix recovery of recovered.edits files under root dir (apurtell: 
[https://github.com/apache/hbase/commit/0153872b3679ac791288c76c06bd73a98d20f0f1])
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Fix recovery of recovered.edits files under root dir
> 
>
> Key: HBASE-22291
> URL: https://issues.apache.org/jira/browse/HBASE-22291
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.9
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22291.branch-1.001.patch, 
> HBASE-22291.master.001.patch, HBASE-22291.master.002.patch
>
>
> It looks like a few places are using incorrect FS instances in the 
> replayRecoveredEditsForPath method that was introduced in HBASE-20734.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826595#comment-16826595
 ] 

Sakthi commented on HBASE-22086:


Have attached the patch, [~Apache9]. The issue was that the other existing 
tests in the class weren't cleaning up the snapshot sizes persisted via 
QuotaTableUtil, hence the failure in testDeleteSnapshots in its initial phase. 
Have added cleanup steps to the other 2 functions. The tests pass now.

Will work on a patch for the other branches as well.
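
For context, a minimal sketch of the kind of cleanup step being described, 
assuming the test class holds a shared {{Connection conn}} (the actual patch 
may clean up more narrowly):

{code:java}
// Hypothetical @After hook: clear all rows from hbase:quota so snapshot sizes
// persisted by one test cannot bleed into testDeleteSnapshots.
@After
public void wipePersistedSnapshotSizes() throws IOException {
  try (Table quotaTable = conn.getTable(QuotaTableUtil.QUOTA_TABLE_NAME);
       ResultScanner scanner = quotaTable.getScanner(new Scan())) {
    for (Result result : scanner) {
      quotaTable.delete(new Delete(result.getRow()));
    }
  }
}
{code}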

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.addendum.patch, 
> hbase-22086.master.001.patch, hbase-22086.master.002.patch, 
> hbase-22086.master.003.patch, hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps:
> 1: set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22086:
---
Status: Patch Available  (was: Reopened)

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.addendum.patch, 
> hbase-22086.master.001.patch, hbase-22086.master.002.patch, 
> hbase-22086.master.003.patch, hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22086:
---
Attachment: hbase-22086.addendum.patch

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.addendum.patch, 
> hbase-22086.master.001.patch, hbase-22086.master.002.patch, 
> hbase-22086.master.003.patch, hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22086:
---
Attachment: (was: hbase-22086.master.001.patch)

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.addendum.patch, 
> hbase-22086.master.001.patch, hbase-22086.master.002.patch, 
> hbase-22086.master.003.patch, hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22086:
---
Attachment: hbase-22086.master.001.patch

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.addendum.patch, 
> hbase-22086.master.001.patch, hbase-22086.master.002.patch, 
> hbase-22086.master.003.patch, hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826587#comment-16826587
 ] 

Duo Zhang commented on HBASE-22086:
---

Temporarily reverted from master branch.

Please provide a new patch, [~jatsakthi]. Also, since this is a bug fix, 
shouldn't it go into all branches, not only master? So please also provide 
patches for the other branches at the same time.

Thanks.

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.master.001.patch, 
> hbase-22086.master.002.patch, hbase-22086.master.003.patch, 
> hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826583#comment-16826583
 ] 

Sakthi commented on HBASE-22086:


Looking into it [~Apache9]. 

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.master.001.patch, 
> hbase-22086.master.002.patch, hbase-22086.master.003.patch, 
> hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-22086:
---

This breaks TestQuotaTableUtil.testDeleteSnapshots.

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, space
> Fix For: 3.0.0
>
> Attachments: hbase-22086.master.001.patch, 
> hbase-22086.master.002.patch, hbase-22086.master.003.patch, 
> hbase-22086.master.004.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22289) WAL-based log splitting resubmit threshold may result in a task being stuck forever

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826579#comment-16826579
 ] 

HBase QA commented on HBASE-22289:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
39s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
18s{color} | {color:red} hbase-server: The patch generated 1 new + 15 unchanged 
- 14 fixed = 16 total (was 29) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  8s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
42s{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}179m 18s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Switch statement found in 
org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process() where 
one case falls through to the next case  At WALSplitterHandler.java:where one 
case falls through to the next case  At WALSplitterHandler.java:[lines 84-87] |
| Failed junit tests | hadoop.hbase.quotas.TestSpaceQuotas |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/190/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967068/HBASE-22289.03-branch-2.1.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7507ee5a95f4 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
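
For context on the findbugs complaint above, a generic illustration of the 
switch fall-through pattern it flags (SF_SWITCH_FALLTHROUGH) - this is not the 
actual WALSplitterHandler code, just the shape of the warning:

{code:java}
// Hypothetical status handler: PREEMPTED deliberately falls through to DONE.
// javac's @SuppressWarnings("fallthrough") quiets the compiler lint; findbugs
// itself needs its own suppression annotation or a restructure to be silenced.
final class FallThroughExample {
  enum Status { PREEMPTED, DONE, ERR }

  @SuppressWarnings("fallthrough")
  static void process(Status status) {
    switch (status) {
      case PREEMPTED:
        System.out.println("task preempted; finishing as done");
        // intentional fall through: preempted tasks also take the DONE path
      case DONE:
        System.out.println("releasing split task resources");
        break;
      default:
        System.out.println("unexpected status: " + status);
    }
  }
}
{code}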

[jira] [Updated] (HBASE-22291) Fix recovery of recovered.edits files under root dir

2019-04-25 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22291:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.5
   2.2.1
   2.1.5
   2.3.0
   1.4.10
   1.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

> Fix recovery of recovered.edits files under root dir
> 
>
> Key: HBASE-22291
> URL: https://issues.apache.org/jira/browse/HBASE-22291
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.9
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22291.branch-1.001.patch, 
> HBASE-22291.master.001.patch, HBASE-22291.master.002.patch
>
>
> It looks like a few places are using incorrect FS instances in the 
> replayRecoveredEditsForPath method that was introduced in HBASE-20734.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22291) Fix recovery of recovered.edits files under root dir

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826558#comment-16826558
 ] 

Andrew Purtell commented on HBASE-22291:


I've got a minute; let me try to commit this after some local checks.

> Fix recovery of recovered.edits files under root dir
> 
>
> Key: HBASE-22291
> URL: https://issues.apache.org/jira/browse/HBASE-22291
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.9
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBASE-22291.branch-1.001.patch, 
> HBASE-22291.master.001.patch, HBASE-22291.master.002.patch
>
>
> It looks like a few places are using incorrect FS instances in the 
> replayRecoveredEditsForPath method that was introduced in HBASE-20734.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22301:
---
Attachment: HBASE-22301-branch-1.patch

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate this 
> but may still be susceptible; branch-2's sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time, this probably 
> means there is a widespread problem with the fleet and so our mitigation is 
> not helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.
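
A minimal sketch of the thresholding strategy described above; the class name, 
method names, and values are hypothetical illustrations, not the actual patch:

{code:java}
// Hypothetical helper deciding whether a slow WAL sync should trigger a roll.
// Mirrors the description: a sync-time threshold plus a minimum interval
// between rolls so a fleet-wide problem cannot cause roll storms.
class SlowSyncRollPolicy {
  private final long slowSyncThresholdMs;   // e.g. 10_000
  private final long minRollIntervalMs;     // e.g. 5 * 60 * 1000 (5 minutes)
  private volatile long lastRollRequestMs;

  SlowSyncRollPolicy(long slowSyncThresholdMs, long minRollIntervalMs) {
    this.slowSyncThresholdMs = slowSyncThresholdMs;
    this.minRollIntervalMs = minRollIntervalMs;
  }

  /** Called with each completed sync's duration; true means request a roll. */
  boolean onSyncCompleted(long syncDurationMs, long nowMs) {
    if (syncDurationMs < slowSyncThresholdMs) {
      return false;                         // pipeline healthy enough
    }
    if (nowMs - lastRollRequestMs < minRollIntervalMs) {
      return false;                         // rolled recently; don't storm
    }
    lastRollRequestMs = nowMs;              // roll picks three fresh datanodes
    return true;
  }
}
{code}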



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826557#comment-16826557
 ] 

Andrew Purtell commented on HBASE-22301:


Updated patch implements the logging change requested by [~busbey]

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch, 
> HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate this 
> but may still be susceptible; branch-2's sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time, this probably 
> means there is a widespread problem with the fleet and so our mitigation is 
> not helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826549#comment-16826549
 ] 

Andrew Purtell commented on HBASE-22274:


Pointing at this commit isn't useful. If this is really causing the test 
failure, we need an analysis of why, because at first glance it is an unlikely 
reason, unless somehow that filter is used in the test...

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize* 
> One case was missing: appending to a cell only takes the append op's cell 
> size into account against this limit check. We should check against the 
> potential final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff
>  
> {code:java}
> diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> index 5a285ef6ba..8633177ebe 100644
> --- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> +++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> @@ -6455,7 +6455,7 @@
> -t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 1024]));
> +t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024]));
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> already have the appended cell's size. 
> It will throw DoNotRetryIOException if the check fails.
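
A rough sketch of what such a check could look like; the class, method, and 
parameter names here are hypothetical, while the config key and exception type 
come from the description above (the real change lives in HRegion#reckonDeltas):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.DoNotRetryIOException;

// Illustrative only: enforce hbase.server.keyvalue.maxsize against the size of
// the resulting cell after an Append, not just the size of the appended delta.
final class AppendSizeCheck {
  static void checkResultingCellSize(Configuration conf, long existingValueLen,
      long appendedValueLen) throws DoNotRetryIOException {
    long maxCellSize = conf.getLong("hbase.server.keyvalue.maxsize", 10 * 1024 * 1024);
    long resultSize = existingValueLen + appendedValueLen;
    if (maxCellSize > 0 && resultSize > maxCellSize) {
      throw new DoNotRetryIOException("Cell with size " + resultSize
          + " exceeds limit of " + maxCellSize + " bytes");
    }
  }
}
{code}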



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-1295) Multi data center replication

2019-04-25 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-1295:
-
Component/s: Replication

> Multi data center replication
> -
>
> Key: HBASE-1295
> URL: https://issues.apache.org/jira/browse/HBASE-1295
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Priority: Major
> Attachments: hbase_repl.3.odp, hbase_repl.3.pdf
>
>
> HBase should consider supporting a federated deployment where someone might 
> have terascale (or beyond) clusters in more than one geography and would want 
> the system to handle replication between the clusters/regions. It would be 
> sweet if HBase had something on the roadmap to sync between replicas out of 
> the box. 
> Consider if rows, columns, or even cells could be scoped: local, or global.
> Then, consider a background task on each cluster that replicates new globally 
> scoped edits to peer clusters. The HBase/Bigtable data model has convenient 
> features (timestamps, multiversioning) such that simple exchange of globally 
> scoped cells would be conflict free and would "just work". Implementation 
> effort here would be in producing an efficient mechanism for collecting up 
> edits from all the HRS and transmitting the edits over the network to peers 
> where they would then be split out to the HRS there. Holding on to the edit 
> trace and tracking it until the remote commits succeed would also be 
> necessary. So, HLog is probably the right place to set up the tee. This would 
> be filtered log shipping, basically.  
> This proposal does not consider transactional tables. For transactional 
> tables, enforcement of global mutation commit ordering would come into the 
> picture if the user wants the transaction to span the federation. This 
> should be an optional feature even with transactional tables themselves being 
> optional because of how slow it would be.
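
For reference, per-column-family replication scope is essentially how the 
local/global scoping idea sketched above eventually surfaced in HBase; a 
minimal sketch using the (since deprecated) HColumnDescriptor API:

{code:java}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;

public class ScopeExample {
  public static void main(String[] args) {
    // Globally scoped edits are shipped to peer clusters; locally scoped
    // (scope 0) edits never enter replication.
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setScope(HConstants.REPLICATION_SCOPE_GLOBAL); // 1 = global, 0 = local
    System.out.println("replication scope: " + family.getScope());
  }
}
{code}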



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-1295) Multi data center replication

2019-04-25 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-1295:
-
Labels: replication  (was: )

> Multi data center replication
> -
>
> Key: HBASE-1295
> URL: https://issues.apache.org/jira/browse/HBASE-1295
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Andrew Purtell
>Priority: Major
>  Labels: replication
> Attachments: hbase_repl.3.odp, hbase_repl.3.pdf
>
>
> HBase should consider supporting a federated deployment where someone might 
> have terascale (or beyond) clusters in more than one geography and would want 
> the system to handle replication between the clusters/regions. It would be 
> sweet if HBase had something on the roadmap to sync between replicas out of 
> the box. 
> Consider if rows, columns, or even cells could be scoped: local, or global.
> Then, consider a background task on each cluster that replicates new globally 
> scoped edits to peer clusters. The HBase/Bigtable data model has convenient 
> features (timestamps, multiversioning) such that simple exchange of globally 
> scoped cells would be conflict free and would "just work". Implementation 
> effort here would be in producing an efficient mechanism for collecting up 
> edits from all the HRS and transmitting the edits over the network to peers 
> where they would then be split out to the HRS there. Holding on to the edit 
> trace and tracking it until the remote commits succeed would also be 
> necessary. So, HLog is probably the right place to set up the tee. This would 
> be filtered log shipping, basically.  
> This proposal does not consider transactional tables. For transactional 
> tables, enforcement of global mutation commit ordering would come into the 
> picture if the user wants the transaction to span the federation. This 
> should be an optional feature even with transactional tables themselves being 
> optional because of how slow it would be.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-25 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826523#comment-16826523
 ] 

Sean Mackrory commented on HBASE-22149:
---

(swapped out patch #5 - there was a variable name change that I hadn't done 
everywhere, and didn't notice it until I did a clean build).

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, 
> HBASE-22149-hbase-3.patch, HBASE-22149-hbase-4.patch, 
> HBASE-22149-hbase-5.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.
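
A bare-bones sketch of the wrapping idea described above, assuming a 
hypothetical ZooKeeper-backed lock manager (the class and interface names here 
are illustrations, not the actual patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: wrap another FileSystem and serialize metadata
// operations through an external lock so that rename appears atomic to
// concurrent HBase clients.
public class LockingFileSystem extends FilterFileSystem {
  /** Hypothetical interface standing in for the ZooKeeper-based lock manager. */
  public interface TreeLockManager {
    AutoCloseable lockPaths(Path... paths) throws IOException;
  }

  private final TreeLockManager locks;

  public LockingFileSystem(FileSystem wrapped, TreeLockManager locks) {
    super(wrapped);
    this.locks = locks;
  }

  @Override
  public boolean rename(Path src, Path dst) throws IOException {
    // Lock both subtrees so no reader observes a half-finished rename.
    try (AutoCloseable lock = locks.lockPaths(src, dst)) {
      return super.rename(src, dst);
    } catch (IOException e) {
      throw e;
    } catch (Exception e) {
      throw new IOException(e);
    }
  }
}
{code}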



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-25 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-22149:
--
Attachment: HBASE-22149-hbase-5.patch

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, 
> HBASE-22149-hbase-3.patch, HBASE-22149-hbase-4.patch, 
> HBASE-22149-hbase-5.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-25 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-22149:
--
Attachment: (was: HBASE-22149-hbase-5.patch)

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, 
> HBASE-22149-hbase-3.patch, HBASE-22149-hbase-4.patch, 
> HBASE-22149-hbase-5.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22289) WAL-based log splitting resubmit threshold may result in a task being stuck forever

2019-04-25 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826490#comment-16826490
 ] 

Sergey Shelukhin commented on HBASE-22289:
--

Fixed checkstyle; the findbugs issue is the same fall-through pattern as 
previously flagged in this case. 

> WAL-based log splitting resubmit threshold may result in a task being stuck 
> forever
> ---
>
> Key: HBASE-22289
> URL: https://issues.apache.org/jira/browse/HBASE-22289
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.5.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.1.5
>
> Attachments: HBASE-22289.01-branch-2.1.patch, 
> HBASE-22289.02-branch-2.1.patch, HBASE-22289.03-branch-2.1.patch
>
>
> Not sure if this is handled better in procedure based WAL splitting; in any 
> case it affects versions before that.
> The problem is not in ZK as such but in internal state tracking in master, it 
> seems.
> Master:
> {noformat}
> 2019-04-21 01:49:49,584 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Resubmitting task 
> .1555831286638
> {noformat}
> worker-rs, split fails 
> {noformat}
> 
> 2019-04-21 02:05:31,774 INFO  
> [RS_LOG_REPLAY_OPS-regionserver/:17020-1] wal.WALSplitter: 
> Processed 24 edits across 2 regions; edits skipped=457; log 
> file=.1555831286638, length=2156363702, corrupted=false, progress 
> failed=true
> {noformat}
> Master (not sure about the delay of the acquired-message; at any rate it 
> seems to detect the failure fine from this server)
> {noformat}
> 2019-04-21 02:11:14,928 INFO  [main-EventThread] 
> coordination.SplitLogManagerCoordination: Task .1555831286638 acquired 
> by ,17020,139815097
> 2019-04-21 02:19:41,264 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Skipping resubmissions of task 
> .1555831286638 because threshold 3 reached
> {noformat}
> After that this task is stuck in limbo forever with the old worker, and 
> never resubmitted. 
> The RS never logs anything else for this task.
> Killing the RS on the worker unblocked the task and some other server did the 
> split very quickly, so it seems like the master doesn't clear the worker name 
> in its internal state when hitting the threshold... the master was never 
> restarted, so restarting the master might also have cleared it.
> This is extracted from SplitLogManager log messages; note the times.
> {noformat}
> 2019-04-21 02:2   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20, 
> 
> 2019-04-22 11:1   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20}
> {noformat}
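
A simplified illustration of the suspected state-handling gap, with 
hypothetical names; this is not the actual SplitLogManagerCoordination code, 
just one way the symptom above could arise and be avoided:

{code:java}
// Hypothetical: once the unforced-resubmit threshold is reached, the task
// must not stay pinned to its last (possibly dead) worker, otherwise it sits
// "in_progress" until that worker dies or the master restarts.
final class ResubmitSketch {
  static final class SplitTask {
    String curWorker;      // last worker that acquired the task
    int unforcedResubmits; // resubmits not forced by an operator
  }

  private final int resubmitThreshold;

  ResubmitSketch(int resubmitThreshold) {
    this.resubmitThreshold = resubmitThreshold;
  }

  /** Returns true when the task was handed back to the unassigned pool. */
  boolean maybeResubmit(SplitTask task) {
    if (task.unforcedResubmits >= resubmitThreshold) {
      // Leaving task.curWorker set here is the stuck-forever failure mode;
      // detaching the worker keeps the task eligible for reassignment.
      task.curWorker = null;
      return false;
    }
    task.unforcedResubmits++;
    task.curWorker = null; // task goes back to the unassigned pool
    return true;
  }
}
{code}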



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22289) WAL-based log splitting resubmit threshold may result in a task being stuck forever

2019-04-25 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-22289:
-
Attachment: HBASE-22289.03-branch-2.1.patch

> WAL-based log splitting resubmit threshold may result in a task being stuck 
> forever
> ---
>
> Key: HBASE-22289
> URL: https://issues.apache.org/jira/browse/HBASE-22289
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.5.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.1.5
>
> Attachments: HBASE-22289.01-branch-2.1.patch, 
> HBASE-22289.02-branch-2.1.patch, HBASE-22289.03-branch-2.1.patch
>
>
> Not sure if this is handled better in procedure based WAL splitting; in any 
> case it affects versions before that.
> The problem is not in ZK as such but in internal state tracking in master, it 
> seems.
> Master:
> {noformat}
> 2019-04-21 01:49:49,584 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Resubmitting task 
> .1555831286638
> {noformat}
> worker-rs, split fails 
> {noformat}
> 
> 2019-04-21 02:05:31,774 INFO  
> [RS_LOG_REPLAY_OPS-regionserver/:17020-1] wal.WALSplitter: 
> Processed 24 edits across 2 regions; edits skipped=457; log 
> file=.1555831286638, length=2156363702, corrupted=false, progress 
> failed=true
> {noformat}
> Master (not sure about the delay of the acquired-message; at any rate it 
> seems to detect the failure fine from this server)
> {noformat}
> 2019-04-21 02:11:14,928 INFO  [main-EventThread] 
> coordination.SplitLogManagerCoordination: Task .1555831286638 acquired 
> by ,17020,139815097
> 2019-04-21 02:19:41,264 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Skipping resubmissions of task 
> .1555831286638 because threshold 3 reached
> {noformat}
> After that this task is stuck in limbo forever with the old worker, and 
> never resubmitted. 
> The RS never logs anything else for this task.
> Killing the RS on the worker unblocked the task and some other server did the 
> split very quickly, so it seems like the master doesn't clear the worker name 
> in its internal state when hitting the threshold... the master was never 
> restarted, so restarting the master might also have cleared it.
> This is extracted from SplitLogManager log messages; note the times.
> {noformat}
> 2019-04-21 02:2   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20, 
> 
> 2019-04-22 11:1   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20}
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22083) move eclipse specific configs into a profile

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22083:

   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0
 Release Note: 

Maven project integration for Eclipse has been isolated into a maven profile to 
ensure it only is active when in an Eclipse project.

Things should continue to behave the same for Eclipse users. If something 
goes wrong, folks should manually activate the `eclipse-specific` profile.
   Status: Resolved  (was: Patch Available)

> move eclipse specific configs into a profile
> 
>
> Key: HBASE-22083
> URL: https://issues.apache.org/jira/browse/HBASE-22083
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>  Labels: eclipse
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22083.0.patch
>
>
> move our eclipse-specific configs into profiles so they don't show up in a 
> non-eclipse build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826468#comment-16826468
 ] 

HBase QA commented on HBASE-22315:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HBASE-22315 does not apply to branch-2.2. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/189/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22315 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967067/HBASE-22315.branch-2.2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/189/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add a debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When the hbase:namespace table was not assigned
> * Logging in Procedure WALs to debug HBASE-20552
> * HMaster not assigning regions properly with the RSGroups feature
>  
> Currently, these changes sit in our local branch, but it would be good to add 
> them upstream so others can also benefit from them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22315:

Status: Patch Available  (was: Open)

> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add a debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When the hbase:namespace table was not assigned
> * Logging in Procedure WALs to debug HBASE-20552
> * HMaster not assigning regions properly with the RSGroups feature
>  
> Currently, these changes sit in our local branch, but it would be good to add 
> them upstream so others can also benefit from them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22315:

Issue Type: Improvement  (was: Bug)

> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When hbase:namespace table was not assigned
> * Logging in Procedure wals to debug HBASE-20552
> * HMaster not assigning regions properly with RSGroups feature
>  
> Currently, this logging sits in our local branch, but it would be good if we 
> could add it upstream so others can also benefit from it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22315:

Priority: Minor  (was: Major)

> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When hbase:namespace table was not assigned
> * Logging in Procedure wals to debug HBASE-20552
> * HMaster not assigning regions properly with RSGroups feature
>  
> Currently, this logging sits in our local branch, but it would be good if we 
> could add it upstream so others can also benefit from it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-22315:
---

Assignee: Ankit Singhal

> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When hbase:namespace table was not assigned
> * Logging in Procedure wals to debug HBASE-20552
> * HMaster not assigning regions properly with RSGroups feature
>  
> Currently, this logging sits in our local branch, but it would be good if we 
> could add it upstream so others can also benefit from it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-22315:
--
Attachment: HBASE-22315.branch-2.2.001.patch

> Adding some logging which helped us internally in debugging issues
> --
>
> Key: HBASE-22315
> URL: https://issues.apache.org/jira/browse/HBASE-22315
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.2.0
>Reporter: Ankit Singhal
>Priority: Major
> Attachments: HBASE-22315.branch-2.2.001.patch
>
>
> [~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
> below) encountered during testing.
>  
> * Region replica ends up on the same RS - add debug log for 
> RegionReplicaHostCostFunction
> * Debugging why regionservers are going down with Memstore compaction
> * When hbase:namespace table was not assigned
> * Logging in Procedure wals to debug HBASE-20552
> * HMaster not assigning regions properly with RSGroups feature
>  
> Currently, this logging sits in our local branch, but it would be good if we 
> could add it upstream so others can also benefit from it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22315) Adding some logging which helped us internally in debugging issues

2019-04-25 Thread Ankit Singhal (JIRA)
Ankit Singhal created HBASE-22315:
-

 Summary: Adding some logging which helped us internally in 
debugging issues
 Key: HBASE-22315
 URL: https://issues.apache.org/jira/browse/HBASE-22315
 Project: HBase
  Issue Type: Bug
  Components: logging
Affects Versions: 2.2.0
Reporter: Ankit Singhal


[~ted_yu]/[~elserj] have added some logging to debug certain cases (defined 
below) encountered during testing.

* Region replica ends up on the same RS - add debug log for 
RegionReplicaHostCostFunction
* Debugging why regionservers are going down with Memstore compaction
* When hbase:namespace table was not assigned
* Logging in Procedure wals to debug HBASE-20552
* HMaster not assigning regions properly with RSGroups feature

Currently, this logging sits in our local branch, but it would be good if we 
could add it upstream so others can also benefit from it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22274) Cell size limit check on append should consider cell's previous size.

2019-04-25 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826451#comment-16826451
 ] 

Xu Cang commented on HBASE-22274:
-

I checked out code on branch-1 on this commit: 
c10ee4d23be40a26070448d48e0608c7be95d4e1

(before this, I cleaned up my workspace too)

And I can reproduce this test failure:

$ mvn clean install -DskipITs -Dtest=TestFromClientSide,TestHRegion

[INFO] ---
[INFO] T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.TestHRegion
[INFO] Tests run: 108, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.365 
s - in org.apache.hadoop.hbase.regionserver.TestHRegion
[INFO] Running org.apache.hadoop.hbase.client.TestFromClientSide
[ERROR] Tests run: 86, Failures: 1, Errors: 0, Skipped: 4, Time elapsed: 
230.667 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestFromClientSide
[ERROR] 
testCheckAndDeleteWithCompareOp(org.apache.hadoop.hbase.client.TestFromClientSide)
 Time elapsed: 1.344 s <<< FAILURE!
java.lang.AssertionError: expected: but was:
 at 
org.apache.hadoop.hbase.client.TestFromClientSide.testCheckAndDeleteWithCompareOp(TestFromClientSide.java:5002)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] TestFromClientSide.testCheckAndDeleteWithCompareOp:5002 
expected: but was:
[INFO]
[ERROR] Tests run: 194, Failures: 1, Errors: 0, Skipped: 4
[INFO]

> Cell size limit check on append should consider cell's previous size.
> -
>
> Key: HBASE-22274
> URL: https://issues.apache.org/jira/browse/HBASE-22274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0, 1.3.5
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-22274-branch-1.001.patch, 
> HBASE-22274-branch-1.002.patch, HBASE-22274-master.001.patch, 
> HBASE-22274-master.002.patch, HBASE-22274-master.002.patch, 
> HBASE-22274-master.003.patch
>
>
> Now we have a cell size limit check based on this parameter 
> *hbase.server.keyvalue.maxsize*. 
> One case was missed: appending to a cell only takes the append op's cell size 
> into account for this limit check; we should check against the potential 
> final cell size after the append.
> It's easy to reproduce this:
>  
> Apply this diff:
>  
> {code:java}
> diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> index 5a285ef6ba..8633177ebe 100644
> --- a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> +++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
> @@ -6455,7 +6455,7 @@
> - t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[10 * 1024]));
> + t.append(new Append(ROW).addColumn(FAMILY, QUALIFIER, new byte[2 * 1024]));
> {code}
>  
> The fix is to add this check in #reckonDeltas in the HRegion class, where we 
> have already got the appended cell's size, and throw a DoNotRetryIOException 
> if the check fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
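
To make the proposed fix concrete, a minimal sketch of the combined-size check 
follows. It assumes the existing cell size and the appended delta size are 
already known at that point in #reckonDeltas; the helper class is illustrative, 
not the actual patch.

{code:java}
import org.apache.hadoop.hbase.DoNotRetryIOException;

// Illustrative only: check the *combined* cell size on append, rather than
// only the size of the delta being appended.
final class AppendSizeCheck {
  static void checkCellSizeLimit(long existingCellSize, long appendedDeltaSize,
      long maxCellSize) throws DoNotRetryIOException {
    long resultSize = existingCellSize + appendedDeltaSize;
    if (maxCellSize > 0 && resultSize > maxCellSize) {
      throw new DoNotRetryIOException("Cell with size " + resultSize
          + " exceeds limit of " + maxCellSize + " bytes after append");
    }
  }
}
{code}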


[jira] [Updated] (HBASE-22218) Shell throws "Unsupported Java version" when tried with Java 11 (run-time)

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22218:
---
Fix Version/s: 2.3.0

> Shell throws "Unsupported Java version" when tried with Java 11 (run-time)
> --
>
> Key: HBASE-22218
> URL: https://issues.apache.org/jira/browse/HBASE-22218
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0, 2.3.0
>
>
> The following warning is thrown in the shell.
> {noformat}
> unsupported Java version "11", defaulting to 1.7{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22218) Shell throws "Unsupported Java version" when tried with Java 11 (run-time)

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi resolved HBASE-22218.

   Resolution: Fixed
Fix Version/s: 2.0.6
   3.0.0

> Shell throws "Unsupported Java version" when tried with Java 11 (run-time)
> --
>
> Key: HBASE-22218
> URL: https://issues.apache.org/jira/browse/HBASE-22218
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0, 2.0.6
>
>
> The following warning is thrown in the shell.
> {noformat}
> unsupported Java version "11", defaulting to 1.7{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22218) Shell throws "Unsupported Java version" when tried with Java 11 (run-time)

2019-04-25 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22218:
---
Fix Version/s: (was: 2.0.6)

> Shell throws "Unsupported Java version" when tried with Java 11 (run-time)
> --
>
> Key: HBASE-22218
> URL: https://issues.apache.org/jira/browse/HBASE-22218
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0
>
>
> The following warning is thrown in the shell.
> {noformat}
> unsupported Java version "11", defaulting to 1.7{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22218) Shell throws "Unsupported Java version" when tried with Java 11 (run-time)

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826448#comment-16826448
 ] 

Sakthi commented on HBASE-22218:


Yup!

> Shell throws "Unsupported Java version" when tried with Java 11 (run-time)
> --
>
> Key: HBASE-22218
> URL: https://issues.apache.org/jira/browse/HBASE-22218
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
>
> The following warning is thrown in the shell.
> {noformat}
> unsupported Java version "11", defaulting to 1.7{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22307) Deprecated Preemptive Fail Fast

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826444#comment-16826444
 ] 

Hudson commented on HBASE-22307:


Results for branch branch-2
[build #1846 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1846/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1846//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1846//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1846//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Deprecated Preemptive Fail Fast
> ---
>
> Key: HBASE-22307
> URL: https://issues.apache.org/jira/browse/HBASE-22307
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22307-addendum.patch
>
>
> Opened a discussion thread on the dev & user mailing lists but got no 
> response, so I assume that there are no critical users for this feature. And 
> the problem we want to solve here is mainly the same as HBASE-16388, so I 
> think users could make use of the config in HBASE-16388.
> Plan to deprecate the related classes and configs on branch-2; the IA.Private 
> classes will be removed in 3.0.0 (on branch HBASE-21512 first), and the 
> constants in the HConstants class will be kept till 4.0.0, so we do not break 
> the public API, although it is useless to configure them on 3.0.0+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
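
As an illustration of the deprecation plan described above (not the actual 
patch), keeping a constant on the public API while steering users away from it 
could look like the sketch below. The wrapper class is hypothetical, and the 
key mirrors one of the fast-fail configs in HConstants.

{code:java}
// Hypothetical sketch: deprecate a fast-fail config constant while keeping
// it on the public API until 4.0.0.
public final class DeprecationSketch {
  /**
   * @deprecated Since 2.3.0, to be removed in 4.0.0. Use the retry
   *     configuration from HBASE-16388 instead; setting this has no effect
   *     on 3.0.0+.
   */
  @Deprecated
  public static final String HBASE_CLIENT_FAST_FAIL_MODE_ENABLED =
      "hbase.client.fast.fail.mode.enabled";

  private DeprecationSketch() {
  }
}
{code}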


[jira] [Commented] (HBASE-22313) Add a method to FsDelegationToken to accept token kind

2019-04-25 Thread Venkatesh Sridharan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826408#comment-16826408
 ] 

Venkatesh Sridharan commented on HBASE-22313:
-

Thanks for linking this. But doesn't your patch still only use the 
HDFS_DELEGATION_TOKEN kind?

> Add a method to FsDelegationToken to accept token kind
> --
>
> Key: HBASE-22313
> URL: https://issues.apache.org/jira/browse/HBASE-22313
> Project: HBase
>  Issue Type: New Feature
>Reporter: Venkatesh Sridharan
>Priority: Minor
>
> The acquireDelegationToken method [1] defaults to checking for delegation 
> token of kind "HDFS_DELEGATION_TOKEN" before fetching it from the FileSystem. 
> It would be helpful to have a method that accepts the token kind and fetches 
> the delegation token from UserProvider for that token kind.
> [1] - 
> [https://github.com/apache/hbase/blob/rel/2.1.4/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java#L67]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
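
A hedged sketch of the requested overload follows. The signature and field 
names are hypothetical, modeled loosely on the existing 
acquireDelegationToken(FileSystem); this is not the actual FsDelegationToken 
implementation.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// Hypothetical overload: accept a caller-supplied token kind instead of
// hard-coding "HDFS_DELEGATION_TOKEN".
public class FsDelegationTokenSketch {
  private Token<?> userToken;

  public void acquireDelegationToken(String tokenKind, FileSystem fs,
      UserGroupInformation ugi) throws IOException {
    // First look for an existing token of the requested kind.
    for (Token<?> token : ugi.getTokens()) {
      if (token.getKind().toString().equals(tokenKind)) {
        userToken = token;
        return;
      }
    }
    // Otherwise fetch a fresh one from the FileSystem.
    userToken = fs.getDelegationToken(ugi.getShortUserName());
  }
}
{code}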


[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-25 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826393#comment-16826393
 ] 

Sean Mackrory commented on HBASE-22149:
---

Added some contract tests I had missed that get a lot more coverage, fixed all 
the issues I was having in the tests (it was just tests stepping on each 
other's paths because they weren't all in separate directories as they are 
supposed to be), and am now normalizing all paths and sorting arrays in one 
central place. When I normalize paths for locking, I'm using 
/scheme/hostname/path to ensure using this for multiple filesystems is safe (a 
rough sketch of that normalization follows this message).

Some minor to-dos are left, but they definitely don't impact my test cases or 
the HBase workloads that have run on this so far:

- mkdirs has implications for any parent directories that don't exist yet, 
although it will only lock the path. I can't think of a scenario where this 
would cause a problem, though.
- The local lock implementation isn't re-entrant if you read-lock a path and 
then try to read-lock a parent in the same thread. I don't think anyone would 
use it in production, and the ZK implementation is the default even for the 
unit tests. This implementation is really only still there in case it helps 
with debugging other logic.
- The whole thing really depends on Hadoop 3+ (in production, S3Guard is 
required and isn't in the Hadoop 2 releases, and even just for testing there's 
a lot of changes required to get it to compile). I'm wondering if there's an 
easy way to include this module only with the Hadoop 3 profile. I haven't seen 
one, so... hints welcome :)

Other than that: what else would the community like to see before this is 
committed (albeit perhaps with a big "experimental" label until it has gone 
through more scale and integration testing)?

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, 
> HBASE-22149-hbase-3.patch, HBASE-22149-hbase-4.patch, 
> HBASE-22149-hbase-5.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> 
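
For illustration, the /scheme/hostname/path lock-key normalization mentioned in 
the comment above might look like this hypothetical helper (names made up; the 
real implementation in the patch may differ):

{code:java}
import java.net.URI;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: normalize a Path into a lock key of the form
// /scheme/authority/path, so locks remain distinct (and safe) when the
// wrapper is used over multiple filesystems.
final class LockPathNormalizer {
  static String toLockKey(Path path) {
    URI uri = path.toUri();
    String scheme = uri.getScheme() == null ? "default" : uri.getScheme();
    String authority = uri.getAuthority() == null ? "" : uri.getAuthority();
    return "/" + scheme + "/" + authority + uri.getPath();
  }
}
{code}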

[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826395#comment-16826395
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #77 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/77/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/77//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/77//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/77//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap 
> byte[], then copy that on-heap byte[] to the offheap bucket cache 
> asynchronously. In my 100% get performance test, I also observed some 
> frequent young gc; the largest memory footprint in the young gen should be 
> the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of a 
> byte[] to reduce young gc. We did not implement this before because there was 
> no ByteBuffer reading interface in the older HDFS client, but 2.7+ supports 
> this now, so we can fix it now, I think. 
> Will provide a patch and some perf comparisons for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
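
A rough sketch of the ByteBuffer-based read direction described above. It 
assumes the underlying stream supports ByteBufferReadable (available in HDFS 
clients 2.7+); the real patch's buffer pooling and checksum handling are more 
involved.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

// Sketch only: fill a ByteBuffer from the stream instead of a byte[].
// FSDataInputStream.read(ByteBuffer) works when the wrapped stream
// implements ByteBufferReadable; otherwise it throws
// UnsupportedOperationException and a byte[] fallback is needed.
final class ByteBufferBlockReader {
  static void readFully(FSDataInputStream in, long offset, ByteBuffer buf)
      throws IOException {
    in.seek(offset);
    while (buf.hasRemaining()) {
      if (in.read(buf) < 0) {
        throw new IOException("Premature EOF while filling block buffer");
      }
    }
  }
}
{code}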


[GitHub] [hbase] Apache-HBase commented on issue #192: Related jiras that update our handling of Hadoop transitive dependencies

2019-04-25 Thread GitBox
Apache-HBase commented on issue #192: Related jiras that update our handling of 
Hadoop transitive dependencies
URL: https://github.com/apache/hbase/pull/192#issuecomment-486803439
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 253 | master passed |
   | +1 | compile | 65 | master passed |
   | +1 | shadedjars | 264 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | javadoc | 47 | master passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 238 | the patch passed |
   | +1 | compile | 66 | the patch passed |
   | +1 | javac | 66 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedjars | 256 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 494 | Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. |
   | +1 | javadoc | 47 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 12 | hbase-resource-bundle in the patch passed. |
   | +1 | unit | 23 | hbase-shaded in the patch passed. |
   | +1 | unit | 15 | hbase-shaded-client-byo-hadoop in the patch passed. |
   | +1 | unit | 16 | hbase-shaded-mapreduce in the patch passed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 1999 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-192/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/192 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  xml  
shadedjars  hadoopcheck  compile  |
   | uname | Linux 0741eedfde1b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / ec36372649 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-192/1/testReport/
 |
   | Max. process+thread count | 86 (vs. ulimit of 1) |
   | modules | C: hbase-resource-bundle hbase-shaded 
hbase-shaded/hbase-shaded-client-byo-hadoop hbase-shaded/hbase-shaded-mapreduce 
U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-192/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-25 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-22149:
--
Attachment: HBASE-22149-hbase-5.patch

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase-2.patch, 
> HBASE-22149-hbase-3.patch, HBASE-22149-hbase-4.patch, 
> HBASE-22149-hbase-5.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22172) Suppress Java 11 reflective access warnings

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22172:

Component/s: scripts
 java

> Suppress Java 11 reflective access warnings
> ---
>
> Key: HBASE-22172
> URL: https://issues.apache.org/jira/browse/HBASE-22172
> Project: HBase
>  Issue Type: Task
>  Components: java, scripts
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22172.master.001.patch
>
>
> While running a Java 8-compiled HBase on a Java 11 system, I found the 
> following warnings being thrown. I think we can add the "--add-opens" flag to 
> HBASE_OPTS (if the jdk version is 11) to suppress these warnings.
> {code:java}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker 
> (file:/Users/jatsakthi/test/HBASE_TEST_AREA/hbase-3.0.0-SNAPSHOT/lib/hbase-common-3.0.0-SNAPSHOT.jar)
>  to method java.nio.Bits.unaligned()
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22172) Suppress Java 11 reflective access warnings

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22172:

Priority: Minor  (was: Major)

> Suppress Java 11 reflective access warnings
> ---
>
> Key: HBASE-22172
> URL: https://issues.apache.org/jira/browse/HBASE-22172
> Project: HBase
>  Issue Type: Task
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
>  Labels: jdk11
> Attachments: hbase-22172.master.001.patch
>
>
> While running a Java 8-compiled HBase on a Java 11 system, I found the 
> following warnings being thrown. I think we can add the "--add-opens" flag to 
> HBASE_OPTS (if the jdk version is 11) to suppress these warnings.
> {code:java}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker 
> (file:/Users/jatsakthi/test/HBASE_TEST_AREA/hbase-3.0.0-SNAPSHOT/lib/hbase-common-3.0.0-SNAPSHOT.jar)
>  to method java.nio.Bits.unaligned()
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22218) Shell throws "Unsupported Java version" when tried with Java 11 (run-time)

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826382#comment-16826382
 ] 

Sean Busbey commented on HBASE-22218:
-

excellent! so we can just close this then?

> Shell throws "Unsupported Java version" when tried with Java 11 (run-time)
> --
>
> Key: HBASE-22218
> URL: https://issues.apache.org/jira/browse/HBASE-22218
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
>
> The following warning is thrown in the shell.
> {noformat}
> unsupported Java version "11", defaulting to 1.7{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14850) C++ client implementation

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826381#comment-16826381
 ] 

Sean Busbey commented on HBASE-14850:
-

we need a jira version for things in the hbase-native-client; e.g. HBASE-22201 
is currently closed with no version.

> C++ client implementation
> -
>
> Key: HBASE-14850
> URL: https://issues.apache.org/jira/browse/HBASE-14850
> Project: HBase
>  Issue Type: Task
>Reporter: Elliott Clark
>Priority: Major
>
> It's happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21502) Update SyncTable section on RefGuide once HBASE-20586 is committed

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21502:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

thanks Wellington!

> Update SyncTable section on RefGuide once HBASE-20586 is committed
> --
>
> Key: HBASE-21502
> URL: https://issues.apache.org/jira/browse/HBASE-21502
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-21502.master.001.patch
>
>
> SyncTable [refguide 
> section|https://hbase.apache.org/book.html#_step_2_synctable] currently 
> mentions limitation to run it on different kerberos realm. HBASE-20586 is 
> ongoing to resolve this problem. This jira is to make sure RefGuide is 
> updated accordingly once HBASE-20586 is resolved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826360#comment-16826360
 ] 

Sean Busbey commented on HBASE-22020:
-

pushed to branch-2.0 since the dev@ discussion about 2.0.z EOL is still 
happening. branch-2.0 essentially has a branch-1 forward port because its test 
setup is like branch-1's.

> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> for, among other reasons, a complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which in case its rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22020) upgrade to yetus 0.9.0

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22020:

Fix Version/s: 2.0.6

> upgrade to yetus 0.9.0
> --
>
> Key: HBASE-22020
> URL: https://issues.apache.org/jira/browse/HBASE-22020
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.0.6, 2.1.5, 2.2.1, 1.3.5
>
> Attachments: HBASE-22020-branch-1.v1.patch, HBASE-22020.0.patch, 
> HBASE-22020.1.patch
>
>
> branch-1/jdk7 checkstyle dtd xml parse complaint; "script engine for language 
> js can not be found"
> See parent for some context. Checkstyle references dtds that were hosted on 
> puppycrawl, then on sourceforge up until ten days ago. Nightlies are failing 
> for, among other reasons, a complaint that there is bad xml in the build... 
> notably, the unresolvable DTDs.
> I'd just update the DTDs, but there is a need for a js engine somewhere and 
> openjdk7 doesn't ship with one (openjdk8 does). That needs addressing and 
> then we can backport the parent issue...
> See 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1/710/artifact/output-general/xml.txt
>  ... which in case its rolled away, is filled with this message:
> "script engine for language js can not be found"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22109) Update hbase shaded content checker after guava update in hadoop branch-3.0 to 27.0-jre

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826358#comment-16826358
 ] 

Sean Busbey commented on HBASE-22109:
-

Okay, I've modified the provided patch to sit on a branch with HBASE-22312, 
HBASE-22314, and HBASE-22087:

[https://github.com/apache/hbase/pull/192]

With all of these in place I get a clean build against Hadoop 3 trunk (as of 
3f787cd5065560f1bbb9f56a617cd4815803ca8a) that I think also has correct 
artifacts.

> Update hbase shaded content checker after guava update in hadoop branch-3.0 
> to 27.0-jre
> ---
>
> Key: HBASE-22109
> URL: https://issues.apache.org/jira/browse/HBASE-22109
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HBASE-22109.001.patch
>
>
> I'm updating the guava version from 11.0.2 to 27.0-jre in HADOOP-15960 because 
> of a CVE. I will create a patch for branch-3.0, 3.1, 3.2 and trunk (3.3).  
> To be sure that HBase works with the updated guava, I compiled and 
> ran the HBase tests with my hadoop snapshot containing the updated version, 
> but there were some issues that I had to fix:
> * New shaded dependency: org.checkerframework
> * New license needs to be added to LICENSE.vm: Apache 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826357#comment-16826357
 ] 

Sean Busbey commented on HBASE-22087:
-

Okay, I've modified the provided patch to sit on top of HBASE-22312, 
HBASE-22314, and HBASE-22109:

[https://github.com/apache/hbase/pull/192]

With all of these in place I get a clean build against Hadoop 3 trunk (as of 
3f787cd5065560f1bbb9f56a617cd4815803ca8a) that I think also has correct 
artifacts.

> Update LICENSE/shading for the dependencies from the latest Hadoop trunk
> 
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
>  Issue Type: Improvement
>  Components: hadoop3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HBASE-22087.master.001.patch, depcheck_hadoop33.log
>
>
> The following dependencies were added in Hadoop trunk (3.3.0), and HBase does 
> not compile successfully against them:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.commons.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22109) Update hbase shaded content checker after guava update in hadoop branch-3.0 to 27.0-jre

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-22109:
---

Assignee: Gabor Bota  (was: Sean Busbey)

> Update hbase shaded content checker after guava update in hadoop branch-3.0 
> to 27.0-jre
> ---
>
> Key: HBASE-22109
> URL: https://issues.apache.org/jira/browse/HBASE-22109
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HBASE-22109.001.patch
>
>
> I'm updating the guava version from 11.0.2 to 27.0-jre in HADOOP-15960 because 
> of a CVE. I will create a patch for branch-3.0, 3.1, 3.2 and trunk (3.3).  
> To be sure that HBase works with the updated guava, I compiled and 
> ran the HBase tests with my hadoop snapshot containing the updated version, 
> but there were some issues that I had to fix:
> * New shaded dependency: org.checkerframework
> * New license needs to be added to LICENSE.vm: Apache 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] busbey opened a new pull request #192: Related jiras that update our handling of Hadoop transitive dependencies

2019-04-25 Thread GitBox
busbey opened a new pull request #192: Related jiras that update our handling 
of Hadoop transitive dependencies
URL: https://github.com/apache/hbase/pull/192
 
 
   This includes two general fixes that will be needed by all branch-2 releases
   
   * HBASE-22312 when built against hadoop 3 our hbase-shaded-mapreduce module 
incorrectly includes the mapreduce client's transitive dependencies
   * HBASE-22314 when built against hadoop 3 our hbase-shaded-client-byo-hadoop 
module incorrectly includes the hadoop client's transitive dependencies
   
   and also two fixes that will only be needed for upcoming minor releases, 
since they fix problems that depend on which version(s) of Hadoop 3 end up 
getting released and built against. These are both modified versions of patches 
provided by other contributors.
   
   * HBASE-22109 Update hbase shaded client for new transitive dependencies of 
guava after hadoop update
   * HBASE-22087 Update LICENSE/shading for the dependencies from the latest 
Hadoop trunk


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826345#comment-16826345
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #195 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/195/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/195//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/195//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/195//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22307) Deprecated Preemptive Fail Fast

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826342#comment-16826342
 ] 

Hudson commented on HBASE-22307:


Results for branch master
[build #962 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/962/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/962//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/962//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/962//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Deprecated Preemptive Fail Fast
> ---
>
> Key: HBASE-22307
> URL: https://issues.apache.org/jira/browse/HBASE-22307
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22307-addendum.patch
>
>
> Opened a discussion thread on the dev & user mailing lists but got no 
> response, so I assume that there are no critical users for this feature. And 
> the problem we want to solve here is mainly the same as HBASE-16388, so I 
> think users could make use of the config in HBASE-16388.
> Plan to deprecate the related classes and configs on branch-2; the IA.Private 
> classes will be removed in 3.0.0 (on branch HBASE-21512 first), and the 
> constants in the HConstants class will be kept till 4.0.0, so we do not break 
> the public API, although it is useless to configure them on 3.0.0+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22231) Remove unused and * imports

2019-04-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826332#comment-16826332
 ] 

Hudson commented on HBASE-22231:


Results for branch branch-2.2
[build #213 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/213/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/213//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/213//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/213//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove unused and * imports
> ---
>
> Key: HBASE-22231
> URL: https://issues.apache.org/jira/browse/HBASE-22231
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1
>
>
> Currently a lot of unused imports, as well as '*' imports, are 
> used. They should be removed or replaced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826327#comment-16826327
 ] 

Sakthi commented on HBASE-22264:


{quote}
can we grab the number before the first "." and then do a numeric comparison?
{quote}
Was thinking of going this way. But I am not sure what went wrong with the 
version detection of jdk1.9-ea-b102.jdk. 

{quote}could you check a jdk12 and jdk13 on some linux variant? or maybe the 
azul version that's in our docker container? {quote}
Will do. Good idea Sean. 


> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264_jdks.txt
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826325#comment-16826325
 ] 

Sean Busbey commented on HBASE-22264:
-

can we grab the number before the first "." and then do a numeric comparison?

could you check a jdk12 and jdk13 on some linux variant? or maybe the azul 
version that's in our docker container?

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264_jdks.txt
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826319#comment-16826319
 ] 

Sean Busbey commented on HBASE-22264:
-

here's a log. it looks like it was a different jdk9 that didn't get a version.

maybe it's just that you're doing a string compare?
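
A tiny demo of why a string compare would misroute jdk9; purely illustrative, not code from the patch:

{code:java}
// "9" sorts after "11" lexicographically, so a string compare would send
// JDK 9 down the "load the jdk11 jars" path; a numeric compare would not.
public class VersionCompareDemo {
  public static void main(String[] args) {
    System.out.println("9".compareTo("11") > 0);    // true: string compare
    System.out.println(Integer.compare(9, 11) > 0); // false: numeric compare
  }
}
{code}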

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264_jdks.txt
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22264:

Attachment: hbase-22264_jdks.txt

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch, 
> hbase-22264_jdks.txt
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826288#comment-16826288
 ] 

Sakthi commented on HBASE-22264:


Thanks Sean for the feedback. If you could please share the versions with which 
you saw these issues, I can get back to fixing this. :)

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22264) Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : javax/annotation/Priority

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826286#comment-16826286
 ] 

Sean Busbey commented on HBASE-22264:
-

I think something is off with the jdk version checking. I ran through a few JVMs 
I have available and JDK9 triggered the "load the jdk11 jars!" path. Also it 
looks like jdk8 skipped loading it only because we couldn't get a version number 
at all.

> Rest Server (master branch) on jdk 11 throws NoClassDefFoundError : 
> javax/annotation/Priority
> -
>
> Key: HBASE-22264
> URL: https://issues.apache.org/jira/browse/HBASE-22264
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: jdk11
> Attachments: hbase-22264.master.001.patch, 
> hbase-22264.master.002.patch, hbase-22264.master.003.patch
>
>
> This is a continuation of HBASE-22249. When compiled with jdk 8 and run on 
> jdk 11, the master branch throws the following exception during an attempt to 
> start the hbase rest server:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> javax/annotation/Priority
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.modelFor(ComponentBag.java:483)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.access$100(ComponentBag.java:89)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:408)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag$5.call(ComponentBag.java:398)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.registerModel(ComponentBag.java:398)
>   at 
> org.glassfish.jersey.model.internal.ComponentBag.register(ComponentBag.java:235)
>   at 
> org.glassfish.jersey.model.internal.CommonConfig.register(CommonConfig.java:420)
>   at 
> org.glassfish.jersey.server.ResourceConfig.register(ResourceConfig.java:425)
>   at org.apache.hadoop.hbase.rest.RESTServer.run(RESTServer.java:245)
>   at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:421)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22314) shaded byo-hadoop client should list needed hadoop modules as provided scope to avoid inclusion of unnecessary transitive depednencies

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826249#comment-16826249
 ] 

HBase QA commented on HBASE-22314:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
38s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hbase-shaded-client-byo-hadoop in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/187/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22314 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967036/HBASE-22314.0.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 1d928012bca5 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ec36372649 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/187/testReport/ |
| Max. process+thread count | 85 (vs. ulimit of 1) |
| modules | C: hbase-shaded/hbase-shaded-client-byo-hadoop U: 
hbase-shaded/hbase-shaded-client-byo-hadoop |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/187/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> shaded byo-hadoop client should list needed hadoop modules as provided scope 
> to avoid inclusion of unnecessary transitive depednencies
> --
>
> Key: HBASE-22314
> URL: https://issues.apache.org/jira/browse/HBASE-22314
> 

[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826232#comment-16826232
 ] 

Andrew Purtell commented on HBASE-22301:


{quote}Should the threshold factor in {{hbase.regionserver.hlog.syncer.count}}?
{quote}
We could divide the count by the number of syncer threads. Or, multiply the 
threshold by the number of threads. Or, simply set a higher threshold.

The latter is simplest but I'd be interested in thoughts here.
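
For concreteness, a minimal sketch of the two normalization options; the method names and the threshold of 100 are made up for illustration, not taken from the patch:

{code:java}
// Hypothetical sketch of the two options for factoring in
// hbase.regionserver.hlog.syncer.count.
public class SlowSyncThresholdOptions {
  /** Option 1: divide the observed count by the number of syncer threads. */
  static boolean rollByNormalizedCount(int slowSyncCount, int syncerCount, int threshold) {
    return (slowSyncCount / syncerCount) >= threshold;
  }

  /** Option 2: multiply the threshold by the number of threads instead. */
  static boolean rollByScaledThreshold(int slowSyncCount, int syncerCount, int threshold) {
    return slowSyncCount >= threshold * syncerCount;
  }

  public static void main(String[] args) {
    // e.g. ~800 slow syncs/minute across 5 syncer threads is ~160 per thread.
    System.out.println(rollByNormalizedCount(800, 5, 100)); // true
    System.out.println(rollByScaledThreshold(800, 5, 100)); // true
  }
}
{code}

Both options fire at the same points; the difference is only which knob the operator tunes.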

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826231#comment-16826231
 ] 

Andrew Purtell edited comment on HBASE-22301 at 4/25/19 4:51 PM:
-

{quote}maybe the log message could go into checkSlowSync so that the count is 
still visible

or {{checkSlowSync}} could return it and treat "<= 0" to mean "don't request a 
roll"?
{quote}
Sure, no problem.


was (Author: apurtell):
{quote}maybe the log message could go into checkSlowSync so that the count is 
still visible
{quote}
Sure, no problem.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826231#comment-16826231
 ] 

Andrew Purtell commented on HBASE-22301:


{quote}maybe the log message could go into checkSlowSync so that the count is 
still visible
{quote}
Sure, no problem.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not roll more than once during this interval for this 
> reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22314) shaded byo-hadoop client should list needed hadoop modules as provided scope to avoid inclusion of unnecessary transitive depednencies

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22314:

Status: Patch Available  (was: In Progress)

v0

 - lists the hadoop modules needed for the hbase-client as provided

> shaded byo-hadoop client should list needed hadoop modules as provided scope 
> to avoid inclusion of unnecessary transitive depednencies
> --
>
> Key: HBASE-22314
> URL: https://issues.apache.org/jira/browse/HBASE-22314
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, hadoop3, shading
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1
>
> Attachments: HBASE-22314.0.patch
>
>
> attempting to build against current hadoop trunk for HBASE-22087 shows that 
> the byo-hadoop client is trying to package transitive dependencies from the 
> hadoop dependencies that we expressly say we don't need to bring with us.
> it's because we don't list those modules as provided, so all of their 
> transitives are also in compile scope. The shading module does simple 
> filtering when excluding things in a given scope, it doesn't e.g. make sure 
> to also exclude the transitive dependencies of things it keeps out.
> since we don't want to list all the transitive dependencies of hadoop in our 
> shading exclusion, we should list the needed hadoop modules as provided.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22314) shaded byo-hadoop client should list needed hadoop modules as provided scope to avoid inclusion of unnecessary transitive depednencies

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22314:

Attachment: HBASE-22314.0.patch

> shaded byo-hadoop client should list needed hadoop modules as provided scope 
> to avoid inclusion of unnecessary transitive depednencies
> --
>
> Key: HBASE-22314
> URL: https://issues.apache.org/jira/browse/HBASE-22314
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, hadoop3, shading
>Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1
>
> Attachments: HBASE-22314.0.patch
>
>
> attempting to build against current hadoop trunk for HBASE-22087 shows that 
> the byo-hadoop client is trying to package transitive dependencies from the 
> hadoop dependencies that we expressly say we don't need to bring with us.
> it's because we don't list those modules as provided, so all of their 
> transitives are also in compile scope. The shading module does simple 
> filtering when excluding things in a given scope, it doesn't e.g. make sure 
> to also exclude the transitive dependencies of things it keeps out.
> since we don't want to list all the transitive dependencies of hadoop in our 
> shading exclusion, we should list the needed hadoop modules as provided.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826220#comment-16826220
 ] 

Sean Busbey commented on HBASE-22301:
-

the approach makes sense.

{code}
if (checkSlowSync()) {
  LOG.warn("Requesting log roll because we exceeded slow sync threshold; threshold=" +
      slowSyncRollThreshold + ", current pipeline: " + Arrays.toString(getPipeLine()));
  requestLogRoll(SLOW_SYNC);
}
{code}

this log message doesn't have enough detail since it's just going to tell me 
e.g. "10" without saying how slow things had to be over what period of time, 
nor how many times we actually crossed that line.

maybe the log message could go into checkSlowSync so that the count is still 
visible? or {{checkSlowSync}} could return it and treat "<= 0" to mean "don't 
request a roll"?
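
For illustration, a sketch combining both suggestions (logging the count inside {{checkSlowSync}} and returning it, with <= 0 meaning "don't request a roll"); the field names and values below are hypothetical, not the actual FSHLog code:

{code:java}
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the patch: checkSlowSync returns the count and
// the warning keeps the count visible.
class SlowSyncRollSketch {
  private final AtomicInteger slowSyncCount = new AtomicInteger();
  private final int slowSyncRollThreshold = 100;

  int checkSlowSync() {
    int count = slowSyncCount.getAndSet(0);
    if (count < slowSyncRollThreshold) {
      return 0; // below threshold: caller should not request a roll
    }
    // Stands in for LOG.warn in the real code.
    System.out.println("Requesting log roll because we exceeded slow sync threshold;"
        + " count=" + count + ", threshold=" + slowSyncRollThreshold
        + ", current pipeline: " + Arrays.toString(getPipeLine()));
    return count;
  }

  String[] getPipeLine() { // stands in for the real pipeline accessor
    return new String[] { "dn1", "dn2", "dn3" };
  }
}
{code}

The call site then reduces to {{if (checkSlowSync() > 0) requestLogRoll(SLOW_SYNC);}}.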

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and abnormal 
> conditions is to define a fairly lengthy interval, default 5 minutes, and 
> then ensure we do not 

[jira] [Commented] (HBASE-21920) Ignoring 'empty' end_key while calculating end_key for new region in HBCK -fixHdfsOverlaps command can cause data loss

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826217#comment-16826217
 ] 

HBase QA commented on HBASE-21920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} hbase-server: The patch generated 1 new + 181 
unchanged - 1 fixed = 182 total (was 182) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 31s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestReplicasClient |
|   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/185/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-21920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967024/HBASE-21920.branch-1.002.patch
 |
| 

[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread David Manning (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826216#comment-16826216
 ] 

David Manning commented on HBASE-22301:
---

I do like the count-based approach better, and think it may offer better 
results in both a default or well-tuned state. Thank you for presenting that 
option. I'm trying to review the incident data and non-incident data to help 
inform the defaults, if possible. So far, I've seen in sample incident data 
that we had ~800 slow syncs per minute (160 per thread, with 5 syncer threads.) 
Background level for that cluster, for hot nodes, ends up being around ~10 slow 
syncs per minute. So I could imagine having a higher default to avoid too much 
log rolling, but still be a useful default. Should the threshold factor in 
{{hbase.regionserver.hlog.syncer.count}}? A slow pipeline will be reported X 
times, where X is the number of syncer threads waiting on the pipeline.

I will spend more time looking at data today, and see what I can find.

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log roll time that includes datanode pipeline details for 
> further debugging and analysis, similar to the existing slow FSHLog sync log 
> line.
> If we roll too many times within a short interval of time this probably means 
> there is a widespread problem with the fleet and so our mitigation is not 
> helping and may be exacerbating those problems or operator difficulties. 
> Ensure log roll requests triggered by this new feature happen infrequently 
> enough to not cause difficulties under either normal or abnormal conditions. 
> A very simple strategy that could work well under both normal and 

[jira] [Commented] (HBASE-22054) Space Quota: Compaction is not working for super user in case of NO_WRITES_COMPACTIONS

2019-04-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826212#comment-16826212
 ] 

Sakthi commented on HBASE-22054:


[~elserj], the unit tests pass. I have added a note in the doc as well in the 
latest patch. Mind taking a look, please?

> Space Quota: Compaction is not working for super user in case of 
> NO_WRITES_COMPACTIONS
> --
>
> Key: HBASE-22054
> URL: https://issues.apache.org/jira/browse/HBASE-22054
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
>  Labels: Quota, Space
> Attachments: hbase-22054.master.001.patch, 
> hbase-22054.master.002.patch, hbase-22054.master.003.patch, 
> hbase-22054.master.004.patch, hbase-22054.master.005.patch
>
>
> Space Quota: Compaction is not working for the super user. The compaction 
> command is issued successfully at the client, but compaction does not actually 
> happen. The debug log prints the message below:
> as an active space quota violation policy disallows compaction.
>  Reference: 
>  
> [https://lists.apache.org/thread.html/d09aa7abaacf1f0be9d59fa9260515ddc0c17ac0aba9cc0f2ac569bf@%3Cuser.hbase.apache.org%3E]
> Actually, in the requestCompactionInternal method of the CompactSplit class, 
> there is no check for the super user, so compactions are disallowed:
> {noformat}
> RegionServerSpaceQuotaManager spaceQuotaManager =
>     this.server.getRegionServerSpaceQuotaManager();
> if (spaceQuotaManager != null &&
>     spaceQuotaManager.areCompactionsDisabled(region.getTableDescriptor().getTableName())) {
>   String reason = "Ignoring compaction request for " + region +
>       " as an active space quota violation " + " policy disallows compactions.";
>   tracker.notExecuted(store, reason);
>   completeTracker.completed(store);
>   LOG.debug(reason);
>   return;
> }
> {noformat}
>  
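
For context, a minimal sketch of the kind of super-user bypass under discussion; the helper class and user list below are hypothetical (HBase's real check lives in {{org.apache.hadoop.hbase.security.Superusers}}), and this is not the committed fix:

{code:java}
import java.util.Collections;
import java.util.Set;

// Hypothetical sketch: skip the quota-based compaction block when the
// requester is a super user.
class CompactionQuotaGate {
  private final Set<String> superUsers = Collections.singleton("hbase");

  boolean shouldBlockCompaction(String requestingUser, boolean compactionsDisabledByQuota) {
    if (superUsers.contains(requestingUser)) {
      return false; // super users bypass the space quota violation policy
    }
    return compactionsDisabledByQuota;
  }
}
{code}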



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22301) Consider rolling the WAL if the HDFS write pipeline is slow

2019-04-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826205#comment-16826205
 ] 

Andrew Purtell commented on HBASE-22301:


No. The problem is GC activity is indistinguishable from real slow syncs if you 
only examine a single data point, unless you set a very high threshold, and 
then we would probably not trigger enough to make a difference. Data from our 
incident shows a train of slow sync warnings, a few peaks at 1-3 seconds. 
It's unlikely that triggering only on the rare peak outliers would have made a 
difference. The conservative 10s trigger in this patch would never have been 
reached. Instead, if we triggered on trains of smaller data points in the range 
of 200-600ms the mitigation would have fired enough to make a difference and 
these trains correlated to real problems not GC activity. And by GC activity I 
mean that of the regionserver process. As you probably know any one or a 
handful of slow sync warnings can be false positives due to GC rather than real 
latency on the pipeline. It makes things difficult here. We can try to avoid 
false positives either by setting a high latency threshold or by waiting for an 
unusual number to occur within some window of time. There are patches for 
review that take either approach. It would seem the high threshold approach may 
not offer enough mitigation in practice given the data on hand. At any rate the 
thresholds are tunable and can be experimented with in production to find the 
right trade off, and the feature is self limiting so slow sync triggered log 
rolls do not become a problem themselves. 
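
For illustration, a sketch of the "unusual number within a window of time" approach described above; the window and threshold values are illustrative, not the patch's defaults:

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: request a roll only when enough slow syncs land
// inside a sliding time window, so isolated GC-induced outliers don't fire.
class SlowSyncWindow {
  private final Deque<Long> slowSyncTimes = new ArrayDeque<>();
  private final long windowMillis = 60_000L; // look at the last minute
  private final int rollThreshold = 100;     // roll after this many slow syncs

  /** Record one slow sync and report whether a roll should be requested. */
  synchronized boolean recordSlowSync(long nowMillis) {
    slowSyncTimes.addLast(nowMillis);
    // Drop events that have fallen out of the window.
    while (!slowSyncTimes.isEmpty()
        && nowMillis - slowSyncTimes.peekFirst() > windowMillis) {
      slowSyncTimes.pollFirst();
    }
    return slowSyncTimes.size() >= rollThreshold;
  }
}
{code}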

> Consider rolling the WAL if the HDFS write pipeline is slow
> ---
>
> Key: HBASE-22301
> URL: https://issues.apache.org/jira/browse/HBASE-22301
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22301-branch-1.patch, HBASE-22301-branch-1.patch
>
>
> Consider the case when a subset of the HDFS fleet is unhealthy but suffering 
> a gray failure not an outright outage. HDFS operations, notably syncs, are 
> abnormally slow on pipelines which include this subset of hosts. If the 
> regionserver's WAL is backed by an impacted pipeline, all WAL handlers can be 
> consumed waiting for acks from the datanodes in the pipeline (recall that 
> some of them are sick). Imagine a write heavy application distributing load 
> uniformly over the cluster at a fairly high rate. With the WAL subsystem 
> slowed by HDFS level issues, all handlers can be blocked waiting to append to 
> the WAL. Once all handlers are blocked, the application will experience 
> backpressure. All (HBase) clients eventually have too many outstanding writes 
> and block.
> Because the application is distributing writes near uniformly in the 
> keyspace, the probability any given service endpoint will dispatch a request 
> to an impacted regionserver, even a single regionserver, approaches 1.0. So 
> the probability that all service endpoints will be affected approaches 1.0.
> In order to break the logjam, we need to remove the slow datanodes. Although 
> there is HDFS level monitoring, mechanisms, and procedures for this, we 
> should also attempt to take mitigating action at the HBase layer as soon as 
> we find ourselves in trouble. It would be enough to remove the affected 
> datanodes from the writer pipelines. A super simple strategy that can be 
> effective is described below:
> This is with branch-1 code. I think branch-2's async WAL can mitigate but 
> still can be susceptible. branch-2 sync WAL is susceptible. 
> We already roll the WAL writer if the pipeline suffers the failure of a 
> datanode and the replication factor on the pipeline is too low. We should 
> also consider how much time it took for the write pipeline to complete a sync 
> the last time we measured it, or the max over the interval from now to the 
> last time we checked. If the sync time exceeds a configured threshold, roll 
> the log writer then too. Fortunately we don't need to know which datanode is 
> making the WAL write pipeline slow, only that syncs on the pipeline are too 
> slow and exceeding a threshold. This is enough information to know when to 
> roll it. Once we roll it, we will get three new randomly selected datanodes. 
> On most clusters the probability the new pipeline includes the slow datanode 
> will be low. (And if for some reason it does end up with a problematic 
> datanode again, we roll again.)
> This is not a silver bullet but this can be a reasonably effective mitigation.
> Provide a metric for tracking when log roll is requested (and for what 
> reason).
> Emit a log line at log 

[jira] [Work started] (HBASE-22314) shaded byo-hadoop client should list needed hadoop modules as provided scope to avoid inclusion of unnecessary transitive depednencies

2019-04-25 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22314 started by Sean Busbey.
---
> shaded byo-hadoop client should list needed hadoop modules as provided scope 
> to avoid inclusion of unnecessary transitive depednencies
> --
>
> Key: HBASE-22314
> URL: https://issues.apache.org/jira/browse/HBASE-22314
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, hadoop3, shading
>Affects Versions: 2.1.0, 2.0.0, 2.2.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1
>
>
> attempting to build against current hadoop trunk for HBASE-22087 shows that 
> the byo-hadoop client is trying to package transitive dependencies from the 
> hadoop dependencies that we expressly say we don't need to bring with us.
> it's because we don't list those modules as provided, so all of their 
> transitives are also in compile scope. The shading module does simple 
> filtering when excluding things in a given scope, it doesn't e.g. make sure 
> to also exclude the transitive dependencies of things it keeps out.
> since we don't want to list all the transitive dependencies of hadoop in our 
> shading exclusion, we should list the needed hadoop modules as provided.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22314) shaded byo-hadoop client should list needed hadoop modules as provided scope to avoid inclusion of unnecessary transitive depednencies

2019-04-25 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-22314:
---

 Summary: shaded byo-hadoop client should list needed hadoop 
modules as provided scope to avoid inclusion of unnecessary transitive 
depednencies
 Key: HBASE-22314
 URL: https://issues.apache.org/jira/browse/HBASE-22314
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hadoop3, shading
Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 3.0.0, 2.3.0, 2.0.6, 2.1.5, 2.2.1


attempting to build against current hadoop trunk for HBASE-22087 shows that the 
byo-hadoop client is trying to package transitive dependencies from the hadoop 
dependencies that we expressly say we don't need to bring with us.

it's because we don't list those modules as provided, so all of their 
transitives are also in compile scope. The shading module does simple filtering 
when excluding things in a given scope, it doesn't e.g. make sure to also 
exclude the transitive dependencies of things it keeps out.

since we don't want to list all the transitive dependencies of hadoop in our 
shading exclusion, we should list the needed hadoop modules as provided.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22312) Hadoop 3 profile for hbase-shaded-mapreduce should like mapreduce as a provided dependency

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826204#comment-16826204
 ] 

HBase QA commented on HBASE-22312:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
38s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hbase-shaded-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/186/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22312 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967029/HBASE-22312.0.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 2445f56485c6 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ec36372649 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/186/testReport/ |
| Max. process+thread count | 85 (vs. ulimit of 1) |
| modules | C: hbase-shaded/hbase-shaded-mapreduce U: 
hbase-shaded/hbase-shaded-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/186/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Hadoop 3 profile for hbase-shaded-mapreduce should list mapreduce as a 
> provided dependency
> --
>
> Key: HBASE-22312
> URL: https://issues.apache.org/jira/browse/HBASE-22312
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, shading
>Affects Versions: 2.1.0, 

[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826197#comment-16826197
 ] 

Sean Busbey commented on HBASE-22087:
-

This is similar to the jline problem. {{hbase-shaded-client-byo-hadoop}} 
excludes the hadoop modules from inclusion in the shading, but fails to list 
them as provided. Given the way the shading plugin works, that means we'll 
still include all of the transitive dependencies of the hadoop modules.

This'll need to be fixed in all branch-2s; I'll file a related jira.
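
To illustrate the failure mode (a sketch only; this plugin configuration and 
exclusion pattern are assumptions, not the project's actual shading setup): an 
{{artifactSet}} exclude in maven-shade-plugin filters only the artifacts it 
names, not their transitive dependencies.

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <!-- keeps the hadoop jars themselves out of the shaded jar,
             but their transitives remain in compile scope and are
             still shaded unless the modules are declared provided -->
        <exclude>org.apache.hadoop:*</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
{code}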

> Update LICENSE/shading for the dependencies from the latest Hadoop trunk
> 
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
>  Issue Type: Improvement
>  Components: hadoop3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HBASE-22087.master.001.patch, depcheck_hadoop33.log
>
>
> The following dependencies were added in Hadoop trunk (3.3.0), and HBase no 
> longer compiles successfully against it:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.commons.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22310) checkAndMutate used an incorrect row to check the condition

2019-04-25 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826195#comment-16826195
 ] 

HBase QA commented on HBASE-22310:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
13s{color} | {color:red} hbase-server: The patch generated 2 new + 3 unchanged 
- 2 fixed = 5 total (was 5) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m 
25s{color} | {color:red} The patch causes 26 errors with Hadoop v2.4.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  2m 
51s{color} | {color:red} The patch causes 26 errors with Hadoop v2.5.2. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m 
33s{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.5. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}121m 
38s{color} | {color:green} hbase-server in the patch passed. {color} |
| 

[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk

2019-04-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826187#comment-16826187
 ] 

Sean Busbey commented on HBASE-22087:
-

{{hbase-shaded-client-byo-hadoop}} failing due to unrelocated dependencies 
that were added to hadoop is a sign that we're not properly excluding some 
hadoop dependency, since that artifact is supposed to leave hadoop-related 
classes to hadoop. Let me see why it thinks it should be including commons-text 
and dnsjava.
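
One way to trace where those artifacts enter the tree (a sketch of standard 
Maven usage, not commands from this thread; assumes it is run from the shaded 
module's directory):

{code}
# show only the dependency paths that pull in commons-text and dnsjava
mvn dependency:tree -Dincludes=org.apache.commons:commons-text
mvn dependency:tree -Dincludes=dnsjava:dnsjava
{code}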

> Update LICENSE/shading for the dependencies from the latest Hadoop trunk
> 
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
>  Issue Type: Improvement
>  Components: hadoop3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HBASE-22087.master.001.patch, depcheck_hadoop33.log
>
>
> The following dependencies were added in Hadoop trunk (3.3.0), and HBase no 
> longer compiles successfully against it:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.commons.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

