[GitHub] [hadoop-ozone] swagle commented on a change in pull request #46: HDDS-2286. Add a log info in ozone client and scm to print the exclus…

2019-10-16 Thread GitBox
swagle commented on a change in pull request #46: HDDS-2286. Add a log info in 
ozone client and scm to print the exclus…
URL: https://github.com/apache/hadoop-ozone/pull/46#discussion_r335818232
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -176,6 +176,10 @@ public void join() throws InterruptedException {
 auditMap.put("owner", owner);
 List<AllocatedBlock> blocks = new ArrayList<>(num);
 boolean auditSuccess = true;
+LOG.info("Allocating blocks {} of size {}, with excludeList: " +
+"datanodes = {}, pipelines = {}, containers = {}",
+excludeList.getDatanodes(), excludeList.getPipelineIds(),
+excludeList.getContainerIds());
 
 Review comment:
   Thanks @adoroszlai for catching this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #46: HDDS-2286. Add a log info in ozone client and scm to print the exclus…

2019-10-16 Thread GitBox
adoroszlai commented on a change in pull request #46: HDDS-2286. Add a log info 
in ozone client and scm to print the exclus…
URL: https://github.com/apache/hadoop-ozone/pull/46#discussion_r335815224
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -176,6 +176,10 @@ public void join() throws InterruptedException {
 auditMap.put("owner", owner);
 List<AllocatedBlock> blocks = new ArrayList<>(num);
 boolean auditSuccess = true;
+LOG.info("Allocating blocks {} of size {}, with excludeList: " +
+"datanodes = {}, pipelines = {}, containers = {}",
+excludeList.getDatanodes(), excludeList.getPipelineIds(),
+excludeList.getContainerIds());
 
 Review comment:
   There are 5 placeholders, but only 3 values. Number and size of blocks seem to be missing.
   
   Also I think `Allocating {} blocks` would be more natural than `Allocating 
blocks {}`.
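The mismatch flagged above is mechanical: SLF4J fills `{}` placeholders positionally, and any placeholders beyond the supplied arguments are emitted literally. A minimal self-contained sketch (the counting helper is hypothetical, not Ozone code) of the check:

```java
public class PlaceholderCheck {
    // Count "{}" placeholders in an SLF4J-style format string.
    static int countPlaceholders(String fmt) {
        int count = 0;
        for (int i = fmt.indexOf("{}"); i >= 0; i = fmt.indexOf("{}", i + 2)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // The format string from the patch under review: 5 placeholders.
        String fmt = "Allocating blocks {} of size {}, with excludeList: "
            + "datanodes = {}, pipelines = {}, containers = {}";
        int placeholders = countPlaceholders(fmt);
        int suppliedArgs = 3; // getDatanodes(), getPipelineIds(), getContainerIds()
        System.out.println(placeholders + " placeholders, " + suppliedArgs + " args");
        if (placeholders != suppliedArgs) {
            System.out.println("Mismatch: the last " + (placeholders - suppliedArgs)
                + " placeholders would appear as literal {} in the log line");
        }
    }
}
```

Running this prints `5 placeholders, 3 args`, confirming the review comment: the number and size of blocks need to be passed as the first two arguments.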





[GitHub] [hadoop-ozone] swagle opened a new pull request #46: HDDS-2286. Add a log info in ozone client and scm to print the exclus…

2019-10-16 Thread GitBox
swagle opened a new pull request #46: HDDS-2286. Add a log info in ozone client 
and scm to print the exclus…
URL: https://github.com/apache/hadoop-ozone/pull/46
 
 
   ##  What changes were proposed in this pull request?
   Added additional logging to print exclude lists on client and SCM
   
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2286
   
   ## How was this patch tested?
   Waiting on unit tests since only log statements were added.





[GitHub] [hadoop-ozone] szetszwo opened a new pull request #45: HDDS-2275. In BatchOperation.SingleOperation, do not clone byte[].

2019-10-16 Thread GitBox
szetszwo opened a new pull request #45: HDDS-2275. In 
BatchOperation.SingleOperation, do not clone byte[].
URL: https://github.com/apache/hadoop-ozone/pull/45
 
 
   See https://issues.apache.org/jira/browse/HDDS-2275





[GitHub] [hadoop-ozone] szetszwo merged pull request #44: HDDS-2271. Avoid buffer copying in KeyValueHandler.

2019-10-16 Thread GitBox
szetszwo merged pull request #44: HDDS-2271. Avoid buffer copying in 
KeyValueHandler.
URL: https://github.com/apache/hadoop-ozone/pull/44
 
 
   





[GitHub] [hadoop-ozone] szetszwo opened a new pull request #44: HDDS-2271. Avoid buffer copying in KeyValueHandler.

2019-10-16 Thread GitBox
szetszwo opened a new pull request #44: HDDS-2271. Avoid buffer copying in 
KeyValueHandler.
URL: https://github.com/apache/hadoop-ozone/pull/44
 
 
   Migrated from https://github.com/apache/hadoop/pull/1625
   
   See https://issues.apache.org/jira/browse/HDDS-2271





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335793351
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,33 @@ private void initializePipelineState() throws 
IOException {
 }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+if (heavyNodeCriteria > 0 && factor == ReplicationFactor.THREE) {
+  return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+  stateManager.getPipelines(ReplicationType.RATIS, factor,
+  Pipeline.PipelineState.CLOSED).size()) >= heavyNodeCriteria *
+  nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY);
 
 Review comment:
   Good point. Should divide by the replication factor.
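The agreed fix, dividing the per-node engagement budget by the replication factor, can be sketched as standalone arithmetic (a simplified illustration, not the actual SCMPipelineManager code; names and values are hypothetical):

```java
public class PipelineLimitSketch {
    /**
     * Simplified limit check: each healthy node may join at most
     * heavyNodeCriteria pipelines, and a factor-THREE pipeline consumes one
     * slot on each of its 3 members, so the cluster-wide pipeline budget is
     * (healthyNodes * heavyNodeCriteria) / factor.
     */
    static boolean exceedsLimit(int openPipelines, int healthyNodes,
                                int heavyNodeCriteria, int factor) {
        if (heavyNodeCriteria <= 0) {
            return false; // limit disabled, matching the 0 default in ScmConfigKeys
        }
        return openPipelines >= (heavyNodeCriteria * healthyNodes) / factor;
    }

    public static void main(String[] args) {
        // 10 healthy nodes, each allowed in 3 pipelines, factor THREE:
        // budget = (3 * 10) / 3 = 10 pipelines.
        System.out.println(exceedsLimit(9, 10, 3, 3));   // false: under budget
        System.out.println(exceedsLimit(10, 10, 3, 3));  // true: budget reached
    }
}
```

Without the division, the budget would be 30 pipelines even though each factor-THREE pipeline occupies three node slots.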





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335793270
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -92,65 +86,53 @@
 this.stateManager = stateManager;
 this.conf = conf;
 this.tlsConfig = tlsConfig;
+this.placementPolicy =
+new PipelinePlacementPolicy(nodeManager, stateManager, conf);
   }
 
-
-  /**
-   * Create pluggable container placement policy implementation instance.
-   *
-   * @param nodeManager - SCM node manager.
-   * @param conf - configuration.
-   * @return SCM container placement policy implementation instance.
-   */
-  @SuppressWarnings("unchecked")
-  // TODO: should we rename PlacementPolicy to PipelinePlacementPolicy?
-  private static PlacementPolicy createContainerPlacementPolicy(
-  final NodeManager nodeManager, final Configuration conf) {
-Class<? extends PlacementPolicy> implClass =
-(Class<? extends PlacementPolicy>) conf.getClass(
-ScmConfigKeys.OZONE_SCM_CONTAINER_PLACEMENT_IMPL_KEY,
-SCMContainerPlacementRandom.class);
-
-try {
-  Constructor<? extends PlacementPolicy> ctor =
-  implClass.getDeclaredConstructor(NodeManager.class,
-  Configuration.class);
-  return ctor.newInstance(nodeManager, conf);
-} catch (RuntimeException e) {
-  throw e;
-} catch (InvocationTargetException e) {
-  throw new RuntimeException(implClass.getName()
-  + " could not be constructed.", e.getCause());
-} catch (Exception e) {
-//  LOG.error("Unhandled exception occurred, Placement policy will not " +
-//  "be functional.");
-  throw new IllegalArgumentException("Unable to load " +
-  "PlacementPolicy", e);
-}
-  }
-
-  @Override
-  public Pipeline create(ReplicationFactor factor) throws IOException {
-// Get set of datanodes already used for ratis pipeline
+  private List<DatanodeDetails> pickNodesNeverUsed(ReplicationFactor factor)
+  throws SCMException {
 Set<DatanodeDetails> dnsUsed = new HashSet<>();
-stateManager.getPipelines(ReplicationType.RATIS, factor).stream().filter(
-p -> p.getPipelineState().equals(PipelineState.OPEN) ||
-p.getPipelineState().equals(PipelineState.DORMANT) ||
-p.getPipelineState().equals(PipelineState.ALLOCATED))
+stateManager.getPipelines(ReplicationType.RATIS, factor)
+.stream().filter(
+  p -> p.getPipelineState().equals(PipelineState.OPEN) ||
 
 Review comment:
   Haven't made logical changes to it; just quieting checkstyle. Intend to keep it this way.
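The state filter in the quoted diff can be illustrated with a standalone sketch (simplified types; `Pipeline`, `State`, and the member lists here are illustrative stand-ins for Ozone's classes): pipelines that are OPEN, DORMANT, or ALLOCATED still occupy their datanodes, while CLOSED ones do not.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class PickUnusedSketch {
    enum State { OPEN, DORMANT, ALLOCATED, CLOSED }

    record Pipeline(State state, List<String> members) {}

    // Collect datanodes already serving a non-closed pipeline, mirroring the
    // stream filter in the diff above.
    static Set<String> usedDatanodes(List<Pipeline> pipelines) {
        return pipelines.stream()
            .filter(p -> p.state() == State.OPEN
                || p.state() == State.DORMANT
                || p.state() == State.ALLOCATED)
            .flatMap(p -> p.members().stream())
            .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Pipeline> pipelines = List.of(
            new Pipeline(State.OPEN, List.of("dn1", "dn2", "dn3")),
            new Pipeline(State.CLOSED, List.of("dn4", "dn5", "dn6")));
        // Only dn1..dn3 are considered in use; the closed pipeline's members
        // are free to be picked for a new pipeline.
        System.out.println(usedDatanodes(pipelines));
    }
}
```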





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335792538
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -44,13 +43,7 @@
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
-import java.lang.reflect.Constructor;
-import java.lang.reflect.InvocationTargetException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Set;
+import java.util.*;
 
 Review comment:
   The IDE did it. Will change it manually.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335792254
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
 ##
 @@ -57,7 +57,7 @@ public void onMessage(PipelineActionsFromDatanode report,
   pipelineID = PipelineID.
   getFromProtobuf(action.getClosePipeline().getPipelineID());
   Pipeline pipeline = pipelineManager.getPipeline(pipelineID);
-  LOG.error("Received pipeline action {} for {} from datanode {}. " +
+  LOG.info("Received pipeline action {} for {} from datanode {}. " +
 
 Review comment:
   Rolled it back





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335792190
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -322,7 +322,15 @@
   // the max number of pipelines can a single datanode be engaged in.
   public static final String OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT =
   "ozone.scm.datanode.max.pipeline.engagement";
-  public static final int OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT = 5;
+  // Setting to zero by default means this limit doesn't take effect.
+  public static final int OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT = 0;
+
+  // Upper limit for how many pipelines can be created.
+  // Only for test purpose now.
+  public static final String OZONE_SCM_PIPELINE_NUMBER_LIMIT =
+  "ozone.scm.datanode.pipeline.number.limit";
 
 Review comment:
   Ok sure





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335791972
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -846,10 +846,17 @@
 
   
   
-<name>ozone.scm.datanode.max.pipeline.engagement</name>
-<value>5</value>
+  <name>ozone.scm.datanode.max.pipeline.engagement</name>
 
 Review comment:
   Ok. Will remove from ozone-default.xml





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335785884
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicyFactory.java
 ##
 @@ -43,10 +43,10 @@ private ContainerPlacementPolicyFactory() {
   }
 
 
-  public static PlacementPolicy getPolicy(Configuration conf,
-final NodeManager nodeManager, NetworkTopology clusterMap,
-final boolean fallback, SCMContainerPlacementMetrics metrics)
-throws SCMException{
+  public static PlacementPolicy getPolicy(
 
 Review comment:
   > Any change here?
   
   No effective changes; they were due to checkstyle complaints.
   
   





[VOTE] Release Apache Hadoop 2.10.0 (RC0)

2019-10-16 Thread Jonathan Hung
Hi folks,

This is the first release candidate for the first release of the Apache Hadoop
2.10 line. It contains 361 fixes/improvements since 2.9 [1]. It includes
features such as:

- User-defined resource types
- Native GPU support as a schedulable resource type
- Consistent reads from standby node
- Namenode port based selective encryption
- Improvements related to rolling upgrade support from 2.x to 3.x

The RC0 artifacts are at: http://home.apache.org/~jhung/hadoop-2.10.0-RC0/

RC tag is release-2.10.0-RC0.

The maven artifacts are hosted here:
https://repository.apache.org/content/repositories/orgapachehadoop-1241/

My public key is available here:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The vote will run for 5 weekdays, until Wednesday, October 23 at 6:00 pm
PDT.

Thanks,
Jonathan Hung

[1]
https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)


Re: [DISCUSS] Hadoop 2.10.0 release plan

2019-10-16 Thread Jonathan Hung
I've moved all jiras with target version 2.10.0 to 2.10.1. Also I've
created branch-2.10 and branch-2.10.0, please commit any 2.10.x bug fixes
to branch-2.10.

I'll send out a vote thread for 2.10.0-RC0 shortly.

Jonathan Hung


On Fri, Oct 11, 2019 at 10:32 AM Jonathan Hung  wrote:

> Edit: seems a 2.10.0 blocker was reopened (HDFS-14305). I'll continue
> watching this jira and start the release once this is resolved.
>
> Jonathan Hung
>
>
> On Thu, Oct 10, 2019 at 5:13 PM Jonathan Hung 
> wrote:
>
>> Hi folks, as of now all 2.10.0 blockers have been resolved [1]. So I'll
>> start the release process soon (cutting branches, updating target versions,
>> etc).
>>
>> [1] https://issues.apache.org/jira/issues/?filter=12346975
>>
>> Jonathan Hung
>>
>>
>> On Mon, Aug 26, 2019 at 10:19 AM Jonathan Hung 
>> wrote:
>>
>>> Hi folks,
>>>
>>> As discussed previously (e.g. [1], [2]) we'd like to do a 2.10.0 release
>>> soon. Some features/big-items we're targeting for this release:
>>>
>>>- YARN resource types/GPU support (YARN-8200)
>>>- Selective wire encryption (HDFS-13541)
>>>- Rolling upgrade support from 2.x to 3.x (e.g. HDFS-14509)
>>>
>>> Per [3] sounds like there's concern around upgrading dependencies as
>>> well.
>>>
>>> We created a public jira filter here (
>>> https://issues.apache.org/jira/issues/?filter=12346975) marking all
>>> blockers for 2.10.0 release. If you have other jiras that should be 2.10.0
>>> blockers, please mark "Target Version/s" as "2.10.0" and add label
>>> "release-blocker" so we can track it through this filter.
>>>
>>> We're targeting a release at end of September.
>>>
>>> Please share any thoughts you have about this. Thanks!
>>>
>>> [1]
>>> https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29461.html
>>> [2]
>>> https://www.mail-archive.com/mapreduce-dev@hadoop.apache.org/msg21293.html
>>> [3]
>>> https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg33440.html
>>>
>>>
>>> Jonathan Hung
>>>
>>


[GitHub] [hadoop-ozone] vivekratnavel commented on issue #43: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-10-16 Thread GitBox
vivekratnavel commented on issue #43: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop-ozone/pull/43#issuecomment-542942274
 
 
   @xiaoyuyao Pls review





[GitHub] [hadoop-ozone] vivekratnavel opened a new pull request #43: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-10-16 Thread GitBox
vivekratnavel opened a new pull request #43: HDDS-2181. Ozone Manager should 
send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop-ozone/pull/43
 
 
   … to Authorizer
   
   ## What changes were proposed in this pull request?
   
   The ACL types sent to authorizers are changed from always sending the "WRITE" ACL type to sending the appropriate ACL type as required.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2181
   
   ## How was this patch tested?
   
   This patch was tested by updating and running unit tests and acceptance 
tests. 





[jira] [Resolved] (HDDS-2302) Manage common pom versions in one common place

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2302.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Manage common pom versions in one common place
> --
>
> Key: HDDS-2302
> URL: https://issues.apache.org/jira/browse/HDDS-2302
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some of the versions (eg. ozone.version, hdds.version, ratis.version) are 
> required for both ozone and hdds subprojects. As we have a common pom.xml it 
> can be safer to manage them in one common place at the root pom.xml instead 
> of managing them multiple times.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on issue #21: HDDS-2302. Manage common pom versions in one common place

2019-10-16 Thread GitBox
anuengineer commented on issue #21: HDDS-2302. Manage common pom versions in 
one common place
URL: https://github.com/apache/hadoop-ozone/pull/21#issuecomment-542933249
 
 
   Thank you for the contribution. I have committed this patch to the master.





[GitHub] [hadoop-ozone] anuengineer closed pull request #21: HDDS-2302. Manage common pom versions in one common place

2019-10-16 Thread GitBox
anuengineer closed pull request #21: HDDS-2302. Manage common pom versions in 
one common place
URL: https://github.com/apache/hadoop-ozone/pull/21
 
 
   





[jira] [Resolved] (HDDS-2289) Put testing information and a problem description to the github PR template

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2289.

Resolution: Fixed

> Put testing information and a problem description to the github PR template
> ---
>
> Key: HDDS-2289
> URL: https://issues.apache.org/jira/browse/HDDS-2289
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is suggested by [~aengineer] during an offline discussion to add more 
> information to the github PR template based on the template of ambari (by 
> Vivek):
> https://github.com/apache/ambari/commit/579cec8cf5bcfe1a1a0feacf055ed6569f674e6a






[GitHub] [hadoop-ozone] anuengineer merged pull request #5: HDDS-2289. Put testing information and a problem description to the g…

2019-10-16 Thread GitBox
anuengineer merged pull request #5: HDDS-2289. Put testing information and a 
problem description to the g…
URL: https://github.com/apache/hadoop-ozone/pull/5
 
 
   





[GitHub] [hadoop-ozone] vivekratnavel commented on issue #5: HDDS-2289. Put testing information and a problem description to the g…

2019-10-16 Thread GitBox
vivekratnavel commented on issue #5: HDDS-2289. Put testing information and a 
problem description to the g…
URL: https://github.com/apache/hadoop-ozone/pull/5#issuecomment-542889844
 
 
   +1 LGTM





[GitHub] [hadoop-ozone] avijayanhwx closed pull request #42: Update pull_request_template.md

2019-10-16 Thread GitBox
avijayanhwx closed pull request #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42
 
 
   





[GitHub] [hadoop-ozone] avijayanhwx commented on issue #42: Update pull_request_template.md

2019-10-16 Thread GitBox
avijayanhwx commented on issue #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42#issuecomment-542889556
 
 
   Thanks @adoroszlai. I will close this.





[GitHub] [hadoop-ozone] vivekratnavel commented on issue #42: Update pull_request_template.md

2019-10-16 Thread GitBox
vivekratnavel commented on issue #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42#issuecomment-542889407
 
 
   Yes, #5 is not merged yet. We can close this PR and merge #5. 





[GitHub] [hadoop-ozone] adoroszlai edited a comment on issue #42: Update pull_request_template.md

2019-10-16 Thread GitBox
adoroszlai edited a comment on issue #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42#issuecomment-542887995
 
 
   This ~was~ is already being done in #5.





[GitHub] [hadoop-ozone] adoroszlai commented on issue #42: Update pull_request_template.md

2019-10-16 Thread GitBox
adoroszlai commented on issue #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42#issuecomment-542887995
 
 
   This was already done in #5, was it lost during the forced update?





[GitHub] [hadoop-ozone] anuengineer commented on issue #42: Update pull_request_template.md

2019-10-16 Thread GitBox
anuengineer commented on issue #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42#issuecomment-542885560
 
 
   We need a JIRA number to commit anything into the branch, so you will have 
to file a JIRA and refer to that in the pull request. It might also be a good 
idea to require a link to the original JIRA.





[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #42: Update pull_request_template.md

2019-10-16 Thread GitBox
avijayanhwx opened a new pull request #42: Update pull_request_template.md
URL: https://github.com/apache/hadoop-ozone/pull/42
 
 
   Update Pull Request Template





[GitHub] [hadoop-ozone] anuengineer commented on issue #41: HDDS-2283. Container Creation on datanodes take around 300ms due to rocksdb creation.

2019-10-16 Thread GitBox
anuengineer commented on issue #41: HDDS-2283. Container Creation on datanodes 
take around 300ms due to rocksdb creation.
URL: https://github.com/apache/hadoop-ozone/pull/41#issuecomment-542881435
 
 
   Can we please use the Ambari-inspired template? 
   
   With info like a link to the HDDS JIRA, a clear description of the 
problem, and information on how it was tested and how you expect the code 
reviewer to test it? 
   
   Thanks





[GitHub] [hadoop-ozone] swagle commented on issue #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
swagle commented on issue #23: HDDS-1868. Ozone pipelines should be marked as 
ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#issuecomment-542869954
 
 
   Thank you @mukul1987 and @nandakumar131 for the reviews.





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines 
should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335654846
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -107,6 +112,8 @@ private static long nextCallId() {
   // TODO: Remove the gids set when Ratis supports an api to query active
   // pipelines
  private final Set<RaftGroupId> raftGids = new HashSet<>();
+  // pipeline leaders
+  private Map<RaftGroupId, RaftPeerId> leaderIdMap = new HashMap<>();
 
 Review comment:
   Yes, I thought about this already and left it as a HashMap, thinking we would 
only get a leader-changed notification reporting the new leaderId, so the updated 
value will always be the same from any member; for a given key there will be no 
correctness problem. For the sake of caution I can make it a CHM, but the only way 
we would get notified of a different leaderId is a major correctness issue in Ratis.
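   The trade-off described above — a plain HashMap tolerates same-value races 
but not concurrent structural modification — can be sketched as follows. This is 
a hypothetical stand-in, not the Ozone code: String keys and values replace 
Ratis' RaftGroupId/RaftPeerId, and the class name is invented.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: String stands in for RaftGroupId / RaftPeerId.
class LeaderTracker {
    // ConcurrentHashMap keeps concurrent put/get from notification threads
    // safe. With a plain HashMap, concurrent structural updates (new groups
    // being added) could corrupt the table even when every writer would
    // store the same leader id for a given group.
    private final Map<String, String> leaderIdMap = new ConcurrentHashMap<>();

    // Called from leader-changed notifications, possibly on many threads.
    void onLeaderChanged(String groupId, String newLeaderId) {
        leaderIdMap.put(groupId, newLeaderId);
    }

    // Called from the pipeline report path; null means no leader known yet.
    String leaderOf(String groupId) {
        return leaderIdMap.get(groupId);
    }
}
```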





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines 
should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335654846
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -107,6 +112,8 @@ private static long nextCallId() {
   // TODO: Remove the gids set when Ratis supports an api to query active
   // pipelines
  private final Set<RaftGroupId> raftGids = new HashSet<>();
+  // pipeline leaders
+  private Map<RaftGroupId, RaftPeerId> leaderIdMap = new HashMap<>();
 
 Review comment:
   Yes, I thought about this already and left it as a HashMap, thinking we would 
only get a leader-changed notification reporting the new leaderId, so the updated 
value will always be the same from any member. For the sake of caution I am 
making it a CHM. 





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
swagle commented on a change in pull request #23: HDDS-1868. Ozone pipelines 
should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335654012
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -598,6 +605,9 @@ public boolean isExist(HddsProtos.PipelineID pipelineId) {
    for (RaftGroupId groupId : gids) {
      reports.add(PipelineReport.newBuilder()
          .setPipelineID(PipelineID.valueOf(groupId.getUuid()).getProtobuf())
+         .setLeaderID(leaderIdMap.containsKey(groupId) ?
+             ByteString.copyFromUtf8(leaderIdMap.get(groupId).toString()) :
 
 Review comment:
   The problem is again the shaded Ratis ByteString vs. the Google ByteString. 
   So I changed the Map to hold ByteString and use that; lookups are cheap 
though, IMO we are nit-picking :=)
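   The design choice here — convert once at notification time and store the 
converted value, so the report path only does a cheap lookup — can be sketched 
with plain byte arrays standing in for the shaded ByteString. All names below 
are hypothetical, not the actual Ozone or Ratis API.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: byte[] stands in for the shaded Ratis ByteString.
class LeaderBytesCache {
    // Encode the leader id once, when the notification arrives, instead
    // of re-encoding it on every pipeline report.
    private final Map<String, byte[]> leaderBytes = new ConcurrentHashMap<>();

    void onLeaderChanged(String groupId, String leaderId) {
        leaderBytes.put(groupId, leaderId.getBytes(StandardCharsets.UTF_8));
    }

    // Report path: cheap lookup; empty bytes when no leader is known yet,
    // mirroring the ternary in the diff above.
    byte[] leaderFor(String groupId) {
        byte[] b = leaderBytes.get(groupId);
        return b != null ? b : new byte[0];
    }
}
```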





[GitHub] [hadoop-ozone] avijayanhwx commented on issue #41: HDDS-2283. Container Creation on datanodes take around 300ms due to rocksdb creation.

2019-10-16 Thread GitBox
avijayanhwx commented on issue #41: HDDS-2283. Container Creation on datanodes 
take around 300ms due to rocksdb creation.
URL: https://github.com/apache/hadoop-ozone/pull/41#issuecomment-542818462
 
 
   A good find. LGTM +1 





[jira] [Created] (HDDS-2317) Change rocksDB per Container model to have table per container on RocksDb per disk

2019-10-16 Thread Siddharth Wagle (Jira)
Siddharth Wagle created HDDS-2317:
-

 Summary: Change rocksDB per Container model to have table per 
container on RocksDb per disk
 Key: HDDS-2317
 URL: https://issues.apache.org/jira/browse/HDDS-2317
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Affects Versions: 0.5.0
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
 Fix For: 0.5.0


Idea proposed by [~msingh] in HDDS-2283.

Better utilize disk bandwidth by having one RocksDB instance per disk and 
putting containers as tables inside it.
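A minimal in-memory sketch of the proposed layout, with plain maps standing in 
for the per-disk RocksDB instance and its per-container tables. All class and 
method names here are invented for illustration; the real change would use 
RocksDB column families.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of "one store per disk, one table per container".
class PerDiskStoreSketch {
    // disk path -> (container id -> key/value table)
    private final Map<String, Map<Long, Map<String, String>>> disks =
        new HashMap<>();

    // Creating a container becomes a cheap insert into an already-open
    // store, instead of opening a brand new RocksDB (~300ms per HDDS-2283).
    void put(String disk, long containerId, String key, String value) {
        disks.computeIfAbsent(disk, d -> new HashMap<>())
             .computeIfAbsent(containerId, c -> new HashMap<>())
             .put(key, value);
    }

    String get(String disk, long containerId, String key) {
        return disks.getOrDefault(disk, Map.of())
                    .getOrDefault(containerId, Map.of())
                    .get(key);
    }
}
```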



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Meeting notes from today's Hadoop storage community sync

2019-10-16 Thread Wei-Chiu Chuang
Here's today's notes for future reference:
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing
10/16/2019

Attendee:

Weichiu, Cynthia, Craig, Stephen, Akira, David

Stephen introduced upgrade domain, which was developed at Twitter. Cloudera
is going to support this feature in the next release. The feature was
developed a few years back and is quite complete, so Cloudera is just adding
UI and verification/guardrails to support it.

Akira is interested in decommission and maintenance mode. Decomm is slow at
Y! Japan. Akira’s interested in maintenance mode too, but they are on 2.6.x
so can’t try yet.

Stephen introduced the decommissioning improvement project. Decommissioning
in practice has a few weird behaviors and tends to be slow.

HDFS-14814 adds a new decommissioning monitor. It reduces NameNode lock holding
time and spreads replication load across DataNodes. It also gives priority
to dead nodes over decommissioning nodes. But it's hard to simulate its
performance; it will have to run on a real large cluster to prove it works.
Looking for community members to pick it up and introduce it on some large
clusters to try out.

HDFS-14861: instead of letting the block go to the end of the replication
queue, the iterator is reset periodically.

EC is not considered yet.


Next week we will have the Hadoop storage community sync for the APAC time
(PDT 10pm Wednesday, CST 1pm Thursday). Looking for topics.

Best,
Weichiu


[GitHub] [hadoop-ozone] swagle opened a new pull request #41: HDDS-2283. Container Creation on datanodes take around 300ms due to rocksdb creation.

2019-10-16 Thread GitBox
swagle opened a new pull request #41: HDDS-2283. Container Creation on 
datanodes take around 300ms due to rocksdb creation.
URL: https://github.com/apache/hadoop-ozone/pull/41
 
 
   Container creation on datanodes takes around 300ms due to rocksdb creation. 
RocksDB creation is taking a considerable time and this needs to be optimized.
   
   Creating a RocksDB per disk should be enough, and each container can be a 
table inside the RocksDB.
   
   `2019-10-15 13:20:10,714 INFO  utils.MetadataStoreBuilder 
(MetadataStoreBuilder.java:build(124)) - Time before create, load options: 81
   2019-10-15 13:20:10,715 INFO  utils.RocksDBStore 
(RocksDBStore.java:<init>(68)) - Time to load library: 0
   2019-10-15 13:20:10,723 INFO  utils.RocksDBStore 
(RocksDBStore.java:<init>(75)) - Time to open: 8
   2019-10-15 13:20:10,723 INFO  helpers.KeyValueContainerUtil 
(KeyValueContainerUtil.java:createContainerMetaData(85)) - Total time to 
create: {}95`





[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #40: HDDS-2285. GetBlock and ReadChunk command from the client should be s…

2019-10-16 Thread GitBox
hanishakoneru opened a new pull request #40: HDDS-2285. GetBlock and ReadChunk 
command from the client should be s…
URL: https://github.com/apache/hadoop-ozone/pull/40
 
 
   It can be observed that the GetBlock and ReadChunk commands are sent to 2 
different datanodes. They should be sent to the same datanode to re-use the 
connection.
   
   ```
   19/10/10 00:43:42 INFO scm.XceiverClientGrpc: Send command GetBlock to 
datanode 172.26.32.224
   19/10/10 00:43:42 INFO scm.XceiverClientGrpc: Send command ReadChunk to 
datanode 172.26.32.231
   ```
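   The intended fix — choosing one datanode per block read and reusing it for 
both commands — can be sketched as below. These names are illustrative only, 
not the actual XceiverClientGrpc API.

```java
import java.util.List;

// Hypothetical sketch: pin both commands of one block read to one node.
class PinnedReadSketch {
    // Simplest policy: take the first node of the pipeline. A real client
    // may prefer a closer or healthier node, but the point is that the
    // choice happens once per read.
    static String chooseNode(List<String> pipelineNodes) {
        return pipelineNodes.get(0);
    }

    // Returns {GetBlock target, ReadChunk target}; both are the same node,
    // so the gRPC connection can be reused.
    static String[] readBlock(List<String> pipelineNodes) {
        String node = chooseNode(pipelineNodes);
        return new String[] { node, node };
    }
}
```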
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.

2019-10-16 Thread GitBox
bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA 
cluster.
URL: https://github.com/apache/hadoop-ozone/pull/27#issuecomment-542800119
 
 
   > +1 thanks the patch @bharatviswa504
   > 
   > Note: While I am happy to have more and more environments, I feel that 
it's harder and harder to manage them and some of them are confusing. I am 
thinking about to merge some of the environments, for example include s3 in the 
simple ozone (and ozone-om-ha) and delete ozone-s3.
   > 
   > S3 doesn't have huge resource requirement and very core of our provided 
functionality.
   > 
   > This is not related to this patch (as this patch follows the current 
practice) but I am very interested about your opinion.
   
   Yes, I am with you on this. S3 has now become core functionality, so we 
don't need a separate environment for it; we can include S3 in the Ozone 
compose. This also helps reduce the run time of our acceptance test suite.





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.

2019-10-16 Thread GitBox
bharatviswa504 commented on issue #27: HDDS-2278. Run S3 test suite on OM HA 
cluster.
URL: https://github.com/apache/hadoop-ozone/pull/27#issuecomment-542799585
 
 
   Thank You @elek for the review.
   I see the new acceptance tests are passing. Let me know if you still see 
errors in your local run.
   





[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #31: HDDS-2254. Fix flaky unit test TestContainerStateMachine#testRatisSn…

2019-10-16 Thread GitBox
avijayanhwx opened a new pull request #31: HDDS-2254. Fix flaky unit test 
TestContainerStateMachine#testRatisSn…
URL: https://github.com/apache/hadoop-ozone/pull/31
 
 
   …apshotRetention.
   
   ## What changes were proposed in this pull request?
   On locally trying out repeated runs of the unit test, the unit test failed 
intermittently while asserting "Null" value for CSM snapshot. This assertion is 
not valid when the other unit test in the class executes before and creates 
keys in the cluster/container. Hence, moved to a model where each unit test 
creates its own cluster.
   
   https://issues.apache.org/jira/browse/HDDS-2254
   
   ## How was this patch tested?
   Ran the unit tests in the IDE and command line.





[GitHub] [hadoop-ozone] avijayanhwx commented on issue #31: HDDS-2254. Fix flaky unit test TestContainerStateMachine#testRatisSn…

2019-10-16 Thread GitBox
avijayanhwx commented on issue #31: HDDS-2254. Fix flaky unit test 
TestContainerStateMachine#testRatisSn…
URL: https://github.com/apache/hadoop-ozone/pull/31#issuecomment-542797782
 
 
   > Thanks the PR @avijayanhwx. As far as I see it includes additional 
commits. Can you please try to rebase it to remove the unnecessary commits?
   
   @elek Yes, these were brought in by the HDDS-2181 force push. I will clean 
up the PR.





[GitHub] [hadoop-ozone] avijayanhwx closed pull request #31: HDDS-2254. Fix flaky unit test TestContainerStateMachine#testRatisSn…

2019-10-16 Thread GitBox
avijayanhwx closed pull request #31: HDDS-2254. Fix flaky unit test 
TestContainerStateMachine#testRatisSn…
URL: https://github.com/apache/hadoop-ozone/pull/31
 
 
   





[jira] [Reopened] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-16 Thread Jitendra Nath Pandey (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reopened HDDS-2181:


The pull request is still open.

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Currently, Ozone manager sends "WRITE" as ACLType for key create, key delete 
> and bucket create operation. Fix the acl type in all requests to the 
> authorizer.






[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335543661
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/PipelineReportPublisher.java
 ##
 @@ -33,7 +33,7 @@
  * Publishes Pipeline which will be sent to SCM as part of heartbeat.
  * PipelineReport consist of the following information about each containers:
 
 Review comment:
   Typo: ```PipelineReport consist of the following information about each 
pipeline:```





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335553429
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -686,4 +696,17 @@ void notifyGroupRemove(RaftGroupId gid) {
   void notifyGroupAdd(RaftGroupId gid) {
 raftGids.add(gid);
   }
+
+  void handleLeaderChangedNotification(RaftGroupMemberId groupMemberId,
+   RaftPeerId raftPeerId) {
+LOG.info("Leader change notification received for group: {} with new " +
+"leaderId: {}", groupMemberId.getGroupId(), raftPeerId);
+// Save the reported leader to be sent with the report to SCM
+leaderIdMap.put(groupMemberId.getGroupId(), raftPeerId);
+// Publish new reports with leaderID
+context.getParent().getReportManager().getReportPublisher(
+PipelineReportsProto.class).run();
 
 Review comment:
   Instead of modifying ReportManager, you can directly add the PipelineReport 
to the context:
   
```context.addReport(context.getParent().getContainer().getPipelineReport())```





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
nandakumar131 commented on a change in pull request #23: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335533998
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
 ##
 @@ -161,4 +161,9 @@ public void deactivatePipeline(PipelineID pipelineID)
 pipelineStateMap
 .updatePipelineState(pipelineID, PipelineState.DORMANT);
   }
+
+  @VisibleForTesting
+  PipelineStateMap getPipelineStateMap() {
+return pipelineStateMap;
+  }
 
 Review comment:
   This is not used anywhere, can be removed.





[GitHub] [hadoop-ozone] cxorm commented on issue #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-16 Thread GitBox
cxorm commented on issue #2: HDDS-1737. Add Volume check in KeyManager and File 
Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2#issuecomment-542723039
 
 
   Thanks @bharatviswa504 and @jojochuang for the review.
   Thanks @elek for the commit.





Re: Reminder: This Wednesday's community sync

2019-10-16 Thread Wei-Chiu Chuang
Gentle reminder: community sync happening in 3 hours.

On Mon, Oct 14, 2019 at 9:07 AM Wei-Chiu Chuang  wrote:

> Hadoop devs,
>
> This Wednesday (PDT 10am, EDT 1pm, BST 6pm, CEST 7pm), @Stephen O'Donnell
>   is going to share with us the projects he's
> spent most time on recently, DataNode decommissioning improvement (
> HDFS-14854 ) and
> Upgrade Domain support improvement (HDFS-14637
> ). HDFS-14854 is
> closely associated with HDFS-13157
> , where @David Mollitor
>  made an interesting discovery on
> decommissioning performance.
>
> Both of which are useful for ease of management of large scale clusters,
> so you don't want to miss out this time.
>
> Check out the past community sync notes here:
> https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing
> Join the sync via Zoom: https://cloudera.zoom.us/j/880548968
>
> Have something to share? Feel free to volunteer a session next time.
>
> Weichiu
>


[jira] [Resolved] (HDDS-2316) Support to skip recon and/or ozonefs during the build

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2316.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master.

> Support to skip recon and/or ozonefs during the build
> -
>
> Key: HDDS-2316
> URL: https://issues.apache.org/jira/browse/HDDS-2316
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> (I almost used this Jira summary: "Fast-lane to ozone build". It was very hard 
> to resist...)
>  
>  The two slowest parts of the Ozone build as of now:
>  # The (multiple) shading of ozonefs
>  # And the frontend build/obfuscation of ozone recon
> [~aengineer] suggested to introduce options to skip them as they are not 
> required for the build all the time.
> This patch introduces '-DskipRecon' and '-DskipShade' options to provide a 
> faster way to create a *partial* build.
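Per the description above, a faster partial build can then be invoked roughly 
as follows. The '-DskipRecon' and '-DskipShade' flags come from this Jira; 
combining them with Maven's standard '-DskipTests' is an assumption for a fast 
dev loop, not part of the patch.

```
# Skip the ozonefs shading and the Recon frontend build
mvn clean install -DskipShade -DskipRecon -DskipTests
```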






[GitHub] [hadoop-ozone] anuengineer merged pull request #39: HDDS-2316. Support to skip recon and/or ozonefs during the build

2019-10-16 Thread GitBox
anuengineer merged pull request #39: HDDS-2316. Support to skip recon and/or 
ozonefs during the build
URL: https://github.com/apache/hadoop-ozone/pull/39
 
 
   





[GitHub] [hadoop-ozone] anuengineer commented on issue #39: HDDS-2316. Support to skip recon and/or ozonefs during the build

2019-10-16 Thread GitBox
anuengineer commented on issue #39: HDDS-2316. Support to skip recon and/or 
ozonefs during the build
URL: https://github.com/apache/hadoop-ozone/pull/39#issuecomment-542712298
 
 
   Thank you for getting this done. Appreciate it.
   





[GitHub] [hadoop-ozone] elek commented on issue #31: HDDS-2254. Fix flaky unit test TestContainerStateMachine#testRatisSn…

2019-10-16 Thread GitBox
elek commented on issue #31: HDDS-2254. Fix flaky unit test 
TestContainerStateMachine#testRatisSn…
URL: https://github.com/apache/hadoop-ozone/pull/31#issuecomment-542694071
 
 
   Thanks for the PR @avijayanhwx. As far as I can see it includes additional 
commits. Can you please rebase it to remove the unnecessary commits?





Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/

[Oct 15, 2019 3:52:25 PM] (ericp) YARN-8750. Refactor TestQueueMetrics. 
(Contributed by Szilard Nemeth)
[Oct 15, 2019 5:01:45 PM] (jhung) Preparing for 2.11.0 development
[Oct 16, 2019 12:40:29 AM] (jhung) HADOOP-16655. Change cipher suite when 
fetching tomcat tarball for




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/476/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [316K]
   

[GitHub] [hadoop-ozone] elek closed pull request #2: HDDS-1737. Add Volume check in KeyManager and File Operations.

2019-10-16 Thread GitBox
elek closed pull request #2: HDDS-1737. Add Volume check in KeyManager and File 
Operations.
URL: https://github.com/apache/hadoop-ozone/pull/2
 
 
   





[GitHub] [hadoop-ozone] elek commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.

2019-10-16 Thread GitBox
elek commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.
URL: https://github.com/apache/hadoop-ozone/pull/27#issuecomment-542677287
 
 
   UPDATE: it's failing during my local tests all the time. The ozones3 suite is 
working, but the new tests are not. Let's try to request a new test run...





[GitHub] [hadoop-ozone] elek commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.

2019-10-16 Thread GitBox
elek commented on issue #27: HDDS-2278. Run S3 test suite on OM HA cluster.
URL: https://github.com/apache/hadoop-ozone/pull/27#issuecomment-542677319
 
 
   /retest





[GitHub] [hadoop-ozone] elek opened a new pull request #39: HDDS-2316. Support to skip recon and/or ozonefs during the build

2019-10-16 Thread GitBox
elek opened a new pull request #39: HDDS-2316. Support to skip recon and/or 
ozonefs during the build
URL: https://github.com/apache/hadoop-ozone/pull/39
 
 
   ## What changes were proposed in this pull request?
   
The two slowest parts of the Ozone build as of now:
   
   * The (multiple) shading of ozonefs
   * And the frontend build/obfuscation of ozone recon
   
   @anuengineer suggested introducing options to skip them, as they are not 
always required for the build.
   
   This patch introduces `-DskipRecon` and `-DskipShade` options to provide a 
faster way to create a partial build.
   
   ## What is the link to the Apache JIRA
   
   https://github.com/elek/hadoop-ozone/pull/new/HDDS-2316
   
   ## How this patch can be tested?
   
   ```
   mvn clean install -DskipShade -DskipRecon -DskipTests
   ```
   
   ```
   mvn clean install -DskipShade -DskipTests
   ```





[jira] [Created] (HDDS-2316) Support to skip recon and/or ozonefs during the build

2019-10-16 Thread Marton Elek (Jira)
Marton Elek created HDDS-2316:
-

 Summary: Support to skip recon and/or ozonefs during the build
 Key: HDDS-2316
 URL: https://issues.apache.org/jira/browse/HDDS-2316
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Anu Engineer
Assignee: Marton Elek


(I almost use this Jira summary: "Fast-lane to ozone build" It was very hard to 
resist...)

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2313) Duplicate release of lock in OMKeyCommitRequest

2019-10-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2313.

Resolution: Invalid

Fixed by revert: https://github.com/apache/hadoop-ozone/commit/17081c2e

> Duplicate release of lock in OMKeyCommitRequest
> ---
>
> Key: HDDS-2313
> URL: https://issues.apache.org/jira/browse/HDDS-2313
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> om_1| 2019-10-16 05:33:57,413 [IPC Server handler 19 on 9862] ERROR   
>- Trying to release the lock on /bypdd/mybucket4, which was never acquired.
> om_1| 2019-10-16 05:33:57,414 WARN ipc.Server: IPC Server handler 19 
> on 9862, call Call#4 Retry#8 
> org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol.submitRequest from 
> 172.29.0.4:37018
> om_1| java.lang.IllegalMonitorStateException: Releasing lock on 
> resource /bypdd/mybucket4 without acquiring lock
> om_1| at 
> org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
> om_1| at 
> org.apache.hadoop.ozone.lock.LockManager.release(LockManager.java:168)
> om_1| at 
> org.apache.hadoop.ozone.lock.LockManager.writeUnlock(LockManager.java:148)
> om_1| at 
> org.apache.hadoop.ozone.om.lock.OzoneManagerLock.unlock(OzoneManagerLock.java:364)
> om_1| at 
> org.apache.hadoop.ozone.om.lock.OzoneManagerLock.releaseWriteLock(OzoneManagerLock.java:329)
> om_1| at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest.validateAndUpdateCache(OMKeyCommitRequest.java:177)
> {noformat}






[jira] [Resolved] (HDDS-2315) bucket creation fails because bucket does not exist

2019-10-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2315.

Resolution: Invalid

Fixed by revert: https://github.com/apache/hadoop-ozone/commit/17081c2e

> bucket creation fails because bucket does not exist
> ---
>
> Key: HDDS-2315
> URL: https://issues.apache.org/jira/browse/HDDS-2315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Priority: Blocker
>
> Secure acceptance tests fail because no bucket can be created due to ACL 
> check:
> {noformat}
> om_1| 2019-10-16 10:42:04,422 [IPC Server handler 0 on 9862] INFO 
>   - created volume:vol-0-38760 for user:HTTP/s...@example.com
> om_1| 2019-10-16 10:42:04,464 [IPC Server handler 4 on 9862] INFO 
>   - created volume:vol-1-41642 for user:HTTP/s...@example.com
> om_1| 2019-10-16 10:42:04,481 [IPC Server handler 11 on 9862] INFO
>- created volume:vol-2-97489 for user:HTTP/s...@example.com
> om_1| 2019-10-16 10:42:04,496 [IPC Server handler 12 on 9862] INFO
>- created volume:vol-3-24784 for user:HTTP/s...@example.com
> om_1| 2019-10-16 10:42:04,512 [IPC Server handler 6 on 9862] INFO 
>   - created volume:vol-4-01299 for user:HTTP/s...@example.com
> om_1| 2019-10-16 10:42:04,550 [IPC Server handler 7 on 9862] ERROR
>   - Bucket creation failed for bucket:bucket-0-94230 in volume:vol-0-38760
> om_1| BUCKET_NOT_FOUND 
> org.apache.hadoop.ozone.om.exceptions.OMException: Bucket bucket-0-94230 is 
> not found
> om_1| at 
> org.apache.hadoop.ozone.om.BucketManagerImpl.checkAccess(BucketManagerImpl.java:568)
> om_1| at 
> org.apache.hadoop.ozone.security.acl.OzoneNativeAuthorizer.checkAccess(OzoneNativeAuthorizer.java:89)
> om_1| at 
> org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1625)
> om_1| at 
> org.apache.hadoop.ozone.om.request.OMClientRequest.checkAcls(OMClientRequest.java:135)
> om_1| at 
> org.apache.hadoop.ozone.om.request.bucket.OMBucketCreateRequest.validateAndUpdateCache(OMBucketCreateRequest.java:146)
> om_1| at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:219)
> om_1| at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:134)
> om_1| at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> om_1| at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> {noformat}






[GitHub] [hadoop-ozone] adoroszlai commented on issue #35: HDDS-2313. Duplicate release of lock in OMKeyCommitRequest

2019-10-16 Thread GitBox
adoroszlai commented on issue #35: HDDS-2313. Duplicate release of lock in 
OMKeyCommitRequest
URL: https://github.com/apache/hadoop-ozone/pull/35#issuecomment-542666371
 
 
   Fixed by revert: 17081c2e





[GitHub] [hadoop-ozone] adoroszlai closed pull request #35: HDDS-2313. Duplicate release of lock in OMKeyCommitRequest

2019-10-16 Thread GitBox
adoroszlai closed pull request #35: HDDS-2313. Duplicate release of lock in 
OMKeyCommitRequest
URL: https://github.com/apache/hadoop-ozone/pull/35
 
 
   





[jira] [Resolved] (HDDS-2312) Fix typo in ozone command

2019-10-16 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-2312.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> Fix typo in ozone command
> -
>
> Key: HDDS-2312
> URL: https://issues.apache.org/jira/browse/HDDS-2312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat:title=ozone}
> Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
> ...
> insight   tool to get runtime opeartion information
> ...
> {noformat}
> Should be "operation".






[GitHub] [hadoop-ozone] elek closed pull request #34: HDDS-2312. Fix typo in ozone command

2019-10-16 Thread GitBox
elek closed pull request #34: HDDS-2312. Fix typo in ozone command
URL: https://github.com/apache/hadoop-ozone/pull/34
 
 
   





[GitHub] [hadoop-ozone] adoroszlai commented on issue #8: HDDS-2267. Container metadata scanner interval mismatch

2019-10-16 Thread GitBox
adoroszlai commented on issue #8: HDDS-2267. Container metadata scanner 
interval mismatch
URL: https://github.com/apache/hadoop-ozone/pull/8#issuecomment-542651011
 
 
   Thanks @xiaoyuyao for the review.  Thanks @elek for reviewing and committing 
it.  I like your idea about using type-safe time objects and applied it to #7.  
But here I wanted to minimize the change.





[GitHub] [hadoop-ozone] elek commented on issue #33: HDDS-1985. Fix listVolumes API

2019-10-16 Thread GitBox
elek commented on issue #33: HDDS-1985. Fix listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33#issuecomment-542650306
 
 
   Oh, sorry, I was confused. This is the listVolume one. I accidentally closed it.





[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #33: HDDS-1985. Fix listVolumes API

2019-10-16 Thread GitBox
bharatviswa504 opened a new pull request #33: HDDS-1985. Fix listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33
 
 
   https://issues.apache.org/jira/browse/HDDS-1985
   
   No fix is required for this, as the information is retrieved from the MPU 
key table; it is not retrieved through RocksDB table iteration. (When we use 
get(), it checks the cache first, and then it checks the table.)
   

   
   Used this Jira to add an integration test to verify the behavior.
   
   (This has cumulative changes required for HDDS-1988 and HDDS-1985)
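The cache-then-table lookup described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual Ozone TypedTable API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a cache-first lookup: get() consults an in-memory cache
// before falling back to the backing table, so recently written entries are
// visible without a table iteration. Names are hypothetical.
public class CacheFirstLookup {
  private final Map<String, String> cache = new HashMap<>();
  private final Map<String, String> table = new HashMap<>();

  String get(String key) {
    String v = cache.get(key);   // 1. consult the in-memory cache first
    if (v != null) {
      return v;
    }
    return table.get(key);       // 2. fall back to the backing table
  }

  public static void main(String[] args) {
    CacheFirstLookup db = new CacheFirstLookup();
    db.table.put("mpuKey", "fromTable");
    db.cache.put("mpuKey", "fromCache");
    // the cache entry wins even though the table also holds the key
    System.out.println(db.get("mpuKey")); // prints "fromCache"
  }
}
```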





[GitHub] [hadoop-ozone] elek closed pull request #33: HDDS-1985. Fix listVolumes API

2019-10-16 Thread GitBox
elek closed pull request #33: HDDS-1985. Fix listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33
 
 
   





[GitHub] [hadoop-ozone] elek commented on issue #33: HDDS-1985. Fix listVolumes API

2019-10-16 Thread GitBox
elek commented on issue #33: HDDS-1985. Fix listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33#issuecomment-542650094
 
 
   Merged in 5adf6a1e3175e2a583eea0d2384ed1f8e11da57b





[GitHub] [hadoop-ozone] elek removed a comment on issue #33: HDDS-1985. Fix listVolumes API

2019-10-16 Thread GitBox
elek removed a comment on issue #33: HDDS-1985. Fix listVolumes API
URL: https://github.com/apache/hadoop-ozone/pull/33#issuecomment-542650094
 
 
   Merged in 5adf6a1e3175e2a583eea0d2384ed1f8e11da57b





[GitHub] [hadoop-ozone] elek closed pull request #8: HDDS-2267. Container metadata scanner interval mismatch

2019-10-16 Thread GitBox
elek closed pull request #8: HDDS-2267. Container metadata scanner interval 
mismatch
URL: https://github.com/apache/hadoop-ozone/pull/8
 
 
   





[GitHub] [hadoop-ozone] elek commented on issue #8: HDDS-2267. Container metadata scanner interval mismatch

2019-10-16 Thread GitBox
elek commented on issue #8: HDDS-2267. Container metadata scanner interval 
mismatch
URL: https://github.com/apache/hadoop-ozone/pull/8#issuecomment-542646488
 
 
   My previous comments were not addressed, but it was an optional suggestion. I'll 
commit it right now as it's better than the current code.
   
   From the old PR:
   
   
   
   
   Thanks for fixing it @adoroszlai. Nice catch.
   
   As the millisecond resolution is enough, I think it would be safer to use the 
type-safe Java time API:
   
   Something like:
   
   ```
   Instant start = Instant.now();
   
   ...
   
   nextCheck = start.plus(metadataScanInterval, ChronoUnit.SECONDS)
   remainingMs = Instant.now().until(nextCheck, ChronoUnit.MILLIS)
   Time.sleep(remainingMs)
   ```
   
   But I am also fine with the current patch as it's definitely better than the 
earlier code ;-)
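A runnable version of the sketch above, assuming a hypothetical one-second scan interval; `Time.sleep` is replaced by `Thread.sleep`, and the remaining-time arithmetic uses `java.time.Duration` (this is an illustration, not the actual scanner code):

```java
import java.time.Duration;
import java.time.Instant;

// Type-safe scheduling sketch: compute the next check time with java.time
// and sleep only for the time actually left. Interval value is assumed.
public class ScanIntervalSketch {
  public static void main(String[] args) throws InterruptedException {
    long metadataScanIntervalSeconds = 1; // hypothetical config value
    Instant start = Instant.now();
    // ... the metadata scan itself would run here ...
    Instant nextCheck = start.plusSeconds(metadataScanIntervalSeconds);
    // loop guards against toMillis() truncating the remaining duration
    while (Instant.now().isBefore(nextCheck)) {
      long remainingMs = Math.max(1,
          Duration.between(Instant.now(), nextCheck).toMillis());
      Thread.sleep(remainingMs);
    }
    System.out.println("next scan due");
  }
}
```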





[jira] [Created] (HDDS-2315) bucket creation fails because bucket does not exist

2019-10-16 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2315:
--

 Summary: bucket creation fails because bucket does not exist
 Key: HDDS-2315
 URL: https://issues.apache.org/jira/browse/HDDS-2315
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.5.0
Reporter: Attila Doroszlai


Secure acceptance tests fail because no bucket can be created due to ACL check:

{noformat}
om_1| 2019-10-16 10:42:04,422 [IPC Server handler 0 on 9862] INFO   
- created volume:vol-0-38760 for user:HTTP/s...@example.com
om_1| 2019-10-16 10:42:04,464 [IPC Server handler 4 on 9862] INFO   
- created volume:vol-1-41642 for user:HTTP/s...@example.com
om_1| 2019-10-16 10:42:04,481 [IPC Server handler 11 on 9862] INFO  
 - created volume:vol-2-97489 for user:HTTP/s...@example.com
om_1| 2019-10-16 10:42:04,496 [IPC Server handler 12 on 9862] INFO  
 - created volume:vol-3-24784 for user:HTTP/s...@example.com
om_1| 2019-10-16 10:42:04,512 [IPC Server handler 6 on 9862] INFO   
- created volume:vol-4-01299 for user:HTTP/s...@example.com
om_1| 2019-10-16 10:42:04,550 [IPC Server handler 7 on 9862] ERROR  
- Bucket creation failed for bucket:bucket-0-94230 in volume:vol-0-38760
om_1| BUCKET_NOT_FOUND 
org.apache.hadoop.ozone.om.exceptions.OMException: Bucket bucket-0-94230 is not 
found
om_1|   at 
org.apache.hadoop.ozone.om.BucketManagerImpl.checkAccess(BucketManagerImpl.java:568)
om_1|   at 
org.apache.hadoop.ozone.security.acl.OzoneNativeAuthorizer.checkAccess(OzoneNativeAuthorizer.java:89)
om_1|   at 
org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1625)
om_1|   at 
org.apache.hadoop.ozone.om.request.OMClientRequest.checkAcls(OMClientRequest.java:135)
om_1|   at 
org.apache.hadoop.ozone.om.request.bucket.OMBucketCreateRequest.validateAndUpdateCache(OMBucketCreateRequest.java:146)
om_1|   at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:219)
om_1|   at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:134)
om_1|   at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
om_1|   at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
{noformat}






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335399857
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
 ##
 @@ -103,7 +108,7 @@ public void testPipelineCreationOnNodeRestart() throws 
Exception {
 } catch (IOException ioe) {
   // As now all datanodes are shutdown, they move to stale state, there
   // will be no sufficient datanodes to create the pipeline.
-  Assert.assertTrue(ioe instanceof InsufficientDatanodesException);
+  Assert.assertTrue(ioe instanceof SCMException);
 
 Review comment:
   SCMException is a generic exception; please use a more specific exception if 
possible.





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335323752
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,33 @@ private void initializePipelineState() throws 
IOException {
 }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+if (heavyNodeCriteria > 0 && factor == ReplicationFactor.THREE) {
+  return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+  stateManager.getPipelines(ReplicationType.RATIS, factor,
+  Pipeline.PipelineState.CLOSED).size()) >= heavyNodeCriteria *
+  nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY);
 
 Review comment:
   should we divide "heavyNodeCriteria * 
nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY)" by the "factor"?
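The reviewer's suggestion can be illustrated with a small sketch. A factor-THREE pipeline occupies a slot on three datanodes, so dividing the node-based budget by the factor gives a per-pipeline limit (names and numbers here are hypothetical, not the actual SCMPipelineManager logic):

```java
// Sketch of the suggested limit check: spread the per-node pipeline budget
// across healthy nodes, then divide by the replication factor, since each
// factor-N pipeline consumes a slot on N datanodes.
public class PipelineLimitSketch {
  static boolean exceedsLimit(int openPipelines, int heavyNodeCriteria,
      int healthyNodes, int factor) {
    return openPipelines >= (heavyNodeCriteria * healthyNodes) / factor;
  }

  public static void main(String[] args) {
    // e.g. budget of 2 pipelines per node, 3 healthy nodes, factor THREE
    // => at most (2 * 3) / 3 = 2 open pipelines
    System.out.println(exceedsLimit(2, 2, 3, 3)); // prints "true"
    System.out.println(exceedsLimit(1, 2, 3, 3)); // prints "false"
  }
}
```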





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335307601
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineActionHandler.java
 ##
 @@ -57,7 +57,7 @@ public void onMessage(PipelineActionsFromDatanode report,
   pipelineID = PipelineID.
   getFromProtobuf(action.getClosePipeline().getPipelineID());
   Pipeline pipeline = pipelineManager.getPipeline(pipelineID);
-  LOG.error("Received pipeline action {} for {} from datanode {}. " +
+  LOG.info("Received pipeline action {} for {} from datanode {}. " +
 
 Review comment:
   Don't change this level. The action is "close pipeline" due to some error that 
happened on the Datanode side. 





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335305877
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicyFactory.java
 ##
 @@ -43,10 +43,10 @@ private ContainerPlacementPolicyFactory() {
   }
 
 
-  public static PlacementPolicy getPolicy(Configuration conf,
-final NodeManager nodeManager, NetworkTopology clusterMap,
-final boolean fallback, SCMContainerPlacementMetrics metrics)
-throws SCMException{
+  public static PlacementPolicy getPolicy(
 
 Review comment:
   Any change here? 





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335377568
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -76,11 +79,34 @@ public PipelinePlacementPolicy(
* Returns true if this node meets the criteria.
*
* @param datanodeDetails DatanodeDetails
+   * @param nodesRequired nodes required count
* @return true if we have enough space.
*/
   @VisibleForTesting
-  boolean meetCriteria(DatanodeDetails datanodeDetails, long heavyNodeLimit) {
-return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  boolean meetCriteria(DatanodeDetails datanodeDetails, int nodesRequired) {
+if (heavyNodeCriteria == 0) {
+  // no limit applied.
+  return true;
+}
+// Datanodes from pipeline in some states can also be considered available
+// for pipeline allocation. Thus the number of these pipeline shall be
+// deducted from total heaviness calculation.
+int pipelineNumDeductable = (int)stateManager.getPipelines(
+HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.valueOf(nodesRequired),
+Pipeline.PipelineState.CLOSED)
+.stream().filter(
+p -> nodeManager.getPipelines(datanodeDetails).contains(p.getId()))
 
 Review comment:
   Suggest changing Node2PipelineMap to use Pipeline instead of PipelineID as 
the value of the map, to simplify this query. 
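   A rough illustration of the suggestion, with stand-in types (the real 
Node2PipelineMap and Pipeline classes in SCM have different APIs): with 
Pipeline objects as map values, the caller can filter on pipeline state 
directly instead of translating each PipelineID back into a Pipeline.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class Node2PipelineSketch {
  enum State { ALLOCATED, OPEN, CLOSED }

  // Stand-in for Pipeline; the real class carries much more state.
  static class Pipeline {
    final UUID id = UUID.randomUUID();
    final State state;
    Pipeline(State state) { this.state = state; }
  }

  // Hypothetical node-to-pipeline map storing Pipeline values
  // instead of PipelineIDs.
  static final Map<UUID, Set<Pipeline>> node2Pipelines = new HashMap<>();

  // With Pipeline values, counting a datanode's CLOSED pipelines is a
  // single filtered stream -- no second lookup by PipelineID.
  static long closedCount(UUID datanode) {
    return node2Pipelines.getOrDefault(datanode, new HashSet<>()).stream()
        .filter(p -> p.state == State.CLOSED)
        .count();
  }

  public static void main(String[] args) {
    UUID dn = UUID.randomUUID();
    Set<Pipeline> pipelines = new HashSet<>();
    pipelines.add(new Pipeline(State.OPEN));
    pipelines.add(new Pipeline(State.CLOSED));
    node2Pipelines.put(dn, pipelines);
    System.out.println(closedCount(dn)); // prints 1
  }
}
```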





[GitHub] [hadoop-ozone] elek closed pull request #38: github-actions

2019-10-16 Thread GitBox
elek closed pull request #38: github-actions
URL: https://github.com/apache/hadoop-ozone/pull/38
 
 
   





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335382664
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -92,65 +86,53 @@
 this.stateManager = stateManager;
 this.conf = conf;
 this.tlsConfig = tlsConfig;
+this.placementPolicy =
+new PipelinePlacementPolicy(nodeManager, stateManager, conf);
   }
 
-
-  /**
-   * Create pluggable container placement policy implementation instance.
-   *
-   * @param nodeManager - SCM node manager.
-   * @param conf - configuration.
-   * @return SCM container placement policy implementation instance.
-   */
-  @SuppressWarnings("unchecked")
-  // TODO: should we rename PlacementPolicy to PipelinePlacementPolicy?
-  private static PlacementPolicy createContainerPlacementPolicy(
-  final NodeManager nodeManager, final Configuration conf) {
-Class implClass =
-(Class) conf.getClass(
-ScmConfigKeys.OZONE_SCM_CONTAINER_PLACEMENT_IMPL_KEY,
-SCMContainerPlacementRandom.class);
-
-try {
-  Constructor ctor =
-  implClass.getDeclaredConstructor(NodeManager.class,
-  Configuration.class);
-  return ctor.newInstance(nodeManager, conf);
-} catch (RuntimeException e) {
-  throw e;
-} catch (InvocationTargetException e) {
-  throw new RuntimeException(implClass.getName()
-  + " could not be constructed.", e.getCause());
-} catch (Exception e) {
-//  LOG.error("Unhandled exception occurred, Placement policy will not " +
-//  "be functional.");
-  throw new IllegalArgumentException("Unable to load " +
-  "PlacementPolicy", e);
-}
-  }
-
-  @Override
-  public Pipeline create(ReplicationFactor factor) throws IOException {
-// Get set of datanodes already used for ratis pipeline
+  private List pickNodesNeverUsed(ReplicationFactor factor)
+  throws SCMException {
 Set dnsUsed = new HashSet<>();
-stateManager.getPipelines(ReplicationType.RATIS, factor).stream().filter(
-p -> p.getPipelineState().equals(PipelineState.OPEN) ||
-p.getPipelineState().equals(PipelineState.DORMANT) ||
-p.getPipelineState().equals(PipelineState.ALLOCATED))
+stateManager.getPipelines(ReplicationType.RATIS, factor)
+.stream().filter(
+  p -> p.getPipelineState().equals(PipelineState.OPEN) ||
 
 Review comment:
!p.getPipelineState().equals(PipelineState.CLOSED)
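   The suggested predicate is equivalent to the enumerated one only as long as 
CLOSED is the single excluded state; a toy enum (a simplified stand-in for the 
real PipelineState) makes the equivalence, and the maintenance hazard of adding 
a new state, easy to check:

```java
public class PipelineStateFilter {
  enum PipelineState { ALLOCATED, OPEN, DORMANT, CLOSED }

  // Original filter: list every state considered "in use".
  static boolean enumerated(PipelineState s) {
    return s == PipelineState.OPEN
        || s == PipelineState.DORMANT
        || s == PipelineState.ALLOCATED;
  }

  // Reviewer's suggestion: everything except CLOSED.
  static boolean notClosed(PipelineState s) {
    return s != PipelineState.CLOSED;
  }

  public static void main(String[] args) {
    // The two predicates agree on all four states; a newly added
    // state would silently fall out of the enumerated version.
    for (PipelineState s : PipelineState.values()) {
      System.out.println(s + " -> " + (enumerated(s) == notClosed(s)));
    }
  }
}
```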





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335399342
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
 ##
 @@ -34,6 +35,7 @@
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 
+import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT;
 
 Review comment:
   line longer than 80 chars





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335379816
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -44,13 +43,7 @@
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
-import java.lang.reflect.Constructor;
-import java.lang.reflect.InvocationTargetException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Set;
+import java.util.*;
 
 Review comment:
   Import * is not recommended. 





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335304602
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -322,7 +322,15 @@
   // the max number of pipelines can a single datanode be engaged in.
   public static final String OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT =
   "ozone.scm.datanode.max.pipeline.engagement";
-  public static final int OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT = 5;
+  // Setting to zero by default means this limit doesn't take effect.
+  public static final int OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT = 0;
+
+  // Upper limit for how many pipelines can be created.
+  // Only for test purpose now.
+  public static final String OZONE_SCM_PIPELINE_NUMBER_LIMIT =
+  "ozone.scm.datanode.pipeline.number.limit";
 
 Review comment:
   .datanode can be removed. 





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335401278
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
 ##
 @@ -352,6 +353,16 @@ public Builder setNumDatanodes(int val) {
   return this;
 }
 
+/**
+ * Sets the total number of pipelines to create.
+ * @param val number of pipelines
+ * @return MiniOzoneCluster.Builder
+ */
+public Builder setPipelineNumber(int val) {
 
 Review comment:
   setPipelineNumberLimit? 





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335326566
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,33 @@ private void initializePipelineState() throws 
IOException {
 }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+if (heavyNodeCriteria > 0 && factor == ReplicationFactor.THREE) {
+  return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+  stateManager.getPipelines(ReplicationType.RATIS, factor,
+  Pipeline.PipelineState.CLOSED).size()) >= heavyNodeCriteria *
+  nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY);
+}
+
+if (pipelineNumberLimit > 0) {
+  return (stateManager.getPipelines(ReplicationType.RATIS).size() -
+  stateManager.getPipelines(ReplicationType.RATIS,
+  Pipeline.PipelineState.CLOSED).size()) >= pipelineNumberLimit;
+}
+
+return false;
+  }
+
   @Override
   public synchronized Pipeline createPipeline(
   ReplicationType type, ReplicationFactor factor) throws IOException {
 lock.writeLock().lock();
+if (type == ReplicationType.RATIS && exceedPipelineNumberLimit(factor)) {
+  lock.writeLock().unlock();
+  throw new SCMException("Pipeline number meets the limit: " +
+  pipelineNumberLimit,
+  SCMException.ResultCodes.FAILED_TO_FIND_HEALTHY_NODES);
 
 Review comment:
   Define a new result code for this case, or use FAILED_TO_FIND_SUITABLE_NODE.
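   The limit arithmetic under review boils down to the following sketch (a 
simplification; the real code derives the pipeline and node counts from 
PipelineStateManager and NodeManager):

```java
public class PipelineLimitSketch {
  // heavyNodeCriteria = max pipelines per datanode; 0 disables the check.
  static boolean exceedsFactorThreeLimit(int nonClosedPipelines,
      int heavyNodeCriteria, int healthyNodeCount) {
    if (heavyNodeCriteria <= 0) {
      return false; // no limit applied
    }
    // Cluster-wide cap: per-node limit times number of healthy nodes.
    return nonClosedPipelines >= heavyNodeCriteria * healthyNodeCount;
  }

  public static void main(String[] args) {
    // 5 pipelines per node, 3 healthy nodes -> at most 15 open pipelines.
    System.out.println(exceedsFactorThreeLimit(15, 5, 3)); // prints true
    System.out.println(exceedsFactorThreeLimit(14, 5, 3)); // prints false
  }
}
```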





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-16 Thread GitBox
ChenSammi commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r335305442
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -846,10 +846,17 @@
 
   
   
-ozone.scm.datanode.max.pipeline.engagement
-5
+  ozone.scm.datanode.max.pipeline.engagement
 
 Review comment:
   If this property is only for testing, suggest removing it from 
ozone-default.xml.





[GitHub] [hadoop-ozone] elek opened a new pull request #38: github-actions

2019-10-16 Thread GitBox
elek opened a new pull request #38: github-actions
URL: https://github.com/apache/hadoop-ozone/pull/38
 
 
   github actions test





[GitHub] [hadoop-ozone] elek closed pull request #30: HDDS-1988. Fix listParts API.

2019-10-16 Thread GitBox
elek closed pull request #30: HDDS-1988. Fix listParts API.
URL: https://github.com/apache/hadoop-ozone/pull/30
 
 
   





[GitHub] [hadoop-ozone] sodonnel opened a new pull request #37: HDDS-2197 Extend SCMCLI Topology command to print node Operational States

2019-10-16 Thread GitBox
sodonnel opened a new pull request #37: HDDS-2197 Extend SCMCLI Topology 
command to print node Operational States
URL: https://github.com/apache/hadoop-ozone/pull/37
 
 
   The scmcli topology command only considers the node health (healthy, stale or 
dead). With decommission and maintenance stages, we need to also consider the 
operational states and display them with this command.
   
   The current topology command prints details in the format:
   ```
   State = HEALTHY
 IpAddress(hostName) networkLocation
IpAddress(hostName) networkLocation
IpAddress(hostName) networkLocation
   
   State = STALE
 ...
   
   State = DEAD
 ... 
   ```
   Alternatively, it prints the details ordered by network location
   ```
   State = HEALTHY
   Location: somelocation
ipAddress(hostName)
ipAddress(hostName)
ipAddress(hostName)
   Location: otherlocation
ipAddress(hostName)
ipAddress(hostName)
ipAddress(hostName) 
   
   State = STALE
   Location: someLocation
...
...
   ```
   
   In this PR, I propose we simply add the operational state into the existing 
output, and keep the ordering and formatting as is:
   
   ```
   State = HEALTHY
IpAddress(hostName) IN_SERVICE networkLocation
IpAddress(hostName) IN_SERVICE networkLocation
IpAddress(hostName) DECOMMISSIONING networkLocation
   
   State = STALE
...
   
   State = DEAD
...  
   ```
   Or
   ```
   State = HEALTHY
   Location: somelocation
ipAddress(hostName) IN_SERVICE
ipAddress(hostName) IN_SERVICE
ipAddress(hostName) DECOMMISSIONED
   Location: otherlocation
ipAddress(hostName) IN_SERVICE
 ipAddress(hostName) IN_MAINTENANCE
 ipAddress(hostName) IN_SERVICE
   
   State = STALE
   Location: someLocation
...
...
   ```





[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
mukul1987 commented on a change in pull request #23: HDDS-1868. Ozone pipelines 
should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335387189
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -598,6 +605,9 @@ public boolean isExist(HddsProtos.PipelineID pipelineId) {
   for (RaftGroupId groupId : gids) {
 reports.add(PipelineReport.newBuilder()
 .setPipelineID(PipelineID.valueOf(groupId.getUuid()).getProtobuf())
+.setLeaderID(leaderIdMap.containsKey(groupId) ?
+ByteString.copyFromUtf8(leaderIdMap.get(groupId).toString()) :
 
 Review comment:
   This does two lookups from leaderIdMap; let's replace this with a single 
leaderIdMap.get and then use the ByteString later.
   Also, should we add this to the leaderIdMap?





[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-16 Thread GitBox
mukul1987 commented on a change in pull request #23: HDDS-1868. Ozone pipelines 
should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#discussion_r335388484
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -107,6 +112,8 @@ private static long nextCallId() {
   // TODO: Remove the gids set when Ratis supports an api to query active
   // pipelines
   private final Set raftGids = new HashSet<>();
+  // pipeline leaders
+  private Map leaderIdMap = new HashMap<>();
 
 Review comment:
   This should be a ConcurrentHashMap, as pipeline report and changeLeader can 
happen at any time.





[jira] [Resolved] (HDDS-2314) Fix TestOMKeyCommitRequest Error

2019-10-16 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien resolved HDDS-2314.

Resolution: Duplicate

> Fix TestOMKeyCommitRequest Error
> 
>
> Key: HDDS-2314
> URL: https://issues.apache.org/jira/browse/HDDS-2314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: YiSheng Lien
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 2.479 
> s <<< FAILURE! - in 
> org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest
> [ERROR] 
> testValidateAndUpdateCacheWithKeyNotFound(org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest)
>   Time elapsed: 2.045 s  <<< ERROR!
> java.lang.IllegalMonitorStateException: Releasing lock on resource 
> /e4ec6d72-f27c-46f8-8434-e704e091f87b/db3319a6-6d78-42e1-8352-9feb099de70a 
> without acquiring lock
>   at 
> org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
>   at 
> org.apache.hadoop.ozone.lock.LockManager.release(LockManager.java:168)
>   at 
> org.apache.hadoop.ozone.lock.LockManager.writeUnlock(LockManager.java:148)
>   at 
> org.apache.hadoop.ozone.om.lock.OzoneManagerLock.unlock(OzoneManagerLock.java:364)
>   at 
> org.apache.hadoop.ozone.om.lock.OzoneManagerLock.releaseWriteLock(OzoneManagerLock.java:329)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest.validateAndUpdateCache(OMKeyCommitRequest.java:177)
>   at 
> org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest.testValidateAndUpdateCacheWithKeyNotFound(TestOMKeyCommitRequest.java:202)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> [ERROR] 
> testValidateAndUpdateCacheWithBucketNotFound(org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest)
>   Time elapsed: 0.098 s  <<< ERROR!
> java.lang.IllegalMonitorStateException: Releasing lock on resource 
> /4696e0f1-6439-4300-a1bc-f30c37a12a37/352527b9-eb75-49af-b06a-57cbc697730c 
> without acquiring lock
>   at 
> org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
>   at 
> 

[jira] [Reopened] (HDDS-2314) Fix TestOMKeyCommitRequest Error

2019-10-16 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reopened HDDS-2314:


> Fix TestOMKeyCommitRequest Error
> 
>
> Key: HDDS-2314
> URL: https://issues.apache.org/jira/browse/HDDS-2314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: YiSheng Lien
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>

[GitHub] [hadoop-ozone] cxorm closed pull request #36: HDDS-2314. Fix TestOMKeyCommitRequest Error.

2019-10-16 Thread GitBox
cxorm closed pull request #36: HDDS-2314. Fix TestOMKeyCommitRequest Error.
URL: https://github.com/apache/hadoop-ozone/pull/36
 
 
   





[jira] [Resolved] (HDDS-2314) Fix TestOMKeyCommitRequest Error

2019-10-16 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien resolved HDDS-2314.

Resolution: Duplicate

> Fix TestOMKeyCommitRequest Error
> 
>
> Key: HDDS-2314
> URL: https://issues.apache.org/jira/browse/HDDS-2314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: YiSheng Lien
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> [ERROR] 
> testValidateAndUpdateCacheWithBucketNotFound(org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest)
>   Time elapsed: 0.098 s  <<< ERROR!
> java.lang.IllegalMonitorStateException: Releasing lock on resource 
> /4696e0f1-6439-4300-a1bc-f30c37a12a37/352527b9-eb75-49af-b06a-57cbc697730c 
> without acquiring lock
>   at 
> org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
>   at 
> 

[GitHub] [hadoop-ozone] cxorm opened a new pull request #36: HDDS-2314. Fix TestOMKeyCommitRequest Error.

2019-10-16 Thread GitBox
cxorm opened a new pull request #36: HDDS-2314. Fix TestOMKeyCommitRequest 
Error.
URL: https://github.com/apache/hadoop-ozone/pull/36
 
 
   ### What changes were proposed in this pull request?
   
   Fix the acquire/release lock in OMKeyCommitRequest.java
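   The failure quoted above ("Releasing lock on resource ... without acquiring
   lock") is the classic symptom of an unconditional release in a `finally`
   block on a path that never acquired the lock. A minimal, hypothetical
   sketch of the guarded-release pattern such a fix typically uses — this is
   NOT the actual OMKeyCommitRequest code; the class, method, and return
   values are illustrative only:

   ```java
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   // Illustrative sketch, not Ozone code: release the write lock only when
   // it was actually acquired, so early-exit paths (e.g. bucket not found)
   // do not trigger a "releasing without acquiring" failure.
   public class GuardedLockSketch {
     private static final ReentrantReadWriteLock LOCK =
         new ReentrantReadWriteLock();

     public static String commitKey(boolean bucketExists) {
       boolean acquired = false;   // track whether this path took the lock
       try {
         if (!bucketExists) {
           return "BUCKET_NOT_FOUND";   // early exit: lock never acquired
         }
         LOCK.writeLock().lock();
         acquired = true;
         return "OK";
       } finally {
         if (acquired) {            // guard: never release an unheld lock
           LOCK.writeLock().unlock();
         }
       }
     }

     public static void main(String[] args) {
       System.out.println(commitKey(true));
       System.out.println(commitKey(false));
     }
   }
   ```

   Without the `acquired` guard, the `finally` block would call `unlock()`
   on the early-exit path too, which is exactly the shape of the exception
   in the stack trace.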
   





[jira] [Created] (HDDS-2314) Fix TestOMKeyCommitRequest failure

2019-10-16 Thread YiSheng Lien (Jira)
YiSheng Lien created HDDS-2314:
--

 Summary: Fix TestOMKeyCommitRequest failure
 Key: HDDS-2314
 URL: https://issues.apache.org/jira/browse/HDDS-2314
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: YiSheng Lien


[ERROR] Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 2.479 s 
<<< FAILURE! - in org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest
[ERROR] 
testValidateAndUpdateCacheWithKeyNotFound(org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest)
  Time elapsed: 2.045 s  <<< ERROR!
java.lang.IllegalMonitorStateException: Releasing lock on resource 
/e4ec6d72-f27c-46f8-8434-e704e091f87b/db3319a6-6d78-42e1-8352-9feb099de70a 
without acquiring lock
at 
org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
at 
org.apache.hadoop.ozone.lock.LockManager.release(LockManager.java:168)
at 
org.apache.hadoop.ozone.lock.LockManager.writeUnlock(LockManager.java:148)
at 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.unlock(OzoneManagerLock.java:364)
at 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.releaseWriteLock(OzoneManagerLock.java:329)
at 
org.apache.hadoop.ozone.om.request.key.OMKeyCommitRequest.validateAndUpdateCache(OMKeyCommitRequest.java:177)
at 
org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest.testValidateAndUpdateCacheWithKeyNotFound(TestOMKeyCommitRequest.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

[ERROR] 
testValidateAndUpdateCacheWithBucketNotFound(org.apache.hadoop.ozone.om.request.key.TestOMKeyCommitRequest)
  Time elapsed: 0.098 s  <<< ERROR!
java.lang.IllegalMonitorStateException: Releasing lock on resource 
/4696e0f1-6439-4300-a1bc-f30c37a12a37/352527b9-eb75-49af-b06a-57cbc697730c 
without acquiring lock
at 
org.apache.hadoop.ozone.lock.LockManager.getLockForReleasing(LockManager.java:220)
at 
org.apache.hadoop.ozone.lock.LockManager.release(LockManager.java:168)
at 
org.apache.hadoop.ozone.lock.LockManager.writeUnlock(LockManager.java:148)
at 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.unlock(OzoneManagerLock.java:364)
at 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.releaseWriteLock(OzoneManagerLock.java:329)
at 
