[jira] [Commented] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure logic is not correct

2019-09-03 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921954#comment-16921954
 ] 

hemanthboyina commented on HDFS-14816:
--

In +HDFS-9958+ a UT was added to check whether any replica is on failed storage, but 
the UT is not able to actually make the block's storage failed.

The attached patch updates the block's storage to failed so the check is really exercised.
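
A rough sketch of the idea, assuming the MiniDFSCluster context of TestFileCorruption; the helper names below come from the HDFS block-management test internals and the attached patch may wire this differently:

{code:java}
// Rough sketch only (the attached patch may differ): re-register the DataNodes'
// existing storages as FAILED on the NameNode side while keeping their storage IDs,
// so the replica really sits on a storage the NameNode considers failed.
// Assumes the MiniDFSCluster test context of TestFileCorruption ("cluster" in scope).
BlockManager bm = cluster.getNamesystem().getBlockManager();
cluster.getNamesystem().writeLock();
try {
  for (DataNode dataNode : cluster.getDataNodes()) {
    DatanodeDescriptor dd =
        bm.getDatanodeManager().getDatanode(dataNode.getDatanodeId());
    for (DatanodeStorageInfo info : dd.getStorageInfos()) {
      // Keep the original storage ID and type; only flip the state to FAILED.
      info.updateFromStorage(new DatanodeStorage(info.getStorageID(),
          DatanodeStorage.State.FAILED, info.getStorageType()));
    }
  }
} finally {
  cluster.getNamesystem().writeUnlock();
}
{code}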

> TestFileCorruption#testCorruptionWithDiskFailure logic is not correct
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>







[jira] [Updated] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure logic is not correct

2019-09-03 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14816:
-
Summary: TestFileCorruption#testCorruptionWithDiskFailure logic is not 
correct  (was: TestFileCorruption#testCorruptionWithDiskFailure is flaky)

> TestFileCorruption#testCorruptionWithDiskFailure logic is not correct
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>







[jira] [Assigned] (HDDS-2078) Fix TestSecureOzoneCluster

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2078:


Assignee: (was: Bharat Viswanadham)

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]






[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=306057=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306057
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 04/Sep/19 04:44
Start Date: 04/Sep/19 04:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1386: 
HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320570103
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
   thanks for info.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306057)
Time Spent: 2h 50m  (was: 2h 40m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When the bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the 
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content is as 
> expected -> Update the Key Metadata to remove the gdprEnabled flag -> Read the Key -> 
> Confirm the content is not as expected.
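
For readers following the review above, a self-contained sketch of the symmetric encrypt/decrypt stream idea (illustration only, not the RpcClient/GDPRSymmetricKey code in the pull request; it assumes the stored secret has a valid key length for the chosen algorithm, e.g. 16 bytes for "AES"):

{code:java}
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.SecretKeySpec;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

final class GdprStreamSketch {
  /** Write path: wrap the raw key output stream so content is encrypted before writing. */
  static OutputStream wrapForWrite(OutputStream raw, String secret, String algorithm)
      throws Exception {
    Cipher cipher = Cipher.getInstance(algorithm);                       // e.g. "AES"
    cipher.init(Cipher.ENCRYPT_MODE,
        new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), algorithm));
    return new CipherOutputStream(raw, cipher);
  }

  /** Read path: wrap the raw key input stream using the secret/algorithm from key metadata. */
  static InputStream wrapForRead(InputStream raw, String secret, String algorithm)
      throws Exception {
    Cipher cipher = Cipher.getInstance(algorithm);
    cipher.init(Cipher.DECRYPT_MODE,
        new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), algorithm));
    return new CipherInputStream(raw, cipher);
  }
}
{code}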






[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=306055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306055
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 04/Sep/19 04:40
Start Date: 04/Sep/19 04:40
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1386: 
HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320569509
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
   @bharatviswa504 The GDPR compliance feature does not depend on whether or not 
the cluster is secure. No matter how you set up the cluster, you can enable the 
GDPR compliance feature.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306055)
Time Spent: 2h 40m  (was: 2.5h)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When the bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the 
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content is as 
> expected -> Update the Key Metadata to remove the gdprEnabled flag -> Read the Key -> 
> Confirm the content is not as expected.






[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=306053=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306053
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 04/Sep/19 04:32
Start Date: 04/Sep/19 04:32
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1386: 
HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320568216
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
   One question: is the GDPR feature available with/without security enabled in 
Ozone?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306053)
Time Spent: 2.5h  (was: 2h 20m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When the bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the 
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content is as 
> expected -> Update the Key Metadata to remove the gdprEnabled flag -> Read the Key -> 
> Confirm the content is not as expected.






[jira] [Work logged] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2079?focusedWorklogId=306052=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306052
 ]

ASF GitHub Bot logged work on HDDS-2079:


Author: ASF GitHub Bot
Created on: 04/Sep/19 04:25
Start Date: 04/Sep/19 04:25
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1400: HDDS-2079. 
Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306052)
Remaining Estimate: 0h
Time Spent: 10m

> Fix TestSecureOzoneManager
> --
>
> Key: HDDS-2079
> URL: https://issues.apache.org/jira/browse/HDDS-2079
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestSecureOzoneManager.txt]






[jira] [Updated] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2079:
-
Labels: pull-request-available  (was: )

> Fix TestSecureOzoneManager
> --
>
> Key: HDDS-2079
> URL: https://issues.apache.org/jira/browse/HDDS-2079
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestSecureOzoneManager.txt]






[jira] [Resolved] (HDDS-2084) TestSecureOzoneManager#testSecureOmInitFailures is timing out

2019-09-03 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2084.
-
Resolution: Duplicate

> TestSecureOzoneManager#testSecureOmInitFailures is timing out
> -
>
> Key: HDDS-2084
> URL: https://issues.apache.org/jira/browse/HDDS-2084
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Dinesh Chitlangia
>Priority: Major
>
> {code:java}
> ---
> Test set: org.apache.hadoop.ozone.om.TestSecureOzoneManager
> ---
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 25.824 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.om.TestSecureOzoneManager
> testSecureOmInitFailures(org.apache.hadoop.ozone.om.TestSecureOzoneManager)  
> Time elapsed: 25.011 s  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure(Client.java:943)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:706)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
>   at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy18.getOMCertificate(Unknown Source)
>   at 
> org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB.getOMCertChain(SCMSecurityProtocolClientSideTranslatorPB.java:115)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.getSCMSignedCert(OzoneManager.java:1543)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.initializeSecurity(OzoneManager.java:1117)
>   at 
> org.apache.hadoop.ozone.om.TestSecureOzoneManager.lambda$testSecureOmInitFailures$0(TestSecureOzoneManager.java:128)
>   at 
> org.apache.hadoop.ozone.om.TestSecureOzoneManager$$Lambda$8/1494777578.call(Unknown
>  Source)
>   at 
> org.apache.hadoop.test.LambdaTestUtils.lambda$intercept$0(LambdaTestUtils.java:527)
>   at 
> org.apache.hadoop.test.LambdaTestUtils$$Lambda$9/1108847704.call(Unknown 
> Source)
>   at 
> org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
>   at 
> org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:522)
>   at 
> org.apache.hadoop.ozone.om.TestSecureOzoneManager.testSecureOmInitFailures(TestSecureOzoneManager.java:126)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Assigned] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-2079:


Assignee: Xiaoyu Yao

> Fix TestSecureOzoneManager
> --
>
> Key: HDDS-2079
> URL: https://issues.apache.org/jira/browse/HDDS-2079
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestSecureOzoneManager.txt]






[jira] [Commented] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure is flaky

2019-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921937#comment-16921937
 ] 

Ayush Saxena commented on HDFS-14816:
-

Can you attach the error log and a description of the fix?

> TestFileCorruption#testCorruptionWithDiskFailure is flaky
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>







[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306047=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306047
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:34
Start Date: 04/Sep/19 03:34
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320559908
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+  List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306040
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:22
Start Date: 04/Sep/19 03:22
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320557920
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
 
 Review comment:
   Improve these comments
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306040)
Time Spent: 3.5h  (was: 3h 20m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.
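
For context, a small self-contained sketch of that fallback behaviour with placeholder types (not the SCM classes from the pull request): filter healthy nodes by how many pipelines they are already engaged in, then pick the required number at random.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.ToIntFunction;

final class RandomPipelinePlacementSketch {
  /** Pick nodesRequired nodes at random from the healthy nodes that are not already
   *  engaged in more than maxEngagement pipelines (mirrors the meetCriteria idea). */
  static <N> List<N> choose(List<N> healthyNodes, ToIntFunction<N> pipelineCount,
                            int maxEngagement, int nodesRequired) {
    List<N> viable = new ArrayList<>();
    for (N node : healthyNodes) {
      if (pipelineCount.applyAsInt(node) <= maxEngagement) {
        viable.add(node);
      }
    }
    if (viable.size() < nodesRequired) {
      throw new IllegalStateException("Not enough viable datanodes to form a pipeline");
    }
    Collections.shuffle(viable);                  // no topology: pure random selection
    return viable.subList(0, nodesRequired);
  }
}
{code}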






[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=306039=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306039
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:19
Start Date: 04/Sep/19 03:19
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1377: HDDS-2057. Incorrect 
Default OM Port in Ozone FS URI Error Message. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-527722111
 
 
   @bharatviswa504 does the updated patch look okay to you? You had a comment 
on the earlier patch revision.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306039)
Time Spent: 50m  (was: 40m)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The error message displayed by BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port, which is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter instead of using a hard-coded value.
>  
>  
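
One possible shape of the fix, sketched here only as an illustration: it assumes a Hadoop Configuration object named conf is in scope and reads the ozone.om.address key; the 9862 fallback and the local variable names are assumptions, not the committed patch.

{code:java}
// Hedged sketch only; names and defaults are illustrative, not the committed patch.
int defaultOmPort = 9862;                                    // assumed OM RPC default
String omAddress = conf.getTrimmed("ozone.om.address", "");  // host[:port], may be empty
int omPort = defaultOmPort;
if (omAddress.contains(":")) {
  omPort = Integer.parseInt(omAddress.substring(omAddress.lastIndexOf(':') + 1));
}
String usage = "Ozone file system URL should be one of the following formats: "
    + "o3fs://bucket.volume/key OR "
    + "o3fs://bucket.volume.om-host.example.com/key OR "
    + "o3fs://bucket.volume.om-host.example.com:" + omPort + "/key";
{code}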






[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306036
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:08
Start Date: 04/Sep/19 03:08
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555713
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+  List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306033=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306033
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:06
Start Date: 04/Sep/19 03:06
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555339
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+  List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306035=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306035
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 03:06
Start Date: 04/Sep/19 03:06
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555432
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+  List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+List<DatanodeDetails> healthyNodes =
+nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306027=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306027
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:59
Start Date: 04/Sep/19 02:59
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320554141
 
 

 ##
 File path: hadoop-ozone/pom.xml
 ##
 @@ -19,7 +19,7 @@
 3.2.0
 
   
-  hadoop-ozone
+  hadoop-_ozone
 
 Review comment:
   What's this for? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306027)
Time Spent: 2h 50m  (was: 2h 40m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.






[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306025=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306025
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:55
Start Date: 04/Sep/19 02:55
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320553505
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##
 @@ -199,4 +220,10 @@ void processNodeReport(DatanodeDetails datanodeDetails,
* @return the given datanode, or null if not found
*/
   DatanodeDetails getNodeByAddress(String address);
+
+  /**
+   * Get cluster map as in network topology for this node manager.
+   * @return cluster map
+   */
+  NetworkTopology getClusterMap();
 
 Review comment:
I suggest changing the name to getClusterNetworkTopologyMap. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306025)
Time Spent: 2h 40m  (was: 2.5h)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.






[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306024=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306024
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:52
Start Date: 04/Sep/19 02:52
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320552958
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##
 @@ -129,6 +138,18 @@
*/
   void removePipeline(Pipeline pipeline);
 
+  /**
+   * Get the entire Node2PipelineMap.
+   * @return Node2PipelineMap
+   */
+  Node2PipelineMap getNode2PipelineMap();
+
+  /**
+   * Set the Node2PipelineMap.
+   * @param node2PipelineMap Node2PipelineMap
+   */
+  void setNode2PipelineMap(Node2PipelineMap node2PipelineMap);
 
 Review comment:
   I would suggest removing this function from the interface definition since it's 
only used in tests. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306024)
Time Spent: 2.5h  (was: 2h 20m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.






[jira] [Work logged] (HDDS-2069) Default values of properties hdds.datanode.storage.utilization.{critical | warning}.threshold are not reasonable

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2069?focusedWorklogId=306017=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306017
 ]

ASF GitHub Bot logged work on HDDS-2069:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:26
Start Date: 04/Sep/19 02:26
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1393: HDDS-2069. Default 
values of properties hdds.datanode.storage.utilization.{critical | 
warning}.threshold are not reasonable
URL: https://github.com/apache/hadoop/pull/1393#issuecomment-527712611
 
 
   Thanks @nandakumar131  for the review. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306017)
Time Spent: 1h  (was: 50m)

> Default values of properties hdds.datanode.storage.utilization.{critical | 
> warning}.threshold are not reasonable
> 
>
> Key: HDDS-2069
> URL: https://issues.apache.org/jira/browse/HDDS-2069
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, hdds.datanode.storage.utilization.warning.threshold defaults to 0.95 and 
> hdds.datanode.storage.utilization.critical.threshold defaults to 0.75.
> These two values should be swapped.
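
In other words, after the swap the warning threshold becomes the lower value and the critical threshold the higher one. A sketch of the corrected defaults, assuming a Hadoop Configuration object named conf is in scope (illustration only):

{code:java}
// Hedged sketch: corrected defaults after swapping the two values described above.
double warningThreshold = conf.getDouble(
    "hdds.datanode.storage.utilization.warning.threshold", 0.75);   // was 0.95
double criticalThreshold = conf.getDouble(
    "hdds.datanode.storage.utilization.critical.threshold", 0.95);  // was 0.75
{code}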






[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=306016=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306016
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:25
Start Date: 04/Sep/19 02:25
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-527712406
 
 
   @anuengineer , @bharatviswa504 - The failures are unrelated to this patch.
   I have filed HDDS-2081, HDDS-2082, HDDS-2083, HDDS-2084, HDDS-2085 for the 
test failures observed here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306016)
Time Spent: 2h 20m  (was: 2h 10m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When the bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then fetch the 
> GDPRSymmetricKey based on the secret/algorithm stored in the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create a Key in a GDPR-enabled Bucket -> Read the Key -> Verify the content is as 
> expected -> Update the Key Metadata to remove the gdprEnabled flag -> Read the Key -> 
> Confirm the content is not as expected.






[jira] [Created] (HDDS-2085) TestBlockManager#testMultipleBlockAllocationWithClosedContainer timed out

2019-09-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2085:
---

 Summary: 
TestBlockManager#testMultipleBlockAllocationWithClosedContainer timed out
 Key: HDDS-2085
 URL: https://issues.apache.org/jira/browse/HDDS-2085
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia



{code:java}
---
Test set: org.apache.hadoop.hdds.scm.block.TestBlockManager
---
Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.697 s <<< 
FAILURE! - in org.apache.hadoop.hdds.scm.block.TestBlockManager
testMultipleBlockAllocationWithClosedContainer(org.apache.hadoop.hdds.scm.block.TestBlockManager)
  Time elapsed: 3.619 s  <<< ERROR!
java.util.concurrent.TimeoutException: 
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2019-09-03 08:46:46,870

"Socket Reader #1 for port 32840"  prio=5 tid=14 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1097)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
"Socket Reader #1 for port 43576"  prio=5 tid=22 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1097)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
"surefire-forkedjvm-command-thread" daemon prio=5 tid=8 runnable
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:255)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at 
org.apache.maven.surefire.booter.MasterProcessCommand.decode(MasterProcessCommand.java:115)
at 
org.apache.maven.surefire.booter.CommandReader$CommandRunnable.run(CommandReader.java:390)
at java.lang.Thread.run(Thread.java:748)
"surefire-forkedjvm-ping-30s" daemon prio=5 tid=9 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
"Thread-15" daemon prio=5 tid=30 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdds.scm.safemode.SafeModeHandler.lambda$onMessage$0(SafeModeHandler.java:114)
at 
org.apache.hadoop.hdds.scm.safemode.SafeModeHandler$$Lambda$33/1541519391.run(Unknown
 Source)
at java.lang.Thread.run(Thread.java:748)
"process reaper" daemon prio=10 tid=10 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=306014=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-306014
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 04/Sep/19 02:23
Start Date: 04/Sep/19 02:23
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on pull request #1366: 
HDDS-1577. Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320548487
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ObjectsMap.java
 ##
 @@ -83,7 +83,7 @@ public void insertNewDatanode(UUID datanodeID, Set 
containerIDs)
*
* @param datanodeID - Datanode ID.
*/
-  void removeDatanode(UUID datanodeID) {
+  public void removeDatanode(UUID datanodeID) {
 
 Review comment:
   Sure. Updated.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 306014)
Time Spent: 2h 20m  (was: 2h 10m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2084) TestSecureOzoneManager#testSecureOmInitFailures is timing out

2019-09-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2084:
---

 Summary: TestSecureOzoneManager#testSecureOmInitFailures is timing 
out
 Key: HDDS-2084
 URL: https://issues.apache.org/jira/browse/HDDS-2084
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia



{code:java}
---
Test set: org.apache.hadoop.ozone.om.TestSecureOzoneManager
---
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 25.824 s <<< 
FAILURE! - in org.apache.hadoop.ozone.om.TestSecureOzoneManager
testSecureOmInitFailures(org.apache.hadoop.ozone.om.TestSecureOzoneManager)  
Time elapsed: 25.011 s  <<< ERROR!
java.lang.Exception: test timed out after 25000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure(Client.java:943)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:706)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy18.getOMCertificate(Unknown Source)
at 
org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB.getOMCertChain(SCMSecurityProtocolClientSideTranslatorPB.java:115)
at 
org.apache.hadoop.ozone.om.OzoneManager.getSCMSignedCert(OzoneManager.java:1543)
at 
org.apache.hadoop.ozone.om.OzoneManager.initializeSecurity(OzoneManager.java:1117)
at 
org.apache.hadoop.ozone.om.TestSecureOzoneManager.lambda$testSecureOmInitFailures$0(TestSecureOzoneManager.java:128)
at 
org.apache.hadoop.ozone.om.TestSecureOzoneManager$$Lambda$8/1494777578.call(Unknown
 Source)
at 
org.apache.hadoop.test.LambdaTestUtils.lambda$intercept$0(LambdaTestUtils.java:527)
at 
org.apache.hadoop.test.LambdaTestUtils$$Lambda$9/1108847704.call(Unknown Source)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:522)
at 
org.apache.hadoop.ozone.om.TestSecureOzoneManager.testSecureOmInitFailures(TestSecureOzoneManager.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2083) Fix TestQueryNode#testStaleNodesCount

2019-09-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2083:
---

 Summary: Fix TestQueryNode#testStaleNodesCount
 Key: HDDS-2083
 URL: https://issues.apache.org/jira/browse/HDDS-2083
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia
 Attachments: stacktrace.rtf

It appears this test is failing due to several threads in waiting state.

Attached complete stack trace.





--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2082) Fix flaky TestContainerStateMachineFailures#testApplyTransactionFailure

2019-09-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2082:
---

 Summary: Fix flaky 
TestContainerStateMachineFailures#testApplyTransactionFailure
 Key: HDDS-2082
 URL: https://issues.apache.org/jira/browse/HDDS-2082
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia



{code:java}
---
Test set: org.apache.hadoop.ozone.client.rpc.TestContainerStateMachineFailures
---
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 102.615 s <<< 
FAILURE! - in 
org.apache.hadoop.ozone.client.rpc.TestContainerStateMachineFailures
testApplyTransactionFailure(org.apache.hadoop.ozone.client.rpc.TestContainerStateMachineFailures)
  Time elapsed: 15.677 s  <<< FAILURE!
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.ozone.client.rpc.TestContainerStateMachineFailures.testApplyTransactionFailure(TestContainerStateMachineFailures.java:349)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

{code}




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2081) Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude

2019-09-03 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2081:
---

 Summary: Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude
 Key: HDDS-2081
 URL: https://issues.apache.org/jira/browse/HDDS-2081
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Dinesh Chitlangia



{code:java}
---
Test set: org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider
---
Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.374 s <<< 
FAILURE! - in org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider
testCreatePipelinesDnExclude(org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider)
  Time elapsed: 0.044 s  <<< ERROR!
org.apache.hadoop.hdds.scm.pipeline.InsufficientDatanodesException: Cannot 
create pipeline of factor 3 using 2 nodes.
at 
org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.create(RatisPipelineProvider.java:151)
at 
org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.testCreatePipelinesDnExclude(TestRatisPipelineProvider.java:182)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)


{code}




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921819#comment-16921819
 ] 

Soya Miyoshi edited comment on HDFS-14817 at 9/4/19 2:00 AM:
-

Changes were made to the command-line-parsing init() function so that:
- if one of the given arguments is either "-h" or "-help", it prints the usage 
without creating the Client;
- it does not print the usage when the first argument is "-hadoop_binary_path".

Also renamed the variable
CommandLine cliParser  -> 
CommandLine commandLine
since the variable cliParser is not a parser itself.


was (Author: soyamiyoshi):
Changes made to the command-line-parsing init() function, so that 
- if one of the arguments given is either "-h" or "-help", print the usage 
without creating Client. 
- it does not print the usage when the first argument is "-hadoop_binary_path".

Changed a variable name,
CommandLine cliParser  -> 
CommandLine commandLine
since the variable cliParser is not a parser itself.

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Attachments: HDFS-14817.001.patch
>
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921819#comment-16921819
 ] 

Soya Miyoshi commented on HDFS-14817:
-

Changes were made to the command-line-parsing init() function so that:
- if one of the given arguments is either "-h" or "-help", it prints the usage 
without creating the Client;
- it does not print the usage when the first argument is "-hadoop_binary_path".

Also renamed the variable
CommandLine cliParser  -> 
CommandLine commandLine
since the variable cliParser is not a parser itself.
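
A rough sketch of the parsing flow being described (illustrative only; the Options 
wiring and usage output are placeholders, not the actual patch):

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public final class ArgParseSketch {
  // Returns false when only the usage message should be printed.
  static boolean init(Options opts, String[] args) throws ParseException {
    // Handle -h/-help up front, before any Client is created.
    for (String arg : args) {
      if ("-h".equals(arg) || "-help".equals(arg)) {
        new HelpFormatter().printHelp("start-dynamometer-cluster.sh", opts);
        return false;
      }
    }
    // "-hadoop_binary_path" as the first argument is parsed like any other
    // option, so it no longer triggers the usage message.
    CommandLine commandLine = new GnuParser().parse(opts, args);
    return commandLine.hasOption("hadoop_binary_path");
  }
}
{code}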

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Attachments: HDFS-14817.001.patch
>
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soya Miyoshi updated HDFS-14817:

Status: Patch Available  (was: Open)

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Attachments: HDFS-14817.001.patch
>
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soya Miyoshi updated HDFS-14817:

Attachment: HDFS-14817.001.patch

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
> Attachments: HDFS-14817.001.patch
>
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soya Miyoshi reassigned HDFS-14817:
---

Assignee: Soya Miyoshi

> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Assignee: Soya Miyoshi
>Priority: Major
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soya Miyoshi updated HDFS-14817:

Description: 
When trying to launch the infrastructure application to begin the startup of 
the internal HDFS cluster as shown in the Manual Workload Launch section in 
[here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
 {code:|borderStyle=solid}
$ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
 -hadoop_binary_path hadoop-3.0.2.tar.gz \
 -conf_path my-hadoop-conf \
 -fs_image_dir hdfs:///fsimage \
 -block_list_path hdfs:///dyno/blocks

{code}


 its usage is always shown even if correct arguments are given, if 
`-hadoop_binary_path` is placed as a first argument for the script.

  was:
When trying to launch the infrastructure application to begin the startup of 
the internal HDFS cluster as shown in the Manual Workload Launch section in 
[here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]

 {code:|borderStyle=solid}

$ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
 -hadoop_binary_path hadoop-3.0.2.tar.gz \
 -conf_path my-hadoop-conf \
 -fs_image_dir hdfs:///fsimage \
 -block_list_path hdfs:///dyno/blocks

{code}


 its usage is always shown even if correct arguments are given, if 
`-hadoop_binary_path` is placed as a first argument for the script.


> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Priority: Major
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soya Miyoshi updated HDFS-14817:

Description: 
When trying to launch the infrastructure application to begin the startup of 
the internal HDFS cluster as shown in the Manual Workload Launch section in 
[here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]

 {code:|borderStyle=solid}

$ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
 -hadoop_binary_path hadoop-3.0.2.tar.gz \
 -conf_path my-hadoop-conf \
 -fs_image_dir hdfs:///fsimage \
 -block_list_path hdfs:///dyno/blocks

{code}


 its usage is always shown even if correct arguments are given, if 
`-hadoop_binary_path` is placed as a first argument for the script.

  was:
When trying to launch the infrastructure application to begin the startup of 
the internal HDFS cluster as shown in the Manual Workload Launch section in 
[here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]

 

 {{{code:|borderStyle=solid}}}

$ ./dynamometer-infra/bin/start-dynamometer-cluster.sh
 -hadoop_binary_path hadoop-3.0.2.tar.gz
 -conf_path my-hadoop-conf
 -fs_image_dir hdfs:///fsimage
 -block_list_path hdfs:///dyno/blocks{{}}

{{{code}}}

 

```
 $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh
 -hadoop_binary_path hadoop-3.0.2.tar.gz
 -conf_path my-hadoop-conf
 -fs_image_dir hdfs:///fsimage
 -block_list_path hdfs:///dyno/blocks
 ```
 its usage is always shown even if correct arguments are given,
 if `-hadoop_binary_path` is placed as the first argument for the script.


> [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct 
> arguments are given.
> ---
>
> Key: HDFS-14817
> URL: https://issues.apache.org/jira/browse/HDFS-14817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Soya Miyoshi
>Priority: Major
>
> When trying to launch the infrastructure application to begin the startup of 
> the internal HDFS cluster as shown in the Manual Workload Launch section in 
> [here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]
>  {code:|borderStyle=solid}
> $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh \
>  -hadoop_binary_path hadoop-3.0.2.tar.gz \
>  -conf_path my-hadoop-conf \
>  -fs_image_dir hdfs:///fsimage \
>  -block_list_path hdfs:///dyno/blocks
> {code}
>  its usage is always shown even if correct arguments are given, if 
> `-hadoop_binary_path` is placed as a first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14817) [Dynamometer] start-dynamometer-cluster.sh shows its usage even if correct arguments are given.

2019-09-03 Thread Soya Miyoshi (Jira)
Soya Miyoshi created HDFS-14817:
---

 Summary: [Dynamometer] start-dynamometer-cluster.sh shows its 
usage even if correct arguments are given.
 Key: HDFS-14817
 URL: https://issues.apache.org/jira/browse/HDFS-14817
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Soya Miyoshi


When trying to launch the infrastructure application to begin the startup of 
the internal HDFS cluster as shown in the Manual Workload Launch section in 
[here|https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-dynamometer/Dynamometer.html]

 

 {{{code:|borderStyle=solid}}}

$ ./dynamometer-infra/bin/start-dynamometer-cluster.sh
 -hadoop_binary_path hadoop-3.0.2.tar.gz
 -conf_path my-hadoop-conf
 -fs_image_dir hdfs:///fsimage
 -block_list_path hdfs:///dyno/blocks{{}}

{{{code}}}

 

```
 $ ./dynamometer-infra/bin/start-dynamometer-cluster.sh
 -hadoop_binary_path hadoop-3.0.2.tar.gz
 -conf_path my-hadoop-conf
 -fs_image_dir hdfs:///fsimage
 -block_list_path hdfs:///dyno/blocks
 ```
 its usage is always shown even if correct arguments are given,
 if `-hadoop_binary_path` is placed as the first argument for the script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14758) Decrease lease hard limit

2019-09-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921793#comment-16921793
 ] 

Wei-Chiu Chuang commented on HDFS-14758:


I think Kihwal is much more experienced than I am, so this sounds good to me.

> Decrease lease hard limit
> -
>
> Key: HDFS-14758
> URL: https://issues.apache.org/jira/browse/HDFS-14758
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: hemanthboyina
>Priority: Minor
>
> The hard limit is currently hard-coded to 1 hour. This also determines the 
> NN automatic lease recovery interval. Something like 20 min would make more 
> sense.
> After the 5 min soft limit, other clients can recover the lease. If no one 
> else takes the lease away, the original client can still renew the lease 
> within the hard limit. So even after a NN full GC of 8 minutes, leases can 
> still be valid.
> However, there is one risk in reducing the hard limit, e.g. to 20 
> min: if the NN crashes and the manual failover takes more than 20 minutes, 
> clients will abort.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13541) NameNode Port based selective encryption

2019-09-03 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921781#comment-16921781
 ] 

Chen Liang commented on HDFS-13541:
---

Thanks [~shv]! I've pushed v003 patch of branch-2, with the white space fixed.

> NameNode Port based selective encryption
> 
>
> Key: HDFS-13541
> URL: https://issues.apache.org/jira/browse/HDFS-13541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, security
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-13541-branch-2.001.patch, 
> HDFS-13541-branch-2.002.patch, HDFS-13541-branch-2.003.patch, 
> HDFS-13541-branch-3.1.001.patch, HDFS-13541-branch-3.1.002.patch, 
> HDFS-13541-branch-3.2.001.patch, HDFS-13541-branch-3.2.002.patch, NameNode 
> Port based selective encryption-v1.pdf
>
>
> Here at LinkedIn, one issue we face is that we need to enforce different 
> security requirement based on the location of client and the cluster. 
> Specifically, for clients from outside of the data center, it is required by 
> regulation that all traffic must be encrypted. But for clients within the 
> same data center, unencrypted connections are more desired to avoid the high 
> encryption overhead. 
> HADOOP-10221 introduced pluggable SASL resolver, based on which HADOOP-10335 
> introduced WhitelistBasedResolver which solves the same problem. However we 
> found it difficult to fit into our environment for several reasons. In this 
> JIRA, on top of the pluggable SASL resolver, *we propose a different approach of 
> running RPC on two ports on the NameNode; the two ports will enforce 
> encrypted and unencrypted connections respectively, and subsequent 
> DataNode access will simply follow the same encrypted/unencrypted 
> behaviour*. Then, by blocking the unencrypted port on the datacenter 
> firewall, we can completely block unencrypted external access.
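
For context, a minimal sketch of the existing pluggable-resolver hook 
(HADOOP-10221/HADOOP-10335) that this proposal builds on; the class name and the 
isInternal() check are illustrative placeholders, and the port-based design in 
this JIRA ties the decision to the NameNode listener port instead:

{code:java}
import java.net.InetAddress;
import java.util.Map;
import java.util.TreeMap;
import javax.security.sasl.Sasl;
import org.apache.hadoop.security.SaslPropertiesResolver;

public class LocationBasedSaslResolver extends SaslPropertiesResolver {
  @Override
  public Map<String, String> getServerProperties(InetAddress clientAddress) {
    Map<String, String> props = new TreeMap<>();
    // Require encryption (auth-conf) for external clients, plain auth inside the DC.
    props.put(Sasl.QOP, isInternal(clientAddress) ? "auth" : "auth-conf");
    return props;
  }

  // Illustrative heuristic only; a real deployment would plug in its own notion
  // of "same data center".
  private boolean isInternal(InetAddress addr) {
    return addr.isSiteLocalAddress();
  }
}
{code}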



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-03 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921771#comment-16921771
 ] 

Arpit Agarwal commented on HDFS-14305:
--

{quote}But there are techniques to avoid collisions by starting NNs in a 
certain order. Which we should document.
{quote}
This sounds somewhat unsafe.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> while {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
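
A small self-contained sketch of the arithmetic above (with Integer.MAX_VALUE 
scaled down to 100 and 2 NameNodes), showing how a negative initial serial number 
makes the two ranges overlap:

{code:java}
public final class SerialRangeSketch {
  public static void main(String[] args) {
    int maxValue = 100;                  // stand-in for Integer.MAX_VALUE
    int numNNs = 2;
    int intRange = maxValue / numNNs;    // 50
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex;       // 0 for nn1, 50 for nn2
      // serialNo % intRange lies in (-intRange, intRange) because the initial
      // serialNo can be any (possibly negative) integer.
      int low = -(intRange - 1) + nnRangeStart;
      int high = (intRange - 1) + nnRangeStart;
      System.out.println("nn" + (nnIndex + 1) + " -> [" + low + ", " + high + "]");
    }
    // Prints nn1 -> [-49, 49] and nn2 -> [1, 99]; both can emit values in [1, 49].
  }
}
{code}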



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2080) Document details regarding how to implement write request in OzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2080:
-
Description: 
This Jira is to add details regarding how to implement write requests in OM.

Some of the details can be taken from the design doc uploaded to HDDS-505 Jira.

Design doc link [Write request design doc 
link|https://issues.apache.org/jira/secure/attachment/12973260/Handling%20Write%20Requests%20with%20OM%20HA.pdf]

 

Also, document what considerations need to be taken care of when implementing the 
list* APIs.

  was:
This Jira is to add details regarding how to implement write requests in OM.

Some of the details can be taken from the design doc uploaded to HDDS-505 Jira.

 

Also, document what considerations need to be taken care of when implementing the 
list* APIs.


> Document details regarding how to implement write request in OzoneManager
> -
>
> Key: HDDS-2080
> URL: https://issues.apache.org/jira/browse/HDDS-2080
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> This Jira is to add details regarding how to implement write requests in OM.
> Some of the details can be taken from the design doc uploaded to HDDS-505 
> Jira.
> Design doc link [Write request design doc 
> link|https://issues.apache.org/jira/secure/attachment/12973260/Handling%20Write%20Requests%20with%20OM%20HA.pdf]
>  
> Also, document what considerations need to be taken care of when implementing 
> the list* APIs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2080) Document details regarding how to implement write request in OzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2080:
-
Description: 
This Jira is to add details regarding how to implement write requests in OM.

Some of the details can be taken from the design doc uploaded to HDDS-505 Jira.

 

Also, document what considerations need to be taken care of when implementing the 
list* APIs.

> Document details regarding how to implement write request in OzoneManager
> -
>
> Key: HDDS-2080
> URL: https://issues.apache.org/jira/browse/HDDS-2080
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> This Jira is to add details regarding how to implement write requests in OM.
> Some of the details can be taken from the design doc uploaded to HDDS-505 
> Jira.
>  
> Also, document what considerations need to be taken care of when implementing 
> the list* APIs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2080) Document details regarding how to implement write request in OzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2080:


 Summary: Document details regarding how to implement write request 
in OzoneManager
 Key: HDDS-2080
 URL: https://issues.apache.org/jira/browse/HDDS-2080
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2080) Document details regarding how to implement write request in OzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2080:
-
Parent: HDDS-505
Issue Type: Sub-task  (was: Bug)

> Document details regarding how to implement write request in OzoneManager
> -
>
> Key: HDDS-2080
> URL: https://issues.apache.org/jira/browse/HDDS-2080
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305940=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305940
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 22:31
Start Date: 03/Sep/19 22:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320505479
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -826,6 +828,28 @@ private VolumeList getVolumesByUser(String userNameKey)
 return count;
   }
 
+  @Override
+  public List getMultipartUploadKeys(
+  String volumeName, String bucketName, String prefix) throws IOException {
+List response = new ArrayList<>();
+TableIterator>
+iterator = getMultipartInfoTable().iterator();
+
+String prefixKey =
+OmMultipartUpload.getDbKey(volumeName, bucketName, prefix);
+iterator.seek(prefixKey);
+
+while (iterator.hasNext()) {
 
 Review comment:
   Here, we need to consider the table cache as well, now that the HA/non-HA code 
path is merged (HDDS-1909).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305940)
Time Spent: 6h 10m  (was: 6h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2019 started by Bharat Viswanadham.

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the UGI 
> is created with the S3 token, and the serviceName of the token is set to the 
> OMAddress; for the HA case, this should be set with all OM RPC addresses.
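
A hedged sketch of the intent (not the patch): set the token service to the full 
list of OM RPC addresses rather than a single one; the helper name and the comma 
separator are assumptions:

{code:java}
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public final class TokenServiceSketch {
  // e.g. omRpcAddresses = ["om1:9862", "om2:9862", "om3:9862"]
  public static <T extends TokenIdentifier> void setHaService(
      Token<T> token, List<String> omRpcAddresses) {
    token.setService(new Text(String.join(",", omRpcAddresses)));
  }
}
{code}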



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?focusedWorklogId=305938=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305938
 ]

ASF GitHub Bot logged work on HDDS-2064:


Author: ASF GitHub Bot
Created on: 03/Sep/19 22:25
Start Date: 03/Sep/19 22:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1398: 
HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is 
configured incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503702
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   Also try out test cases with multiple name services set. And for this test 
case, you can use ozone.om.node.id if needed. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305938)
Time Spent: 50m  (was: 40m)

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?focusedWorklogId=305935=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305935
 ]

ASF GitHub Bot logged work on HDDS-2064:


Author: ASF GitHub Bot
Created on: 03/Sep/19 22:23
Start Date: 03/Sep/19 22:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1398: 
HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is 
configured incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503175
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   We cannot add this condition here. Think of the case
   OZONE_OM_SERVICE_IDS_KEY = ns1,ns2
   After one iteration over a nameserviceID, if we are not able to find a match, 
we should not yet report an illegal configuration. We should try all name services 
and see whether the local OM matches any of them. 
   
   Also, we can eliminate the iteration of the for loop with 
OmUtils.emptyAsSingletonNull(omServiceIds)) and similarly for nameServiceIds, since 
for HA this is a required configuration that must be set.
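
A hedged sketch of the loop shape being suggested (purely illustrative; the real 
code would resolve ozone.om.address.<serviceId>.<nodeId> via 
OmUtils.emptyAsSingletonNull and the OM configuration, which are abstracted here 
behind the isLocal check):

{code:java}
import java.util.Collection;
import java.util.function.BiPredicate;

public final class OmHaConfigCheckSketch {
  // Only after trying every (serviceId, nodeId) pair should the caller decide
  // that the local OM matches no configured address and fail with an
  // illegal-configuration error.
  static boolean anyNodeMatchesLocalHost(Collection<String> omServiceIds,
      Collection<String> omNodeIds, BiPredicate<String, String> isLocal) {
    for (String serviceId : omServiceIds) {
      for (String nodeId : omNodeIds) {
        if (isLocal.test(serviceId, nodeId)) {
          return true;
        }
      }
    }
    return false;
  }
}
{code}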
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305935)
Time Spent: 0.5h  (was: 20m)

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?focusedWorklogId=305937=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305937
 ]

ASF GitHub Bot logged work on HDDS-2064:


Author: ASF GitHub Bot
Created on: 03/Sep/19 22:23
Start Date: 03/Sep/19 22:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1398: 
HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is 
configured incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503175
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   We cannot add this condition here. Think of the case
   OZONE_OM_SERVICE_IDS_KEY = ns1,ns2
   After one iteration over a nameserviceID, if we are not able to find any 
OM node matching that nameservice, we should not throw an illegal-configuration 
error. We should try all name services and see whether the local OM matches any 
of them. 
   
   Also, we can eliminate the iteration of the for loop with 
OmUtils.emptyAsSingletonNull(omServiceIds)) and similarly for nameServiceIds, since 
for HA this is a required configuration that must be set.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305937)
Time Spent: 40m  (was: 0.5h)

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14792) [SBN read] StanbyNode does not come out of safemode while adding new blocks.

2019-09-03 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14792:
---
Description: 
During startup StandbyNode reports that it needs additional X blocks to reach 
the threshold 1.. Where X is changing up and down.
This is because with fast tailing SBN adds new blocks from edits while DNs have 
not reported replicas yet. Being in SafeMode SBN counts new blocks towards the 
threshold and can stay in SafeMode for a long time.
By design, the purpose of startup SafeMode is to disallow modifications of the 
namespace and blocks map until all DN replicas are reported.

  was:
During startup StandbyNode reports that it needs additional X blocks to reach 
the threshold 1.. Where X is changing up and down.
This is because with fast tailing SBN adds new blocks from edits while DNs have 
not reported replicas yet. Being in SafeMode SBN counts new blocks towards the 
threshold and can stays in SafeMode for a long time.
By design, the purpose of startup SafeMode is to disallow modifications of the 
namespace and blocks map until all DNs replicas are reported.


> [SBN read] StanbyNode does not come out of safemode while adding new blocks.
> 
>
> Key: HDFS-14792
> URL: https://issues.apache.org/jira/browse/HDFS-14792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> During startup StandbyNode reports that it needs additional X blocks to reach 
> the threshold 1.. Where X is changing up and down.
> This is because with fast tailing SBN adds new blocks from edits while DNs 
> have not reported replicas yet. Being in SafeMode SBN counts new blocks 
> towards the threshold and can stay in SafeMode for a long time.
> By design, the purpose of startup SafeMode is to disallow modifications of 
> the namespace and blocks map until all DN replicas are reported.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-03 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921740#comment-16921740
 ] 

Konstantin Shvachko edited comment on HDFS-14305 at 9/3/19 10:01 PM:
-

I don't think we should pursue the bits approach. It is not scalable.
I actually would prefer to go back to computing ranges depending on the number 
of configured NameNodes as in HDFS-6440, just fix the issue with negative 
initial serial number. [~csun] you are right this can cause collisions when 
adding/removing NameNodes to the existing cluster. But there are techniques to 
avoid collisions by starting NNs in a certain order. Which we should document. 
In order to do that we should know the ranges for each node, so I created 
HDFS-14793.


was (Author: shv):
I don't think we should pursue the bits approach. It is not scalable.
I actually would prefer to go back to computing ranges depending in the number 
of configured NameNodes as in HDFS-6440, just fix the issue with negative 
initial serial number. [~csun] you are right this can cause collisions when 
adding/removing NameNodes to the existing cluster. But there are techniques to 
avoid collisions by starting NNs in a certain order. Which we should document. 
In order to do that we should know the ranges for each node, so I created 
HDFS-14793.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> while {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14793) BlockTokenSecretManager should LOG block token range it operates on.

2019-09-03 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921742#comment-16921742
 ] 

Konstantin Shvachko commented on HDFS-14793:


Hey [~hemanthboyina], please feel free to assign to yourself if you would like 
to fix it.

> BlockTokenSecretManager should LOG block token range it operates on.
> 
>
> Key: HDFS-14793
> URL: https://issues.apache.org/jira/browse/HDFS-14793
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
>
> At startup, log enough information to identify the range of block token keys 
> for the NameNode. This should make it easier to debug issues with block 
> tokens.
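
A minimal sketch of the kind of startup log this asks for (the field names
follow the formula quoted in HDFS-14305; the message format is only an
example, not the actual change):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustration only, not the actual BlockTokenSecretManager code.
public class BlockTokenRangeLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(BlockTokenRangeLogSketch.class);

  static void logRange(int nnIndex, int intRange, int nnRangeStart) {
    LOG.info("Block token serial number range for nnIndex {}: [{}, {}]",
        nnIndex, nnRangeStart, nnRangeStart + intRange - 1);
  }
}
{code}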



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-03 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921740#comment-16921740
 ] 

Konstantin Shvachko commented on HDFS-14305:


I don't think we should pursue the bits approach. It is not scalable.
I actually would prefer to go back to computing ranges depending in the number 
of configured NameNodes as in HDFS-6440, just fix the issue with negative 
initial serial number. [~csun] you are right this can cause collisions when 
adding/removing NameNodes to the existing cluster. But there are techniques to 
avoid collisions by starting NNs in a certain order. Which we should document. 
In order to do that we should know the ranges for each node, so I created 
HDFS-14793.
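
For reference, a minimal sketch of the range-based scheme this comment refers
to, with the initial serial number drawn inside the NameNode's own range so it
can never be negative (illustration only, not the actual
BlockTokenSecretManager code):

{code:java}
import java.util.concurrent.ThreadLocalRandom;

/**
 * Sketch of range-based serial numbers per NameNode. The range formula
 * follows the one quoted in the issue description; the non-negative start
 * value and the rotation that stays inside the range are the fix sketched
 * here.
 */
public class SerialRangeSketch {
  private final int intRange;
  private final int nnRangeStart;
  private int serialNo;

  SerialRangeSketch(int numNNs, int nnIndex) {
    this.intRange = Integer.MAX_VALUE / numNNs;
    this.nnRangeStart = intRange * nnIndex;
    // Start from a non-negative offset inside [nnRangeStart, nnRangeStart + intRange)
    this.serialNo = nnRangeStart
        + ThreadLocalRandom.current().nextInt(intRange);
  }

  int rollSerial() {
    // Keep the rotated value inside this NameNode's own range as well.
    serialNo = nnRangeStart + ((serialNo - nnRangeStart + 1) % intRange);
    return serialNo;
  }
}
{code}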

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> while {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305918=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305918
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:33
Start Date: 03/Sep/19 21:33
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487548
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
+  // datanode allocated with some allocation constrains compromised
+  @Metric private MutableCounterLong datanodeAllocationCompromiseCount;
+
+  public SCMPlacementMetrics() {
+  }
+
+  public static SCMPlacementMetrics create() {
 
 Review comment:
   can we add a helper to unregister the metrics?
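
A minimal sketch of such a helper, following the usual Hadoop metrics pattern
(the method name is an example only, and it assumes create() registers the
source under SOURCE_NAME):

{code:java}
// Sketch only; undoes the register(SOURCE_NAME, ...) call so the source can
// be cleanly removed in tests and on shutdown.
public void unRegister() {
  DefaultMetricsSystem.instance().unregisterSource(SOURCE_NAME);
}
{code}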
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305918)
Time Spent: 2h 40m  (was: 2.5h)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305916=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305916
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:32
Start Date: 03/Sep/19 21:32
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487548
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
+  // datanode allocated with some allocation constrains compromised
+  @Metric private MutableCounterLong datanodeAllocationCompromiseCount;
+
+  public SCMPlacementMetrics() {
+  }
+
+  public static SCMPlacementMetrics create() {
 
 Review comment:
   can we add a helper to unregister the metrics?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305916)
Time Spent: 2.5h  (was: 2h 20m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305914=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305914
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:31
Start Date: 03/Sep/19 21:31
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487256
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
 
 Review comment:
   datanodeAllocationSuccessCount -> datanodeChooseSuccessCount
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305914)
Time Spent: 2h 10m  (was: 2h)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305915=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305915
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:31
Start Date: 03/Sep/19 21:31
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487351
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
+  // datanode allocated with some allocation constrains compromised
+  @Metric private MutableCounterLong datanodeAllocationCompromiseCount;
 
 Review comment:
   datanodeAllocationCompromiseCount -> datanodeChooseFallbackCount
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305915)
Time Spent: 2h 20m  (was: 2h 10m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305911=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305911
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:30
Start Date: 03/Sep/19 21:30
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320486867
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
 
 Review comment:
   NIT: datanodeAllocationTryCount -> datanodeSelectAttemptCount
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305911)
Time Spent: 1h 50m  (was: 1h 40m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305912=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305912
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:30
Start Date: 03/Sep/19 21:30
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1361: HDDS-1553. 
Add metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320486867
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
 
 Review comment:
   NIT: datanodeAllocationTryCount -> datanodeChooseAttemptCount
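
Putting the rename suggestions from these review comments together, the
counter declarations would look roughly like this (using the later
datanodeChooseAttemptCount suggestion rather than datanodeSelectAttemptCount;
illustration only):

{code:java}
// Names as suggested in the review comments above; datanodeRequestCount is
// unchanged.
@Metric private MutableCounterLong datanodeRequestCount;
@Metric private MutableCounterLong datanodeChooseAttemptCount;
@Metric private MutableCounterLong datanodeChooseSuccessCount;
@Metric private MutableCounterLong datanodeChooseFallbackCount;
{code}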
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305912)
Time Spent: 2h  (was: 1h 50m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2018:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
> 
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server 
> address.
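
A minimal sketch of one way to do this (illustration only; the comma-separated
service format and helper names are assumptions, not the actual Ozone
implementation):

{code:java}
import java.util.List;
import org.apache.hadoop.io.Text;

/**
 * Sketch: encode all OM RPC addresses into the token's service field and
 * match a requested server address against any entry in that list.
 */
public class OmTokenServiceSketch {

  // e.g. ["om1.example.com:9862", "om2.example.com:9862", ...]
  static Text buildDtService(List<String> omRpcAddresses) {
    return new Text(String.join(",", omRpcAddresses));
  }

  // The selector match succeeds if the requested server address appears in
  // the multi-address service string.
  static boolean serviceMatches(Text tokenService, String serverAddress) {
    for (String addr : tokenService.toString().split(",")) {
      if (addr.equals(serverAddress)) {
        return true;
      }
    }
    return false;
  }
}
{code}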



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305882=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305882
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:06
Start Date: 03/Sep/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1371: 
HDDS-2018. Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305882)
Time Spent: 1h 40m  (was: 1.5h)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
> 
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server 
> address.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305881=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305881
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:06
Start Date: 03/Sep/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527640423
 
 
   Thank You @xiaoyuyao for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305881)
Time Spent: 1.5h  (was: 1h 20m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
> 
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server 
> address.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?focusedWorklogId=305880=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305880
 ]

ASF GitHub Bot logged work on HDDS-2064:


Author: ASF GitHub Bot
Created on: 03/Sep/19 21:05
Start Date: 03/Sep/19 21:05
Worklog Time Spent: 10m 
  Work Description: smengcl commented on issue #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#issuecomment-527640041
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305880)
Time Spent: 20m  (was: 10m)

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305867=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305867
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:37
Start Date: 03/Sep/19 20:37
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1371: HDDS-2018. Handle 
Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527629438
 
 
   +1 pending CI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305867)
Time Spent: 1h 20m  (was: 1h 10m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
> 
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server 
> address.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=305865=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305865
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:30
Start Date: 03/Sep/19 20:30
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-527626815
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305865)
Time Spent: 2h 10m  (was: 2h)

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to Key Metadata before we create the Key.
> This ensures that key is encrypted before writing.
> *Key Read Path(Decryption)*
> While reading the Key, we check for gdprEnabled=true and they get the 
> GDPRSymmetricKey based on secret/algorithm as fetched from Key Metadata.
> Create a stream to decrypt the key and pass it on to client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.
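
A minimal, self-contained sketch of the symmetric encrypt/decrypt step
described above (AES with a 16-byte secret is an assumption here; in the real
code the secret and algorithm come from the GDPRSymmetricKey stored in the key
metadata):

{code:java}
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

/**
 * Sketch of write-path encryption and read-path decryption of key content
 * with a symmetric secret. Illustration only, not the Ozone implementation.
 */
public class GdprCryptoSketch {

  static byte[] encrypt(byte[] plain, String secret, String algorithm)
      throws Exception {
    Cipher cipher = Cipher.getInstance(algorithm);
    cipher.init(Cipher.ENCRYPT_MODE,
        new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), algorithm));
    return cipher.doFinal(plain);
  }

  static byte[] decrypt(byte[] encrypted, String secret, String algorithm)
      throws Exception {
    Cipher cipher = Cipher.getInstance(algorithm);
    cipher.init(Cipher.DECRYPT_MODE,
        new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), algorithm));
    return cipher.doFinal(encrypted);
  }

  public static void main(String[] args) throws Exception {
    String secret = "0123456789abcdef";          // 16 bytes -> AES-128
    byte[] data = "key content".getBytes(StandardCharsets.UTF_8);
    byte[] enc = encrypt(data, secret, "AES");
    byte[] dec = decrypt(enc, secret, "AES");
    System.out.println(new String(dec, StandardCharsets.UTF_8));
    // Reading without the right secret yields garbage or an error, which
    // matches the "content is not as expected" check in the test above.
  }
}
{code}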



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305863=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305863
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:28
Start Date: 03/Sep/19 20:28
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527626117
 
 
   Opened jiras HDDS-2078 and HDDS-2079 for the secure cluster test failures.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305863)
Time Spent: 17h  (was: 16h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 17h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2079:
-
Description: 
[https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestSecureOzoneManager.txt]
  (was: 
[https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt])

> Fix TestSecureOzoneManager
> --
>
> Key: HDDS-2079
> URL: https://issues.apache.org/jira/browse/HDDS-2079
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestSecureOzoneManager.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1909:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 16h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2079:


Assignee: (was: Bharat Viswanadham)

> Fix TestSecureOzoneManager
> --
>
> Key: HDDS-2079
> URL: https://issues.apache.org/jira/browse/HDDS-2079
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2079:


 Summary: Fix TestSecureOzoneManager
 Key: HDDS-2079
 URL: https://issues.apache.org/jira/browse/HDDS-2079
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


[https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2078) Fix TestSecureOzoneCluster

2019-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2078:
-
Description: 
[https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2078
> URL: https://issues.apache.org/jira/browse/HDDS-2078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> [https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2078) Fix TestSecureOzoneCluster

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2078:


 Summary: Fix TestSecureOzoneCluster
 Key: HDDS-2078
 URL: https://issues.apache.org/jira/browse/HDDS-2078
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305860
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:24
Start Date: 03/Sep/19 20:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527624889
 
 
   Thank You @arp7 for the review.
   I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305860)
Time Spent: 16h 50m  (was: 16h 40m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16h 50m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305859
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:24
Start Date: 03/Sep/19 20:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1225: 
HDDS-1909. Use new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305859)
Time Spent: 16h 40m  (was: 16.5h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16h 40m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305847=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305847
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:07
Start Date: 03/Sep/19 20:07
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1225: HDDS-1909. Use new HA 
code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527618227
 
 
   +1 to commit assuming the integration test failures are unrelated.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305847)
Time Spent: 16.5h  (was: 16h 20m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16.5h
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305844=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305844
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:05
Start Date: 03/Sep/19 20:05
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1225: HDDS-1909. Use new HA 
code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527617350
 
 
   Thanks @bharatviswa504. The patch looks pretty good to me.
   
   +1
   
   One remaining point was to self-terminate the OM in the RocksDB update 
failure path. It can be done in a separate jira.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305844)
Time Spent: 16h 20m  (was: 16h 10m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16h 20m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305843=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305843
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 20:04
Start Date: 03/Sep/19 20:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#discussion_r320453906
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -115,7 +119,36 @@ public OMResponse submitRequest(RpcController controller,
   }
 }
   } else {
-return submitRequestDirectlyToOM(request);
+OMClientResponse omClientResponse = null;
+long index = 0L;
+try {
+  OMClientRequest omClientRequest =
+  OzoneManagerRatisUtils.createClientRequest(request);
+  if (omClientRequest != null) {
+request = omClientRequest.preExecute(ozoneManager);
+index = transactionIndex.incrementAndGet();
+omClientRequest =
+OzoneManagerRatisUtils.createClientRequest(request);
+omClientResponse =
+omClientRequest.validateAndUpdateCache(ozoneManager, index,
+ozoneManagerDoubleBuffer::add);
+  } else {
+return submitRequestDirectlyToOM(request);
+  }
+} catch(IOException ex) {
+  // As some of the preExecute returns error. So handle here.
+  return createErrorResponse(request, ex);
+}
+
+try {
+  omClientResponse.getFlushFuture().get();
+  LOG.trace("Future for {} is completed", request);
+} catch (ExecutionException | InterruptedException ex) {
+  // Do we need to terminate OM here?
 
 Review comment:
   Yes I would err on the side of safety and self-terminate the OM here.
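
   A minimal sketch of what self-termination in that catch block could look like, assuming Hadoop's `ExitUtil` is used; the exit code and message are placeholders, not the actual follow-up patch:
```java
// assumes: import org.apache.hadoop.util.ExitUtil;
try {
  omClientResponse.getFlushFuture().get();
  LOG.trace("Future for {} is completed", request);
} catch (ExecutionException | InterruptedException ex) {
  // A failed double-buffer flush leaves the on-disk OM state uncertain,
  // so err on the side of safety and terminate instead of serving
  // potentially stale state.
  ExitUtil.terminate(1, "OM double-buffer flush failed for request "
      + request.getCmdType() + ": " + ex.getMessage());
}
```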
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305843)
Time Spent: 16h 10m  (was: 16h)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16h 10m
>  Remaining Estimate: 0h
>
> This Jira is to use new HA code of OM in Non-HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2064:
-
Labels: pull-request-available  (was: )

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.
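
A rough sketch of the missing guard, assuming `loadOMHAConfigs()` keeps a `found` counter of matching local addresses; the exception type and wording below are illustrative, not the committed fix:
```java
// Hypothetical guard at the end of OzoneManager#loadOMHAConfigs():
if (found == 0) {
  // None of the configured ozone.om.address.<serviceId>.<nodeId> keys matched
  // a local address, so fail fast with a clear message instead of continuing
  // and hitting an NPE later in OzoneManagerRatisServer#newOMRatisServer.
  throw new IllegalArgumentException("ozone.om.service.ids is set, but no "
      + "ozone.om.address entry for this host matches a local address. "
      + "Please check the OM HA configuration.");
}
```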



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2064) OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?focusedWorklogId=305834=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305834
 ]

ASF GitHub Bot logged work on HDDS-2064:


Author: ASF GitHub Bot
Created on: 03/Sep/19 19:58
Start Date: 03/Sep/19 19:58
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #1398: [WIP] 
HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is 
configured incorrectly
URL: https://github.com/apache/hadoop/pull/1398
 
 
   The WIP patch prevents the NPE in the `TestOzoneManagerConfiguration` unit tests 
`testWrongConfigurationNoOMNodes` and `testWrongConfigurationNoOMAddrs`, but it 
still fails `testWrongConfiguration` and `testMultipleOMServiceIds`.
   
   I might need to place the fix logic somewhere other than `OzoneManager` to 
pass the other unit tests without modifying them. Please advise.

 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305834)
Remaining Estimate: 0h
Time Spent: 10m

> OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
> incorrectly
> 
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.
> Root cause:
> `OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305792=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305792
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 19:04
Start Date: 03/Sep/19 19:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527595115
 
 
   Thank You @xiaoyuyao for the review.
   Addressed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305792)
Time Spent: 1h 10m  (was: 1h)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When OM HA is enabled, when tokens are generated, the service name should be 
> set with address of all OM's.
>  
> Current without HA, it is set with Om RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM address.
>  # Update token selector to return tokens if there is a match with Service. 
> Because SaslRpcClient calls token selector with server address.
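
A minimal sketch of the first item, assuming the resolved OM RPC addresses are available; the class and method names are illustrative only, not the actual patch:
```java
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.apache.hadoop.io.Text;

public final class OmTokenServiceSketch {
  /** Join every OM RPC address into one comma-separated token service. */
  static Text buildDtService(List<InetSocketAddress> omRpcAddresses) {
    return new Text(omRpcAddresses.stream()
        .map(a -> a.getHostString() + ":" + a.getPort())
        .collect(Collectors.joining(",")));
  }

  public static void main(String[] args) {
    List<InetSocketAddress> addrs = Arrays.asList(
        new InetSocketAddress("om1", 9862),
        new InetSocketAddress("om2", 9862));
    System.out.println(buildDtService(addrs)); // om1:9862,om2:9862
  }
}
```
The token selector side would then treat a token as a match if the server address appears anywhere in that comma-separated service string.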



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305786=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305786
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:53
Start Date: 03/Sep/19 18:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320425899
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
+
+metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+try {
+
+  List multipartUploadKeys =
+  metadataManager
+  .getMultipartUploadKeys(volumeName, bucketName, prefix);
+
+  List collect = multipartUploadKeys.stream()
+  .map(OmMultipartUpload::from)
+  .map(upload -> {
+String dbKey = metadataManager
+.getOzoneKey(upload.getVolumeName(),
+upload.getBucketName(),
+upload.getKeyName());
+try {
+  Table openKeyTable =
+  metadataManager.getOpenKeyTable();
+
+  OmKeyInfo omKeyInfo =
+  openKeyTable.get(upload.getDbKey());
+  upload.setCreationTime(
+  Instant.ofEpochMilli(omKeyInfo.getCreationTime()));
+} catch (IOException e) {
+  LOG.warn(
+  "Open key entry for multipart upload record can be read  {}",
+  dbKey);
+}
+return upload;
+  })
+  .collect(Collectors.toList());
+
+  OmMultipartUploadList omMultipartUploadList =
+  new OmMultipartUploadList(collect);
+
+  return omMultipartUploadList;
+
+} catch (IOException ex) {
+  LOG.error("List Multipart Uploads Failed: volume: " + volumeName +
 
 Review comment:
   LOG.error("List Multipart Uploads Failed: volume: " + volumeName +
 "bucket: " + bucketName + "prefix: " + prefix, ex);
   to
   LOG.error("List Multipart Uploads Failed: volume: {} bucket: {} prefix: 
{}",volumeName, bucketName, prefix, ex);
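
   A short note on why the parameterized form is preferred: SLF4J only formats the message when the log level is enabled, so there is no string concatenation cost for suppressed levels, and a trailing Throwable argument such as `ex` is still printed with its full stack trace.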
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305786)
Time Spent: 6h  (was: 5h 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921659#comment-16921659
 ] 

Íñigo Goiri commented on HDFS-14777:


+1 on  [^HDFS-14777.004.patch].

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch, HDFS-14777.004.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly /opt/client # 
> hdfs dfsrouteradmin -update /test hacluster /test -readonly update: /test is 
> in a read only mount 
> pointorg.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in 
> a read only mount point at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305774=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305774
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:43
Start Date: 03/Sep/19 18:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1277: HDDS-1054. 
List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527587093
 
 
   When I run the command I see the output below; it is not showing 
bucketName, keyMarker and other fields.
   bash-4.2$ aws s3api --endpoint http://s3g:9878 list-multipart-uploads 
--bucket b1234 --prefix mpu
   {
   "Uploads": [
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:35.916Z", 
   "UploadId": 
"24eea7f4-52db-4a0f-978a-f06cb7a57657-102730037717565440", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:37.816Z", 
   "UploadId": 
"81c0e5c2-db11-4b11-a5f7-81f48bdbfb04-102730037842083841", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey1", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:39.259Z", 
   "UploadId": 
"4aab75b8-1954-4e8a-a658-0d403bcbc42f-102730037936717826", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey2", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }
   ]
   }
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305774)
Time Spent: 5.5h  (was: 5h 20m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305777=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305777
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:44
Start Date: 03/Sep/19 18:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320422434
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 ##
 @@ -6,9 +6,9 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
 
 Review comment:
   Unintended change?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305777)
Time Spent: 5h 50m  (was: 5h 40m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305775=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305775
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:43
Start Date: 03/Sep/19 18:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1277: HDDS-1054. 
List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527587093
 
 
   When I run the command I see the output below; it is not showing 
bucketName, keyMarker and other fields.
   ```
   bash-4.2$ aws s3api --endpoint http://s3g:9878 list-multipart-uploads 
--bucket b1234 --prefix mpu
   {
   "Uploads": [
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:35.916Z", 
   "UploadId": 
"24eea7f4-52db-4a0f-978a-f06cb7a57657-102730037717565440", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:37.816Z", 
   "UploadId": 
"81c0e5c2-db11-4b11-a5f7-81f48bdbfb04-102730037842083841", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey1", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:39.259Z", 
   "UploadId": 
"4aab75b8-1954-4e8a-a658-0d403bcbc42f-102730037936717826", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey2", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }
   ]
   }
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305775)
Time Spent: 5h 40m  (was: 5.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure is flaky

2019-09-03 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14816:
-
Attachment: HDFS-14816.001.patch
Status: Patch Available  (was: Open)

> TestFileCorruption#testCorruptionWithDiskFailure is flaky
> -
>
> Key: HDFS-14816
> URL: https://issues.apache.org/jira/browse/HDFS-14816
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14816.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14801) PrometheusMetricsSink: Better support for NNTop

2019-09-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921646#comment-16921646
 ] 

Hadoop QA commented on HDFS-14801:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-14801 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14801 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27772/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> PrometheusMetricsSink: Better support for NNTop
> ---
>
> Key: HDFS-14801
> URL: https://issues.apache.org/jira/browse/HDFS-14801
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: Screen Shot 2019-09-03 at 10.28.46.png
>
>
> Currently nntop metrics are flattened as 
> dfs.NNTopUserOpCounts.windowMs=.op=.user=.count.
> I'd like to make windowMs, op, and user labels instead of part of the metric 
> name, for more prometheus-friendly metrics.
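
For illustration with made-up values: a flattened name such as dfs.NNTopUserOpCounts.windowMs=60000.op=listStatus.user=alice.count could instead be exposed as something like nn_top_user_op_counts{window_ms="60000",op="listStatus",user="alice"} 42, which is much easier to aggregate and filter with PromQL label matchers.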



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?focusedWorklogId=305762=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305762
 ]

ASF GitHub Bot logged work on HDDS-2053:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:25
Start Date: 03/Sep/19 18:25
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-527580152
 
 
   Just try repeating the test run more than once in IntelliJ; you will be able 
to reproduce the metrics leak.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305762)
Time Spent: 1h  (was: 50m)

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305752=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305752
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:10
Start Date: 03/Sep/19 18:10
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320407326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
 
 Review comment:
   prefix should also not be null, as prefix is also required in 
ListMultipartUploadRequest in the proto.
   
   Also, here we are using "+" for concatenation, so if we pass null for 
prefix, the key will become /volume/bucket/null. The method below is called by 
getMultipartUploadKeys.
 public static String getDbKey(String volume, String bucket, String key) {
   return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket +
   OM_KEY_PREFIX + key;
 }
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305752)
Time Spent: 5h 20m  (was: 5h 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305751=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305751
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:10
Start Date: 03/Sep/19 18:10
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320408049
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ObjectsMap.java
 ##
 @@ -83,7 +83,7 @@ public void insertNewDatanode(UUID datanodeID, Set 
containerIDs)
*
* @param datanodeID - Datanode ID.
*/
-  void removeDatanode(UUID datanodeID) {
+  public void removeDatanode(UUID datanodeID) {
 
 Review comment:
   Should annotate these methods as @VisibleForTesting
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305751)
Time Spent: 2h  (was: 1h 50m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14806) Bootstrap standby may fail if used in-progress tailing

2019-09-03 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921630#comment-16921630
 ] 

Chen Liang commented on HDFS-14806:
---

Thanks for taking a look [~ayushtkn]! The config 
{{dfs.ha.tail-edits.qjm.rpc.max-txns}} was introduced in HDFS-13609. [~xkrogen] 
would you mind sharing some thoughts about exposing this config? 
Increasing the number dynamically is a very interesting idea, thanks for 
sharing. I can think of some ways to do this, but maybe we should have a 
separate Jira to discuss it.

> Bootstrap standby may fail if used in-progress tailing
> --
>
> Key: HDFS-14806
> URL: https://issues.apache.org/jira/browse/HDFS-14806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14806.001.patch
>
>
> One issue we came across was that if in-progress tailing is enabled, 
> bootstrap standby could fail.
> When in-progress tailing is enabled, Bootstrap uses the RPC mechanism to get 
> edits. There is a config {{dfs.ha.tail-edits.qjm.rpc.max-txns}} that sets an 
> upper bound on how many txnid can be included in one RPC call. The default is 
> 5000, meaning the bootstrapping NN (say NN1) can only pull at most 5000 edits 
> from a JN. However, as part of bootstrap, NN1 queries another NN (say NN2) for 
> NN2's current transactionID; NN2 may return a state that is > 5000 txnid ahead 
> of NN1's current image. But NN1 can only see 5000 more txnid from the JNs. At 
> this point NN1 panics, because the txnid returned by the JNs is behind NN2's 
> returned state, and bootstrap then fails.
> Essentially, bootstrap standby can fail if both of the two following conditions 
> are met:
>  # in-progress tailing is enabled AND
>  # the bootstrapping NN is too far (>5000 txid) behind 
> Increasing the value of {{dfs.ha.tail-edits.qjm.rpc.max-txns}} to some super 
> large value allowed bootstrap to continue. But this is hardly the ideal 
> solution.
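
As a stopgap sketch only (explicitly not the fix this Jira asks for), the limit can be raised for the bootstrapping NameNode; the key is the one introduced by HDFS-13609 and the value below is arbitrary:
```java
// assumes: import org.apache.hadoop.conf.Configuration;
// Workaround sketch: raise the per-RPC edit batch limit so bootstrap can
// catch up; the proper fix should not require hand-tuning this value.
Configuration conf = new Configuration();
conf.setInt("dfs.ha.tail-edits.qjm.rpc.max-txns", 100000);
```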



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305753=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305753
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:10
Start Date: 03/Sep/19 18:10
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#issuecomment-527574643
 
 
   +1 overall pointed out a minor nit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305753)
Time Spent: 2h 10m  (was: 2h)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305750=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305750
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:08
Start Date: 03/Sep/19 18:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320407326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
 
 Review comment:
   prefix should also not be null, as prefix is also required in 
ListMultipartUploadRequest in the proto.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305750)
Time Spent: 5h 10m  (was: 5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305749=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305749
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:04
Start Date: 03/Sep/19 18:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320405582
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUpload.java
 ##
 @@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import java.time.Instant;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OM_KEY_PREFIX;
+
+/**
+ * Information about one initialized upload.
+ */
+public class OmMultipartUpload {
+
+  private String volumeName;
+
+  private String bucketName;
+
+  private String keyName;
+
+  private String uploadId;
+
+  private Instant creationTime;
+
+  public OmMultipartUpload(String volumeName, String bucketName,
+  String keyName, String uploadId) {
+this.volumeName = volumeName;
+this.bucketName = bucketName;
+this.keyName = keyName;
+this.uploadId = uploadId;
+  }
+
+  public OmMultipartUpload(String volumeName, String bucketName,
+  String keyName, String uploadId, Instant creationDate) {
+this.volumeName = volumeName;
+this.bucketName = bucketName;
+this.keyName = keyName;
+this.uploadId = uploadId;
+this.creationTime = creationDate;
+  }
+
+  public static OmMultipartUpload from(String key) {
+String[] split = key.split(OM_KEY_PREFIX);
+if (split.length < 5) {
+  throw new IllegalArgumentException("Key " + key
+  + " doesn't have enough segments to be a valid multpart upload key");
 
 Review comment:
   multpart -> multipart
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305749)
Time Spent: 5h  (was: 4h 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305747=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305747
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:03
Start Date: 03/Sep/19 18:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320405424
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -555,6 +555,16 @@ public OzoneOutputStream createFile(String keyName, long 
size,
 .listStatus(volumeName, name, keyName, recursive, startKey, 
numEntries);
   }
 
+  /**
+   * Return with the list of the in-flight multipart uploads.
+   *
+   * @param prefix Optional string to filter for the selected keys.
+   */
+  public OzoneMultipartUploadList listMultpartUploads(String prefix)
 
 Review comment:
   listMultpartUploads -> listMultipartUploads
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305747)
Time Spent: 4h 50m  (was: 4h 40m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305744=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305744
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 03/Sep/19 18:01
Start Date: 03/Sep/19 18:01
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1366: HDDS-1577. 
Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320404457
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing and 
network topology
+ * to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
 
 Review comment:
   Sounds good to me. 
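
   For readers skimming the thread, a compressed outline of the four steps from that javadoc; every helper name below is hypothetical and does not match the actual patch:
```java
// Hypothetical outline of the placement steps described above.
List<DatanodeDetails> choosePipelineNodes(int nodesRequired) {
  List<DatanodeDetails> healthy = getHealthyNodes();                  // 1. healthy nodes
  List<DatanodeDetails> viable = healthy.stream()
      .filter(dn -> hasEnoughSpace(dn) && !isHeavilyEngaged(dn))      // 2. drop full/overloaded nodes
      .collect(Collectors.toList());
  DatanodeDetails anchor = chooseAnchorByCapacity(viable);            // 3. capacity-weighted anchor
  List<DatanodeDetails> picked = new ArrayList<>();
  picked.add(anchor);
  picked.addAll(chooseNodesNearAnchor(anchor, viable, nodesRequired - 1)); // 4. topology-aware picks
  return picked;
}
```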
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305744)
Time Spent: 1h 50m  (was: 1h 40m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305737=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305737
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 17:54
Start Date: 03/Sep/19 17:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320401502
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
+
+metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+try {
+
+  List multipartUploadKeys =
+  metadataManager
+  .getMultipartUploadKeys(volumeName, bucketName, prefix);
+
+  List collect = multipartUploadKeys.stream()
+  .map(OmMultipartUpload::from)
+  .map(upload -> {
+String dbKey = metadataManager
+.getOzoneKey(upload.getVolumeName(),
+upload.getBucketName(),
+upload.getKeyName());
+try {
+  Table openKeyTable =
+  metadataManager.getOpenKeyTable();
+
+  OmKeyInfo omKeyInfo =
+  openKeyTable.get(upload.getDbKey());
 
 Review comment:
   Here we are reading openKeyTable only to get the creation time. If we had 
this information in omMultipartKeyInfo, we could avoid the DB calls to 
openKeyTable. 
   
   To do this, we can set creationTime in OmMultipartKeyInfo during 
initiateMultipartUpload. That way, we can get all the required information 
from the MultipartKeyInfo table.
   
   Also, StorageClass is missing from the returned OmMultipartUpload, even 
though listMultipartUploads is expected to show StorageClass information. For 
this, we can return replicationType and, depending on its value, set 
StorageClass in the listMultipartUploads response.
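
   A rough sketch of that suggestion, assuming OmMultipartKeyInfo gains a creation-time field; the constructor argument and table accessors below are partly hypothetical:
```java
// Hypothetical sketch: persist the creation time with the multipart record
// at initiate time, so listMultipartUploads never has to read openKeyTable.
long creationTime = Time.now();
OmMultipartKeyInfo multipartKeyInfo =
    new OmMultipartKeyInfo(uploadID, creationTime, new HashMap<>()); // extra ctor arg is hypothetical
metadataManager.getMultipartInfoTable().put(multipartKey, multipartKeyInfo);
```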
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305737)
Time Spent: 4h 40m  (was: 4.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement in ozone to list of in-progress multipart uploads 
> in a bucket.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305733=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305733
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 17:52
Start Date: 03/Sep/19 17:52
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#discussion_r320400072
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSelector.java
 ##
 @@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.security;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.token.Token;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.charset.StandardCharsets;
+import java.util.Collections;
+
+import static org.apache.hadoop.ozone.security.OzoneTokenIdentifier.KIND_NAME;
+
+/**
+ * Class to test OzoneDelegationTokenSelector.
+ */
+public class TestOzoneDelegationTokenSelector {
+
+
+  @Test
+  public void testTokenSelector() {
+
+// set dummy details for identifier and password in token.
+byte[] identifier =
+RandomStringUtils.randomAlphabetic(10)
+.getBytes(StandardCharsets.UTF_8);
+byte[] password =
+RandomStringUtils.randomAlphabetic(10)
+.getBytes(StandardCharsets.UTF_8);
+
+Token tokenIdentifierToken =
+new Token<>(identifier, password, KIND_NAME, getService());
+
+OzoneDelegationTokenSelector ozoneDelegationTokenSelector =
+new OzoneDelegationTokenSelector();
+
+Text service = new Text("om1:9862");
+
+Token selectedToken =
+   ozoneDelegationTokenSelector.selectToken(service,
+   Collections.singletonList(tokenIdentifierToken));
+
+
+Assert.assertNotNull(selectedToken);
+
+
+tokenIdentifierToken.setService(new Text("om1:9863"));
+selectedToken =
+ozoneDelegationTokenSelector.selectToken(service,
+Collections.singletonList(tokenIdentifierToken));
+
+Assert.assertNull(selectedToken);
+
+service = new Text("om1:9863");
+selectedToken =
+ozoneDelegationTokenSelector.selectToken(service,
+Collections.singletonList(tokenIdentifierToken));
 
 Review comment:
   Can we define a variable to avoid creating the list three times?
Collections.singletonList(tokenIdentifierToken)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305733)
Time Spent: 1h  (was: 50m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server address.
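
As a rough, self-contained sketch of the intent (not the actual patch; the class and
method names below are assumptions): the delegation token service could join every
configured OM RPC address, and the selector side could accept a token whose service
list contains the address the client is currently talking to.

import org.apache.hadoop.io.Text;
import java.util.Arrays;
import java.util.List;

public final class OmHaDtServiceSketch {

  // In practice the addresses would come from the OM HA config in ozone-site.xml.
  static Text buildDtService(List<String> omRpcAddresses) {
    return new Text(String.join(",", omRpcAddresses));
  }

  // Token selector side: match if any address in the token's service equals
  // the server address SaslRpcClient asked for.
  static boolean serviceMatches(Text tokenService, Text requestedAddress) {
    return Arrays.asList(tokenService.toString().split(","))
        .contains(requestedAddress.toString());
  }

  public static void main(String[] args) {
    Text dtService = buildDtService(Arrays.asList("om1:9862", "om2:9862", "om3:9862"));
    System.out.println(serviceMatches(dtService, new Text("om2:9862"))); // prints true
  }
}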



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305730=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305730
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 17:51
Start Date: 03/Sep/19 17:51
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#discussion_r320400072
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSelector.java
 ##
 @@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.security;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.token.Token;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.charset.StandardCharsets;
+import java.util.Collections;
+
+import static org.apache.hadoop.ozone.security.OzoneTokenIdentifier.KIND_NAME;
+
+/**
+ * Class to test OzoneDelegationTokenSelector.
+ */
+public class TestOzoneDelegationTokenSelector {
+
+
+  @Test
+  public void testTokenSelector() {
+
+// set dummy details for identifier and password in token.
+byte[] identifier =
+RandomStringUtils.randomAlphabetic(10)
+.getBytes(StandardCharsets.UTF_8);
+byte[] password =
+RandomStringUtils.randomAlphabetic(10)
+.getBytes(StandardCharsets.UTF_8);
+
+Token tokenIdentifierToken =
+new Token<>(identifier, password, KIND_NAME, getService());
+
+OzoneDelegationTokenSelector ozoneDelegationTokenSelector =
+new OzoneDelegationTokenSelector();
+
+Text service = new Text("om1:9862");
+
+Token selectedToken =
+   ozoneDelegationTokenSelector.selectToken(service,
+   Collections.singletonList(tokenIdentifierToken));
+
+
+Assert.assertNotNull(selectedToken);
+
+
+tokenIdentifierToken.setService(new Text("om1:9863"));
+selectedToken =
+ozoneDelegationTokenSelector.selectToken(service,
+Collections.singletonList(tokenIdentifierToken));
+
+Assert.assertNull(selectedToken);
+
+service = new Text("om1:9863");
+selectedToken =
+ozoneDelegationTokenSelector.selectToken(service,
+Collections.singletonList(tokenIdentifierToken));
 
 Review comment:
   Can we define a variable to avoid creating the list three times?
Collections.singletonList(tokenIdentifierToken)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305730)
Time Spent: 50m  (was: 40m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server address.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Created] (HDFS-14816) TestFileCorruption#testCorruptionWithDiskFailure is flaky

2019-09-03 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-14816:


 Summary: TestFileCorruption#testCorruptionWithDiskFailure is flaky
 Key: HDFS-14816
 URL: https://issues.apache.org/jira/browse/HDFS-14816
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina
Assignee: hemanthboyina






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305718=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305718
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 03/Sep/19 17:29
Start Date: 03/Sep/19 17:29
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#discussion_r320390662
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
 ##
 @@ -179,9 +182,29 @@ private void createOMProxyIfNeeded(ProxyInfo proxyInfo,
   }
 
   public synchronized Text getCurrentProxyDelegationToken() {
 
 Review comment:
   This synchronized can be removed since we only set it once in the constructor.
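
A minimal sketch of the pattern being suggested, assuming the field really is assigned
only in the constructor (class and field names here are illustrative, not the real
OMFailoverProxyProvider code):

import org.apache.hadoop.io.Text;

class DelegationTokenServiceHolder {

  private final Text delegationTokenService;

  DelegationTokenServiceHolder(Text delegationTokenService) {
    this.delegationTokenService = delegationTokenService;
  }

  // No synchronized needed: a final field set once in the constructor is
  // safely published and never changes afterwards.
  Text getCurrentProxyDelegationToken() {
    return delegationTokenService;
  }
}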
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305718)
Time Spent: 40m  (was: 0.5h)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses.
>  # Update the token selector to return tokens when there is a match with the 
> service, because SaslRpcClient calls the token selector with the server address.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


