[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=296888&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296888
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 18/Aug/19 05:34
Start Date: 18/Aug/19 05:34
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1277: HDDS-1054. 
List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-522292705
 
 
   For the list API, we have different parameters like key-marker, max-uploads, 
and a few others.
   Are we planning to do this in a new Jira?
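   For context, those are the pagination parameters of the S3 
ListMultipartUploads API that an Ozone implementation would mirror. A minimal 
sketch against the AWS SDK for Java v1 (the bucket name and page size here are 
illustrative, not from the patch):
{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUploadListing;

public class ListUploadsExample {
  public static void main(String[] args) {
    // Plain client; endpoint/credentials setup omitted for brevity.
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    ListMultipartUploadsRequest req =
        new ListMultipartUploadsRequest("my-bucket") // hypothetical bucket
            .withMaxUploads(100);                    // page size (max-uploads)

    MultipartUploadListing listing;
    do {
      listing = s3.listMultipartUploads(req);
      listing.getMultipartUploads().forEach(u ->
          System.out.println(u.getKey() + " " + u.getUploadId()));
      // key-marker and upload-id-marker drive pagination between pages.
      req.setKeyMarker(listing.getNextKeyMarker());
      req.setUploadIdMarker(listing.getNextUploadIdMarker());
    } while (listing.isTruncated());
  }
}
{code}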
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296888)
Time Spent: 1h 10m  (was: 1h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=296887&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296887
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 18/Aug/19 05:31
Start Date: 18/Aug/19 05:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r314968679
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadCompleteList.java
 ##
 @@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Part;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * This class represents multipart list, which is required for
+ * CompleteMultipart upload request.
+ */
+public class OmMultipartUploadCompleteList {
+
+  private final TreeMap<String, Part> multipartMap;
+
+  /**
+   * Construct OmMultipartUploadList which holds multipart map which contains
 
 Review comment:
   OmMultipartUploadList  -> OmMultipartUploadCompleteList
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296887)
Time Spent: 1h  (was: 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1972?focusedWorklogId=296886&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296886
 ]

ASF GitHub Bot logged work on HDDS-1972:


Author: ASF GitHub Bot
Created on: 18/Aug/19 05:21
Start Date: 18/Aug/19 05:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1304: HDDS-1972. 
Provide example ha proxy with multiple s3 servers back end.
URL: https://github.com/apache/hadoop/pull/1304#issuecomment-522292190
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296886)
Time Spent: 2h 40m  (was: 2.5h)

> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall provide docker-compose files where we start 3 S3 
> gateway servers, and ha-proxy is used to load-balance these S3 Gateway 
> Servers.
>  
> In this Jira, all proxy configurations are hardcoded; we can make 
> improvements to scale and configure automatically with environment variables 
> as a future improvement. This is just a starter example.
>  
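For illustration, a minimal sketch of such a hardcoded ha-proxy configuration; 
the backend host names are illustrative, and 9878 is assumed to be the default 
S3 Gateway HTTP port:
{noformat}
frontend s3g_front
    mode http
    bind *:9878
    default_backend s3g_back

backend s3g_back
    mode http
    balance roundrobin
    server s3g1 s3g1:9878 check
    server s3g2 s3g2:9878 check
    server s3g3 s3g3:9878 check
{noformat}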



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296884
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 18/Aug/19 05:14
Start Date: 18/Aug/19 05:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1311: 
HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#discussion_r314968409
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DNCertificateClient.java
 ##
 @@ -25,20 +25,26 @@
 import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+
+import java.nio.file.Paths;
+
 /**
  * Certificate client for DataNodes.
  */
 public class DNCertificateClient extends DefaultCertificateClient {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(DNCertificateClient.class);
+
+  public static final String COMPONENT_NAME = Paths.get("dn").toString();
 
 Review comment:
   Minor NIT: Here we can directly assign the component name instead of 
calling Paths.get("dn").toString(), as that also just returns "dn".
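   A before/after sketch of that suggestion:
{code:java}
// Before: a filesystem round-trip that is a no-op for a single segment.
public static final String COMPONENT_NAME = Paths.get("dn").toString();

// After: Paths.get("dn").toString() just returns "dn", so assign directly.
public static final String COMPONENT_NAME = "dn";
{code}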
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296884)
Time Spent: 1h  (was: 50m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296885
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 18/Aug/19 05:14
Start Date: 18/Aug/19 05:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1311: 
HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#discussion_r314968413
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java
 ##
 @@ -39,13 +41,15 @@
   private static final Logger LOG =
   LoggerFactory.getLogger(OMCertificateClient.class);
 
+  public static final String COMPONENT_NAME = Paths.get("om").toString();
 
 Review comment:
   Same as above.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296885)
Time Spent: 1h 10m  (was: 1h)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909893#comment-16909893
 ] 

Hudson commented on HDDS-1938:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17142 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17142/])
HDDS-1938. Change omPort parameter type from String to int in (bharat: rev 
3bba8086e0e960cb5eea230ed3f8753a86e6d4f2)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java


> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.
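A hypothetical sketch of goal 1 (not the actual patch): parse the port out of 
the o3fs URI authority once and keep it as an int from then on; the 
default-port parameter is an assumption:
{code:java}
// Illustrative helper, not the real BasicOzoneFileSystem code.
private static int parseOmPort(String authority, int omPortDefault) {
  int idx = authority.lastIndexOf(':');
  if (idx < 0) {
    return omPortDefault;  // no explicit port in the URI authority
  }
  try {
    return Integer.parseInt(authority.substring(idx + 1));
  } catch (NumberFormatException e) {
    return omPortDefault;  // treat a non-numeric suffix as "no port"
  }
}
{code}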



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909892#comment-16909892
 ] 

Hudson commented on HDDS-1979:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17142 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17142/])
HDDS-1979. Fix checkstyle errors (#1312) (bharat: rev 
e61825682a1fe19ccf43fae5ff64beb078d8af62)
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java


> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1903) Use dynamic ports for SCM in TestSCMClientProtocolServer and TestSCMSecurityProtocolServer

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1903?focusedWorklogId=296883&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296883
 ]

ASF GitHub Bot logged work on HDDS-1903:


Author: ASF GitHub Bot
Created on: 18/Aug/19 04:58
Start Date: 18/Aug/19 04:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1303: 
HDDS-1903 : Use dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#discussion_r314968173
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
 ##
 @@ -32,10 +39,19 @@
 
   @Rule
   public Timeout timeout = new Timeout(1000 * 20);
+  private static int scmRpcSecurePort;
+
+  @BeforeClass
+  public static void setupClass() throws Exception {
+scmRpcSecurePort = new ServerSocket(0).getLocalPort();
+  }
 
   @Before
   public void setUp() throws Exception {
 config = new OzoneConfiguration();
+config.set(OZONE_SCM_SECURITY_SERVICE_ADDRESS_KEY,
+StringUtils.join(OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT,
+":", String.valueOf(scmRpcSecurePort)));
 
 Review comment:
   My comment is not related to the usage of StringUtils; we can directly use 
OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT:0 instead of scmRpcSecurePort, 
which we get from new ServerSocket(0).getLocalPort(). This way, during server 
start, it will choose an available free random port.
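   A minimal sketch of this suggestion (assuming the test later reads the 
actual bound port back from the started server; the constants are the ones 
from the diff above):
{code:java}
// Sketch only: bind to port 0 so the OS assigns a free port at server
// start, avoiding the probe-then-bind race of new ServerSocket(0).
config = new OzoneConfiguration();
config.set(OZONE_SCM_SECURITY_SERVICE_ADDRESS_KEY,
    OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT + ":0");
{code}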
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296883)
Time Spent: 2h 10m  (was: 2h)

> Use dynamic ports for SCM in TestSCMClientProtocolServer and 
> TestSCMSecurityProtocolServer
> --
>
> Key: HDDS-1903
> URL: https://issues.apache.org/jira/browse/HDDS-1903
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> We should use dynamic port for SCM in the following test-cases
> * TestSCMClientProtocolServer
> * TestSCMSecurityProtocolServer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1938.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296882&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296882
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 18/Aug/19 04:54
Start Date: 18/Aug/19 04:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1305: 
HDDS-1938. Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296882)
Time Spent: 2h  (was: 1h 50m)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1938?focusedWorklogId=296881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296881
 ]

ASF GitHub Bot logged work on HDDS-1938:


Author: ASF GitHub Bot
Created on: 18/Aug/19 04:54
Start Date: 18/Aug/19 04:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1305: HDDS-1938. 
Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
URL: https://github.com/apache/hadoop/pull/1305#issuecomment-522291200
 
 
   Thank You @smengcl for the contribution.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296881)
Time Spent: 1h 50m  (was: 1h 40m)

> Change omPort parameter type from String to int in 
> BasicOzoneFileSystem#createAdapter
> -
>
> Key: HDDS-1938
> URL: https://issues.apache.org/jira/browse/HDDS-1938
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Filesystem
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1938.001.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as an int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?focusedWorklogId=296880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296880
 ]

ASF GitHub Bot logged work on HDDS-1979:


Author: ASF GitHub Bot
Created on: 18/Aug/19 04:53
Start Date: 18/Aug/19 04:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1312: 
HDDS-1979. Fix checkstyle errors
URL: https://github.com/apache/hadoop/pull/1312
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296880)
Time Spent: 50m  (was: 40m)

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1979.
--
   Resolution: Fixed
Fix Version/s: 0.5.0

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-08-17 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14720:
-
Attachment: HDFS-14720.001.patch
Status: Patch Available  (was: Open)

> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14720.001.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to was deleted from the NameNode and the DN got the command after the file's 
> deletion. In this case the command should be ignored.
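A minimal sketch of the proposed guard (names are illustrative, not the actual 
patch):
{code:java}
// A NameNode-recorded length of Long.MAX_VALUE marks a block whose file
// was already deleted, so skip the transfer instead of reporting the
// local replica as a bad block.
if (nnRecordedLength == Long.MAX_VALUE) {
  LOG.debug("Block {} belongs to a deleted file; ignoring replication "
      + "command.", block);
  return;
}
{code}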



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-08-17 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14720:
-
Attachment: (was: HDFS-14720.001.patch)

> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14720.001.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to was deleted from the NameNode and the DN got the command after the file's 
> deletion. In this case the command should be ignored.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909879#comment-16909879
 ] 

Hadoop QA commented on HDDS-1977:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  6m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-hdds: The patch generated 0 new + 0 unchanged 
- 6 fixed = 0 total (was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} The patch passed checkstyle in hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
56s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
|   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
|   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2766/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1977 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977877/HDDS-1977.002.patch |

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-08-17 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Attachment: HDFS-14646.002.patch
Status: Patch Available  (was: Open)

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch, 
> HDFS-14646.002.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely 
> to the peer NN, and will not read the peer NN's reply until the put is 
> completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences; I tested it under 2.7.2 and the trunk version. 
> *1. In Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sent will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the size of the FsImage can 
> often reach about 30GB, which is indeed a big waste.
> *2. In trunk version (with Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and then the SNN will directly get an "Error 
> writing request body to server" exception, as below. Note this test needs a 
> relatively big FsImage (e.g. at the 10MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode; when it plans to put an FsImage to the peer NN, it needs to check 
> whether it really needs to put it at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is 
> the key point). If the peer NN does not reply with an HTTP_OK, it means the 
> local SNN should not put the image at this time.

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-08-17 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Open  (was: Patch Available)

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely 
> to the peer NN, and will not read the peer NN's reply until the put is 
> completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences; I tested it under 2.7.2 and the trunk version. 
> *1. In Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sent will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the size of the FsImage can 
> often reach about 30GB, which is indeed a big waste.
> *2. In trunk version (with Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and then the SNN will directly get an "Error 
> writing request body to server" exception, as below. Note this test needs a 
> relatively big FsImage (e.g. at the 10MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode; when it plans to put an FsImage to the peer NN, it needs to check 
> whether it really needs to put it at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is 
> the key point). If the peer NN does not reply with an HTTP_OK, it means the 
> local SNN should not put the image at this time.
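The upload path in the stack trace lives in TransferFsImage, but the 
"read the verdict before streaming the body" idea can be sketched with the 
JDK 11 HttpClient, whose Expect: 100-continue support makes the client wait 
for the server's acknowledgement before sending the body (the URL and file 
arguments below are hypothetical placeholders, not the actual patch):
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ImageUploadSketch {
  // putImageUrl and imageFile are hypothetical placeholders.
  static int upload(String putImageUrl, Path imageFile) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest put = HttpRequest.newBuilder(URI.create(putImageUrl))
        // Ask the peer NN to acknowledge before the multi-GB body is
        // streamed; a rejection then costs neither time nor bandwidth.
        .expectContinue(true)
        .PUT(HttpRequest.BodyPublishers.ofFile(imageFile))
        .build();
    HttpResponse<Void> resp =
        client.send(put, HttpResponse.BodyHandlers.discarding());
    return resp.statusCode();  // anything but HTTP_OK: skip this peer
  }
}
{code}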



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

[jira] [Commented] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909872#comment-16909872
 ] 

Hadoop QA commented on HDFS-14648:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
112 unchanged - 1 fixed = 114 total (was 113) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 3 new 
+ 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Incorrect lazy initialization and update of static field 
org.apache.hadoop.hdfs.ClientContext.deadNodeDetector in new 
org.apache.hadoop.hdfs.ClientContext(String, DfsClientConf, Configuration)  At 
ClientContext.java:of static field 
org.apache.hadoop.hdfs.ClientContext.deadNodeDetector in new 
org.apache.hadoop.hdfs.ClientContext(String, DfsClientConf, Configuration)  At 
ClientContext.java:[lines 143-144] |
|  |  VERY confusing to have methods 
org.apache.hadoop.hdfs.DFSStripedInputStream.getDFSClient() and 
org.apache.hadoop.hdfs.DFSInputStream.getDfsClient()  

[jira] [Comment Edited] (HDFS-14675) Increase Balancer Defaults Further

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909869#comment-16909869
 ] 

Wei-Chiu Chuang edited comment on HDFS-14675 at 8/18/19 2:26 AM:
-

Also worth noting HDP's HDFS balancer doc recommendation, where there are a 
background mode and a fast mode: 
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_hdfs-administration/content/recommended_configurations.html



was (Author: jojochuang):
Also worth noting the HDP's HDFS balancer doc recommendation: 
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_hdfs-administration/content/recommended_configurations.html


> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> raising concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.
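For reference, the two settings under discussion, shown at the suggested 
values (100 concurrent moves, 100 MB/s = 104857600 bytes/s) as hdfs-site.xml 
overrides:
{code:xml}
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>100</value>
</property>
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>104857600</value>
</property>
{code}
The bandwidth can also be raised on a running cluster with 
hdfs dfsadmin -setBalancerBandwidth 104857600, without restarting DataNodes.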



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14675) Increase Balancer Defaults Further

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909869#comment-16909869
 ] 

Wei-Chiu Chuang commented on HDFS-14675:


Also worth noting the HDP's HDFS balancer doc recommendation: 
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_hdfs-administration/content/recommended_configurations.html


> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> raising concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14675) Increase Balancer Defaults Further

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909867#comment-16909867
 ] 

Wei-Chiu Chuang commented on HDFS-14675:


+1

> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> raising concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909864#comment-16909864
 ] 

kevin su commented on HDDS-1979:


[~vivekratnavel]

Thanks for you contribution.

It looks like this issue duplicates HDDS-1977

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909864#comment-16909864
 ] 

kevin su edited comment on HDDS-1979 at 8/18/19 1:39 AM:
-

[~vivekratnavel]

Thanks for your contribution.

It looks like this issue duplicates HDDS-1977


was (Author: pingsutw):
[~vivekratnavel]

Thanks for you contribution.

It looks like this issue duplicate with HDDS-1977

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14744:
---

Assignee: CR Hota  (was: kevin su)

> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not 
> found for the default web user dr.who. The line should be logged at "error" 
> level for secured clusters; for unsecured clusters, we may want to just use 
> "debug", or else logs are filled up with this non-critical line:
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
>  
>  
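A minimal sketch of the level switch described above, assuming an slf4j LOG 
and a caught exception e (not the actual patch):
{code:java}
import org.apache.hadoop.security.UserGroupInformation;

if (UserGroupInformation.isSecurityEnabled()) {
  // Secured cluster: a missing group mapping is actionable.
  LOG.error("Cannot get the remote user: {}", e.getMessage());
} else {
  // Unsecured cluster: dr.who predictably has no primary group.
  LOG.debug("Cannot get the remote user: {}", e.getMessage());
}
{code}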



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDFS-14744:
---

Assignee: kevin su  (was: CR Hota)

> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: kevin su
>Priority: Major
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not 
> found for the default web user dr.who. The line should be logged at "error" 
> level for secured clusters; for unsecured clusters, we may want to log at 
> "debug" level, or else logs fill up with this non-critical line:
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Attachment: HDDS-1977.002.patch

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch, HDDS-1977.002.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}
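
For illustration, a hedged sketch of what the two violated rules mean, using 
made-up code rather than the actual ListPipelinesSubcommand contents:

{code:java}
import java.util.Arrays;
import java.util.List;

public class CheckstyleFixExample {
  public static void main(String[] args) {
    List<String> pipelines = Arrays.asList("pipeline-1", "pipeline-2");

    // ParenPad: "pipelines.forEach( System.out::println );" fails
    // because '(' is followed by whitespace; this form passes.
    pipelines.forEach(System.out::println);

    // LineLength: rather than one call chain longer than 80 columns,
    // break the expression so each physical line stays under the limit.
    String description = String.join(", ",
        pipelines.get(0), pipelines.get(1));
    System.out.println(description);
  }
}
{code}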



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?focusedWorklogId=296867=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296867
 ]

ASF GitHub Bot logged work on HDDS-1979:


Author: ASF GitHub Bot
Created on: 18/Aug/19 01:14
Start Date: 18/Aug/19 01:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1312: HDDS-1979. Fix 
checkstyle errors
URL: https://github.com/apache/hadoop/pull/1312#issuecomment-522282428
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 636 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 951 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 634 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 558 | the patch passed |
   | +1 | compile | 371 | the patch passed |
   | +1 | javac | 371 | the patch passed |
   | +1 | checkstyle | 35 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 6 fixed = 0 total (was 6) |
   | +1 | checkstyle | 39 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 190 | the patch passed |
   | +1 | findbugs | 651 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 336 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2587 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8610 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOzoneManagerRestart |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1312/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 53527d3f04c4 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d873ddd |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1312/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1312/1/testReport/ |
   | Max. process+thread count | 4898 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/tools U: hadoop-hdds/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1312/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296867)
Time Spent: 40m  (was: 0.5h)

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian

[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296865=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296865
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 18/Aug/19 01:00
Start Date: 18/Aug/19 01:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-522281782
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 139 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for branch |
   | +1 | mvninstall | 719 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 435 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 636 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 560 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 753 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | +1 | findbugs | 654 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 343 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2286 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8550 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1311 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux aa8b478b053f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d873ddd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/1/testReport/ |
   | Max. process+thread count | 4571 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296865)
Time Spent: 50m  (was: 40m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao

[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296864=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296864
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 18/Aug/19 00:44
Start Date: 18/Aug/19 00:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-522281155
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 607 | trunk passed |
   | +1 | compile | 363 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 819 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 417 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 609 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 544 | the patch passed |
   | +1 | compile | 363 | the patch passed |
   | +1 | javac | 363 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 637 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1792 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7414 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1311 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6f13f3305561 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d873ddd |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/2/testReport/ |
   | Max. process+thread count | 5411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1311/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296864)
Time Spent: 40m  (was: 0.5h)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h

[jira] [Commented] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909855#comment-16909855
 ] 

Lisheng Sun commented on HDFS-14648:


Uploaded the v002 patch.

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14648.001.patch, HDFS-14648.002.patch
>
>
> This Jira constructs the DeadNodeDetector state machine model. The functions 
> it implements are as follows:
>  # After a DFSInputStream detects that a DataNode has died, it puts the node 
> in DeadNodeDetector and shares this information with the other streams in 
> the same DFSClient. The other DFSInputStreams will then not read from this 
> DataNode.
>  # DeadNodeDetector also tracks the DFSInputStream references to each 
> DataNode. When a DFSInputStream closes, DeadNodeDetector removes its 
> references. If a dead node in DeadNodeDetector is no longer referenced by 
> any DFSInputStream, it is removed from DeadNodeDetector.
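
A minimal sketch of this sharing scheme, using simplified string IDs; the 
class below is hypothetical, not the actual patch:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// One instance per DFSClient, shared by all of its DFSInputStreams.
public class DeadNodeDetectorSketch {
  // DataNodes currently considered dead, visible to every stream.
  private final Set<String> deadNodes = ConcurrentHashMap.newKeySet();
  // Which streams still reference which DataNodes.
  private final Map<String, Set<String>> nodesByStream =
      new ConcurrentHashMap<>();

  // A stream reports a dead node; the other streams now avoid it.
  public void reportDeadNode(String streamId, String datanode) {
    deadNodes.add(datanode);
    nodesByStream.computeIfAbsent(streamId,
        k -> ConcurrentHashMap.newKeySet()).add(datanode);
  }

  public boolean isDead(String datanode) {
    return deadNodes.contains(datanode);
  }

  // On stream close, drop its references; a dead node that no stream
  // references anymore is removed from the detector.
  public void streamClosed(String streamId) {
    Set<String> released = nodesByStream.remove(streamId);
    if (released == null) {
      return;
    }
    for (String dn : released) {
      boolean stillReferenced = nodesByStream.values().stream()
          .anyMatch(s -> s.contains(dn));
      if (!stillReferenced) {
        deadNodes.remove(dn);
      }
    }
  }
}
{code}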



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Status: Patch Available  (was: Open)

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14648.001.patch, HDFS-14648.002.patch
>
>
> This Jira constructs the DeadNodeDetector state machine model. The functions 
> it implements are as follows:
>  # After a DFSInputStream detects that a DataNode has died, it puts the node 
> in DeadNodeDetector and shares this information with the other streams in 
> the same DFSClient. The other DFSInputStreams will then not read from this 
> DataNode.
>  # DeadNodeDetector also tracks the DFSInputStream references to each 
> DataNode. When a DFSInputStream closes, DeadNodeDetector removes its 
> references. If a dead node in DeadNodeDetector is no longer referenced by 
> any DFSInputStream, it is removed from DeadNodeDetector.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?focusedWorklogId=296859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296859
 ]

ASF GitHub Bot logged work on HDDS-1979:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:50
Start Date: 17/Aug/19 22:50
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1312: HDDS-1979. Fix 
checkstyle errors
URL: https://github.com/apache/hadoop/pull/1312#issuecomment-522276244
 
 
   @anuengineer Please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296859)
Time Spent: 0.5h  (was: 20m)

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?focusedWorklogId=296858=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296858
 ]

ASF GitHub Bot logged work on HDDS-1979:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:49
Start Date: 17/Aug/19 22:49
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1312: HDDS-1979. Fix 
checkstyle errors
URL: https://github.com/apache/hadoop/pull/1312#issuecomment-522276228
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296858)
Time Spent: 20m  (was: 10m)

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?focusedWorklogId=296857=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296857
 ]

ASF GitHub Bot logged work on HDDS-1979:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:49
Start Date: 17/Aug/19 22:49
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1312: 
HDDS-1979. Fix checkstyle errors
URL: https://github.com/apache/hadoop/pull/1312
 
 
   This patch fixes checkstyle errors in ListPipelinesSubcommand.java
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296857)
Time Spent: 10m
Remaining Estimate: 0h

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1979:
-
Labels: pull-request-available  (was: )

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1979 started by Vivek Ratnavel Subramanian.

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1979:


 Summary: Fix checkstyle errors
 Key: HDDS-1979
 URL: https://issues.apache.org/jira/browse/HDDS-1979
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: SCM
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296856=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296856
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:37
Start Date: 17/Aug/19 22:37
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-522275643
 
 
   @anuengineer @avijayanhwx @swagle @xiaoyuyao Please review when you find time
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296856)
Time Spent: 0.5h  (was: 20m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296855=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296855
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:36
Start Date: 17/Aug/19 22:36
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1311: HDDS-1946. 
CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-522275624
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296855)
Time Spent: 20m  (was: 10m)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?focusedWorklogId=296854=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296854
 ]

ASF GitHub Bot logged work on HDDS-1946:


Author: ASF GitHub Bot
Created on: 17/Aug/19 22:36
Start Date: 17/Aug/19 22:36
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1311: 
HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311
 
 
   …etadata.dir
   
   The issue was that when OM and SCM are deployed on the same host with 
ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
the key/cert from OM will collide with SCM's.
   
   The solution implemented in this patch is to store certs in a subdirectory 
inside ozone.metadata.dir based on the component. Ozone Manager will store its 
certs in `${ozone.metadata.dir}/om/certs` and Datanode will store its certs in 
`${ozone.metadata.dir}/dn/certs` to avoid conflicts. This solution was 
discussed with @anuengineer and I thank him for his guidance.
   
   Testing done: 
   I tested the patch in Docker containers and verified that certs are now 
stored under the `${ozone.metadata.dir}/${component}/certs` path. I modified 
the unit tests and verified that all unit tests pass.
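   
A minimal sketch of the per-component layout described above, with 
illustrative names and paths rather than the patch's actual API:

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

public class CertLocationSketch {
  // Derive a component-scoped certs directory so OM and SCM on the
  // same host never share key/cert files.
  static Path certsDir(String ozoneMetadataDir, String component) {
    return Paths.get(ozoneMetadataDir, component, "certs");
  }

  public static void main(String[] args) {
    System.out.println(certsDir("/data/meta", "om"));  // /data/meta/om/certs
    System.out.println(certsDir("/data/meta", "scm")); // /data/meta/scm/certs
  }
}
{code}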

 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296854)
Time Spent: 10m
Remaining Estimate: 0h

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1946:
-
Labels: pull-request-available  (was: )

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1946 started by Vivek Ratnavel Subramanian.

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the key/cert from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13270) RBF: Router audit logger

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909827#comment-16909827
 ] 

Hadoop QA commented on HDFS-13270:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 23 
unchanged - 0 fixed = 24 total (was 23) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 6 new + 6 unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 47s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
|  |  Write to static field 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.auditLoggers 
from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer(Configuration, 
Router, ActiveNamenodeResolver, FileSubclusterResolver)  At 
RouterRpcServer.java:from instance method new 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer(Configuration, 
Router, ActiveNamenodeResolver, FileSubclusterResolver)  At 
RouterRpcServer.java:[line 407] |
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectorySecure |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
|   | 

[jira] [Commented] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909825#comment-16909825
 ] 

Hadoop QA commented on HDDS-1977:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  8m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdds: The patch generated 0 new + 0 unchanged 
- 6 fixed = 0 total (was 6) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} The patch passed checkstyle in hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
3s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 29s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
|   | hadoop.ozone.client.rpc.TestWatchForCommit |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 

[jira] [Commented] (HDFS-14574) [distcp] Add ability to increase the replication factor for fileList.seq

2019-08-17 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909821#comment-16909821
 ] 

hemanthboyina commented on HDFS-14574:
--

[~jojochuang], in DistCp we have preserve status (rbugpc..).
If we add an option for replication, it will override the preserved 
replication factor. Any suggestions about this?

> [distcp] Add ability to increase the replication factor for fileList.seq
> 
>
> Key: HDFS-14574
> URL: https://issues.apache.org/jira/browse/HDFS-14574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Wei-Chiu Chuang
>Assignee: hemanthboyina
>Priority: Major
>
> distcp creates fileList.seq with the default replication factor of 3.
> For large clusters running a distcp job with thousands of mappers, 3 
> replicas of the file listing file are not enough, because DataNodes easily 
> run out of their maximum number of xceivers.
>  
> It looks like we can pass in a distcp option and update the replication 
> factor when creating the sequence file writer: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L517-L521]
>  
> Like this:
> {code:java}
> return SequenceFile.createWriter(getConf(),
> SequenceFile.Writer.file(pathToListFile),
> SequenceFile.Writer.keyClass(Text.class),
> SequenceFile.Writer.valueClass(CopyListingFileStatus.class),
> SequenceFile.Writer.compression(SequenceFile.CompressionType.NONE),
> SequenceFile.Writer.replication((short)100)); <-- this line
> {code}
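
A hedged sketch of how the factor could be made configurable rather than 
hardcoded to 100; the conf key name below is hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ListingReplicationSketch {
  // Hypothetical key; the real option name would be settled in the patch.
  static final String LISTING_REPLICATION_KEY =
      "distcp.listing.file.replication";

  // Fall back to the current default of 3 when the option is unset.
  static short listingReplication(Configuration conf) {
    return (short) conf.getInt(LISTING_REPLICATION_KEY, 3);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt(LISTING_REPLICATION_KEY, 100);
    System.out.println(listingReplication(conf)); // 100
  }
}
{code}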



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14630) Configuration.getTimeDurationHelper() should not log time unit warning in info log.

2019-08-17 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909820#comment-16909820
 ] 

hemanthboyina commented on HDFS-14630:
--

Uploaded the patch. Please check, [~surendrasingh].

> Configuration.getTimeDurationHelper() should not log time unit warning in 
> info log.
> ---
>
> Key: HDFS-14630
> URL: https://issues.apache.org/jira/browse/HDFS-14630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14630.patch
>
>
> To solve the [HDFS-12920|https://issues.apache.org/jira/browse/HDFS-12920] 
> issue we configured "dfs.client.datanode-restart.timeout" without a time 
> unit. Now the log file is full of
> {noformat}
> 2019-06-22 20:13:14,605 | INFO  | pool-12-thread-1 | No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS 
> org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1409){noformat}
> No need to log this; just document the behavior in the property description.
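
For context, the message is avoided entirely when the configured value carries 
an explicit unit; a minimal sketch (the property name is real, the values are 
illustrative):

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class RestartTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A bare "30" triggers "No unit ... assuming SECONDS" when read;
    // an explicit suffix such as "30s" does not.
    conf.set("dfs.client.datanode-restart.timeout", "30s");
    long seconds = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);
    System.out.println(seconds); // 30
  }
}
{code}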



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-08-17 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14720:
-
Attachment: HDFS-14720.001.patch

> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14720.001.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to has been deleted from the NameNode and the DN received the command after 
> the file was deleted. In this case the command should be ignored.
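
A minimal sketch of the proposed guard, with illustrative names; the real 
check would live in the DataNode's block transfer path:

{code:java}
public class ReplicationCommandGuard {
  // Skip, rather than report as bad, a block whose NameNode-recorded
  // length is the Long.MAX_VALUE sentinel: the owning file is gone.
  static boolean shouldReplicate(long onDiskLength, long recordedLength) {
    if (recordedLength == Long.MAX_VALUE) {
      return false; // stale command for a deleted file; ignore it
    }
    return onDiskLength >= recordedLength;
  }

  public static void main(String[] args) {
    System.out.println(shouldReplicate(175085L, Long.MAX_VALUE)); // false
  }
}
{code}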



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13270) RBF: Router audit logger

2019-08-17 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-13270:
-
Attachment: HDFS-13270.003.patch

> RBF: Router audit logger
> 
>
> Key: HDFS-13270
> URL: https://issues.apache.org/jira/browse/HDFS-13270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-13270.001.patch, HDFS-13270.002.patch, 
> HDFS-13270.003.patch
>
>
> We can use a router audit logger to log the client info and command, because 
> FSNamesystem#AuditLogger currently records all clients as coming from the 
> router.
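
A minimal sketch of what a router-side audit entry could capture, with 
hypothetical names throughout:

{code:java}
public class RouterAuditLogSketch {
  // Build one audit line carrying the real client address instead of
  // letting the downstream NameNode attribute everything to the router.
  static String auditEntry(String user, String clientIp, String cmd,
      String src, boolean allowed) {
    return String.format("allowed=%b ugi=%s ip=%s cmd=%s src=%s",
        allowed, user, clientIp, cmd, src);
  }

  public static void main(String[] args) {
    System.out.println(
        auditEntry("alice", "10.0.0.7", "rename", "/a/b", true));
  }
}
{code}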



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909800#comment-16909800
 ] 

Hadoop QA commented on HDFS-10606:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 157 unchanged - 1 fixed = 157 total (was 158) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-10606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977868/HDFS-10606.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 8d7b4c634c6c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d873ddd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27546/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27546/testReport/ |
| Max. process+thread count | 1344 (vs. ulimit of 5500) |
| 

[jira] [Work logged] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1978?focusedWorklogId=296820=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296820
 ]

ASF GitHub Bot logged work on HDDS-1978:


Author: ASF GitHub Bot
Created on: 17/Aug/19 18:56
Start Date: 17/Aug/19 18:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1310: 
HDDS-1978. Create helper script to run blockade tests.
URL: https://github.com/apache/hadoop/pull/1310#discussion_r314956506
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/blockade.sh
 ##
 @@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+cd "$DIR/../../.." || exit 1
+
+OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g' | sed 's/^[ \t]*//')
+cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
+
+source ../compose/ozoneblockade/.env
 
 Review comment:
   shellcheck:1: note: Not following: ../compose/ozoneblockade/.env: 
openBinaryFile: does not exist (No such file or directory) [SC1091]
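   
   A common way to satisfy SC1091 when the sourced file only exists in the 
built distribution is a shellcheck directive on the `source` line (a sketch, 
not necessarily the fix adopted in this PR):
   
   ```bash
   # Sketch: tell shellcheck not to follow a file that is generated at
   # build time and absent at lint time.
   # shellcheck disable=SC1091
   source ../compose/ozoneblockade/.env
   ```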
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296820)
Time Spent: 20m  (was: 10m)

> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To run blockade tests as part of the Jenkins job, we need some kind of 
> helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1978?focusedWorklogId=296821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296821
 ]

ASF GitHub Bot logged work on HDDS-1978:


Author: ASF GitHub Bot
Created on: 17/Aug/19 18:56
Start Date: 17/Aug/19 18:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1310: HDDS-1978. 
Create helper script to run blockade tests.
URL: https://github.com/apache/hadoop/pull/1310#issuecomment-522262193
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | +1 | mvninstall | 590 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | pylint | 2 | Error running pylint. Please check pylint stderr files. |
   | +1 | shadedclient | 801 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 553 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 3 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 3 | There were no new pylint issues. |
   | -1 | shellcheck | 0 | The patch generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 701 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 99 | hadoop-hdds in the patch passed. |
   | +1 | unit | 279 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 3366 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1310 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
pylint |
   | uname | Linux 2b84cf533e11 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d873ddd |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/artifact/out/branch-pylint-stderr.txt
 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/artifact/out/patch-pylint-stderr.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/fault-injection-test/network-tests 
U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1310/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296821)
Time Spent: 0.5h  (was: 20m)

> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> To run blockade tests as part of the Jenkins job, we need some kind of 
> helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-13879) FileSystem: Add allowSnapshot, disallowSnapshot, getSnapshotDiffReport and getSnapshottableDirListing

2019-08-17 Thread hemanthboyina (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909786#comment-16909786
 ] 

hemanthboyina commented on HDFS-13879:
--

getSnapshottableDirListing returns SnapshottableDirectoryStatus.
SnapshottableDirectoryStatus.java lives in the client module, and there is no 
dependency from client to common.
SnapshottableDirectoryStatus internally uses DFSUtilClient, so we can't move 
it to common.
Any suggestions? [~jojochuang] [~smeng]
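
For reference, a minimal sketch of what the proposed additions could look 
like, following the convention of the existing snapshot methods on FileSystem 
(illustrative only, not the committed API):
{code:java}
public void allowSnapshot(Path path) throws IOException {
  throw new UnsupportedOperationException(
      getClass().getSimpleName() + " doesn't support allowSnapshot");
}

public void disallowSnapshot(Path path) throws IOException {
  throw new UnsupportedOperationException(
      getClass().getSimpleName() + " doesn't support disallowSnapshot");
}
{code}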

> FileSystem: Add allowSnapshot, disallowSnapshot, getSnapshotDiffReport and 
> getSnapshottableDirListing
> -
>
> Key: HDFS-13879
> URL: https://issues.apache.org/jira/browse/HDFS-13879
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Siyao Meng
>Assignee: hemanthboyina
>Priority: Major
>
> I wonder whether we should add allowSnapshot() and disallowSnapshot() to the 
> FileSystem abstract class.
> I think we should because createSnapshot(), renameSnapshot() and 
> deleteSnapshot() are already part of it.
> Any reason why we don't want to do this?
> Thanks!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1978:
--
Status: Patch Available  (was: Open)

> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To run blockade tests as part of the Jenkins job, we need some kind of 
> helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1978?focusedWorklogId=296814=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296814
 ]

ASF GitHub Bot logged work on HDDS-1978:


Author: ASF GitHub Bot
Created on: 17/Aug/19 17:59
Start Date: 17/Aug/19 17:59
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1310: 
HDDS-1978. Create helper script to run blockade tests.
URL: https://github.com/apache/hadoop/pull/1310
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296814)
Time Spent: 10m
Remaining Estimate: 0h

> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To run blockade tests as part of the Jenkins job, we need some kind of 
> helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1978:
-
Labels: pull-request-available  (was: )

> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>
> To run blockade tests as part of the Jenkins job, we need some kind of 
> helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1978) Create helper script to run blockade tests

2019-08-17 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1978:
-

 Summary: Create helper script to run blockade tests
 Key: HDDS-1978
 URL: https://issues.apache.org/jira/browse/HDDS-1978
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


To run blockade tests as part of the Jenkins job, we need some kind of helper script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Status: Patch Available  (was: Open)

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}
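
Both violation types are mechanical to fix; an illustrative before/after 
(not the actual patch):
{code:java}
// ParenPad -- '(' must not be followed by whitespace:
//   before: pipelines.stream().filter( p -> p.isOpen());
//   after:  pipelines.stream().filter(p -> p.isOpen());
// LineLength -- break lines that exceed 80 characters, e.g. by wrapping
// the argument list with a continuation indent on the next line.
{code}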



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1977:
---
Attachment: HDDS-1977.001.patch

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1977.001.patch
>
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909767#comment-16909767
 ] 

He Xiaoqiao commented on HDFS-10606:


[^HDFS-10606.002.patch] fixes checkstyle and javadoc.

> TrashPolicyDefault supports time of auto clean up can configured
> 
>
> Key: HDFS-10606
> URL: https://issues.apache.org/jira/browse/HDFS-10606
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-10606-branch-2.7.001.patch, HDFS-10606.001.patch, 
> HDFS-10606.002.patch
>
>
> TrashPolicyDefault currently cleans up Trash based on 
> [UTC|http://www.worldtimeserver.com/current_time_in_UTC.aspx], and the 
> cleanup always runs at 00:00 UTC. When a large amount of trash data has to 
> be auto-cleaned, it blocks the NN for a long time because of the global 
> lock; in the most serious situations it can make cron-job submissions fail. 
> Making the cleanup time configurable would avoid the impact on these cron 
> jobs at the default time.
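
For illustration only, the proposal amounts to reading a cleanup schedule 
from configuration; the property name below is hypothetical, not the one the 
patch defines:
{code:java}
// Hypothetical property name, for illustration only.
Configuration conf = new Configuration();
// e.g. "02:00" would run the emptier at 02:00 instead of the fixed 00:00 UTC.
String cleanupTime = conf.get("fs.trash.checkpoint.trigger.time", "00:00");
{code}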



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-10606:
---
Attachment: HDFS-10606.002.patch

> TrashPolicyDefault supports time of auto clean up can configured
> 
>
> Key: HDFS-10606
> URL: https://issues.apache.org/jira/browse/HDFS-10606
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-10606-branch-2.7.001.patch, HDFS-10606.001.patch, 
> HDFS-10606.002.patch
>
>
> TrashPolicyDefault currently cleans up Trash based on 
> [UTC|http://www.worldtimeserver.com/current_time_in_UTC.aspx], and the 
> cleanup always runs at 00:00 UTC. When a large amount of trash data has to 
> be auto-cleaned, it blocks the NN for a long time because of the global 
> lock; in the most serious situations it can make cron-job submissions fail. 
> Making the cleanup time configurable would avoid the impact on these cron 
> jobs at the default time.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HDDS-1977:
--

Assignee: kevin su

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Attachment: HDFS-14648.002.patch

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14648.001.patch, HDFS-14648.002.patch
>
>
> This Jira constructs the DeadNodeDetector state machine model. It implements 
> the following functions:
>  # After a DFSInputstream detects that a DataNode has died, it puts the node 
> in the DeadNodeDetector and shares this information with the other 
> DFSInputstreams in the same DFSClient, so they will not read from that 
> DataNode.
>  # The DeadNodeDetector also keeps, for each DataNode, references to the 
> DFSInputstreams that use it. When a DFSInputstream closes, the 
> DeadNodeDetector removes its references, and a dead node that is no longer 
> read by any DFSInputstream is removed from the DeadNodeDetector.
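
A minimal sketch of the sharing-and-reference idea described above 
(illustrative only; the class and method names are hypothetical, not the 
patch's API):
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class DeadNodeTracker {
  // DataNodes considered dead, shared by all streams of one DFSClient.
  private final Set<String> deadNodes = ConcurrentHashMap.newKeySet();
  // Which open streams still reference each dead DataNode.
  private final Map<String, Set<Object>> refs = new ConcurrentHashMap<>();

  void markDead(Object stream, String datanodeId) {
    deadNodes.add(datanodeId);
    refs.computeIfAbsent(datanodeId, k -> ConcurrentHashMap.newKeySet())
        .add(stream);
  }

  boolean isDead(String datanodeId) {
    return deadNodes.contains(datanodeId);
  }

  // Called when a stream closes: once no open stream references a dead
  // DataNode any more, forget it.
  void release(Object stream, String datanodeId) {
    Set<Object> holders = refs.get(datanodeId);
    if (holders != null) {
      holders.remove(stream);
      if (holders.isEmpty()) {
        refs.remove(datanodeId);
        deadNodes.remove(datanodeId);
      }
    }
  }
}
{code}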



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1903) Use dynamic ports for SCM in TestSCMClientProtocolServer and TestSCMSecurityProtocolServer

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1903?focusedWorklogId=296786=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296786
 ]

ASF GitHub Bot logged work on HDDS-1903:


Author: ASF GitHub Bot
Created on: 17/Aug/19 14:36
Start Date: 17/Aug/19 14:36
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1303: HDDS-1903 : Use 
dynamic ports for SCM in TestSCMClientProtocolServer …
URL: https://github.com/apache/hadoop/pull/1303#issuecomment-522242889
 
 
   Thanks @avijayanhwx for working on this.
   
   It is better to specify the port as `0` in the test, which will allow SCM to 
choose a random port.
   
   In the current solution there is a gap between the port identification 
(inside the test case) and the binding of the service to that port (when SCM 
initializes). Because of this gap, the operating system can hand out the same 
port multiple times while it is still free, and we will again run into bind 
exceptions.
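   
   For illustration, a minimal sketch of the port `0` approach with plain 
java.net (not the PR's code):
   
   ```java
   import java.net.InetSocketAddress;
   import java.net.ServerSocket;
   
   public class EphemeralPortDemo {
     public static void main(String[] args) throws Exception {
       try (ServerSocket socket = new ServerSocket()) {
         // Port 0 asks the OS for a free ephemeral port at bind time, so
         // there is no window between picking a port and binding to it.
         socket.bind(new InetSocketAddress("localhost", 0));
         System.out.println("Bound to port " + socket.getLocalPort());
       }
     }
   }
   ```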
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296786)
Time Spent: 2h  (was: 1h 50m)

> Use dynamic ports for SCM in TestSCMClientProtocolServer and 
> TestSCMSecurityProtocolServer
> --
>
> Key: HDDS-1903
> URL: https://issues.apache.org/jira/browse/HDDS-1903
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We should use dynamic ports for SCM in the following test cases
> * TestSCMClientProtocolServer
> * TestSCMSecurityProtocolServer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909731#comment-16909731
 ] 

Hadoop QA commented on HDFS-10606:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 157 unchanged - 1 fixed = 158 total (was 158) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-10606 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977863/HDFS-10606.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux af121cdc1823 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d873ddd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27545/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27545/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
|  

[jira] [Commented] (HDFS-14342) WebHDFS: expose NEW_BLOCK flag in APPEND operation

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909726#comment-16909726
 ] 

Hadoop QA commented on HDFS-14342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 163 unchanged - 0 fixed = 164 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14342 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961447/HDFS-14342.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f6ff4b231e12 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d873ddd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27543/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27543/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Created] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1977:
-

 Summary: Fix checkstyle issues introduced by HDDS-1894
 Key: HDDS-1977
 URL: https://issues.apache.org/jira/browse/HDDS-1977
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM Client
Reporter: Nanda kumar


Fix the checkstyle issues introduced by HDDS-1894
{noformat}

[INFO] There are 6 errors reported by Checkstyle 8.8 with 
checkstyle/checkstyle.xml ruleset.
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
 (whitespace) ParenPad: '(' is followed by whitespace.
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
 (sizes) LineLength: Line is longer than 80 characters (found 88).
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
 (whitespace) ParenPad: '(' is followed by whitespace.
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
 (sizes) LineLength: Line is longer than 80 characters (found 90).
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
 (sizes) LineLength: Line is longer than 80 characters (found 116).
[ERROR] 
src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
 (sizes) LineLength: Line is longer than 80 characters (found 120).
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894

2019-08-17 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1977:
--
Labels: newbie  (was: )

> Fix checkstyle issues introduced by HDDS-1894
> -
>
> Key: HDDS-1977
> URL: https://issues.apache.org/jira/browse/HDDS-1977
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nanda kumar
>Priority: Major
>  Labels: newbie
>
> Fix the checkstyle issues introduced by HDDS-1894
> {noformat}
> [INFO] There are 6 errors reported by Checkstyle 8.8 with 
> checkstyle/checkstyle.xml ruleset.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42]
>  (sizes) LineLength: Line is longer than 80 characters (found 88).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23]
>  (whitespace) ParenPad: '(' is followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47]
>  (sizes) LineLength: Line is longer than 80 characters (found 90).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59]
>  (sizes) LineLength: Line is longer than 80 characters (found 116).
> [ERROR] 
> src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60]
>  (sizes) LineLength: Line is longer than 80 characters (found 120).
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-17 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1894:
--
Fix Version/s: (was: 0.4.1)
   0.5.0

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Today scmcli has a subcommand that lists all pipelines. This ticket is 
> opened to allow filtering the results by switches, e.g., by Factor: THREE 
> and State: OPEN. This will be useful for troubleshooting in large clusters.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-17 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909717#comment-16909717
 ] 

Nanda kumar commented on HDDS-1894:
---

Changing the fix version from 0.4.1 to 0.5.0. This fix has not been 
back-ported to 0.4.1 yet.

[~xyao], do we need this for 0.4.1?

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Today scmcli has a subcommand that lists all pipelines. This ticket is 
> opened to allow filtering the results by switches, e.g., by Factor: THREE 
> and State: OPEN. This will be useful for troubleshooting in large clusters.
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909716#comment-16909716
 ] 

He Xiaoqiao commented on HDFS-14583:


{quote}
HdfsFileStatus doesn't have an empty-symlink check, so I think we should fix 
this issue in HdfsFileStatus.
As you said, in RouterClientProtocol it is not necessary to set the symlink; 
maybe we can remove it.
{quote}
+1, it makes sense to me. Please go ahead, and ping me at any time if you 
need any help. Thanks [~xuzq_zander].

> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Attachments: HDFS-14583-trunk-0001.patch
>
>
> FileStatus#toString() will throw an IllegalArgumentException; the stack 
> trace and error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> Test code like this:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-17 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909711#comment-16909711
 ] 

xuzq commented on HDFS-14583:
-

Thanks [~hexiaoqiao] for your comment.

HdfsFileStatus doesn't have an empty-symlink check, so I think we should fix 
this issue in HdfsFileStatus.

As you said, in RouterClientProtocol it is not necessary to set the symlink; 
maybe we can remove it.
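
For illustration, the kind of guard being discussed could look like this 
(a sketch with assumed names, not the committed change):
{code:java}
// Sketch only: treat an empty symlink byte[] the same as "no symlink", so
// FileStatus#toString() never builds a Path from an empty string.
public Path getSymlink() throws IOException {
  byte[] link = getSymlinkInBytes();  // accessor name assumed for this sketch
  if (link == null || link.length == 0) {
    throw new IOException("Path " + getPath() + " is not a symbolic link");
  }
  return new Path(DFSUtilClient.bytes2String(link));
}
{code}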

> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Attachments: HDFS-14583-trunk-0001.patch
>
>
> FileStatus#toString() will throw an IllegalArgumentException; the stack 
> trace and error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> Test code like this:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909707#comment-16909707
 ] 

Hadoop QA commented on HDFS-14583:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972163/HDFS-14583-trunk-0001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4ca4b36aa935 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e0ddfa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909704#comment-16909704
 ] 

He Xiaoqiao commented on HDFS-14583:


Thanks [~xuzq_zander] for your report and contribution. I prefer to fix this 
in {{RouterClientProtocol}} rather than {{HdfsFileStatus}}. In my opinion, 
HdfsFileStatus not accepting an empty symlink is the expected behavior. IIUC, 
in RouterClientProtocol it is not necessary to set {{symlink}}, as shown 
below. FYI. cc [~elgoiri].
{code:java}
return new HdfsFileStatus.Builder()
.isdir(true)
.mtime(modTime)
.atime(accessTime)
.perm(permission)
.owner(owner)
.group(group)
.path(DFSUtil.string2Bytes(name))
.fileId(inodeId)
.children(childrenNum)
.build();
{code}

> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Attachments: HDFS-14583-trunk-0001.patch
>
>
> FileStatus#toString() will throw an IllegalArgumentException; the stack 
> trace and error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> Test code like this:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Attachment: HDFS-14648.001.patch

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14648.001.patch
>
>
> This Jira constructs the DeadNodeDetector state machine model. It implements 
> the following functions:
>  # After a DFSInputstream detects that a DataNode has died, it puts the node 
> in the DeadNodeDetector and shares this information with the other 
> DFSInputstreams in the same DFSClient, so they will not read from that 
> DataNode.
>  # The DeadNodeDetector also keeps, for each DataNode, references to the 
> DFSInputstreams that use it. When a DFSInputstream closes, the 
> DeadNodeDetector removes its references, and a dead node that is no longer 
> read by any DFSInputstream is removed from the DeadNodeDetector.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Description: 
This Jira constructs the DeadNodeDetector state machine model. It implements 
the following functions:
 # After a DFSInputstream detects that a DataNode has died, it puts the node 
in the DeadNodeDetector and shares this information with the other 
DFSInputstreams in the same DFSClient, so they will not read from that 
DataNode.
 # The DeadNodeDetector also keeps, for each DataNode, references to the 
DFSInputstreams that use it. When a DFSInputstream closes, the 
DeadNodeDetector removes its references, and a dead node that is no longer 
read by any DFSInputstream is removed from the DeadNodeDetector.

  was:This Jira constructs HADOOP-16351.


> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> This Jira constructs the DeadNodeDetector state machine model. It implements 
> the following functions:
>  # After a DFSInputstream detects that a DataNode has died, it puts the node 
> into the DeadNodeDetector and shares this information with the other 
> DFSInputstreams in the same DFSClient, so they will not read from this 
> DataNode.
>  # The DeadNodeDetector also keeps, for each DataNode, references to the 
> DFSInputstreams that use it. When a DFSInputstream closes, the 
> DeadNodeDetector removes the corresponding references. If a dead node in the 
> DeadNodeDetector is no longer read by any DFSInputstream, it is removed from 
> the DeadNodeDetector as well.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909702#comment-16909702
 ] 

He Xiaoqiao commented on HDFS-14646:


Thanks [~xudongcao] for your pings. To be honest, I do not have any experience 
with multiple NNs in our installation. cc [~xkrogen], [~elgoiri], [~csun], 
would you mind taking a review?

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it puts the 
> image to all other NNs (whether the peer NN is an ANN or not), and even if 
> the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN does not 
> terminate the put process immediately; instead it uploads the FsImage in full 
> to the peer NN and does not read the peer NN's reply until the put is 
> completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences; I tested it under 2.7.2 and the trunk version. 
> *1. In Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection is still established, and the data the SNN sends is read by the 
> Jetty framework itself on the peer NN side, so the SNN needlessly keeps 
> sending the FsImage to the peer NN, wasting time and bandwidth. In a 
> relatively large HDFS cluster the FsImage can often reach about 30 GB, so 
> this is indeed a big waste.
> *2. In the trunk version (with Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection is automatically closed, and the SNN then directly gets an "Error 
> writing request body to server" exception, as below. Note this test needs a 
> relatively big FsImage (e.g. at the 10 MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
> *Solution:*
>  A standby NameNode should not upload the fsimage to an inappropriate 
> NameNode: when it plans to put an FsImage to the peer NN, it needs to check 
> whether it really needs to put it at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, 
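
A minimal sketch of that early-abort idea, assuming a plain 
java.net.HttpURLConnection; the use of Expect: 100-continue here is 
illustrative only, not the actual TransferFsImage protocol:
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: validate the PUT with the peer NN before streaming
// the (potentially ~30 GB) image body, so a NOT_ACTIVE_NAMENODE or
// OLD_TRANSACTION_ID style rejection aborts the upload early.
class ImageUploadSketch {
  static void upload(URL peerImagePutUrl, byte[] segment) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) peerImagePutUrl.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setChunkedStreamingMode(4096);
    // Ask the server to check the request headers before we send the body.
    conn.setRequestProperty("Expect", "100-continue");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(segment); // a real upload would loop over many segments
    }
    int code = conn.getResponseCode();
    if (code != HttpURLConnection.HTTP_OK) {
      throw new IOException("Peer NN rejected the image upload: HTTP " + code);
    }
  }
}
{code}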

[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-17 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909700#comment-16909700
 ] 

xuzq commented on HDFS-14583:
-

Thanks [~jojochuang]. 

In RBF, the HdfsFileStatus of a mount point has an empty symlink; the code is:
{code:java}
// 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol#getMountPointStatus
return new HdfsFileStatus.Builder()
.isdir(true)
.mtime(modTime)
.atime(accessTime)
.perm(permission)
.owner(owner)
.group(group)
.symlink(new byte[0])
.path(DFSUtil.string2Bytes(name))
.fileId(inodeId)
.children(childrenNum)
.build();
{code}
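
One possible defensive guard (a sketch only, not necessarily the committed 
HDFS-14583 fix) would be to treat an empty symlink byte array the same as no 
symlink, so toString() never builds a Path from an empty string:
{code:java}
// Hypothetical guard inside a FileStatus-like class; uSymlink is the raw
// symlink target, which RBF mount points set to new byte[0].
private byte[] uSymlink;

public boolean isSymlink() {
  // Empty bytes mean "no symlink target": toString() then skips getSymlink()
  // instead of throwing IllegalArgumentException from new Path("").
  return uSymlink != null && uSymlink.length > 0;
}
{code}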
 

> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Attachments: HDFS-14583-trunk-0001.patch
>
>
> FileStatus#toString() will throw IllegalArgumentException; the stack trace and 
> error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> The test code looks like this:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14741) RBF: RecoverLease should be return false when the file is open in multiple destination

2019-08-17 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909699#comment-16909699
 ] 

xuzq commented on HDFS-14741:
-

Thanks [~elgoiri] for the comment.

A new mount point is added in testSetup(), so totalFiles needs to change from 6 
to 7 in testSubclusterDown().

> RBF: RecoverLease should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14741
> URL: https://issues.apache.org/jira/browse/HDFS-14741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14741-trunk-001.patch
>
>
> RecoverLease should return false when the file is open or being written in 
> one of multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 and is being written, while ns1 does not have this file.
> In this case *recoverLease* should return false instead of throwing 
> FileNotFoundException.
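
A hypothetical sketch of the semantics requested above; Namespace and the 
method shapes are illustrative stand-ins for the router internals, not the 
actual RBF code:
{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Map;

// Stand-in for one downstream namespace (ns0, ns1, ...).
interface Namespace {
  boolean recoverLease(String src) throws IOException;
}

class RouterLeaseRecoverySketch {
  // Return false while the file is still open/being written in any
  // destination; throw FileNotFoundException only when no destination
  // has the file at all.
  static boolean recoverLease(String src, Map<String, Namespace> destinations)
      throws IOException {
    boolean found = false;
    boolean recovered = true;
    for (Namespace ns : destinations.values()) {
      try {
        recovered &= ns.recoverLease(src); // false while still being written
        found = true;
      } catch (FileNotFoundException e) {
        // The file may legitimately be absent from this subcluster (ns1 in
        // the example above); skip it rather than propagate the exception.
      }
    }
    if (!found) {
      throw new FileNotFoundException(src); // missing everywhere is an error
    }
    return recovered;
  }
}
{code}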



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909694#comment-16909694
 ] 

He Xiaoqiao commented on HDFS-10606:


Thanks [~jojochuang] for picking it up again. 
[^HDFS-10606.001.patch] is based on trunk and just offers a way to tune the 
time at which the trash auto-clean executes.

> TrashPolicyDefault supports time of auto clean up can configured
> 
>
> Key: HDFS-10606
> URL: https://issues.apache.org/jira/browse/HDFS-10606
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-10606-branch-2.7.001.patch, HDFS-10606.001.patch
>
>
> TrashPolicyDefault currently cleans up Trash based on 
> [UTC|http://www.worldtimeserver.com/current_time_in_UTC.aspx], and the 
> clean-up time is 00:00 UTC. When a large amount of trash data has to be 
> auto-cleaned, it blocks the NN for a long time because of the global lock; in 
> the most serious situations it may cause some cron job submissions to fail. 
> Adding a configuration for the clean-up time would avoid impacting these cron 
> jobs at the default time.
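
As a sketch of how such a knob could be consumed, assuming a hypothetical 
property name "fs.trash.checkpoint.time" (for illustration only; the actual key 
in the patch may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper computing the delay until the configured clean-up time.
class TrashCleanTimeSketch {
  static final long DAY_MS = 24L * 60 * 60 * 1000;

  static long initialDelayMs(Configuration conf) {
    // e.g. "03:30" = run the daily clean-up at 03:30 instead of 00:00 UTC,
    // away from the cluster's cron job submission peak.
    String hhmm = conf.get("fs.trash.checkpoint.time", "00:00");
    String[] parts = hhmm.split(":");
    long target =
        (Long.parseLong(parts[0]) * 60 + Long.parseLong(parts[1])) * 60 * 1000;
    long sinceMidnightUtc = System.currentTimeMillis() % DAY_MS;
    long delay = target - sinceMidnightUtc;
    return delay >= 0 ? delay : delay + DAY_MS; // roll over to the next day
  }
}
{code}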



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10606) TrashPolicyDefault supports time of auto clean up can configured

2019-08-17 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-10606:
---
Attachment: HDFS-10606.001.patch

> TrashPolicyDefault supports time of auto clean up can configured
> 
>
> Key: HDFS-10606
> URL: https://issues.apache.org/jira/browse/HDFS-10606
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-10606-branch-2.7.001.patch, HDFS-10606.001.patch
>
>
> TrashPolicyDefault currently cleans up Trash based on 
> [UTC|http://www.worldtimeserver.com/current_time_in_UTC.aspx], and the 
> clean-up time is 00:00 UTC. When a large amount of trash data has to be 
> auto-cleaned, it blocks the NN for a long time because of the global lock; in 
> the most serious situations it may cause some cron job submissions to fail. 
> Adding a configuration for the clean-up time would avoid impacting these cron 
> jobs at the default time.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Description: This Jira constructs HADOOP-16351.  (was: This Jira constructs 
)

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> This Jira constructs HADOOP-16351.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Description: This Jira constructs 

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> This Jira constructs 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14648) DeadNodeDetector state machine model

2019-08-17 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14648:
---
Summary: DeadNodeDetector state machine model  (was: Create 
DeadNodeDetector state machine model)

> DeadNodeDetector state machine model
> 
>
> Key: HDFS-14648
> URL: https://issues.apache.org/jira/browse/HDFS-14648
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909684#comment-16909684
 ] 

He Xiaoqiao commented on HDFS-14725:


[~jojochuang], thanks for your reviews. [^HDFS-14725.branch-2.8.001.patch] is 
for branch-2.8, pending what Jenkins says.

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch, 
> HDFS-14725.branch-2.8.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-17 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14725:
---
Attachment: HDFS-14725.branch-2.8.001.patch

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch, 
> HDFS-14725.branch-2.8.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909682#comment-16909682
 ] 

He Xiaoqiao commented on HDFS-14090:


Thanks [~crh] for your contributions and pings. Actually this feature has been 
used in our test env for a while, and it runs very well in most scenarios. I 
would like to suggest some features we found we needed for 
{{StaticFairnessPolicyController}} in our case. As [~xkrogen] said above, when 
we configure a constant permit count, it cannot be tuned dynamically after 
Router startup without reconfiguring and rebooting the Router process, yet the 
loads of different namespaces change all the time. So do we need some admin 
interface to change the permit count dynamically? Furthermore, I believe we 
should add a controller that allocates permit counts automatically based on the 
current namespace load, maybe named {{DynamicalFairnessPolicyController}} vs 
{{StaticFairnessPolicyController}}.
+1 (non-binding) for [^HDFS-14090.010.patch] from my side. Considering this is 
a very useful feature and, as far as I know, many people are waiting for this 
patch to be ready, in my opinion we should push this patch forward and then 
continue to extend {{FairnessPolicyController}} with more choices in the next 
phase. Pending feedback from others. Thanks [~crh] again.
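
To make the static-vs-dynamic trade-off concrete, here is a toy permit-based 
sketch; the class and method names are illustrative, not the HDFS-14090 patch:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Toy model of per-nameservice handler permits in the Router.
class FairnessPolicySketch {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();

  FairnessPolicySketch(Map<String, Integer> permitsPerNameservice) {
    permitsPerNameservice.forEach(
        (ns, count) -> permits.put(ns, new Semaphore(count)));
  }

  // Handler threads acquire a permit before invoking the downstream NN,
  // so one overloaded namespace cannot consume every Router handler.
  boolean acquire(String nameservice) {
    Semaphore s = permits.get(nameservice);
    return s != null && s.tryAcquire(); // reject/queue when exhausted
  }

  void release(String nameservice) {
    permits.get(nameservice).release();
  }

  // The admin interface wished for above could simply swap the semaphore;
  // a DynamicalFairnessPolicyController would call this from a load monitor.
  // (A real implementation would also have to drain outstanding permits.)
  void reassign(String nameservice, int newCount) {
    permits.put(nameservice, new Semaphore(newCount));
  }
}
{code}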

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures, should 
> help minimize impact of clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking rpc queue times, due to back pressure the same will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, as the same rpc queue is 
> maintained for all calls at the router layer. Essentially the same IPC thread 
> pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain some sort of rate limiter 
> configured for each name node and let routers drop/reject/send error 
> responses after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need 
> redesign and reimplementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909681#comment-16909681
 ] 

He Xiaoqiao edited comment on HDFS-14703 at 8/17/19 12:41 PM:
--

Thanks [~shv] for your POC patches. I have to state that this is a very clever 
design for fine-grained global locking. There are still a couple of questions 
that I do not quite understand, and I look forward to your response.
1. Write concurrency control. Consider a case with two threads running 
mkdir(/a/b/c/d/e) and delete(/a/b/c) ops. I tried to run this case following 
the design and POC patches, but I usually get an unstable result, since the 
keys involved can be located in different RangeGSets under 
{{INodeMap#latchWriteLock}}; the two threads can then run concurrently and 
produce an unstable result even when issued from one client, one after the 
other. As your last reply explains, `deleting a directory should lock all 
RangeGSets involved`. Is that a special case for delete ops? Sorry for asking 
this question again.
{quote}
Deleting a directory /a/b/c means deleting the entire sub-tree underneath this 
directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing file f. So f cannot be modified concurrently 
with the delete.
{quote}
2. {{INode}} gains a local variable {{long[] namespaceKey}} in patch 0004 of 
the POC package. I believe this attribute is very useful for partitioning 
INodes; meanwhile, does it bring some other potential issues?
* Heap footprint overhead. For a long-running NameNode process, the 
namespaceKey of most INodes in the directory tree (those visited at least once) 
may be non-null. If we consider 500M INodes with a {{level}} of 2, a long[2] 
per INode is 16 bytes of payload alone, so 500M * 16 B is more than 8 GB of 
heap, before array object headers.
* When an INode is renamed, the {{namespaceKey}} has to be updated, right? 
Since its parent INode has changed. The POC seems not to update it again once 
{{namespaceKey}} is non-null.
Is it possible to calculate the namespaceKey for an INode when using it, 
outside of the lock? Of course, that would bring CPU overhead. Please correct 
me if I am wrong. Thanks.

3. There is no LatchLock unlock in the POC for the #mkdir operation; it seems 
like a bit of an oversight. In my opinion, it has to release the childLock 
after use, right?
[~shv] Thanks for your POC patches again, and I look forward to the next 
milestone. I would also like to get involved in pushing this feature forward if 
needed.


was (Author: hexiaoqiao):
Thanks [~shv] for your POC patches. I have to state that this is a very clever 
design for fine-grained global locking. There are still a couple of questions 
that I do not quite understand, and I look forward to your response.
1. Write concurrency control. Consider a case with two threads running 
mkdir(/a/b/c/d/e) and delete(/a/b/c) ops. I tried to run this case following 
the design and POC patches, but I usually get an unstable result, since the 
keys involved can be located in different RangeGSets under 
{{INodeMap#latchWriteLock}}; the two threads can then run concurrently and 
produce an unstable result even when issued from one client, one after the 
other. As your last reply explains, `deleting a directory should lock all 
RangeGSets involved`. Is that a special case for delete ops? Sorry for asking 
this question again.
{quote}
Deleting a directory /a/b/c means deleting the entire sub-tree underneath this 
directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing file f. So f cannot be modified concurrently 
with the delete.
{quote}
2. {{INode}} gains a local variable {{long[] namespaceKey}} in patch 0004 of 
the POC package. I believe this attribute is very useful for partitioning 
INodes; meanwhile, does it bring some other potential issues?
* Heap footprint overhead. For a long-running NameNode process, the 
namespaceKey of most INodes in the directory tree (those visited at least once) 
may be non-null. If we consider 500M INodes with a {{level}} of 2, a long[2] 
per INode is 16 bytes of payload alone, so 500M * 16 B is more than 8 GB of 
heap, before array object headers.
* When an INode is renamed, the {{namespaceKey}} has to be updated, right? 
Since its parent INode has changed. The POC seems not to update it again once 
{{namespaceKey}} is non-null.
Is it possible to calculate the namespaceKey for an INode when using it, 
outside of the lock? Of course, that would bring CPU overhead. Please correct 
me if I am wrong. Thanks.
3. There is no LatchLock unlock in the POC for the #mkdir operation; it seems 
like a bit of an oversight. In my opinion, it has to release the childLock 
after use, right?
[~shv] Thanks for your POC patches again, and I look forward to the next 
milestone. I would also like to get involved in pushing this feature forward if 
needed.

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin 

[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-08-17 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909681#comment-16909681
 ] 

He Xiaoqiao commented on HDFS-14703:


Thanks [~shv] for your POC patches. I have to state that this is a very clever 
design for fine-grained global locking. There are still a couple of questions 
that I do not quite understand, and I look forward to your response.
1. Write concurrency control. Consider a case with two threads running 
mkdir(/a/b/c/d/e) and delete(/a/b/c) ops. I tried to run this case following 
the design and POC patches, but I usually get an unstable result, since the 
keys involved can be located in different RangeGSets under 
{{INodeMap#latchWriteLock}}; the two threads can then run concurrently and 
produce an unstable result even when issued from one client, one after the 
other. As your last reply explains, `deleting a directory should lock all 
RangeGSets involved`. Is that a special case for delete ops? Sorry for asking 
this question again.
{quote}
Deleting a directory /a/b/c means deleting the entire sub-tree underneath this 
directory. We should lock all RangeGSets involved in such deletion, 
particularly the one containing file f. So f cannot be modified concurrently 
with the delete.
{quote}
2. {{INode}} gains a local variable {{long[] namespaceKey}} in patch 0004 of 
the POC package. I believe this attribute is very useful for partitioning 
INodes; meanwhile, does it bring some other potential issues?
* Heap footprint overhead. For a long-running NameNode process, the 
namespaceKey of most INodes in the directory tree (those visited at least once) 
may be non-null. If we consider 500M INodes with a {{level}} of 2, a long[2] 
per INode is 16 bytes of payload alone, so 500M * 16 B is more than 8 GB of 
heap, before array object headers.
* When an INode is renamed, the {{namespaceKey}} has to be updated, right? 
Since its parent INode has changed. The POC seems not to update it again once 
{{namespaceKey}} is non-null.
Is it possible to calculate the namespaceKey for an INode when using it, 
outside of the lock? Of course, that would bring CPU overhead. Please correct 
me if I am wrong. Thanks.
3. There is no LatchLock unlock in the POC for the #mkdir operation; it seems 
like a bit of an oversight. In my opinion, it has to release the childLock 
after use, right?
[~shv] Thanks for your POC patches again, and I look forward to the next 
milestone. I would also like to get involved in pushing this feature forward if 
needed.
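
A toy illustration of the range-partitioned locking being discussed (not the 
POC code): one lock per range of the inode key space, with subtree deletes 
latching every range they touch, per the quoted answer.
{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy model only: the real design partitions the INodeMap into RangeGSets.
class PartitionedLockSketch {
  private static final int RANGES = 16;
  private final ReentrantReadWriteLock[] locks =
      new ReentrantReadWriteLock[RANGES];

  PartitionedLockSketch() {
    for (int i = 0; i < RANGES; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  private int rangeOf(long namespaceKey) {
    return (int) ((namespaceKey % RANGES + RANGES) % RANGES);
  }

  // mkdir-style op: latch only the range holding the target key, and
  // release it afterwards (the unlock that question 3 above asks about).
  void withWriteLock(long namespaceKey, Runnable op) {
    ReentrantReadWriteLock l = locks[rangeOf(namespaceKey)];
    l.writeLock().lock();
    try { op.run(); } finally { l.writeLock().unlock(); }
  }

  // delete-subtree op: lock every range any key of the subtree maps into,
  // in index order to avoid deadlock between concurrent multi-range ops.
  void withWriteLocks(long[] subtreeKeys, Runnable op) {
    boolean[] needed = new boolean[RANGES];
    for (long k : subtreeKeys) {
      needed[rangeOf(k)] = true;
    }
    for (int i = 0; i < RANGES; i++) {
      if (needed[i]) locks[i].writeLock().lock();
    }
    try {
      op.run();
    } finally {
      for (int i = RANGES - 1; i >= 0; i--) {
        if (needed[i]) locks[i].writeLock().unlock();
      }
    }
  }
}
{code}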

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, NameNode 
> Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909673#comment-16909673
 ] 

Hadoop QA commented on HDFS-13101:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-13101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977857/HDFS-13101.branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 770035ce387f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (HDFS-13118) SnapshotDiffReport should provide the INode type

2019-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909669#comment-16909669
 ] 

Hadoop QA commented on HDFS-13118:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13118 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959014/HDFS-13118.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27544/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> SnapshotDiffReport should provide the INode type
> 
>
> Key: HDFS-13118
> URL: https://issues.apache.org/jira/browse/HDFS-13118
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13118.001.patch, HDFS-13118.002.patch, 
> HDFS-13118.003.patch, HDFS-13118.004.patch, HDFS-13118.005.patch
>
>
> Currently the snapshot diff report will list which inodes were added, 
> removed, renamed, etc. But to see what the INode actually is, we need to 
> actually access the underlying snapshot - and this is cumbersome to do 
> programmatically when the snapshot diff already has the information.
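
A hypothetical shape for the requested report entry; the field and enum names 
are illustrative, not the committed HDFS-13118 API:
{code:java}
// Sketch of a diff entry that carries the INode type directly, so callers
// need not open the underlying snapshot to learn what each path is.
class DiffReportEntrySketch {
  enum DiffType { CREATE, DELETE, MODIFY, RENAME }
  enum INodeType { FILE, DIRECTORY, SYMLINK } // the missing information

  final DiffType diffType;
  final INodeType inodeType;
  final byte[] sourcePath;

  DiffReportEntrySketch(DiffType diffType, INodeType inodeType,
      byte[] sourcePath) {
    this.diffType = diffType;
    this.inodeType = inodeType;
    this.sourcePath = sourcePath;
  }
}
{code}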



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13118) SnapshotDiffReport should provide the INode type

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909668#comment-16909668
 ] 

Wei-Chiu Chuang commented on HDFS-13118:


[~shashikant] can you help review this one?
[~ehiggs] would you please address the findbugs warnings?

> SnapshotDiffReport should provide the INode type
> 
>
> Key: HDFS-13118
> URL: https://issues.apache.org/jira/browse/HDFS-13118
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13118.001.patch, HDFS-13118.002.patch, 
> HDFS-13118.003.patch, HDFS-13118.004.patch, HDFS-13118.005.patch
>
>
> Currently the snapshot diff report will list which inodes were added, 
> removed, renamed, etc. But to see what the INode actually is, we need to 
> actually access the underlying snapshot - and this is cumbersome to do 
> programmatically when the snapshot diff already has the information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14342) WebHDFS: expose NEW_BLOCK flag in APPEND operation

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909666#comment-16909666
 ] 

Wei-Chiu Chuang commented on HDFS-14342:


[~csun] or [~smeng] could you help review this one?

> WebHDFS: expose NEW_BLOCK flag in APPEND operation
> --
>
> Key: HDFS-14342
> URL: https://issues.apache.org/jira/browse/HDFS-14342
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: webhdfs
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-14342.000.patch
>
>
> After support for variable-length blocks was added (HDFS-3689), we should 
> expose the NEW_BLOCK flag of the APPEND operation in webhdfs, so that this 
> functionality is usable over the REST API.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12012) Fix spelling mistakes in BPServiceActor.java.

2019-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909663#comment-16909663
 ] 

Hudson commented on HDFS-12012:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17141 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17141/])
HDFS-12012. Fix spelling mistakes in BPServiceActor.java. Contributed by 
(weichiu: rev 528378784fe14e7069dd0471f3c4c478544b57c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> Fix spelling mistakes in BPServiceActor.java.
> -
>
> Key: HDFS-12012
> URL: https://issues.apache.org/jira/browse/HDFS-12012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-12012.patch
>
>
> In the functions blockReport, cacheReport, and offerService there are 
> multiple occurrences of "msec" that need to be changed to "msecs".
> For example, in the following log:
> 2017-06-22 11:38:25,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Unsuccessfully sent block report 0x8c26a112d6d4,  containing 3 storage 
> report(s), of which we sent 0. The reports had 19906571 total blocks and used 
> 0 RPC(s). This took 3071 msec to generate and 781 msecs for RPC and NN 
> processing. Got back no commands



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14127) Add a description about the observer read configuration

2019-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909664#comment-16909664
 ] 

Hudson commented on HDFS-14127:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17141 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17141/])
HDFS-14127. Add a description about the observer read configuration. (weichiu: 
rev d873ddd65664da18635b99327cde314ce1d8f260)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Add a description about the observer read configuration
> ---
>
> Key: HDFS-14127
> URL: https://issues.apache.org/jira/browse/HDFS-14127
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: xiangheng
>Assignee: xiangheng
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14127-HDFS-12943.000.patch
>
>
> hdfs-default.xml lacks a description of the observer read configuration, 
> which can easily lead users to configure observer read mode as if it were a 
> normal HA mode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14127) Add a description about the observer read configuration

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14127:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

+1 patch still applies.
Pushed to trunk. Thanks [~xiangheng]

> Add a description about the observer read configuration
> ---
>
> Key: HDFS-14127
> URL: https://issues.apache.org/jira/browse/HDFS-14127
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: xiangheng
>Assignee: xiangheng
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14127-HDFS-12943.000.patch
>
>
> hdfs-default.xml lacks a description of the observer read configuration, 
> which can easily lead users to configure observer read mode as if it were a 
> normal HA mode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1959) Decrement purge interval for Ratis logs in datanode

2019-08-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1959?focusedWorklogId=296765=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296765
 ]

ASF GitHub Bot logged work on HDDS-1959:


Author: ASF GitHub Bot
Created on: 17/Aug/19 11:40
Start Date: 17/Aug/19 11:40
Worklog Time Spent: 10m 
  Work Description: pingsutw commented on issue #1301: HDDS-1959. Decrement 
purge interval for Ratis logs in datanode
URL: https://github.com/apache/hadoop/pull/1301#issuecomment-59754
 
 
   @lokeshj1703  Thank you so much
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296765)
Time Spent: 1h 20m  (was: 1h 10m)

> Decrement purge interval for Ratis logs in datanode
> ---
>
> Key: HDDS-1959
> URL: https://issues.apache.org/jira/browse/HDDS-1959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently the purge interval for the Ratis log 
> ("dfs.container.ratis.log.purge.gap") is set at 10. This Jira aims to reduce 
> the interval and set it to 100.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12012) Fix spelling mistakes in BPServiceActor.java.

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12012:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~candychencan]. Pushed to trunk.

> Fix spelling mistakes in BPServiceActor.java.
> -
>
> Key: HDFS-12012
> URL: https://issues.apache.org/jira/browse/HDFS-12012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-12012.patch
>
>
> In the functions blockReport, cacheReport, and offerService there are 
> multiple occurrences of "msec" that need to be changed to "msecs".
> For example, in the following log:
> 2017-06-22 11:38:25,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Unsuccessfully sent block report 0x8c26a112d6d4,  containing 3 storage 
> report(s), of which we sent 0. The reports had 19906571 total blocks and used 
> 0 RPC(s). This took 3071 msec to generate and 781 msecs for RPC and NN 
> processing. Got back no commands



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12012) Fix spelling mistakes in BPServiceActor.java.

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909656#comment-16909656
 ] 

Wei-Chiu Chuang commented on HDFS-12012:


+1 patch still applies.

> Fix spelling mistakes in BPServiceActor.java.
> -
>
> Key: HDFS-12012
> URL: https://issues.apache.org/jira/browse/HDFS-12012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: chencan
>Assignee: chencan
>Priority: Major
> Attachments: HADOOP-12012.patch
>
>
> In the functions blockReport, cacheReport, and offerService there are 
> multiple occurrences of "msec" that need to be changed to "msecs".
> For example, in the following log:
> 2017-06-22 11:38:25,399 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Unsuccessfully sent block report 0x8c26a112d6d4,  containing 3 storage 
> report(s), of which we sent 0. The reports had 19906571 total blocks and used 
> 0 RPC(s). This took 3071 msec to generate and 781 msecs for RPC and NN 
> processing. Got back no commands



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-17 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16909655#comment-16909655
 ] 

Wei-Chiu Chuang commented on HDFS-14687:


[~surendrasingh] I think the fix is good. But the test ran for more than 3 
minutes on my machine. Can we update the test and cut down some wait time? It 
doesn't look like a large integration test.

> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch, HDFS-14687.002.patch, 
> HDFS-14687.003.patch
>
>
> When a huge number of EC files are being written and the SBN is restarted, it 
> never comes out of safe mode and the required block count keeps increasing.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


