[jira] [Created] (HDFS-12161) Distcp does not preserve ownership in destination parent folder
Sailesh Patel created HDFS-12161:
------------------------------------

             Summary: Distcp does not preserve ownership in destination parent folder
                 Key: HDFS-12161
                 URL: https://issues.apache.org/jira/browse/HDFS-12161
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
    Affects Versions: 2.6.0
            Reporter: Sailesh Patel
            Priority: Minor

After running distcp as user 'usertest', which is an HDFS superuser, the ownership of the destination parent folder is not preserved, e.g.:

hadoop distcp -pugpaxt -update -skipcrccheck /tmp/usertest /tmp/new_user/usertest_copy

After distcp is executed, the parent folder is owned by 'usertest':

drwxr-xr-x   - usertest supergroup          0 2017-07-13 22:09 /tmp/new_user/usertest_copy

while the files actually copied by distcp have their ownership and permissions preserved:

drwxr-xr-x   - hive hive          0 2017-07-13 22:09 /tmp/new_user/usertest_copy/dir1
-rw-r--r--   1 hdfs hive        287 2017-07-13 22:09 /tmp/new_user/usertest_copy/test1.txt

The distcp preserve options (-pugpaxt) do not apply to the destination parent directory given on the distcp command line, e.g. "/tmp/new_user/usertest_copy".

Can we document this in https://hadoop.apache.org/docs/r1.2.1/distcp2.html under "Command Line Options": the destination folder needs to be pre-created with the correct ownership/permissions before using distcp; the preserve options do not apply to the parent folder.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
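A minimal sketch of the workaround described above, using the example paths from the report and assuming the desired parent ownership is hive:hive (hypothetical; pick whatever owner/mode your layout requires):

```shell
# Pre-create the destination parent with the desired ownership and
# permissions, since distcp's preserve flags (-pugpaxt) only apply to
# the entries distcp itself copies, not to the destination parent.
hdfs dfs -mkdir -p /tmp/new_user/usertest_copy
hdfs dfs -chown hive:hive /tmp/new_user/usertest_copy
hdfs dfs -chmod 755 /tmp/new_user/usertest_copy

# Then run the copy as before; the parent now already has the
# intended ownership, and -p covers everything underneath it.
hadoop distcp -pugpaxt -update -skipcrccheck /tmp/usertest /tmp/new_user/usertest_copy
```

These commands require a running HDFS cluster and superuser privileges, so they are shown as a sketch rather than something runnable standalone.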
[jira] [Created] (HDFS-12160) Fix broken NameNode metrics documentation
Erik Krogen created HDFS-12160:
----------------------------------

             Summary: Fix broken NameNode metrics documentation
                 Key: HDFS-12160
                 URL: https://issues.apache.org/jira/browse/HDFS-12160
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.0.0-alpha4, 2.8.0
            Reporter: Erik Krogen
            Assignee: Erik Krogen
            Priority: Trivial

HDFS-11261 introduced documentation for the metrics added in HDFS-10872. The metric names contain a pipe ({{|}}), which breaks the markdown table.
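For context, a literal {{|}} inside a markdown table cell is read as a column separator, so a metric name containing one splits the row. The usual fix is to escape it, e.g. as {{\|}} or the HTML entity {{&#124;}} (the metric name below is hypothetical, purely for illustration):

```markdown
| Metric name        | Description                     |
|--------------------|---------------------------------|
| ReadLatency\|60s   | pipe escaped with a backslash   |
| ReadLatency&#124;300s | pipe written as an HTML entity |
```

Which escape renders correctly can depend on the markdown processor, so the chosen form should be checked against the site build.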
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/468/

[Jul 17, 2017 1:54:16 PM] (szetszwo) HDFS-12138. Remove redundant 'public' modifiers from BlockCollection.
[Jul 17, 2017 2:11:14 PM] (Arun Suresh) YARN-6706. Refactor ContainerScheduler to make oversubscription change
[Jul 17, 2017 9:32:37 PM] (aajisaka) HADOOP-14539. Move commons logging APIs over to slf4j in hadoop-common.
[Jul 17, 2017 11:19:09 PM] (sunilg) Addendum patch for YARN-5731

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs-client
    Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
    org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs
    Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
    Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
    Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
    Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
[jira] [Created] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC
Anu Engineer created HDFS-12159:
-----------------------------------

             Summary: Ozone: SCM: Add create replication pipeline RPC
                 Key: HDFS-12159
                 URL: https://issues.apache.org/jira/browse/HDFS-12159
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
    Affects Versions: HDFS-7240
            Reporter: Anu Engineer
            Assignee: Anu Engineer
             Fix For: HDFS-7240

Add an API that allows users to create replication pipelines using SCM.
[jira] [Created] (HDFS-12158) Secondary Namenode's web interface lacks configs for X-FRAME-OPTIONS protection
Mukul Kumar Singh created HDFS-12158:
----------------------------------------

             Summary: Secondary Namenode's web interface lacks configs for X-FRAME-OPTIONS protection
                 Key: HDFS-12158
                 URL: https://issues.apache.org/jira/browse/HDFS-12158
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh

HDFS-10579 added X-FRAME-OPTIONS protection to the Namenode and Datanode. The Secondary Namenode needs the same protection.

*Secondary Namenode misses X-FRAME-OPTIONS protection*
{code}
[root@f0e12b63907e opt]# curl -I http://127.0.0.1:50090/index.html
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Tue, 18 Jul 2017 20:13:53 GMT
Date: Tue, 18 Jul 2017 20:13:53 GMT
Pragma: no-cache
Expires: Tue, 18 Jul 2017 20:13:53 GMT
Date: Tue, 18 Jul 2017 20:13:53 GMT
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Last-Modified: Mon, 12 Jun 2017 13:15:41 GMT
Content-Length: 1083
Accept-Ranges: bytes
Server: Jetty(6.1.26)
{code}

*Primary Namenode offers X-FRAME-OPTIONS protection*
{code}
[root@f0e12b63907e opt]# curl -I http://127.0.0.1:50070/index.html
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Tue, 18 Jul 2017 20:14:04 GMT
Date: Tue, 18 Jul 2017 20:14:04 GMT
Pragma: no-cache
Expires: Tue, 18 Jul 2017 20:14:04 GMT
Date: Tue, 18 Jul 2017 20:14:04 GMT
Pragma: no-cache
Content-Type: text/html; charset=utf-8
X-FRAME-OPTIONS: SAMEORIGIN
Last-Modified: Mon, 12 Jun 2017 13:15:41 GMT
Content-Length: 1079
Accept-Ranges: bytes
Server: Jetty(6.1.26)
{code}
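If the Secondary Namenode reuses HDFS-10579's mechanism, the header would presumably be driven by the same configuration that change introduced. The property names below are an assumption based on HDFS-10579 and should be verified against hdfs-default.xml for the target release:

```xml
<!-- hdfs-site.xml sketch; dfs.xframe.* names assumed from HDFS-10579,
     verify against your release before relying on them -->
<property>
  <name>dfs.xframe.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.xframe.value</name>
  <value>SAMEORIGIN</value>
</property>
```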
[jira] [Created] (HDFS-12157) Do fsyncDirectory(..) outside of FSDataset lock
Vinayakumar B created HDFS-12157:
------------------------------------

             Summary: Do fsyncDirectory(..) outside of FSDataset lock
                 Key: HDFS-12157
                 URL: https://issues.apache.org/jira/browse/HDFS-12157
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
            Reporter: Vinayakumar B
            Priority: Critical