[jira] [Created] (HDFS-11795) Fix ASF Licence warnings in branch-2.7
Yiqun Lin created HDFS-11795: Summary: Fix ASF Licence warnings in branch-2.7 Key: HDFS-11795 URL: https://issues.apache.org/jira/browse/HDFS-11795 Project: Hadoop HDFS Issue Type: Bug Reporter: Yiqun Lin Assignee: Yiqun Lin Some ASF license warnings appear in branch-2.7 because test files are created in the "hadoop-hdfs/build" directory instead of the "target" directory (similar to HDFS-9571). -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDFS-11794) Add ec sub command -listCodec to show currently supported ec codecs
SammiChen created HDFS-11794: Summary: Add ec sub command -listCodec to show currently supported ec codecs Key: HDFS-11794 URL: https://issues.apache.org/jira/browse/HDFS-11794 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Reporter: SammiChen Assignee: SammiChen Add ec sub command -listCodec to show currently supported ec codecs
[jira] [Created] (HDFS-11793) Dfs configuration key dfs.namenode.ec.policies.enabled support user defined erasure coding policy
SammiChen created HDFS-11793: Summary: Dfs configuration key dfs.namenode.ec.policies.enabled support user defined erasure coding policy Key: HDFS-11793 URL: https://issues.apache.org/jira/browse/HDFS-11793 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Reporter: SammiChen Assignee: SammiChen Dfs configuration key dfs.namenode.ec.policies.enabled support user defined erasure coding policy
[jira] [Created] (HDFS-11792) [READ] Additional test cases for ProvidedVolumeImpl
Virajith Jalaparti created HDFS-11792: Summary: [READ] Additional test cases for ProvidedVolumeImpl Key: HDFS-11792 URL: https://issues.apache.org/jira/browse/HDFS-11792 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Virajith Jalaparti
[jira] [Created] (HDFS-11791) [READ] Test for increasing replication of provided files.
Virajith Jalaparti created HDFS-11791: Summary: [READ] Test for increasing replication of provided files. Key: HDFS-11791 URL: https://issues.apache.org/jira/browse/HDFS-11791 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Virajith Jalaparti
[jira] [Created] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete
Manoj Govindassamy created HDFS-11790: Summary: Decommissioning of a DataNode after MaintenanceState takes a very long time to complete Key: HDFS-11790 URL: https://issues.apache.org/jira/browse/HDFS-11790 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 3.0.0-alpha1 Reporter: Manoj Govindassamy Assignee: Manoj Govindassamy
Problem: When a DataNode is requested for decommissioning after it has successfully transitioned to MaintenanceState (HDFS-7877), the decommissioning state transition is stuck for a long time, even with a very small number of blocks in the cluster.
Details:
* A DataNode DN1 was requested for MaintenanceState and successfully transitioned from ENTERING_MAINTENANCE to IN_MAINTENANCE, as there was sufficient replication for all its blocks.
* Since DN1 was now in maintenance state, the DataNode process was stopped on DN1. Later, the same DN1 was requested for decommissioning.
* As part of decommissioning, all blocks residing on DN1 were scheduled for re-replication to other DataNodes so that DN1 could transition from ENTERING_DECOMMISSION to DECOMMISSIONED.
* However, re-replication of a few blocks was stuck for a long time before eventually completing.
* Digging through the code and logs showed that the IN_MAINTENANCE DN1 was chosen as the source datanode for re-replication of some of the blocks. Since the DataNode process on DN1 was already stopped, that re-replication stalled.
* Eventually PendingReplicationMonitor timed out, and re-replication was re-scheduled for the timed-out blocks. During the retry, the IN_MAINTENANCE DN1 was again chosen as the source datanode for some of the blocks, leading to another timeout. This cycle repeated several times until all blocks were re-replicated.
* By design, IN_MAINTENANCE datanodes should not be chosen for any read or write operations.
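The fix implied by the last point is to exclude IN_MAINTENANCE replicas when picking a re-replication source. A minimal sketch of that filter follows; the class, enum, and method names here are hypothetical illustrations, not the actual BlockManager APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class SourcePicker {
    // Hypothetical admin states, mirroring the ones described in the report.
    enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE, DECOMMISSIONED }

    static class Node {
        final String name;
        final AdminState state;
        Node(String name, AdminState state) { this.name = name; this.state = state; }
    }

    /**
     * Return only the replicas that may serve as re-replication sources.
     * IN_MAINTENANCE nodes may have their DataNode process stopped, so
     * copying from them can stall until PendingReplicationMonitor times out.
     */
    static List<Node> eligibleSources(List<Node> replicas) {
        List<Node> out = new ArrayList<>();
        for (Node n : replicas) {
            if (n.state != AdminState.IN_MAINTENANCE
                    && n.state != AdminState.DECOMMISSIONED) {
                out.add(n);
            }
        }
        return out;
    }
}
```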
[jira] [Created] (HDFS-11789) Maintain Short-Circuit Read Statistics
Hanisha Koneru created HDFS-11789: Summary: Maintain Short-Circuit Read Statistics Key: HDFS-11789 URL: https://issues.apache.org/jira/browse/HDFS-11789 Project: Hadoop HDFS Issue Type: Improvement Reporter: Hanisha Koneru Assignee: Hanisha Koneru If a disk or controller hardware is faulty then short-circuit read requests can stall indefinitely while reading from the file descriptor. Currently there is no way to detect when short-circuit read requests are slow or blocked. This Jira proposes that each BlockReaderLocal maintain read statistics while it is active by measuring the time taken for a pre-determined fraction of read requests. These per-reader stats can be aggregated into global stats when the reader is closed. The aggregate statistics can be exposed via JMX.
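The sampled-statistics idea in the proposal can be sketched as follows. This is an illustrative class under assumed semantics (sample a configurable fraction of reads, accumulate counts and elapsed time), not the patch's actual implementation:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch of per-reader read statistics: a pre-determined fraction
 * of reads is timed, and the totals can later be merged into global stats
 * (e.g. for export via JMX). Names here are hypothetical, not Hadoop APIs.
 */
public class SampledReadStats {
    private final double sampleFraction;          // e.g. 0.01 = time 1% of reads
    private final AtomicLong sampledReads = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    public SampledReadStats(double sampleFraction) {
        this.sampleFraction = sampleFraction;
    }

    /** Decide cheaply whether this read should be timed. */
    public boolean shouldSample() {
        return ThreadLocalRandom.current().nextDouble() < sampleFraction;
    }

    /** Record one timed read. */
    public void record(long elapsedNanos) {
        sampledReads.incrementAndGet();
        totalNanos.addAndGet(elapsedNanos);
    }

    /** Mean latency of sampled reads in nanoseconds, or 0 if none sampled. */
    public long meanNanos() {
        long n = sampledReads.get();
        return n == 0 ? 0 : totalNanos.get() / n;
    }
}
```

A reader would call `shouldSample()` before a read, time the read if it returns true, and pass the elapsed time to `record()`; on close, the totals would be folded into a global aggregate.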
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/309/ [May 8, 2017 1:27:37 PM] (kihwal) HDFS-11702. Remove indefinite caching of key provider uri in DFSClient. [May 8, 2017 10:14:37 PM] (jlowe) YARN-3839. Quit throwing NMNotYetReadyException. Contributed by [May 8, 2017 10:28:45 PM] (wheat9) HADOOP-14383. Implement FileSystem that reads from HTTP / HTTPS [May 8, 2017 10:46:12 PM] (haibochen) YARN-6457. Allow custom SSL configuration to be supplied in WebApps. [May 8, 2017 11:41:30 PM] (subru) YARN-6234. Support multiple attempts on the node when AMRMProxy is [May 8, 2017 11:55:47 PM] (subru) YARN-6281. Cleanup when AMRMProxy fails to initialize a new interceptor [May 9, 2017 4:59:49 AM] (wang) HDFS-11644. Support for querying outputstream capabilities. Contributed [May 9, 2017 10:37:43 AM] (aajisaka) HADOOP-14374. License error in GridmixTestUtils.java. Contributed by -1 overall The following subsystems voted -1: compile mvninstall unit The following subsystems voted -1 but were configured to be filtered/ignored: cc javac The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: Failed junit tests : hadoop.ha.TestZKFailoverControllerStress hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 hadoop.hdfs.qjournal.server.TestJournalNode hadoop.hdfs.server.namenode.ha.TestBootstrapStandby hadoop.hdfs.TestDistributedFileSystem hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 hadoop.hdfs.web.TestWebHdfsTimeouts hadoop.hdfs.server.datanode.TestDataNodeUUID hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService hadoop.mapred.TestShuffleHandler hadoop.yarn.sls.TestSLSRunner hadoop.yarn.applications.distributedshell.TestDistributedShell hadoop.yarn.client.api.impl.TestAMRMClient hadoop.yarn.server.timeline.TestRollingLevelDB hadoop.yarn.server.timeline.TestTimelineDataManager hadoop.yarn.server.timeline.TestLeveldbTimelineStore hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore hadoop.yarn.server.resourcemanager.TestRMRestart hadoop.yarn.server.resourcemanager.TestResourceTrackerService hadoop.yarn.server.TestDiskFailures hadoop.yarn.server.TestMiniYarnClusterNodeUtilization hadoop.yarn.server.TestContainerManagerSecurity hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore Timed out junit tests : org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf mvninstall: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/309/artifact/out/patch-mvninstall-root.txt [504K] compile: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/309/artifact/out/patch-compile-root.txt [20K] cc:
[jira] [Reopened] (HDFS-11644) Support for querying outputstream capabilities
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HDFS-11644: > Support for querying outputstream capabilities > -- > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, > HDFS-11644.03.patch, HDFS-11644-branch-2.01.patch > > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable.
[jira] [Created] (HDFS-11788) Ozone : add DEBUG CLI support for nodepool db file
Chen Liang created HDFS-11788: Summary: Ozone : add DEBUG CLI support for nodepool db file Key: HDFS-11788 URL: https://issues.apache.org/jira/browse/HDFS-11788 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Assignee: Chen Liang This is a follow-up to HDFS-11698. This JIRA adds support for converting the nodepool.db LevelDB file.
[jira] [Created] (HDFS-11787) After HDFS-11515, -du still throws ConcurrentModificationException
Wei-Chiu Chuang created HDFS-11787: Summary: After HDFS-11515, -du still throws ConcurrentModificationException Key: HDFS-11787 URL: https://issues.apache.org/jira/browse/HDFS-11787 Project: Hadoop HDFS Issue Type: Bug Components: snapshots, tools Affects Versions: 3.0.0-alpha3, 2.8.1 Reporter: Wei-Chiu Chuang I ran a NameNode patched with HDFS-11515 against a production cluster fsimage, and I am still seeing ConcurrentModificationException. It seems there are corner cases not covered by HDFS-11515. Filing this jira to discuss how to proceed.
[jira] [Resolved] (HDFS-11515) -du throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HDFS-11515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-11515. Resolution: Fixed > -du throws ConcurrentModificationException > -- > > Key: HDFS-11515 > URL: https://issues.apache.org/jira/browse/HDFS-11515 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, shell >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Assignee: Istvan Fajth > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11515.001.patch, HDFS-11515.002.patch, > HDFS-11515.003.patch, HDFS-11515.004.patch, HDFS-11515.test.patch > > > HDFS-10797 fixed a disk summary (-du) bug, but it introduced a new bug. > The bug can be reproduced running the following commands: > {noformat} > bash-4.1$ hdfs dfs -mkdir /tmp/d0 > bash-4.1$ hdfs dfsadmin -allowSnapshot /tmp/d0 > Allowing snaphot on /tmp/d0 succeeded > bash-4.1$ hdfs dfs -touchz /tmp/d0/f4 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1 > bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s1 > Created snapshot /tmp/d0/.snapshot/s1 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d2/d4 > bash-4.1$ hdfs dfs -mkdir /tmp/d0/d1/d3/d5 > bash-4.1$ hdfs dfs -createSnapshot /tmp/d0 s2 > Created snapshot /tmp/d0/.snapshot/s2 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2/d4 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d2 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3/d5 > bash-4.1$ hdfs dfs -rmdir /tmp/d0/d1/d3 > bash-4.1$ hdfs dfs -du -h /tmp/d0 > du: java.util.ConcurrentModificationException > 0 0 /tmp/d0/f4 > {noformat} > A ConcurrentModificationException forced du to terminate abruptly. 
> Correspondingly, NameNode log has the following error: > {noformat} > 2017-03-08 14:32:17,673 WARN org.apache.hadoop.ipc.Server: IPC Server handler > 4 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSumma > ry from 10.0.0.198:49957 Call#2 Retry#0 > java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922) > at java.util.HashMap$KeyIterator.next(HashMap.java:956) > at > org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.tallyDeletedSnapshottedINodes(ContentSummaryComputationContext.java:209) > at > org.apache.hadoop.hdfs.server.namenode.INode.computeAndConvertContentSummary(INode.java:507) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2302) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4535) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:5 > 63) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.jav > a:873) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920) > at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210) > {noformat} > The bug is due to an improper use of HashSet, not to concurrent operations. > Basically, a HashSet cannot be modified while an iterator is traversing it.
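The fail-fast behavior described above is easy to demonstrate outside HDFS. A minimal self-contained example (not the HDFS-11515 code itself) shows a HashSet mutated mid-iteration throwing ConcurrentModificationException, and the safe alternative of mutating through the iterator:

```java
import java.util.Iterator;
import java.util.Set;

public class CmeDemo {
    /**
     * Remove the first iterated element directly from the set while a
     * for-each loop is still running. With two or more elements, the next
     * iterator step detects the structural change and throws
     * ConcurrentModificationException. Returns true if that happened.
     */
    static boolean removeFirstDuringIteration(Set<String> set) {
        try {
            boolean removed = false;
            for (String s : set) {
                if (!removed) {
                    set.remove(s);  // structural change behind the iterator's back
                    removed = true;
                }
            }
            return false;  // no exception: iteration finished
        } catch (java.util.ConcurrentModificationException e) {
            return true;   // fail-fast iterator detected the modification
        }
    }

    /** The safe variant: mutate through the iterator itself. */
    static void removeViaIterator(Set<String> set, String victim) {
        for (Iterator<String> it = set.iterator(); it.hasNext(); ) {
            if (it.next().equals(victim)) {
                it.remove();
            }
        }
    }
}
```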
[jira] [Created] (HDFS-11786) Add a new command for multi threaded Put/CopyFromLocal
Mukul Kumar Singh created HDFS-11786: Summary: Add a new command for multi threaded Put/CopyFromLocal Key: HDFS-11786 URL: https://issues.apache.org/jira/browse/HDFS-11786 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh CopyFromLocal/Put is not currently multithreaded. When there are multiple files to upload to HDFS, a single thread reads each file and copies the data to the cluster. The copy to HDFS can be made faster by uploading multiple files in parallel. I am attaching an initial patch so that I can get some early feedback.
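The parallel-upload idea can be sketched with a plain thread pool. The example below is a local-filesystem analogue under assumed semantics, not the attached patch: a real HDFS version would invoke FileSystem.copyFromLocalFile inside each task instead of java.nio Files.copy:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPut {
    /** Copy each source file into targetDir, using up to `threads` workers. */
    public static void putAll(List<Path> sources, Path targetDir, int threads)
            throws IOException, InterruptedException {
        Files.createDirectories(targetDir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> pending = new ArrayList<>();
        for (Path src : sources) {
            // Each copy runs as an independent task, so uploads overlap.
            pending.add(pool.submit(
                () -> Files.copy(src, targetDir.resolve(src.getFileName()))));
        }
        pool.shutdown();
        // Surface any copy failure instead of letting it vanish in a Future.
        for (Future<?> f : pending) {
            try {
                f.get();
            } catch (ExecutionException e) {
                throw new IOException("copy failed", e.getCause());
            }
        }
    }
}
```

Collecting the futures and calling `get()` on each is what turns a worker-thread failure back into an exception the caller sees, which a `put`-style command needs for a correct exit status.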
[jira] [Created] (HDFS-11785) Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type
Brahma Reddy Battula created HDFS-11785: Summary: Backport HDFS-9902 to branch-2.7: Support different values of dfs.datanode.du.reserved per storage type Key: HDFS-11785 URL: https://issues.apache.org/jira/browse/HDFS-11785 Project: Hadoop HDFS Issue Type: Bug Components: datanode Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Priority: Critical As per discussion in the [mailing list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], backport HDFS-9902 to branch-2.7.
[jira] [Created] (HDFS-11784) Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions
Brahma Reddy Battula created HDFS-11784: Summary: Backport HDFS-8312 to branch-2.7: Trash does not descent into child directories to check for permissions Key: HDFS-11784 URL: https://issues.apache.org/jira/browse/HDFS-11784 Project: Hadoop HDFS Issue Type: Bug Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Priority: Critical As per discussion in the [mailing list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], backport HDFS-8312 to branch-2.7.