[ https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15058065#comment-15058065 ]
Hudson commented on HDFS-9516:
------------------------------
SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #694 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/694/])
HDFS-9516. Truncate file fails with data dirs on multiple disks. (shv: rev 96d307e1e320eafb470faf7bd47af3341c399d55)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> truncate file fails with data dirs on multiple disks
> ----------------------------------------------------
>
> Key: HDFS-9516
> URL: https://issues.apache.org/jira/browse/HDFS-9516
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.1
> Reporter: Bogdan Raducanu
> Assignee: Plamen Jeliazkov
> Fix For: 2.9.0
>
> Attachments: HDFS-9516_1.patch, HDFS-9516_2.patch, HDFS-9516_3.patch,
> HDFS-9516_testFailures.patch, Main.java, truncate.dn.log
>
>
> FileSystem.truncate returns false (no exception), but the file is never closed
> and is not writable afterwards.
> This appears to be caused by copy-on-truncate, which is used because the
> system is in an upgrade state. In that case a rename between devices is
> attempted. See the attached log and repro code.
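> For illustration only (plain Java, not HDFS code; /disk1 and /disk2 are
> assumed mount points on different devices): an atomic rename cannot cross
> devices, which matches the failure mode described above.
> {code}
> import java.io.IOException;
> import java.nio.file.*;
>
> public class CrossDeviceRename {
>   public static void main(String[] args) throws IOException {
>     Path src = Paths.get("/disk1/blk_1001");
>     Path dst = Paths.get("/disk2/blk_1001");
>     Files.write(src, new byte[]{1, 2, 3});
>     try {
>       // A rename amounts to an atomic move; across file stores it cannot
>       // be performed atomically, so this throws instead of succeeding.
>       Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
>     } catch (AtomicMoveNotSupportedException e) {
>       System.out.println("rename failed across devices: " + e);
>     }
>   }
> }
> {code}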
> This probably also affects truncating a snapshotted file, where
> copy-on-truncate is likewise used.
> It may affect not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery:
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>     newBlockId, recoveryId, rur.getVolume(),  // volume of the replica under recovery...
>     blockFile.getParentFile(),                // ...but blockFile may sit on another volume
>     newlength);
> {code}
> blockFile is created by copyReplicaWithNewBlockIdAndGS, which is allowed to
> choose any volume, so rur.getVolume() is not necessarily the volume where the
> block is actually located.
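> One possible direction for a fix (sketch only; everything except
> copyReplicaWithNewBlockIdAndGS and rur.getVolume() is an assumption about the
> surrounding FsDatasetImpl code, not the actual patch):
> {code}
> // Inside copyReplicaWithNewBlockIdAndGS: instead of picking an arbitrary
> // target volume for the copy, e.g.
> //   FsVolumeImpl v = volumes.getNextVolume(...);   // hypothetical original
> // pin the copy to the volume of the replica under recovery, so the later
> // rename never has to cross devices:
> FsVolumeImpl v = (FsVolumeImpl) rur.getVolume();
> {code}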
>