[
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=582333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-582333
]
ASF GitHub Bot logged work on HADOOP-17611:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 14/Apr/21 10:03
Start Date: 14/Apr/21 10:03
Worklog Time Spent: 10m
Work Description: amaroti edited a comment on pull request #2897:
URL: https://github.com/apache/hadoop/pull/2897#issuecomment-819387790
@bgaborg
I have tested this manually using two clusters. I have not yet looked
into what the unit tests look like for Hadoop; I will take a look at that. Also,
Ayush Saxena (@ayushtkn) made some useful points on the Jira ticket that I
will look at shortly:
https://issues.apache.org/jira/browse/HADOOP-17611?focusedCommentId=17320445&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17320445
> It seems there are two PRs, more or less doing the same thing. I just
glanced at the second one.
>
> So, whoever plans to chase this, here are a couple of points to keep in mind:
>
> We need a test in AbstractContractDistCpTest that all the FileSystems
can also use.
> It should cover two scenarios: when preserve-time is specified, a parallel
copy should preserve the modification time; and when it is not specified, it
shouldn't. The latter case works correctly as of now, so the test is there to
make sure we don't change that behaviour.
> The parent modification time should be preserved only when the parent is in
the scope of the copy, not always. Say you are copying /dir/fil1 to /dir1/file2
using parallel copy; then we don't touch /dir1, AFAIK.
>
> The above are the basic requirements. The following, if possible, we
should also do:
>
> For parent directories, preserve the time only once: if you have 10K files
under a parent, don't call setTimes 10K times.
> And if parallel copy is enabled, there is no point in preserving the time
before the concat operation; we can save that call.
>
> This isn't a one-liner and throws up some challenges, so please decide who
wants to chase this and work together on one PR only.
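The "preserve only once per parent" point above amounts to simple deduplication of setTimes calls. A minimal sketch of that idea, using a hypothetical `shouldPreserveParent` helper that is not part of the actual DistCp code:

```java
import java.util.HashSet;
import java.util.Set;

public class ParentMtimeDedup {
    private final Set<String> preservedParents = new HashSet<>();

    // Hypothetical helper: returns true only the first time a given parent
    // directory is seen, so a setTimes call would run once per parent
    // instead of once per file (10K files under one parent -> one call).
    boolean shouldPreserveParent(String parentPath) {
        return preservedParents.add(parentPath);
    }

    public static void main(String[] args) {
        ParentMtimeDedup d = new ParentMtimeDedup();
        System.out.println(d.shouldPreserveParent("/dir1")); // true: first file under /dir1
        System.out.println(d.shouldPreserveParent("/dir1")); // false: already handled
        System.out.println(d.shouldPreserveParent("/dir2")); // true: new parent
    }
}
```

In a real implementation the set would have to live at commit scope (e.g. in CopyCommitter) so it covers all files in the job, and the parent would still need to be checked against the copy scope per the point above.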
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 582333)
Time Spent: 2h 50m (was: 2h 40m)
> Distcp parallel file copy breaks the modification time
> ------------------------------------------------------
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Adam Maroti
> Assignee: Adam Maroti
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> The commit HADOOP-11794, "Enable distcp to copy blocks in parallel"
> (bf3fb585aaf2b179836e139c041fc87920a3c886), broke the modification time of
> large files.
>
> In CopyCommitter.java, inside concatFileChunks, FileSystem.concat is called,
> which changes the modification time; therefore the modification times of files
> copied by distcp will not match the source files. However, this only occurs
> for files large enough to be split into chunks by distcp.
> In concatFileChunks, before calling concat, extract the modification time and
> apply it to the concatenated result file after the concat (probably best
> before the rename()).
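The suggested fix, capture the modification time before the concat and restore it afterwards, can be sketched as follows. This is a minimal, self-contained illustration using java.nio.file on the local filesystem rather than Hadoop's FileSystem.concat; the `concat` helper here is a hypothetical stand-in for the chunk merge in CopyCommitter.concatFileChunks.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.FileTime;

public class PreserveMtimeDemo {
    // Hypothetical stand-in for the chunk merge: appends the chunk into the
    // target and deletes the chunk. Like FileSystem.concat, this updates the
    // target's modification time as a side effect.
    static void concat(Path target, Path chunk) throws IOException {
        Files.write(target, Files.readAllBytes(chunk), StandardOpenOption.APPEND);
        Files.delete(chunk);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("distcp-demo");
        Path target = Files.write(dir.resolve("part-0"), "hello ".getBytes());
        Path chunk = Files.write(dir.resolve("part-1"), "world".getBytes());

        // Pretend the source file carried this modification time.
        FileTime sourceMtime = FileTime.fromMillis(1_000_000_000_000L);
        Files.setLastModifiedTime(target, sourceMtime);

        // 1. Capture the modification time before the concat...
        FileTime before = Files.getLastModifiedTime(target);
        // 2. ...run the concat, which clobbers it...
        concat(target, chunk);
        // 3. ...then restore it, before any final rename would happen.
        Files.setLastModifiedTime(target, before);

        System.out.println(Files.getLastModifiedTime(target).equals(sourceMtime));
        System.out.println(new String(Files.readAllBytes(target)));
    }
}
```

In the real CopyCommitter the time to restore would come from the source FileStatus (so the result matches the source, not merely the pre-concat chunk), via FileSystem.getFileStatus and FileSystem.setTimes.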
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]