[ https://issues.apache.org/jira/browse/HADOOP-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12517669 ]

Konstantin Shvachko commented on HADOOP-1506:
---------------------------------------------

What is the reason for keeping the same block size in the target file?
The target file system may have a different default block size.
Why would we want to go against that default in this case?
For example, if we copy a file from ext2 with 1 KB blocks to ext3 with 8 KB
blocks, we make no attempt to preserve block sizes.


> distcp not preserving the replication factor and block size of source files
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-1506
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1506
>             Project: Hadoop
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 0.12.3
>            Reporter: Koji Noguchi
>            Priority: Minor
>
> Maybe not a bug, but a feature request.
> It would be nice if the source file and the target file had the same
> replication factor and block size.
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
