[ https://issues.apache.org/jira/browse/HDFS-684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12781766#action_12781766 ]

Rodrigo Schmidt commented on HDFS-684:
--------------------------------------

The code would be simpler if we had one HAR file for every policy, but I
definitely understand your point about having one file per directory. I'm
working on it.
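
As an illustration only (not the layout this patch will use): a minimal sketch
of reading a per-directory parity archive back through Hadoop's HarFileSystem.
The /raid destination and the archive name are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListParityHar {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical per-directory archive holding the parity files for
        // everything RAIDed under /user/foo/bar.
        Path har = new Path("har:///raid/user/foo/bar_parity.har");
        FileSystem harFs = har.getFileSystem(conf);
        // The NameNode only tracks the archive's index and part files; the
        // individual parity entries are resolved client-side by HarFileSystem.
        for (FileStatus stat : harFs.listStatus(har)) {
          System.out.println(stat.getPath() + "\t" + stat.getLen());
        }
      }
    }

One appeal of the per-directory grouping is that rebuilding an archive after
changes stays local to that directory instead of touching every file covered
by a policy.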

> Use HAR filesystem to merge parity files 
> -----------------------------------------
>
>                 Key: HDFS-684
>                 URL: https://issues.apache.org/jira/browse/HDFS-684
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: contrib/raid
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>
> The HDFS raid implementation (HDFS-503) creates a parity file for every file
> that is RAIDed. This puts an additional burden on the memory requirements of
> the namenode. It would be nice if the parity files were combined using the
> HadoopArchive (har) format.
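
A hedged sketch (not part of the patch) of packing the parity files under one
directory into a single archive with Hadoop's HadoopArchives tool. The paths,
the archive name, and the exact argument syntax (notably the -p parent option)
vary with the Hadoop version and are assumptions here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tools.HadoopArchives;
    import org.apache.hadoop.util.ToolRunner;

    public class PackParityDir {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pack the (hypothetical) parity directory /raid/user/foo/bar into
        // bar_parity.har under /raid/user/foo. The NameNode then holds a few
        // HAR index/part entries instead of one entry per parity file.
        int rc = ToolRunner.run(new HadoopArchives(conf), new String[] {
            "-archiveName", "bar_parity.har",
            "-p", "/raid/user/foo", "bar",
            "/raid/user/foo"
        });
        System.exit(rc);
      }
    }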

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
