[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370307#comment-14370307 ]

Zhe Zhang commented on HDFS-7285:
---------------------------------

I had another offline discussion with [~andrew.wang] around the storagePolicy 
vs. zone topic. We agreed that it's a difficult decision because it requires 
predicting production usage patterns. The desired EC setup might not always 
align with directories. E.g., a directory could contain both big files 
(suitable for striping) and small ones (which would incur heavy NN overhead 
under striping). In this case, we can keep the directory policy non-EC, so 
that only big files need to carry the EC policy in their XAttrs; this adds 
little NN overhead since only a small fraction of files are big. As a 
follow-on optimization, we could even set up a size-based policy for 
automatic conversion. I'll look at a few applications like HBase / Hive to 
get a better understanding.
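
To make the per-file idea concrete, here is a minimal sketch of tagging a big 
file through the public {{FileSystem}} XAttr API. The XAttr name 
({{user.ec.policy}}) and the size threshold are hypothetical placeholders, not 
anything defined by a patch here:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcTaggingSketch {
  // Hypothetical XAttr name and size threshold; the real ones are TBD.
  private static final String EC_POLICY_XATTR = "user.ec.policy";
  private static final long STRIPING_THRESHOLD = 256L * 1024 * 1024; // 256 MB

  /**
   * Tag only big files with an EC policy, leaving the parent directory
   * policy non-EC, so small files add no per-file XAttr overhead on the NN.
   */
  public static void tagIfBig(FileSystem fs, Path file, String policy)
      throws IOException {
    FileStatus status = fs.getFileStatus(file);
    if (status.getLen() >= STRIPING_THRESHOLD) {
      fs.setXAttr(file, EC_POLICY_XATTR,
          policy.getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}

The size-based automatic conversion mentioned above could reuse the same 
threshold check as its trigger.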

I think we can follow an incremental development plan:
# We can start with a simple zone-like policy as Jing 
[proposed|https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14366293&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14366293] 
above. In this step we don't even need to fully implement the enforcement of 
zone constraints (empty directory, no nesting, etc.; see the sketch after 
this list).
# After collecting potential usage patterns (in terms of directory structure), 
we'll decide whether the use cases of per-file and nested EC configuration are 
important enough. Based on that, we'll either fully implement the zone 
constraints or implement fine-grained EC policies.
# We'll finally decide whether and how to integrate with other storage policies.
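
For step 1, the zone constraints boil down to two checks. A minimal, 
illustration-only sketch (an in-memory, made-up {{EcZoneSketch}} class, not 
actual NameNode code):

{code:java}
import java.util.HashSet;
import java.util.Set;

/**
 * Illustration only: the two zone constraints from step 1 above.
 * An EC zone may only be created on an empty directory, and zones
 * may not be nested inside one another.
 */
public class EcZoneSketch {
  private final Set<String> zoneRoots = new HashSet<>();

  public void createZone(String dir, boolean dirIsEmpty) {
    if (!dirIsEmpty) {
      throw new IllegalArgumentException(
          "EC zone root must be an empty directory: " + dir);
    }
    for (String root : zoneRoots) {
      if (contains(root, dir) || contains(dir, root)) {
        throw new IllegalArgumentException(
            "EC zones may not nest: " + dir + " vs. " + root);
      }
    }
    zoneRoots.add(dir);
  }

  // True if 'path' equals 'ancestor' or lies under it.
  private static boolean contains(String ancestor, String path) {
    String prefix = ancestor.endsWith("/") ? ancestor : ancestor + "/";
    return path.equals(ancestor) || path.startsWith(prefix);
  }
}
{code}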

Thoughts?

> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, a 10+4 Reed-Solomon code tolerates the loss of any 4 blocks in a 
> group at a storage overhead of only 40% (4 parity blocks per 10 data 
> blocks), versus 200% for 3-way replication. This makes EC a quite 
> attractive alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and 
> depends on MapReduce to run encoding and decoding tasks; 2) it can only be 
> used for cold files that will not be appended to anymore; 3) its pure Java 
> EC coding implementation is extremely slow in practical use. For these 
> reasons, simply bringing HDFS-RAID back might not be a good idea.
> We (Intel and Cloudera) are working on a design that builds EC into HDFS 
> itself, free of external dependencies, so that it is self-contained and 
> independently maintained. This design layers the EC feature on top of the 
> storage type support and aims for compatibility with existing HDFS features 
> such as caching, snapshots, encryption, and high availability. It will also 
> support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g., the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding, making the EC solution even more attractive. We 
> will post the design document soon. 



