[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294813#comment-14294813 ]

Vinayakumar B commented on HDFS-7285:
-------------------------------------

Hi [~zhz]

Design doc updates might be required for some points.

1. DISK-EC was added with the intention of identifying the parity blocks in 
the case of non-striped encoding. But now, IMO, the logical storage type 
DISK-EC is no longer required, since a block can be identified as either 
parity or original using the BlockGroup.
2. blockStoragePolicydefault.xml is no longer in the code base, and storage 
policies are no longer user-configurable. It was removed before merging 
HDFS-6584 to trunk; instead, all the policies are hardcoded in 
BlockStoragePolicySuite.java.
3. {quote}Transition between erasurecoded and replicated forms can be done by 
changing the storage policy and triggering the Mover to enforce the new 
policy.{quote}
I think this is not applicable to the striped design. This should be 
completely controlled by the ECManager, right?

4. {quote}Under this framework, a unique storage policy should be defined for 
each codec schema. For example, if both 3of5 and 4of10 ReedSolomon coding are 
supported, policies RS3of5 and RS4of10 should be defined and they can be 
applied on different paths.{quote}
This also may not be applicable to the striped design, since the schema 
information will be saved inside the BlockGroup itself. So IMO there is no 
need for separate policies for each schema.

> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: ECAnalyzer.py, ECParser.py, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with a storage overhead of only 40%. This makes EC a 
> quite attractive alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that will no longer be appended to; 3) the pure-Java EC coding 
> implementation is extremely slow in practical use. For these reasons, it 
> might not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> such as caching, snapshots, encryption, and high availability. This design 
> will also support different EC coding schemes, implementations, and policies 
> for different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon. 
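For context, the overhead comparison in the description above reduces to simple ratios: n-way replication adds (n-1) extra copies, while a k+m erasure-coding schema adds m parity blocks per k data blocks. A minimal sketch (function names are illustrative, not HDFS code):

```python
def replication_overhead(replicas: int) -> float:
    """Extra storage as a fraction of raw data size for n-way replication."""
    # 3 replicas store the data 3 times, i.e. 2 extra copies -> 200% overhead.
    return float(replicas - 1)

def ec_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Extra storage as a fraction of raw data size for a k+m EC schema."""
    # k+m layout stores m parity blocks on top of k data blocks.
    return parity_blocks / data_blocks

print(replication_overhead(3))   # 3-replica HDFS: 2.0, i.e. 200% overhead
print(ec_overhead(10, 4))        # RS 10+4: 0.4, i.e. 40% overhead
```

The 10+4 Reed-Solomon layout can also reconstruct the data after losing any 4 of the 14 blocks, matching the fault tolerance of keeping 3 replicas (which survives 2 lost copies) at a fraction of the cost.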



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)