[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302591#comment-14302591 ]

Zhe Zhang commented on HDFS-7285:
---------------------------------

bq. I think it's incorrect. For example, we have a file, and its length is 
128MB. If we use the 6+3 schema, and the EC stripe cell size is 64KB, then we 
need (128*1024K)/(6*64K) = 342 block groups. 
Aah, I see where the confusion came from. Sorry that the design doc didn't 
clearly explain the different parameters. When the client writes to a striped 
file, the following 3 events happen (sketched in code after this list):
# Once the client accumulates 6*64KB of data, it does _not_ flush the data to 
the DNs. The client buffers the completed stripe and starts buffering the next 
6*64KB stripe.
# Once the client accumulates {{1024 / 64 = 16}} stripes -- that is, 1MB for 
each DN -- it flushes the buffered data out to the DNs.
# Once the data flushed to each DN reaches 128MB -- that is, {{128MB * 6 = 
768MB}} of data overall -- it allocates a *new block group* from the NN.
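
So in the 128MB-file example above, the whole file fits within a *single* 
block group ({{128MB < 768MB}}), rather than needing 342 of them. Below is a 
minimal sketch of the three thresholds, assuming the 6+3 schema with 64KB 
cells and 128MB blocks; the class and helper names here are made up for 
illustration, not the actual client code:

{code:java}
import java.io.ByteArrayOutputStream;

/**
 * Illustrative sketch of the three write-path events above. This is NOT the
 * real HDFS client -- class and helper names are hypothetical. Assumes the
 * 6+3 schema, 64KB cells, 1MB per-DN flushes, and 128MB per-DN blocks.
 */
public class StripedWriteSketch {
  static final int DATA_BLOCKS = 6;                             // 6+3 RS schema
  static final int CELL_SIZE = 64 * 1024;                       // 64KB cell
  static final int STRIPE_SIZE = DATA_BLOCKS * CELL_SIZE;       // 384KB stripe
  static final int STRIPES_PER_FLUSH = 16;                      // 16 stripes = 1MB per DN
  static final long BLOCK_SIZE = 128L * 1024 * 1024;            // 128MB per DN
  static final long GROUP_DATA_SIZE = BLOCK_SIZE * DATA_BLOCKS; // 768MB of data

  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private int stripesBuffered = 0;
  private long bytesInGroup = 0;  // data written into the current block group

  /** Feed one full 6*64KB stripe of user data. */
  void writeStripe(byte[] stripe) {
    assert stripe.length == STRIPE_SIZE;
    // Event 1: a completed stripe is only buffered, not flushed.
    buffer.write(stripe, 0, stripe.length);
    stripesBuffered++;

    // Event 2: after 16 stripes (1MB per DN), flush the buffer to the DNs.
    if (stripesBuffered == STRIPES_PER_FLUSH) {
      flushToDataNodes();  // hypothetical helper: 1MB to each of the 9 DNs
      buffer.reset();
      stripesBuffered = 0;
      bytesInGroup += (long) STRIPES_PER_FLUSH * STRIPE_SIZE;
    }

    // Event 3: once 768MB of data fills the group (128MB per DN),
    // ask the NN for a new block group.
    if (bytesInGroup == GROUP_DATA_SIZE) {
      allocateNewBlockGroupFromNameNode();  // hypothetical helper: RPC to NN
      bytesInGroup = 0;
    }
  }

  private void flushToDataNodes() { /* stream buffered cells + parity */ }
  private void allocateNewBlockGroupFromNameNode() { /* addBlock-style RPC */ }
}
{code}

Encoding of the 3 parity cells per stripe is omitted from the sketch.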

Section 2.1 of the QFS [paper | 
http://www.vldb.org/pvldb/vol6/p1092-ovsiannikov.pdf] has a pretty detailed 
explanation too. 

> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: ECAnalyzer.py, ECParser.py, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks at a storage overhead of only 40%. This makes EC an attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that will not be appended to anymore; 3) its pure Java EC 
> coding implementation is too slow for practical use. Given these drawbacks, 
> simply bringing HDFS-RAID back might not be a good idea.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> is free of external dependencies, self-contained, and independently 
> maintained. This design builds the EC feature on top of the storage-type 
> support and aims to be compatible with existing HDFS features such as 
> caching, snapshots, encryption, and high availability. It will also 
> support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding, making the EC solution even more attractive. We 
> will post the design document soon. 
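
To spell out the overhead figures quoted in the description above: with 10+4 
Reed-Solomon, every 10 data blocks are stored as {{10 + 4 = 14}} blocks, i.e. 
{{14/10 = 1.4x}} raw storage (40% overhead) while tolerating the loss of any 
4 blocks; 3-replication stores {{3x}} (200% overhead) and tolerates the loss 
of only 2 copies.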



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
