[ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13972276#comment-13972276 ]

Guo Ruijing commented on HDFS-2699:
-----------------------------------

What is the plan for including this improvement?

It would be nice to include this improvement. When this feature is implemented, we 
need to consider HDFS upgrade, since the block format changes.

The new block format could be:

BLOCK HEADER:

1. MAGIC_NUMBER (can be "HDFSBLOCK")
2. VERSION
3. CRC/COMPRESSION_TYPE
4. ONE BLOCK LENGTH  (for example, 512 bytes)
5. PADDING (optional)
6. DATA_OFFSET

BLOCK DATA (one record per chunk; the exact layout depends on CRC/COMPRESSION_TYPE; for example, with CRC32, see the sketch after this list):
1. RAW DATA   (502 bytes)
2. DATA LENGTH (2 bytes)
3. DATA CRC  (8 bytes)
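
To make the layout concrete, here is a minimal, illustrative writer for the proposed chunked format. It assumes CRC32 checksums, 512-byte chunks, and the header fields listed above; the class name, exact field sizes, and zero-padding of the final chunk are assumptions for the sketch, not an agreed-upon format.

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.CRC32;

// Illustrative writer for the proposed inline-checksum block layout:
// a header (magic, version, checksum type, chunk size, data offset)
// followed by fixed-size chunks of [raw data | data length | data CRC].
public class InlineChecksumBlockWriter {
    static final byte[] MAGIC = "HDFSBLOCK".getBytes();
    static final int VERSION = 1;
    static final int CHECKSUM_TYPE_CRC32 = 1;
    static final int CHUNK_SIZE = 512;                             // total bytes per on-disk chunk
    static final int CRC_SIZE = 8;                                 // CRC stored as a long
    static final int LEN_SIZE = 2;                                 // length stored as a short
    static final int RAW_SIZE = CHUNK_SIZE - CRC_SIZE - LEN_SIZE;  // 502 bytes of user data

    public static void writeBlock(String path, byte[] data) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(path))) {
            // BLOCK HEADER
            out.write(MAGIC);
            out.writeInt(VERSION);
            out.writeInt(CHECKSUM_TYPE_CRC32);
            out.writeInt(CHUNK_SIZE);
            int dataOffset = MAGIC.length + 4 + 4 + 4 + 4;  // header bytes before the first chunk
            out.writeInt(dataOffset);

            // BLOCK DATA: one 512-byte chunk per RAW_SIZE bytes of user data
            for (int pos = 0; pos < data.length; pos += RAW_SIZE) {
                int len = Math.min(RAW_SIZE, data.length - pos);
                byte[] chunk = Arrays.copyOfRange(data, pos, pos + len);

                CRC32 crc = new CRC32();
                crc.update(chunk, 0, len);

                out.write(chunk);                     // RAW DATA
                out.write(new byte[RAW_SIZE - len]);  // pad the final partial chunk (assumption)
                out.writeShort(len);                  // DATA LENGTH
                out.writeLong(crc.getValue());        // DATA CRC
            }
        }
    }
}

With this layout a reader can verify each 512-byte chunk with a single sequential read, which is the point of storing data and checksums together.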

How do we stay compatible with the existing format?

if (separate *.meta file exists) {
    use original format
} else {
    use new format
}
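
As a sketch, the DataNode could decide per block by probing for the legacy checksum sidecar. The helper below assumes the existing blk_<id>_<genstamp>.meta naming convention; the class and method names are illustrative only.

import java.io.File;

// If a legacy checksum sidecar exists next to the block file, read the block
// with the original two-file format; otherwise assume the new inline format.
public class BlockFormatDetector {
    public static boolean usesOriginalFormat(File blockFile, long genStamp) {
        File meta = new File(blockFile.getParent(),
                blockFile.getName() + "_" + genStamp + ".meta");
        return meta.exists();
    }
}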

> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the 
> metadata (checksum) in another block file. This means that every read from 
> HDFS actually consumes two disk iops, one to the datafile and one to the 
> checksum file. This is a major problem for scaling HBase, because HBase is 
> usually bottlenecked on the number of random disk iops that the 
> storage-hardware offers.


