[ https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chen Liang updated HDFS-11920:
------------------------------
    Attachment: HDFS-11920-HDFS-7240.008.patch

Some of the failed tests were related; they are fixed in the v008 patch. To 
any reviewer: the v008 patch is almost identical to v007, except for the 
following changes:
1. Changed the internal stream of OzoneInputStream from ChunkInputStream to 
ChunkGroupInputStream (see the sketch after this list).
2. Added an entry to ozone-default.xml.
3. KeyHandler's putKey now specifies the size of the key based on the data.
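
For reviewers, a minimal sketch of the idea behind change 1, assuming a 
simplified stand-in for ChunkGroupInputStream (the names and fields here are 
illustrative, not the exact code in the patch): the stream reads a key's data 
sequentially across one underlying stream per block.

import java.io.IOException;
import java.io.InputStream;
import java.util.List;

// Illustrative stand-in: reads a key's data across multiple per-block
// streams, advancing to the next stream once the current one is exhausted.
class ChunkGroupInputStreamSketch extends InputStream {
  private final List<InputStream> streams; // one stream per SCM block (assumed)
  private int current = 0;                 // index of the stream being read

  ChunkGroupInputStreamSketch(List<InputStream> streams) {
    this.streams = streams;
  }

  @Override
  public int read() throws IOException {
    while (current < streams.size()) {
      int b = streams.get(current).read();
      if (b != -1) {
        return b;                    // byte from the current block's stream
      }
      streams.get(current).close();  // current block exhausted; move on
      current++;
    }
    return -1;                       // all blocks consumed
  }
}

With this shape, OzoneInputStream can keep its public API and simply delegate 
to the group stream instead of a single ChunkInputStream.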

> Ozone : add key partition
> -------------------------
>
>                 Key: HDFS-11920
>                 URL: https://issues.apache.org/jira/browse/HDFS-11920
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>         Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch
>
>
> Currently, each key corresponds to a single SCM block, and putKey/getKey 
> writes to or reads from that single block. This works fine for keys with 
> reasonably small data sizes. However, if the data is too large (e.g., it 
> does not even fit into a single container), we need to be able to partition 
> the key data into multiple blocks, each in its own container. This JIRA 
> changes the key-related classes to support this (a sketch of the 
> partitioning idea follows below).
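
A minimal sketch of the partitioning idea, assuming a fixed per-block cap 
(BLOCK_SIZE and all names below are hypothetical, not the patch's actual API):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: split a key's data into block-sized slices so that
// no single slice exceeds what one block (and hence one container) holds.
class KeyPartitionSketch {
  static final int BLOCK_SIZE = 4 * 1024 * 1024; // assumed per-block cap

  /** Split the key's bytes into block-sized slices, one per SCM block. */
  static List<byte[]> partition(byte[] keyData) {
    List<byte[]> blocks = new ArrayList<>();
    for (int off = 0; off < keyData.length; off += BLOCK_SIZE) {
      int end = Math.min(off + BLOCK_SIZE, keyData.length);
      blocks.add(Arrays.copyOfRange(keyData, off, end));
    }
    return blocks;
  }
}

putKey would then write each slice to its own block, and getKey would read 
them back in order through a grouped stream like the one sketched above.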



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
