[ 
https://issues.apache.org/jira/browse/HDFS-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lu Yilei updated HDFS-1457:
---------------------------

    Attachment: checkpoint-limitandcompress.patch

If the fsimage is very large, the network can saturate for a short period while 
the SecondaryNameNode performs a checkpoint, causing the JobTracker's requests 
to the NameNode for file data to fail during the job initialization phase. To 
resolve this, we limit the transfer rate and compress the transferred image.
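The rate limiting described above can be sketched as a throttled stream copy: after each chunk is written, the copier sleeps until the average throughput falls back under the cap. This is an illustrative sketch, not the attached patch; the class name `ThrottledCopy` and the `bytesPerSecond` parameter are hypothetical.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Sketch of a rate-capped stream copy (illustrative, not the actual patch). */
public class ThrottledCopy {

    /**
     * Copies in to out, keeping the average rate at or below
     * bytesPerSecond (hypothetical parameter; must be > 0).
     * Returns the total number of bytes copied.
     */
    public static long copy(InputStream in, OutputStream out,
                            long bytesPerSecond) throws IOException {
        final long startNanos = System.nanoTime();
        final byte[] buf = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
            // Time by which 'total' bytes should have taken at the capped rate.
            long expectedNanos = total * 1_000_000_000L / bytesPerSecond;
            long sleepNanos = expectedNanos - (System.nanoTime() - startNanos);
            if (sleepNanos > 0) {
                try {
                    Thread.sleep(sleepNanos / 1_000_000L,
                                 (int) (sleepNanos % 1_000_000L));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted during throttled copy", e);
                }
            }
        }
        return total;
    }
}
```

A moving-window limiter would smooth bursts better, but an average-rate cap like this is enough to keep one checkpoint from monopolizing the link.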

The LZO compression codec is not included in the standard Hadoop package, so 
the default compression codec is GzipCodec.
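Since the default codec is gzip, the on-the-wire compression can be illustrated with the JDK's own GZIP streams (a stand-in here for Hadoop's GzipCodec; the `GzipTransfer` class name is hypothetical). An fsimage, being largely repetitive metadata, typically compresses well, which is what makes this worthwhile on a saturated link.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Sketch of gzip round-tripping image bytes (stand-in for GzipCodec). */
public class GzipTransfer {

    /** Compresses data with gzip and returns the compressed bytes. */
    public static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    /** Decompresses gzip data back to the original bytes. */
    public static byte[] decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }
}
```

In Hadoop itself the codec would be chosen via the CompressionCodec interface, so a cluster with LZO installed could swap it in by configuration.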

> Limit transmission rate when transferring image between primary and secondary 
> NNs
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-1457
>                 URL: https://issues.apache.org/jira/browse/HDFS-1457
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: Hairong Kuang
>             Fix For: 0.22.0
>
>         Attachments: checkpoint-limitandcompress.patch
>
>
> If the fsimage is very large, the network can saturate for a short period 
> while the SecondaryNameNode performs a checkpoint, causing the JobTracker's 
> requests to the NameNode for file data to fail during the job initialization 
> phase. To resolve this, we limit the transfer rate and compress the 
> transferred image.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
