Hi Harsh,

 

I’m using the HDFS client to write GZIP-compressed files. I want to write each file in a single write, so that I never have to decompress it; every write has to complete fully, otherwise the file ends up corrupted.

I tried raising the client’s write packet size to avoid partial writes, but that doesn’t help, because the packet size can’t be larger than 16 MB and my files are bigger than 16 MB.

That’s my problem.
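To make that concrete, my write path looks roughly like the sketch below (simplified; the local and HDFS paths are just placeholders, and the file is already gzipped before it reaches the client):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    import java.io.FileInputStream;
    import java.io.InputStream;

    public class GzipUpload {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Copy an already-gzipped local file into HDFS in one pass.
            // If the copy stops before all bytes are written, the resulting
            // .gz file is truncated and can no longer be decompressed.
            InputStream in = new FileInputStream("/local/data/part-0001.gz");
            FSDataOutputStream out = fs.create(new Path("/data/part-0001.gz"));
            try {
                IOUtils.copyBytes(in, out, 4096, false);
                out.close(); // only a fully written and closed file is usable
            } finally {
                IOUtils.closeStream(in);
                IOUtils.closeStream(out);
            }
        }
    }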

 

Thanks a lot for replying.

 

Regards,

Ken Huang

 

From: user-return-15182-tdhkx=126....@hadoop.apache.org
[mailto:user-return-15182-tdhkx=126....@hadoop.apache.org] On Behalf Of Harsh J
Sent: Monday, April 28, 2014 13:30
To: <user@hadoop.apache.org>
Subject: Re: hdfs write partially

 

Packets are chunks of the input you try to pass to the HDFS writer. What
problem exactly are you facing (or, why are you trying to raise the
client's write packet size)?

 

On Mon, Apr 28, 2014 at 8:52 AM, <td...@126.com> wrote:

Hello everyone,

 

The default dfs.client-write-packet-size is 64 KB, and it can’t be set larger than 16 MB.

So if I write more than 16 MB at a time, how can I make sure it isn’t written partially?
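The only client-side knob I have found is the packet size, which can be raised in the configuration but not past that limit. A rough sketch of what I mean (the 8 MB value is just an example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class PacketSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Default is 64 KB; the datanode side rejects packets larger
            // than 16 MB, so this can never cover a whole large file.
            conf.setInt("dfs.client-write-packet-size", 8 * 1024 * 1024);
            // Streams created from this FileSystem use the larger packets.
            FileSystem fs = FileSystem.get(conf);
        }
    }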

 

Does anyone know how to fix this?

 

Thanks a lot.

 

-- 

Ken Huang





 

-- 
Harsh J 
