Thanks for your response.

Are you suggesting I set RF=0 while reading or writing the file?

In any case, a write will fail with RF=0, since the minimum replication is one by default.

As for the read, I get an EOFException even with RF=0.

What puzzles me is that the block really did get corrupted (I checked with fsck, 
and the NameNode UI also shows the block as corrupt).

After reverting whatever changes I made, the read succeeds with FsShell commands 
(cat, text, and get); only readFully throws an EOFException.
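
For reference, this is roughly the read pattern that fails; a minimal sketch, with 
the path as a placeholder rather than my real value:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadFullyCheck {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/test/sample.txt"); // placeholder path

        // Size the buffer to the length the NameNode reports for the file.
        FileStatus status = fs.getFileStatus(file);
        byte[] buf = new byte[(int) status.getLen()];

        FSDataInputStream in = fs.open(file);
        try {
          // readFully throws EOFException if the stream ends before
          // buf.length bytes have been read.
          in.readFully(0, buf);
        } finally {
          in.close();
        }
      }
    }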


Please correct me if I am wrong.

Thanks and regards,

Brahma Reddy


________________________________
From: Srikanth [sxk7...@rit.edu]
Sent: Monday, April 30, 2012 3:29 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: checksum error

Hi,

When you set replication factor = 1, it tries to look for another DataNode to 
store a replica of what is stored in the only DataNode you have created.

Try using replication factor=0.
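
If you want to change it programmatically on an existing file, the FileSystem API 
call looks roughly like this (an untested sketch; the path and value are 
placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Ask the NameNode to change the target replication of an existing
        // file; values outside the configured min/max range are rejected.
        fs.setReplication(new Path("/user/test/sample.txt"), (short) 1);
      }
    }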


Srikanth Kommineni

On Apr 30, 2012, at 3:07, Brahma Reddy Battula 
<brahmareddy.batt...@huawei.com> wrote:

Hi


I started a Hadoop cluster with one NameNode and one DataNode, and wrote one file 
with replication factor one.
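
For completeness, the write was along these lines; a sketch, with the path, 
contents, and block size as placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteWithRf1 {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/test/sample.txt"); // placeholder

        // create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out =
            fs.create(file, true, 4096, (short) 1, 64L * 1024 * 1024);
        try {
          out.writeBytes("some test data\n");
        } finally {
          out.close();
        }
      }
    }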

Next, to provoke a checksum error, I edited the written file's block directly on 
the DataNode, at the location where the block is physically stored.
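
To be concrete about that step: I mean locating the blk_* file under the 
DataNode's data directory and changing its bytes in place (a sketch of one way to 
do it; the block path below is hypothetical):

    import java.io.RandomAccessFile;

    public class CorruptBlock {
      public static void main(String[] args) throws Exception {
        // Hypothetical path: the real blk_* file lives under the DataNode's
        // configured data directory and can be located via fsck's
        // -files -blocks -locations options.
        String blockFile = "/data/dfs/data/current/blk_1234567890";
        RandomAccessFile raf = new RandomAccessFile(blockFile, "rw");
        try {
          raf.seek(0);
          int first = raf.read();   // read the first byte
          raf.seek(0);
          raf.write(first ^ 0xFF);  // write back its bitwise complement
        } finally {
          raf.close();
        }
      }
    }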

When I then tried to read the file, the read failed with "could not obtain block" 
because the block was corrupted, and the DataNode logs showed a checksum error.

After that, I removed whatever I had edited from the block.



Then I tried to read the file again. Here I am able to read it with FsShell 
commands, even though fsck still reports the block as corrupt. But I get an 
EOFException when using the readFully API.
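
For comparison, the FsShell-style read that succeeds is just a sequential copy to 
EOF, roughly like this (a sketch; the path is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class StreamRead {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in =
            fs.open(new Path("/user/test/sample.txt")); // placeholder
        try {
          // Copies until EOF, like 'hadoop fs -cat'; it never demands a fixed
          // number of bytes, so it cannot fail with EOFException the way a
          // full-length readFully can.
          IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
          in.close();
        }
      }
    }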

After reverting, I no longer see any checksum errors in the DataNode logs.


Please let me know the expected behavior once the block is reverted to its 
original contents.




Thanks and regards,

Brahma Reddy
