Hello,

I have a Hadoop cluster set up with one NameNode and two DataNodes.
I continuously write, read, and delete files on HDFS through the Hadoop client, going via the NameNode.
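
For context, the client side does roughly the following (a simplified sketch; the NameNode URI, file path, and payload below are placeholders, not my exact values):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCycle {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path path = new Path("/tmp/sample.txt");

        // Write (overwrite if the file already exists).
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.writeUTF("sample payload");
        }

        // Read the file back.
        try (FSDataInputStream in = fs.open(path)) {
            System.out.println(in.readUTF());
        }

        // Delete (non-recursive), then close the FileSystem handle.
        fs.delete(path, false);
        fs.close();
    }
}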

Then I kill one of the DataNodes. The other one is still running, but every write request now fails.

I want to handle this scenario, because with live traffic any DataNode might go down at some point, so how should such cases be handled?
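
In case it helps, these are the kinds of hdfs-site.xml properties I suspect are involved; the values below are only illustrative, not copied from my cluster:

<configuration>
  <!-- Number of replicas requested for each block (example value). -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Policy for replacing a failed DataNode in the write pipeline;
       with only two DataNodes there may be no replacement available. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>DEFAULT</value>
  </property>
</configuration>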

Has anybody else faced this issue, or am I doing something wrong in my setup?

Thanks in advance.


Warm Regards,
Satyam
