It doesn't work like that. Kindly drop a mail to
"user-unsubscr...@hadoop.apache.org"
From: Yue Cheng
Sent: Saturday, July 1, 2017 2:47 AM
To: user@hadoop.apache.org
Subject: unsubscribe
Please sign me off.
Thanks.
1. Yes, those settings will ensure that the file is written to the available nodes.
2.
BlockManager: defaultReplication = 2
This is the default block replication that you configured on the server
(NameNode). The actual number of replicas can be specified when a file is
created; the default is used if replication is not specified at create time.
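For reference, the cluster-wide default seen in that log line comes from the `dfs.replication` property in hdfs-site.xml on the NameNode (stock Hadoop defaults it to 3). A minimal fragment matching the 2-replica setup described above might look like:

```xml
<!-- hdfs-site.xml: cluster-wide default replication factor -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

An individual file can still override this at write time, or afterwards with `hdfs dfs -setrep`, e.g. `hdfs dfs -setrep 3 /path/to/file`.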
Hi,
I have an HDFS cluster with two masters and three datanodes, all running on
AWS EC2 instances.
I have to test High Availability of the datanodes, i.e., if a datanode dies
during a load run while data is being written to HDFS, there should be no
data loss. The two remaining datanodes which are still alive should take
care of the writes.
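The intuition behind that test can be sketched outside Hadoop: with a replication factor of 2 and three datanodes, every block has replicas on two distinct nodes, so losing any single node still leaves one live copy per block. The simulation below is an illustrative sketch, not Hadoop code; the round-robin placement and node names are assumptions for the example.

```python
# Illustrative sketch (not Hadoop internals): simulate block placement
# with replication factor 2 across 3 datanodes, then fail one node and
# check that every block still has at least one surviving replica.
import itertools

REPLICATION = 2
DATANODES = ["dn1", "dn2", "dn3"]  # hypothetical node names

def place_blocks(num_blocks, nodes, replication):
    """Round-robin placement: each block gets `replication` distinct nodes."""
    placement = {}
    cycle = itertools.cycle(nodes)
    for block in range(num_blocks):
        placement[block] = {next(cycle) for _ in range(replication)}
    return placement

def survives_failure(placement, dead_node):
    """True if every block keeps at least one replica off the dead node."""
    return all(replicas - {dead_node} for replicas in placement.values())

placement = place_blocks(10, DATANODES, REPLICATION)
# Any single-node failure leaves each block with at least one live copy.
print(all(survives_failure(placement, dn) for dn in DATANODES))  # True
```

On a real cluster, the equivalent check after killing a datanode would be `hdfs fsck / -files -blocks`, which reports missing or under-replicated blocks; the NameNode then re-replicates under-replicated blocks onto the surviving nodes.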