On 26.01.2015 10:46, Azuryy Yu wrote:
can you file an issue to add this configuration to the hdfs-default.xml?
Done with
https://issues.apache.org/jira/browse/HDFS-7685
Cheers,
Frank
Note that there is a difference between being dead and being stale: stale
means "avoid as much as possible", while dead means "avoid absolutely" AND
initiate a recovery, i.e. re-replicate all the data (typically 1 TB or more).
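If the real goal is just to steer reads/writes away from a flaky node quickly, the stale mechanism mentioned above has its own knobs in hdfs-site.xml. The key names below come from hdfs-default.xml; the values are illustrative, not recommendations:

```xml
<!-- Treat a datanode as stale if no heartbeat for this long,
     and prefer other replicas for reads/writes. -->
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value> <!-- milliseconds; 30 s is the default -->
</property>
```

Unlike lowering the dead timeout, marking a node stale never triggers re-replication, so a brief hiccup does not cause terabytes of block copies.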
There is some info on this blog entry:
Hi Frank,
can you file an issue to add this configuration to the hdfs-default.xml?
On Mon, Jan 26, 2015 at 5:39 PM, Frank Lanitz frank.lan...@sql-ag.de
wrote:
Hi,

On 23.01.2015 19:23, Chris Nauroth wrote:
The time period for determining if a datanode is dead is calculated as a
function of a few different configuration properties. The current
implementation in DatanodeManager.java does it like this:

final long heartbeatIntervalSeconds =
Hi Frank,
The time period for determining if a datanode is dead is calculated as a
function of a few different configuration properties. The current
implementation in DatanodeManager.java does it like this:
final long heartbeatIntervalSeconds = conf.getLong(
    DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
    DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT);
final int heartbeatRecheckInterval = conf.getInt(
    DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY,
    DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_DEFAULT); // 5 minutes
this.heartbeatExpireInterval = 2 * heartbeatRecheckInterval
    + 10 * 1000 * heartbeatIntervalSeconds;
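In other words, the dead-node timeout works out to 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval (with the first in milliseconds and the second in seconds). A small standalone sketch of that arithmetic — class and method names are mine, not Hadoop's:

```java
// Illustration of the NameNode's dead-node timeout formula.
// This mirrors the arithmetic quoted from DatanodeManager.java;
// it is not the Hadoop source itself.
public class HeartbeatExpiry {

    // heartbeatIntervalSeconds: dfs.heartbeat.interval (default 3 s)
    // recheckIntervalMillis:    dfs.namenode.heartbeat.recheck-interval
    //                           (default 300000 ms = 5 min)
    static long expireIntervalMillis(long heartbeatIntervalSeconds,
                                     long recheckIntervalMillis) {
        return 2 * recheckIntervalMillis
                + 10 * 1000 * heartbeatIntervalSeconds;
    }

    public static void main(String[] args) {
        // With the defaults: 2 * 300000 + 10 * 3000 = 630000 ms = 10.5 min,
        // which matches the "about 10 min" observed in practice.
        System.out.println(expireIntervalMillis(3, 300_000)); // 630000
    }
}
```

So to shorten dead detection you lower dfs.namenode.heartbeat.recheck-interval (and possibly dfs.heartbeat.interval), rather than looking for a single "dead timeout" property.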
Hi,

I'm trying to configure the time after which a datanode is considered dead.
Currently it appears to be set to about 10 minutes, which is a little too
high for my scenario. As I wasn't able to find an obvious flag, I've tried
setting a few properties that might do that, without success.
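For reference, the two properties that feed the timeout formula are set in hdfs-site.xml. As an illustrative (not recommended) example, these values would bring dead detection down to roughly one minute:

```xml
<!-- hdfs-site.xml: illustrative values only -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>15000</value> <!-- milliseconds -->
</property>
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value> <!-- seconds -->
</property>
<!-- dead timeout = 2 * 15000 ms + 10 * 3 s = 60 s -->
```

Be aware that an aggressive timeout means a short network blip can trigger full re-replication of everything the node stored.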