This is solved in Hadoop 3, so stay tuned.
Best,
On Feb 2, 2018 6:26 AM, "李立伟" wrote:
> Hi:
> It's my understanding that an HDFS write operation is not considered
> completed until all of the replicas have been successfully written. If so,
> does the replication factor
Hi,
I would like to have a list of issues we generally face in HDFS and how we can
make it a self-healing distributed file system. I do know HDFS has a few
self-healing features, such as re-replicating under-replicated blocks.
But there are still multiple issues a Hadoop administrator
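As an aside on the self-healing behavior mentioned above: besides automatic re-replication of under-replicated blocks, HDFS exposes a few related resilience knobs in hdfs-site.xml. The fragment below is a minimal illustrative sketch (the value shown is an example, not a recommendation from this thread):

```xml
<!-- Illustrative hdfs-site.xml fragment; value is an example only. -->
<configuration>
  <!-- Number of failed disk volumes a DataNode tolerates before it
       shuts itself down (default 0: any volume failure stops the node).
       Raising it lets a node keep serving from its remaining disks. -->
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>
</configuration>
```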
Nishchay Malhotra, what scheduler are you using? Also, what are the settings
for each queue?
From: Billy Watson
To: nishchay malhotra
Cc: "common-u...@hadoop.apache.org"
Sent: Tuesday, January 30, 2018
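For context on the queue-settings question above: with the YARN Capacity Scheduler, per-queue settings live in capacity-scheduler.xml. A minimal illustrative fragment (the queue names and percentages here are hypothetical, not taken from the thread) looks like:

```xml
<!-- Illustrative capacity-scheduler.xml fragment; queue names and
     capacities are hypothetical examples. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analytics</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analytics.capacity</name>
    <value>40</value>
  </property>
</configuration>
```

Capacities under one parent queue must sum to 100, so the answer to "what are the settings for each queue" usually starts with these per-queue capacity percentages.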
Hi:
It's my understanding that an HDFS write operation is not considered
completed until all of the replicas have been successfully written. If so,
does the replication factor affect the write latency? Will MapReduce/Spark
tasks be affected?
Is there a way to set HDFS to write the first
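On the latency question: HDFS writes each block through a pipeline (client to the first DataNode, which forwards to the second, and so on), and the client's write is acknowledged only after the last node in the pipeline acks, so a higher replication factor can add write latency. Two related hdfs-site.xml settings are sketched below; the values are illustrative defaults, not a recommendation:

```xml
<!-- Illustrative hdfs-site.xml fragment; values are examples only. -->
<configuration>
  <!-- Default replication factor applied to newly created files -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Minimum number of replicas that must exist for a block write
       to be considered successful by the NameNode -->
  <property>
    <name>dfs.namenode.replication.min</name>
    <value>1</value>
  </property>
</configuration>
```

Replication can also be overridden per file at write time (for example, `hdfs dfs -D dfs.replication=1 -put local.txt /path`), which is one way to trade durability for lower write latency on non-critical data.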