Hi all

I "dfs put" a large dataset onto a 10-node cluster.

When I watch the progress in the Hadoop web UI (port 50070) and check each
node's local file system (via df -k), I notice that my master node is hit
5-10 times harder than the others, so its hard drive fills up much faster.
During last night's load it actually crashed when the hard drive became full.
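
In case it matters, this is roughly how I am checking per-node usage (the hostname and data directory below are just placeholders):

    # DFS capacity/used per datanode, as reported by the namenode
    hadoop dfsadmin -report

    # local disk usage on an individual node
    ssh node01 df -k /hadoop/dfs/data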

To my understanding, the data should be spread evenly across all the nodes
(roughly round-robin, in 64 MB blocks).
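
I assume I can confirm where the blocks actually ended up with something like this (again, the path is a placeholder):

    # list every block of the dataset and which datanodes hold it
    hadoop fsck /user/hadoop/big_dataset -files -blocks -locations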

Is this expected Hadoop behavior? Can anyone suggest a good way to
troubleshoot it?

Thanks


