Hi Konstantin, sorry for my mistake: it was not 5012, it was 512.
Of course, it would be great if the throughput were MB/sec per client, as you said; in that case we would have about 120 MB/sec :clap: But I'm not sure that is really the case. Please follow my example and throughput calculation:

> hadoop-0.18.0/bin/hadoop jar testDFSIO.jar -write -fileSize 512 -nrFiles 4

The reported value of throughput = 30.16. The information in /benchmarks/TestDFSIO/io_write/part-00000 is:

f:rate    121631.625
f:sqrate  3726004.8
l:size    2147483648
l:tasks   4
l:time    67900

In the source code, throughput is calculated as:

throughput = size * 1000.0 / (time * MEGA)

So in this case throughput = 2147483648 * 1000 / (67900 * MEGA) = 2048 * 1000 / 67900 = 30.16. Because the value of "size" is 2048 MB (the total across all 4 files) and not 512 MB (a single file), I'm not sure the result really is per client. Can you give me a hint again? Thanks a lot.

Tien Duc Dinh

--
View this message in context: http://www.nabble.com/Re%3A-TestDFSIO-delivers-bad-values-of-%22throughput%22-and-%22average-IO-rate%22-tp21322404p21339931.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
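For what it's worth, here is a small standalone sketch of the calculation above, using the formula quoted from the TestDFSIO source and the values from part-00000 (the class name and helper method are mine, not from Hadoop):

```java
import java.util.Locale;

// Sketch of the TestDFSIO throughput formula discussed above:
//   throughput (MB/sec) = size * 1000.0 / (time * MEGA)
// where size is in bytes, time is in milliseconds, and MEGA = 2^20.
public class ThroughputCheck {
    static final double MEGA = 0x100000; // 1048576 bytes per MB

    // Aggregate throughput over all tasks, in MB/sec.
    static double throughput(long sizeBytes, long timeMillis) {
        return sizeBytes * 1000.0 / (timeMillis * MEGA);
    }

    public static void main(String[] args) {
        long size = 2147483648L; // l:size: 4 files x 512 MB = 2048 MB total
        long time = 67900L;      // l:time: total milliseconds
        // Prints roughly 30.16, matching the reported value.
        System.out.printf(Locale.ROOT, "throughput = %.2f MB/sec%n",
                throughput(size, time));
    }
}
```

Since `size` here is the total over all 4 files, this 30.16 MB/sec figure is the aggregate across the tasks, not a per-client number.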
