I should add that your test should both create and delete files.

Raghu.

Raghu Angadi wrote:
Sandeep Dhawan wrote:
Hi,

I am trying to create a Hadoop cluster which can handle 2000 write requests
per second.
In each write request I would be writing a line of size 1KB to a file.

This is essentially a matter of deciding how many datanodes (with the given configuration) you need to write 3*2000*2 files per second (assuming each 1KB line is a separate HDFS file).

You can test this on a single datanode. For example, if your datanode supports 1000 1KB files per second (even with multiple processes creating files at the same time), then you need 12 datanodes (+ any factor of safety you want to add).
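The single-node test Raghu describes could be sketched roughly like this. Note this is a hypothetical sketch that times create+delete of 1KB files on the local filesystem via java.nio; a real HDFS test would go through org.apache.hadoop.fs.FileSystem against a running datanode, and the class name and file count here are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TinyFileBench {
    // Creates and then deletes `numFiles` files of 1KB each in a temp
    // directory, returning the achieved operations per second
    // (counting creates + deletes, per the create-and-delete advice above).
    static double benchmark(int numFiles) throws IOException {
        byte[] payload = new byte[1024]; // one 1KB "line"
        Path dir = Files.createTempDirectory("tinybench");
        long start = System.nanoTime();
        for (int i = 0; i < numFiles; i++) {
            Files.write(dir.resolve("f" + i), payload);
        }
        for (int i = 0; i < numFiles; i++) {
            Files.delete(dir.resolve("f" + i));
        }
        double secs = (System.nanoTime() - start) / 1e9;
        Files.delete(dir); // clean up the temp directory
        return (2.0 * numFiles) / secs;
    }

    public static void main(String[] args) throws IOException {
        System.out.printf("~%.0f file ops/sec%n", benchmark(1000));
    }
}
```

Run several of these concurrently against one datanode to find its saturation point, then divide the required 3*2000*2 files/second by that number to size the cluster.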

How many nodes or disks do you have approximately?

Raghu.


I would be using machines with the following configuration:
Platform: Red Hat Linux 9.0
CPU: 2.07 GHz
RAM: 1GB

Can anyone give me some pointers/guidelines on how to go about
setting up such a cluster?
What are the configuration parameters in Hadoop that we can tweak to
enhance the performance of the cluster?
Thanks,
Sandeep
