Hi Brian,

That is exactly my issue, i.e. "How do I ascertain the bottleneck?" In other words, if the results obtained from performance testing are not up to the mark, how do I find the bottleneck?
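One concrete way to start answering that question is to watch where the machine spends its time while the load test is running. Below is a minimal sketch (my own, not part of any Hadoop tooling) that samples /proc/stat on Linux over an interval: a high "iowait" share usually points at the disks as the hotspot, while high "busy" with low iowait points at the CPU. It assumes the standard /proc/stat first-line layout (cpu user nice system idle iowait ...):

```python
#!/usr/bin/env python
# Minimal bottleneck probe: sample /proc/stat twice and report how the
# CPU time between the two samples was spent. Assumes Linux.
import time

def read_cpu_ticks():
    # First line of /proc/stat: "cpu user nice system idle iowait irq ..."
    with open("/proc/stat") as f:
        parts = f.readline().split()
    vals = [int(x) for x in parts[1:]]
    idle, iowait = vals[3], vals[4]
    return idle, iowait, sum(vals)

def cpu_breakdown(interval=1.0):
    """Return (busy_pct, iowait_pct) over the sampling interval."""
    idle1, iow1, tot1 = read_cpu_ticks()
    time.sleep(interval)
    idle2, iow2, tot2 = read_cpu_ticks()
    dt = (tot2 - tot1) or 1
    iowait_pct = 100.0 * (iow2 - iow1) / dt
    busy_pct = 100.0 * (dt - (idle2 - idle1) - (iow2 - iow1)) / dt
    return busy_pct, iowait_pct

if __name__ == "__main__":
    busy, iowait = cpu_breakdown()
    print("cpu busy: %.1f%%  iowait: %.1f%%" % (busy, iowait))
```

Running this (or simply `vmstat 1` / `iostat -x 1` from the sysstat package) on each node during the write test shows which resource hits its ceiling first; that resource is the bottleneck to attack next.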
How can we confidently say that the OS and hardware are the culprits? I understand that using the latest OS and hardware can improve performance irrespective of the application, but my real worry is "what next?" How can I further increase the performance? What should I look for that would suggest or point to the areas which could be potential problems, or "hotspots"?

Thanks for your comments.

~Sandeep~

Brian Bockelman wrote:
>
> Hey Sandeep,
>
> I would warn against premature optimization: first, run your test,
> then see how far from your target you are.
>
> Of course, I'd wager you'd find that the hardware you are using is
> woefully underpowered and that your OS is 5 years old.
>
> Brian
>
> On Dec 30, 2008, at 5:57 AM, Sandeep Dhawan wrote:
>
>>
>> Hi,
>>
>> I am trying to create a Hadoop cluster which can handle 2000 write
>> requests per second.
>> In each write request I would be writing a line of size 1KB to a file.
>>
>> I would be using machines with the following configuration:
>> Platform: Red Hat Linux 9.0
>> CPU: 2.07 GHz
>> RAM: 1GB
>>
>> Can anyone give me some pointers/guidelines on how to go about
>> setting up such a cluster?
>> What are the configuration parameters in Hadoop which we can tweak to
>> enhance the performance of the cluster?
>>
>> Thanks,
>> Sandeep
>> --
>> View this message in context:
>> http://www.nabble.com/Performance-testing-tp21216266p21216266.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>

--
View this message in context: http://www.nabble.com/Performance-testing-tp21216266p21228264.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
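For reference, a quick back-of-envelope check of the target stated in the original message (2000 write requests per second, 1 KB each) shows that the raw byte rate is modest; the likely pressure point is the request rate rather than the bandwidth:

```python
# Back-of-envelope for the stated target:
# 2000 write requests/second, 1 KB per request.
requests_per_sec = 2000
bytes_per_request = 1024  # 1 KB

throughput = requests_per_sec * bytes_per_request  # bytes/second
print("aggregate write rate: %.1f MB/s" % (throughput / 1024.0 / 1024.0))
# -> aggregate write rate: 2.0 MB/s
```

Roughly 2 MB/s of sequential writing is well within a single disk's bandwidth, so if the test falls short, per-request overhead (RPC, seeks, memory pressure on a 1 GB machine) is the more plausible place to look than raw disk throughput.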
