On 28/03/11 23:34, JunYoung Kim wrote:
hi,

This link is about good practices for Hadoop usage:

http://developer.yahoo.com/blogs/hadoop/posts/2010/08/apache_hadoop_best_practices_a/
 by Arun C Murthy

If I want to use about 50,000 counters for a job, will it cause a serious
performance degradation?




Yes, you will use up a lot of JobTracker (JT) memory and so put limits on the overall size of your cluster.

If you have a small cluster and can crank up the memory settings on the JT to 48 GB, this isn't going to be an issue; but since Yahoo!'s JobTrackers are topping out at those numbers anyway, lots of counters simply overload them.
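To see why counter count matters for JT heap, here is a back-of-envelope estimate as a plain-Java sketch. All the per-counter and per-task sizes are assumptions for illustration only, not measured Hadoop values; the JobTracker tracks counters per task as well as in aggregate, so the cost multiplies with job size.

```java
// Rough estimate of JobTracker heap consumed by job counters.
// bytesPerCounter and tasksPerJob are assumed illustrative values.
public class CounterMemoryEstimate {
    public static void main(String[] args) {
        long countersPerJob = 50_000;  // the number from the question
        long bytesPerCounter = 100;    // assumed: counter name string + long value + map-entry overhead
        long tasksPerJob = 10_000;     // assumed: JT keeps per-task counters while the job runs

        // Per-task counters plus the job-level aggregated totals.
        long bytes = countersPerJob * bytesPerCounter * (tasksPerJob + 1);
        System.out.printf("~%.1f GB of JobTracker heap for one job%n", bytes / 1e9);
    }
}
```

Even if the assumed constants are off by an order of magnitude, 50,000 counters across thousands of tasks lands in the multi-GB range on the JT for a single job, which is why the counter limits exist.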
