Thanks Wangda for the clarification.
I was thinking about
max(1, maxApplications * userlimit/100)
but
max(1, maxApplications * max(userlimit/100, 1/#activeUsers))
will be more dynamically accurate, as per the description of userlimit. Will
raise an issue and start working on it.
+Naga
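The proposed formula above can be sketched as follows. This is only an illustration of the arithmetic, not the scheduler's actual code; `maxAppsPerUser`, `userLimit`, and `activeUsers` are hypothetical names, assuming `userLimit` is a percentage (0-100) and there is at least one active user:

```java
// Sketch of the proposed per-user application limit:
// max(1, maxApplications * max(userlimit/100, 1/#activeUsers))
public class UserAppLimit {
    static int maxAppsPerUser(int maxApplications, int userLimit, int activeUsers) {
        // The user gets the larger of the configured user-limit share
        // and an equal share among the currently active users.
        double share = Math.max(userLimit / 100.0, 1.0 / activeUsers);
        // Never drop below one application per user.
        return Math.max(1, (int) (maxApplications * share));
    }

    public static void main(String[] args) {
        // 10% user limit but 4 active users: equal share 1/4 wins -> 2500
        System.out.println(maxAppsPerUser(10000, 10, 4));
        // 25% user limit, 2 active users: equal share 1/2 wins -> 5000
        System.out.println(maxAppsPerUser(10000, 25, 2));
    }
}
```

This matches the description of userlimit: with few active users each one can exceed its configured percentage, and the floor of 1 keeps a user from being starved entirely.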
Hi
From the thread dump, it seems the AM is waiting on an HDFS operation. Can you
attach the AM logs, and do you see any client retries for connecting to HDFS?
"CommitterEvent Processor #4" prio=10 tid=0x0199a800 nid=0x18df in
Object.wait() [0x7f4f12aa4000]
   java.lang.Thread.State: WAITING (on object
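To answer the retry question above, one quick way to check is to grep the AM log for retry messages; a sketch, where `am.log` is a hypothetical local copy of the AM log:

```shell
# Count log lines mentioning retries (case-insensitive).
# A non-zero count suggests the HDFS client is retrying connections.
grep -icE 'retry|retries' am.log
```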
Hi Rohith,
Thanks for replying.
No, I do not see any connection retry attempts to HDFS in the logs.
Also, the NameNode and HDFS look healthy in our cluster.
PFA the latest AM logs for the job.
Regards,
Ashish
On Mon, Jul 20, 2015 at 3:29 PM, Rohith Sharma K S
rohithsharm...@huawei.com
May I ask why you need to do that? Why not let Hadoop handle that for you?
On Sunday, July 19, 2015, Shiyao Ma i...@introo.me wrote:
Hi,
I'd like to put my data selectively on some datanodes.
Currently I can do that by shutting down the unneeded datanodes, but this is
a little laborious.
Is
Hi,
I tried to reinstall Hadoop on all nodes; it's now a five-node setup
(4*slave, 1*slave/master). It still gives me the same error on all nodes,
but the error is not consistent; it comes and goes from time to time.
This is the log from one datanode:
http://pastebin.com/SQd0G5tF
It still is
It might be due to a performance issue in FileOutputCommitter which is resolved in 2.7:
https://issues.apache.org/jira/browse/MAPREDUCE-4815
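For reference, after upgrading to 2.7+ the faster commit algorithm from that JIRA still has to be opted into via job configuration; a sketch of the relevant setting (in mapred-site.xml or per job), assuming the property name introduced by MAPREDUCE-4815:

```xml
<!-- Opt in to the v2 FileOutputCommitter algorithm (Hadoop 2.7+).
     The default remains 1 for compatibility. -->
<property>
  <name>mapreduce.fileoutputcommitter.algorithm.version</name>
  <value>2</value>
</property>
```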
Best Regards,
Jeff Zhang
From: Ashish Kumar Singh ashish23...@gmail.com
Reply-To: user@hadoop.apache.org
Hello,
I have to output multiple Avro files with different schemas as the output
of a MapReduce job. Currently I am achieving this by doing a union of all
the schemas in the driver and then using AvroMultipleOutputs to
output two files.
AvroMultipleOutputs.addNamedOutput(job, a,
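A fuller sketch of the setup being described, assuming the `org.apache.avro.mapreduce` API; the output names `"a"` and `"b"`, the schema variables, and the reducer wiring are illustrative, so treat this as an outline rather than a verified implementation:

```java
// Driver side: register one named output per schema (sketch).
AvroMultipleOutputs.addNamedOutput(job, "a",
    AvroKeyOutputFormat.class, schemaA);
AvroMultipleOutputs.addNamedOutput(job, "b",
    AvroKeyOutputFormat.class, schemaB);

// Reducer side: write each record to the named output matching its schema.
private AvroMultipleOutputs amos;

@Override
protected void setup(Context context) {
    amos = new AvroMultipleOutputs(context);
}

@Override
protected void reduce(/* key, values, context */)
        throws IOException, InterruptedException {
    // Hypothetical record built against schemaA.
    amos.write("a", new AvroKey<>(recordWithSchemaA));
}

@Override
protected void cleanup(Context context)
        throws IOException, InterruptedException {
    amos.close();  // required, or the named outputs may be left empty
}
```

With named outputs registered per schema, the union-schema workaround in the driver may become unnecessary, since each named output carries its own schema.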