Re: about hadoop-2.2.0 mapred.child.java.opts

2013-12-04 Thread Harsh J
Actually, it's the other way around (thanks Sandy for catching this error in my post): the presence of mapreduce.map|reduce.java.opts overrides mapred.child.java.opts, not the other way round as I had stated earlier (below). On Wed, Dec 4, 2013 at 1:28 PM, Harsh J ha...@cloudera.com wrote: Yes
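[Editor's note: a minimal mapred-site.xml sketch of the precedence Harsh describes; the property names are the standard MR2 ones, and the heap values are illustrative only.]

  <!-- mapred-site.xml: when the task-specific opts below are present,
       they take precedence over mapred.child.java.opts -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>   <!-- fallback for both map and reduce tasks -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>  <!-- overrides the fallback for map tasks -->
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2048m</value>  <!-- overrides the fallback for reduce tasks -->
  </property>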

Re: mapreduce.jobtracker.expire.trackers.interval no effect

2013-12-04 Thread Hansi Klose
Hi Adam, in our environment it does not matter what I insert there; it always takes over 600 seconds. I tried 3 and the result was the same. Regards Hansi Sent: Tuesday, 03 December 2013 at 19:23 From: Adam Kawa kawa.a...@gmail.com To: user@hadoop.apache.org Subject: Re:

Re: Client mapred tries to renew a token with renewer specified as nobody

2013-12-04 Thread Rainer Toebbicke
Well, that does not seem to be the issue. The Kerberos ticket gets refreshed automatically, but the delegation token doesn't. On Dec 3, 2013, at 20:24, Raviteja Chirala wrote: Alternatively you can schedule a cron job to do kinit every 20 hours or so, just to renew the token before it expires.
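[Editor's note: a sketch of the cron workaround Raviteja suggests; the keytab path and principal are hypothetical placeholders.]

  # crontab entry: re-obtain the Kerberos TGT from a keytab at 00:00 and 20:00
  0 0,20 * * * kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM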

RE: about hadoop-2.2.0 mapred.child.java.opts

2013-12-04 Thread Henry Hung
@Harsh J Thank you, I intend to upgrade from Hadoop 1.0.4 and this kind of information is very helpful. Best regards, Henry -Original Message- From: Harsh J [mailto:ha...@cloudera.com] Sent: Wednesday, December 04, 2013 4:20 PM To: user@hadoop.apache.org Subject: Re: about hadoop-2.2.0

Re: mapreduce.jobtracker.expire.trackers.interval no effect

2013-12-04 Thread Hansi Klose
Hi. I think I found the reason. I looked at the job.xml and found the parameters mapred.tasktracker.expiry.interval 600 and mapreduce.jobtracker.expire.trackers.interval 3. So I tried the deprecated parameter mapred.tasktracker.expiry.interval in my configuration, and voila, it works!
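[Editor's note: a sketch of the mapred-site.xml change Hansi describes, using the value quoted above; on this setup only the deprecated key was honored.]

  <!-- mapred-site.xml: the deprecated key is the one this JobTracker honors -->
  <property>
    <name>mapred.tasktracker.expiry.interval</name>
    <value>3</value>  <!-- value as quoted in the post -->
  </property>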

Check compression codec of an HDFS file

2013-12-04 Thread alex bohr
What's the best way to check the compression codec that an HDFS file was written with? We use both Gzip and Snappy compression so I want a way to determine how a specific file is compressed. The closest I found is the getCodec
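[Editor's note: a small sketch built on CompressionCodecFactory.getCodec(), presumably the method alex means; it matches only on the file-name suffix.]

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.compress.CompressionCodec;
  import org.apache.hadoop.io.compress.CompressionCodecFactory;

  public class CodecCheck {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      CompressionCodecFactory factory = new CompressionCodecFactory(conf);
      // getCodec() resolves a codec from the file-name suffix (.gz, .snappy, ...);
      // it returns null when no registered codec matches.
      CompressionCodec codec = factory.getCodec(new Path(args[0]));
      System.out.println(codec == null ? "no codec matched" : codec.getClass().getName());
    }
  }

One caveat: this inspects only the suffix, so files written without a conventional extension (or SequenceFiles, which record their codec in the file header rather than the name) will not be detected this way.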

Re: issue about capacity scheduler

2013-12-04 Thread ch huang
If I have 40GB of cluster memory and yarn.scheduler.capacity.maximum-am-resource-percent set to 0.1, does that mean that when I launch an appMaster I need to allocate 4GB to it? If so, why does increasing the value cause more appMasters to run concurrently, instead of fewer?
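[Editor's note: a worked reading of the setting, per the CapacityScheduler documentation: the percentage caps the aggregate resources of all running ApplicationMasters, not the size of each one, which is why raising it admits more concurrent AMs.]

  aggregate AM limit = 0.1 x 40 GB = 4 GB for all AMs combined
  concurrent AMs     = 4 GB / AM container size  (e.g. 4 AMs at 1 GB each)
  with 0.2           : 0.2 x 40 GB = 8 GB -> up to 8 concurrent 1 GB AMs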

Re: issue about capacity scheduler

2013-12-04 Thread ch huang
Another question: I set yarn.scheduler.minimum-allocation-mb to 2GB, so the container size should be at least 2GB, but I see the appMaster container only uses a 1GB heap. Why? # ps -ef|grep 8062 yarn 8062 8047 5 09:04 ? 00:00:09 /usr/java/jdk1.7.0_25/bin/java
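[Editor's note: one likely explanation, assuming the stock MR2 ApplicationMaster defaults: the container reservation and the JVM heap inside it are configured independently, so a 2GB container can still run a JVM with the default 1GB heap.]

  <!-- mapred-site.xml (Hadoop 2.x defaults shown) -->
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1536</value>       <!-- rounded up to yarn.scheduler.minimum-allocation-mb -->
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx1024m</value>  <!-- the 1 GB heap visible in ps -->
  </property>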

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memo

2013-12-04 Thread panfei
We have already tried several values of these two parameters, but it seems to have no effect. 2013/12/5 Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com Hi, please check the properties like mapreduce.reduce.memory.mb and mapreduce.map.memory.mb in mapred-site.xml. These properties decide resource limits for
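[Editor's note: a sketch of the knobs usually involved in this error; the property names are the standard YARN/MR2 ones, and the values are illustrative, not recommendations.]

  <!-- mapred-site.xml: raise the physical reservation for the failing task type -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>

  <!-- yarn-site.xml: the virtual-memory check that kills the container;
       the vmem limit is the container size x vmem-pmem-ratio -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>  <!-- disabling the check is a common workaround -->
  </property>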

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memo

2013-12-04 Thread YouPeng Yang
Hi, please refer to http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/ 2013/12/5 panfei cnwe...@gmail.com We have already tried several values of these two parameters, but it seems to have no effect. 2013/12/5 Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com Hi, please check the

Re: Client mapred tries to renew a token with renewer specified as nobody

2013-12-04 Thread Vinod Kumar Vavilapalli
The error clearly says that the renewer is wrong (the token's renewer is marked as 'nobody', but mapred is trying to renew the token); you may want to check this. Thanks, +Vinod On Dec 2, 2013, at 8:25 AM, Rainer Toebbicke wrote: 2013-12-02 15:57:08,541 ERROR

Re: issue about the MR JOB local dir

2013-12-04 Thread Vinod Kumar Vavilapalli
These are the directories where the NodeManager (as configured) will store its local files. Local files include scripts, jars, libraries - all files sent to nodes via the DistributedCache. Thanks, +Vinod On Dec 3, 2013, at 5:26 PM, ch huang wrote: hi maillist: I see three dirs on my
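[Editor's note: for context, the directories in question are whatever yarn-site.xml points yarn.nodemanager.local-dirs at; the paths below are hypothetical examples.]

  <!-- yarn-site.xml -->
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/1/yarn/local,/data/2/yarn/local</value>
  </property>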

Re: issue about capacity scheduler

2013-12-04 Thread Vinod Kumar Vavilapalli
If both the jobs in the MR queue are from the same user, CapacityScheduler will only try to run them one after another. If possible, run them as different users. At which point, you will see sharing across jobs because they are from different users. Thanks, +Vinod On Dec 4, 2013, at 1:33 AM,

Re: issue about the MR JOB local dir

2013-12-04 Thread ch huang
Thank you, but it seems the doc is a little old. The doc says: PUBLIC: local-dir/filecache; PRIVATE: local-dir/usercache/<user>/filecache; APPLICATION: local-dir/usercache/<user>/appcache/<app-id>/. But here is my nodemanager directory; I guess nmPrivate belongs to the private dir, and the filecache dir
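[Editor's note: a sketch of the Hadoop 2.x NodeManager local-dir layout as commonly documented; note nmPrivate is NodeManager-internal (launch scripts, tokens), not the PRIVATE cache.]

  <local-dir>/
    filecache/                          # PUBLIC: shared across all users
    nmPrivate/                          # NM-internal files, readable by the NM user only
    usercache/<user>/filecache/         # PRIVATE: per-user cache
    usercache/<user>/appcache/<app-id>/ # APPLICATION: removed when the app finishes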

Re: Implementing and running an applicationmaster

2013-12-04 Thread Yue Wang
Hi, I took a look at the code and found some examples on the web. One example is: http://wiki.opf-labs.org/display/SP/Resource+management It seems that users can run simple shell commands using the Client of YARN. But when it comes to a practical MapReduce example like WordCount, people still run
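[Editor's note: the shell-command case Yue mentions is the bundled distributed-shell example; a sketch of its invocation, with the jar location varying by distribution.]

  hadoop jar hadoop-yarn-applications-distributedshell-*.jar \
    org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar hadoop-yarn-applications-distributedshell-*.jar \
    -shell_command date -num_containers 2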

get error in running terasort tool

2013-12-04 Thread ch huang
hi maillist: I try to run terasort on my cluster, but it failed. The following is the error; I do not know why. Can anyone help? # hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /alex/terasort/1G-input /alex/terasort/1G-output 13/12/05 15:15:43 INFO

Re: get error in running terasort tool

2013-12-04 Thread ch huang
BTW, I use CDH 4.4. On Thu, Dec 5, 2013 at 3:18 PM, ch huang justlo...@gmail.com wrote: hi maillist: I try to run terasort on my cluster, but it failed. The following is the error; I do not know why. Can anyone help? # hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

Re: get error in running terasort tool

2013-12-04 Thread Jitendra Yadav
Can you check how many healthy datanodes are available in your cluster? Use: # hadoop dfsadmin -report Regards Jitendra On Thu, Dec 5, 2013 at 12:48 PM, ch huang justlo...@gmail.com wrote: hi maillist: I try to run terasort on my cluster, but it failed. The following is the error; I do

Re: Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread Silvina Caíno Lores
Hi again, I've tried to build using JDK 1.6.0_38 and I'm still getting the same exception:

~/hadoop-2.2.0-maven$ java -version
java version "1.6.0_38-ea"
Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)

[ERROR] Failed