Actually, it's the other way around (thanks, Sandy, for catching this
error in my post). The presence of mapreduce.map|reduce.java.opts
overrides mapred.child.java.opts, not the other way round as I had
stated earlier (below).
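To make the precedence concrete, a hedged sketch of what the two settings might look like in mapred-site.xml (values are illustrative, not recommendations); with both present, the task-specific opts win for map tasks:

```xml
<!-- mapred-site.xml (illustrative values) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx200m</value>  <!-- legacy setting, applies to both map and reduce tasks -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512m</value>  <!-- when present, overrides mapred.child.java.opts for map tasks -->
</property>
```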
On Wed, Dec 4, 2013 at 1:28 PM, Harsh J ha...@cloudera.com wrote:
Yes
Hi adam,
In our environment it does not matter what I insert there; it always takes over
600 seconds.
I tried 3 and the result was the same.
Regards, Hansi
Sent: Tuesday, 03 December 2013 at 19:23
From: Adam Kawa kawa.a...@gmail.com
To: user@hadoop.apache.org
Subject: Re:
Well, that does not seem to be the issue. The Kerberos ticket gets refreshed
automatically, but the delegation token doesn't.
On Dec 3, 2013, at 20:24, Raviteja Chirala wrote:
Alternatively you can schedule a cron job to do kinit every 20 hours or so.
Just to renew token before it expires.
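A hedged sketch of such a cron entry (the keytab path and principal are hypothetical; a keytab is needed because cron cannot type a password interactively):

```
# crontab entry: re-obtain a Kerberos ticket every 20 hours
# (keytab path and principal below are placeholders for your environment)
0 */20 * * * /usr/bin/kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM
```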
@Harsh J
Thank you, I intend to upgrade from Hadoop 1.0.4 and this kind of information
is very helpful.
Best regards,
Henry
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Wednesday, December 04, 2013 4:20 PM
To: user@hadoop.apache.org
Subject: Re: about hadoop-2.2.0
Hi.
I think I found the reason. I looked at the job.xml and found the parameters
mapred.tasktracker.expiry.interval = 600
and
mapreduce.jobtracker.expire.trackers.interval = 3
So I tried the deprecated parameter mapred.tasktracker.expiry.interval in my
configuration and voilà, it works!
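For anyone hitting the same thing, a hedged sketch of the setting that took effect in this setup (the value 3 is taken from the thread above; check the unit your Hadoop version expects for this property before copying it):

```xml
<!-- mapred-site.xml: the deprecated name that took effect in this setup -->
<property>
  <name>mapred.tasktracker.expiry.interval</name>
  <value>3</value>  <!-- value from the thread; verify the expected unit in your version -->
</property>
```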
What's the best way to check the compression codec that an HDFS file was
written with?
We use both Gzip and Snappy compression so I want a way to determine how a
specific file is compressed.
The closest I found is the getCodec
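In Hadoop's Java API, CompressionCodecFactory.getCodec(Path) resolves a codec from the file name suffix. As a hedged illustration of that idea only (this is not the Hadoop API; the names and mapping below are mine), a minimal Python sketch:

```python
# Illustrative sketch (not the Hadoop API): CompressionCodecFactory.getCodec
# picks a codec based on the file name suffix; this mimics that lookup.
CODEC_BY_SUFFIX = {
    ".gz": "GzipCodec",
    ".snappy": "SnappyCodec",
    ".bz2": "BZip2Codec",
    ".deflate": "DeflateCodec",
}

def guess_codec(path):
    """Return the codec name implied by the file suffix, or None if none matches."""
    for suffix, codec in CODEC_BY_SUFFIX.items():
        if path.endswith(suffix):
            return codec
    return None
```

Note the caveat this implies: suffix-based lookup only works if files were written with the conventional extension; a Snappy file named without `.snappy` would not be detected this way.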
If I have 40GB of cluster memory and
yarn.scheduler.capacity.maximum-am-resource-percent set to 0.1, does that
mean that when I launch an ApplicationMaster I need to allocate 4GB to it? If
so, why does increasing the value cause more ApplicationMasters to run
concurrently, instead of fewer?
Another question: I set yarn.scheduler.minimum-allocation-mb to
2GB, so the container size will be at least 2GB, but I see the ApplicationMaster
container only uses a 1GB heap. Why?
# ps -ef|grep 8062
yarn 8062 8047 5 09:04 ?00:00:09
/usr/java/jdk1.7.0_25/bin/java
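On the first question: maximum-am-resource-percent caps the total resource that all ApplicationMasters together may occupy, not the size of any one AM container. A back-of-the-envelope sketch with the numbers from the question (the 2GB AM container size is an assumption taken from the minimum-allocation setting above):

```python
# Rough arithmetic for yarn.scheduler.capacity.maximum-am-resource-percent.
cluster_mem_gb = 40
am_percent = 0.1          # maximum-am-resource-percent from the question
am_container_gb = 2       # assumed AM container size (minimum-allocation-mb above)

am_budget_gb = cluster_mem_gb * am_percent          # total memory reserved for AMs
max_concurrent_ams = int(am_budget_gb // am_container_gb)
print(am_budget_gb, max_concurrent_ams)             # prints: 4.0 2
```

So raising the percent raises the shared AM budget, which is why more ApplicationMasters can run concurrently, not fewer.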
we have already tried several values of these two parameters, but it seems
no use.
2013/12/5 Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
Hi,
Please check the properties like mapreduce.reduce.memory.mb and
mapreduce.map.memory.mb in mapred-site.xml. These properties decide
resource limits for
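A hedged example of what such a fragment might look like (the sizes are illustrative, not recommendations):

```xml
<!-- mapred-site.xml (illustrative sizes, in MB) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
```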
Hi
please reference to
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
2013/12/5 panfei cnwe...@gmail.com
It clearly mentions that the renewer is wrong (the renewer is marked as 'nobody'
but mapred is trying to renew the token); you may want to check this.
Thanks,
+Vinod
On Dec 2, 2013, at 8:25 AM, Rainer Toebbicke wrote:
2013-12-02 15:57:08,541 ERROR
These are the directories where NodeManager (as configured) will store its
local files. Local files includes scripts, jars, libraries - all files sent to
nodes via DistributedCache.
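These directories are set in yarn-site.xml; a hedged sketch (the paths below are examples, not defaults):

```xml
<!-- yarn-site.xml: where the NodeManager keeps localized files (example paths) -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/1/yarn/local,/data/2/yarn/local</value>
</property>
```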
Thanks,
+Vinod
On Dec 3, 2013, at 5:26 PM, ch huang wrote:
Hi, mailing list:
I see three dirs on my
If both the jobs in the MR queue are from the same user, CapacityScheduler will
only try to run them one after another. If possible, run them as different
users. At which point, you will see sharing across jobs because they are from
different users.
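If running the jobs as different users is possible, the per-user share within the queue can also be tuned; a hedged sketch (the queue name "mr" is assumed from the thread, and the value is illustrative):

```xml
<!-- capacity-scheduler.xml (illustrative; assumes a queue named "mr") -->
<property>
  <name>yarn.scheduler.capacity.root.mr.minimum-user-limit-percent</name>
  <value>50</value>  <!-- with two active users, each is guaranteed up to half the queue -->
</property>
```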
Thanks,
+Vinod
On Dec 4, 2013, at 1:33 AM,
Thank you, but it seems the doc is a little old. The doc says:
- *PUBLIC:* local-dir/filecache
- *PRIVATE:* local-dir/usercache/<user>/filecache
- *APPLICATION:* local-dir/usercache/<user>/appcache/<app-id>/
but here is my NodeManager directory; I guess nmPrivate belongs to the private
dir, and the filecache dir
Hi,
I took a look at the code and found some examples on the web.
One example is: http://wiki.opf-labs.org/display/SP/Resource+management
It seems that users can run simple shell commands using Client of YARN.
But when it comes to a practical MapReduce example like WordCount, people
still run
Hi, mailing list:
I tried to run terasort in my cluster but it failed. The error follows;
I do not know why. Can anyone help?
# hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
terasort /alex/terasort/1G-input /alex/terasort/1G-output
13/12/05 15:15:43 INFO
BTW, I use CDH 4.4.
On Thu, Dec 5, 2013 at 3:18 PM, ch huang justlo...@gmail.com wrote:
Can you check how many healthy datanodes are available in your cluster?
Use: # hadoop dfsadmin -report
Regards,
Jitendra
On Thu, Dec 5, 2013 at 12:48 PM, ch huang justlo...@gmail.com wrote:
Hi again,
I've tried to build using JDK 1.6.0_38 and I'm still getting the same
exception:
~/hadoop-2.2.0-maven$ java -version
java version "1.6.0_38-ea"
Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)
--
[ERROR] Failed