You probably want this.
https://issues.apache.org/jira/browse/HDFS-457
Koji
As we can see here, the datanode shut down. Should one disk entering a
read-only state really shut down the entire datanode? The datanode
happily restarted once the disk was unmounted. Just wondering.
On 8/7/09 12:11
but I didn't find a config option
that allows ignoring tasks that fail.
If you're on 0.18,
http://hadoop.apache.org/common/docs/r0.18.3/api/org/apache/hadoop/mapred/JobConf.html#setMaxMapTaskFailuresPercent(int)
(mapred.max.map.failures.percent)
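As a minimal sketch of passing that property on the command line
(my-job.jar, MyJob, and the paths are hypothetical, and the main class is
assumed to go through ToolRunner so that -D options are picked up):

% hadoop jar my-job.jar MyJob -Dmapred.max.map.failures.percent=5 input output

With that setting, the job is allowed to succeed even if up to 5% of its map
tasks fail (the 5 is just an illustrative value).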
Probably unrelated to your problem, but one extreme case I've seen:
a user's job with large gzip inputs (non-splittable),
20 mappers and 800 reducers, where each map output around 20G.
Too many reducers were hitting a single node as soon as a mapper finished.
I think we tried something like
This doesn't solve your stderr/stdout problem, but you can always set the
timeout to a bigger value if necessary:
-Dmapred.task.timeout=__ (in milliseconds)
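For example (the jar, class, paths, and the value itself are hypothetical;
pick whatever your slowest legitimate task needs):

% hadoop jar my-job.jar MyJob -Dmapred.task.timeout=1200000 input output

Here 1200000 ms is 20 minutes; a task that reports no progress for that long
gets killed by the framework.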
Koji
On 10/25/09 12:00 PM, Ryan Rosario uclamath...@gmail.com wrote:
I am using a Python script as a mapper for a Hadoop
Hi Scott,
You might be hitting two different issues.
1) Decommission not finishing.
https://issues.apache.org/jira/browse/HDFS-694 explains decommission
never finishing due to open files in 0.20
2) Nodes showing up both in live and dead nodes.
I remember Suresh taking a look at this.
Try moving up to Jetty 6.1.25, which should be more straightforward.
FYI, when we tried 6.1.25, we got hit by a deadlock.
http://jira.codehaus.org/browse/JETTY-1264
Koji
On 1/17/11 3:10 AM, Steve Loughran ste...@apache.org wrote:
On 16/01/11 09:41, xiufeng liu wrote:
Hi,
In my cluster, Hadoop
-Xmx1024
This would be a 1024-byte heap.
Maybe you want -Xmx1024m?
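In Hadoop, the task heap is typically passed through mapred.child.java.opts;
a sketch (the jar, class, and paths are hypothetical, and 1024m is just an
example size):

% hadoop jar my-job.jar MyJob -Dmapred.child.java.opts=-Xmx1024m input output

The same value can also be set cluster-wide in mapred-site.xml.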
Koji
On 1/27/11 6:04 PM, Jun Young Kim juneng...@gmail.com wrote:
Hi,
I have a 9-node cluster (1 master, 8 slaves) running Hadoop.
When I executed my job on the master, I got the following errors.
11/01/28 10:58:01 INFO
Rishi,
Using exclude list for TT will not help as Koji has already mentioned
It'll help a bit, in the sense that no more tasks are assigned to that TaskTracker
once it's excluded.
As for TT decommissioning and map output handling, I opened a Jira for further
discussion.
the expiry of the walltime. Seems like it will work for now.
P.S. - What credentials are required for commenting on an issue in Jira?
On Mon, Jan 31, 2011 at 10:22 PM, Koji Noguchi knogu...@yahoo-inc.com wrote:
Rishi,
Using exclude list for TT will not help as Koji has already mentioned
It'll
Hi Shivani,
You probably don't want to ask m45-specific questions on the hadoop.apache mailing
list.
Try
% hadoop queue -showacls
It should show which queues you're allowed to submit to. If it doesn't give you
any queues, you need to request one.
Koji
On 2/9/11 9:10 PM, Shivani Rao
(mapred.min.split.size can only be set larger than the HDFS block size)
I haven't tried this with the new mapreduce API, but
-Dmapred.min.split.size=split_size_you_want -Dmapred.map.tasks=1
I think this would let you set a split size smaller than the HDFS block size :)
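Spelled out as a full command line (the jar, class, paths, and the split value
are all hypothetical; 268435456 bytes is 256MB):

% hadoop jar my-job.jar MyJob -Dmapred.min.split.size=268435456 -Dmapred.map.tasks=1 input output

This just passes both properties through to the job; the split size the
framework actually picks also depends on the input format and block size.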
Koji
On 2/17/11
This is technically a Java issue, but I thought other Hadoop users would find
it interesting.
When we upgraded from an old JVM to a newer (32-bit) JVM, 1.6.0_21, we started
seeing users' tasks having issues at random places:
1. Tasks running 2-3 times slower
2. Tasks failing with OutOfMemory
3.
Hi Harsh,
Wasn't MAPREDUCE-478 in 1.0? Maybe the Jira is not up to date.
Koji
On 1/11/12 8:44 PM, Harsh J ha...@cloudera.com wrote:
These properties are not available on Apache Hadoop 1.0 (formerly
known as 0.20.x). This was a feature introduced in 0.21.
...@jp.fujitsu.com wrote:
Koji, Harsh
MAPREDUCE-478 seems to be in v1, but those new settings have not yet been
added to mapred-default.xml (for backwards compatibility?).
George
On 2012/01/12 13:50, Koji Noguchi wrote:
Hi Harsh,
Wasn't MAPREDUCE-478 in 1.0 ? Maybe the Jira is not up
Which hadoop version are you using?
On 1/13/12 1:59 PM, vvkbtnkr vvkbt...@yahoo.com wrote:
I am running a hadoop jar and keep getting this error -
java.lang.NoSuchMethodError:
org.codehaus.jackson.JsonParser.getValueAsLong()
On digging deeper, this is what I found: my jar packages
How about playing with 'whoami'? :)
% export PATH=.:$PATH
% cat whoami
#!/bin/sh
echo mynewname
% chmod +x whoami
% whoami
mynewname
%
Koji
On 1/16/12 9:02 AM, Eli Finkelshteyn iefin...@gmail.com wrote:
Hi Folks,
I'm still lost on this. Has no one wanted or needed to connect to a
Hadoop cluster from a
Assuming you're using hadoop-1.0, then
export HADOOP_USER_CLASSPATH_FIRST=true
for the submit/client side to use your HADOOP_CLASSPATH before the framework's, and
-Dmapreduce.user.classpath.first=true
for the task side to use your -libjars before the framework's.
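Putting the two together, a sketch (the jar names, main class, and paths are
hypothetical):

% export HADOOP_USER_CLASSPATH_FIRST=true
% export HADOOP_CLASSPATH=/path/to/my-jackson.jar
% hadoop jar my-job.jar MyJob -Dmapreduce.user.classpath.first=true \
    -libjars /path/to/my-jackson.jar input output

The first export affects the client JVM; the -D property makes each task's JVM
prefer the -libjars jars over the ones shipped with Hadoop.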
Koji
On 1/23/12 6:39 AM, John
Should be ./q_map .
Koji
On 5/29/12 7:38 AM, Alan Miller alan.mil...@synopsys.com wrote:
I'm trying to use the DistributedCache but I'm having an issue resolving the
symlinks to my files.
My Driver class writes some hashmaps to files in the DC like this:
Path tPath = new