Bounce mapred and TT on the node
Sent from my iPhone
On May 12, 2011, at 3:56 PM, Sidney Simmons ssimm...@nmitconsulting.co.uk
wrote:
Hi there,
Apologies if this comes through twice, but I sent the mail a few hours
ago and haven't seen it on the mailing list.
I'm experiencing some
All nodes are in sync configuration wise. We have a few cluster scripts that
ensure this is the case.
On 13 May 2011 06:55, Harsh J ha...@cloudera.com wrote:
One of the reasons I can think of could be a version mismatch. You may
want to ensure that the job in question was not carrying a
It's not a single node. It occurs on multiple nodes at (seemingly) random
points throughout the day. Should we be performing periodic restarts of the
processes / datanode servers?
On 13 May 2011 07:02, highpointe highpoint...@gmail.com wrote:
Bounce mapred and TT on the node
Sent from my
On Thu, 12 May 2011 09:49:23 -0700 (PDT)
Aman aman_d...@hotmail.com wrote:
The creation of the part-n files is atomic. When you run an MR job,
these files are created in the directory output_dir/_temporary and
moved to output_dir after the file is closed for writing. This
move is atomic, hence as
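The write-to-temporary-then-rename commit pattern Aman describes can be sketched with plain java.nio against local paths (standing in for HDFS, purely for illustration; the class and method names here are hypothetical, not Hadoop API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the commit pattern: write a part file under a _temporary
// directory, then move it into the output directory in one atomic rename.
// Readers of output_dir see either nothing or the finished file, never a
// half-written one.
public class AtomicCommit {
    public static Path commit(Path outputDir, String name, String contents)
            throws IOException {
        Path tmpDir = outputDir.resolve("_temporary");
        Files.createDirectories(tmpDir);
        Path tmp = tmpDir.resolve(name);
        Files.writeString(tmp, contents);   // invisible to readers of outputDir
        Path dest = outputDir.resolve(name);
        // Same filesystem, so the rename is atomic.
        return Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempDirectory("output_dir");
        Path part = commit(out, "part-00000", "hello");
        System.out.println(Files.readString(part)); // hello
    }
}
```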
There is no shutdown message until I shutdown the DataNode.
I used hostname of the machine that will run the DataNode and I now used the IP
but there is no difference.
Again the DataNode seems to freeze, and the output in the log is the one I
mentioned before.
Subject: Re: Datanode doesn't
Is there a reason for using OpenJDK and not Sun's JDK?
The cluster we are seeing the problem in uses Sun's JDK: java version
"1.6.0_21", Java(TM) SE Runtime Environment (build 1.6.0_21-b06), Java
HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
The standalone node where I tried to reproduce
You posted system specifics earlier; would you mind posting them again? I can't
find them in the thread.
Sent from my iPhone
On May 13, 2011, at 8:05 AM, Adi adi.pan...@gmail.com wrote:
Is there a reason for using OpenJDK and not Sun's JDK?
The cluster we are seeing the problem in uses Sun's JDK
When you say freeze, do you mean there is nothing rolling in the log?
Sent from my iPhone
On May 13, 2011, at 2:28 AM, Panayotis Antonopoulos
antonopoulos...@hotmail.com wrote:
There is no shutdown message until I shutdown the DataNode.
I used hostname of the machine that will run the
There is no other information in the log (although when I run it on my PC and
it works, there is more information in the log), and the namenode's web page
doesn't show any live datanodes, as it should.
That's why I said it freezes... I have no idea what is going on...
Please if
Hello Panayotis,
Could you please post a jstack output of your hung process to look into?
$ jstack <PID of the DN> # will do.
2011/5/13 Panayotis Antonopoulos antonopoulos...@hotmail.com:
There is no other information in the log (although when I run it on my pc and
it works, there is more
Thank you for your help!
Here is the output of the command you suggested:
panton@clone1:~/hadoop-0.20.203.0$ jstack 6320
2011-05-13 20:31:59
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.0-b11 mixed mode):
Attach Listener daemon prio=10 tid=0x409c9800 nid=0x1999 waiting on
Hey,
2011/5/13 Panayotis Antonopoulos antonopoulos...@hotmail.com:
899599744@qtp-1416044437-1 - Acceptor0 SelectChannelConnector@0.0.0.0:50075
prio=10 tid=0x7f50f8414800 nid=0x1926 runnable [0x7f50f6eb1000]
java.lang.Thread.State: RUNNABLE
at
Actually, only the last mentioned stack matters. Also see:
https://issues.apache.org/jira/browse/HDFS-1835
On Fri, May 13, 2011 at 11:15 PM, Harsh J ha...@cloudera.com wrote:
Hey,
2011/5/13 Panayotis Antonopoulos antonopoulos...@hotmail.com:
899599744@qtp-1416044437-1 - Acceptor0
I have been waiting for hours to see if it will ever start but it doesn't.
I will check the links you sent me.
Thanks again for your help!!!
From: ha...@cloudera.com
Date: Fri, 13 May 2011 23:18:40 +0530
Subject: Re: Datanode doesn't start but there is no exception in the log
To:
Sounds like your entropy pool is exhausted, blocking the process. What sort
of hardware/OS combo are you running this on?
Sridhar
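Sridhar's entropy diagnosis and the HDFS-1835 link both point at the DataNode blocking on /dev/random reads during startup. A commonly suggested workaround (an assumption to verify against your Hadoop version, not something confirmed in this thread) is to check the entropy pool and, if it is depleted, point the JVM's SecureRandom at the non-blocking pool via conf/hadoop-env.sh:

```shell
# Check available entropy first (a few thousand is healthy; values near
# zero mean reads from /dev/random will block):
cat /proc/sys/kernel/random/entropy_avail

# Workaround: make the JVM's SecureRandom use the non-blocking pool.
# Add to conf/hadoop-env.sh. The /./ spelling is deliberate: some JDKs
# special-case a plain file:/dev/urandom and ignore it.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/./urandom"
```

This is a configuration fragment; the patch on HDFS-1835 that Panayotis applied later in the thread addresses the same root cause inside Hadoop itself.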
I have a MapReduce process that uses the same class for its combiner and
reducer. I just realized that I want the behavior in the combiner and
reducer to be slightly different in this one place. I could write separate
combiner and reducer classes derived from a common source, but in my
situation
Hi,
I'd like to move and copy files from one directory in HDFS to another
one. I know there are methods in the Filesystem API that enable
copying files between the local disk and HDFS, but I couldn't figure
out how to do this between two paths both in HDFS. I think rename(Path
src, Path dest) can
There is no FileSystem API to copy.
You could try
hadoop dfs -cp src dest
which basically reads the file and writes it to a new file.
The code for this is in FsShell.java
----- Original Message -----
From: Jim Twensky jim.twen...@gmail.com
To: core-u...@hadoop.apache.org
Sent: Fri, May 13, 2011
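The "reads the file and writes to a new file" that lohit describes is, at its core, a stream copy. A minimal stand-in for that loop using plain java.io streams on local files (FsShell does the same against HDFS FileSystem streams; the class here is illustrative, not Hadoop code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.Writer;

// What -cp does in miniature: open the source, stream its bytes into the
// destination until EOF.
public class StreamCopy {
    public static void copy(File src, File dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("src", null);
        try (Writer w = new FileWriter(src)) { w.write("data"); }
        File dst = File.createTempFile("dst", null);
        copy(src, dst);
        System.out.println(dst.length()); // 4
    }
}
```

For copies within HDFS itself, the FileUtil.copy() route mahadev mentions later in the thread avoids pulling data through a shell process.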
Hello,
Short answer: No. Use separate classes (or derive your combiner from the
reducer, with modified behavior).
I answered a similar question not long ago:
http://search-hadoop.com/m/Wh7vuKJEtL1/reducer+combinersubj=Differentiate+Reducer+or+Combiner
HTH.
On 14-May-2011, at
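Harsh's suggestion to derive the combiner from the reducer can be sketched as follows. The types here are simplified stand-ins, not the real org.apache.hadoop.mapreduce.Reducer API: the shared aggregation lives in the base class, and only the one step that must differ is overridden.

```java
// Pattern sketch: combiner derived from the reducer, overriding only the
// step whose behavior differs. (Hypothetical types for illustration.)
public class CombinerFromReducer {

    /** Reducer: sums values, then applies a reducer-only final step. */
    static class SumReducer {
        long reduce(long[] values) {
            long sum = 0;
            for (long v : values) sum += v;
            return finish(sum);
        }
        // The one step that differs between reducer and combiner.
        long finish(long sum) {
            return sum * 2;   // hypothetical final transformation
        }
    }

    /** Combiner: same aggregation, but partial sums must stay raw. */
    static class SumCombiner extends SumReducer {
        @Override
        long finish(long sum) {
            return sum;
        }
    }

    public static void main(String[] args) {
        long[] values = {1, 2, 3};
        System.out.println(new SumCombiner().reduce(values)); // 6
        System.out.println(new SumReducer().reduce(values));  // 12
    }
}
```

The key constraint this preserves: a combiner may run zero or more times, so any transformation that is only valid once (like the final step above) belongs in the reducer alone.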
I am using my university lab's cluster, so I have limited access to
its settings.
It consists of 10 nodes with Intel Xeon CPUs running GNU/Linux 2.6.38.
Is there a way to solve the problem without changing the settings of the
cluster?
I am trying to use the patch that Harsh J sent me
I installed the patch:
https://issues.apache.org/jira/browse/HDFS-1835
that Harsh J pointed me to, and now everything works great!!!
I hope that this change won't create other problems.
Thanks to everyone and especially to Harsh J!!
I would never have found the problem without your help!!
From:
Jim,
you can use FileUtil.copy() methods to copy files.
Hope that helps.
--
thanks
mahadev
@mahadevkonar
On Fri, May 13, 2011 at 2:00 PM, lohit lohit...@yahoo.com wrote:
There is no FileSystem API to copy.
You could try
hadoop dfs -cp src dest
which basically reads the file and writes