I will try this.
For HDFS: the M/R Admin web UI on port 50030 (from the Apache example)
shows 2 nodes registered, but all of the jobs show as completed on only
one of the nodes.
I will package up a set of clean logs.
Thanks
-s
On Mon, Jul 23, 2012 at 2:08 PM, Harsh J wrote:
Steve,
If you're going to use NFS, make sure your "hadoop.tmp.dir" property
points to the mount point that is NFS. Can you change that property
and restart the cluster and retry?
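For example, something like this in core-site.xml (a sketch; /mnt/nfs/hadoop is a placeholder for whatever your actual NFS mount point is):

```xml
<!-- core-site.xml: point hadoop.tmp.dir at the shared NFS mount -->
<property>
  <name>hadoop.tmp.dir</name>
  <!-- /mnt/nfs/hadoop is a placeholder; substitute your real NFS mount -->
  <value>/mnt/nfs/hadoop</value>
</property>
```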
Regarding the HDFS issue, it's hard to tell without logs. Did you see
two nodes alive in the Web UI after configuring H
Thanks Harsh,
1) I was using NFS
2) I don't believe that anything under /tmp is distributed even when running
3) When I use HDFS, it doesn't attempt to send ANY jobs to my second node
Any clues?
-steve
On Fri, Jul 20, 2012 at 11:52 PM, Harsh J wrote:
A 2-node cluster is a fully-distributed cluster and cannot use a
file:/// FileSystem, as that's not a distributed filesystem (unless it's
an NFS mount). This explains why some of your tasks aren't able to
locate an earlier-written file in the /tmp dir that's probably
available on the JT node alone, not
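In other words, the default filesystem should point at HDFS rather than the local filesystem. A sketch of the relevant core-site.xml entry (the hostname and port are placeholders for your NameNode's actual address):

```xml
<!-- core-site.xml: use HDFS as the default FileSystem, not file:/// -->
<property>
  <name>fs.default.name</name>
  <!-- "namenode-host:9000" is a placeholder; use your NameNode's address -->
  <value>hdfs://namenode-host:9000</value>
</property>
```

With file:///, each node resolves paths against its own local disk, so a file written on the JT node simply doesn't exist on the other node.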
yes we can see it :-)
SS
On 20 Jul 2012, at 12:15, Steve Sonnenberg wrote:
> Sorry this is my first posting and I haven't gotten a copy nor any
> response.
> Could someone please respond if you are seeing this?
> Thanks,
> Newbie
Sorry this is my first posting and I haven't gotten a copy nor any response.
Could someone please respond if you are seeing this?
Thanks,
Newbie
On Fri, Jul 20, 2012 at 12:36 PM, Steve Sonnenberg wrote:
> I have a 2-node Fedora system and in cluster mode, I have the following
> issue that I can'