With the following configuration, tasks complete successfully:
1. /mnt1 is the mount point for a shared, distributed FS on all machines
running hadoop.
2. hadoop.tmp.dir = /mnt1/tmp/hadoop
3. fs.default.name = file:/// (see the config sketch below)
4. `bin/hadoop jar hadoop-*-examples.jar wordcount /mnt1/input /mnt1/output`
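
For reference, a minimal sketch of how properties 2 and 3 might look as a
config file. This assumes a 0.20-style conf/core-site.xml; on older releases
the same two properties would go in hadoop-site.xml instead.

```xml
<?xml version="1.0"?>
<!-- core-site.xml: minimal sketch of the two properties above.
     File name assumes a 0.20-style conf/ layout. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt1/tmp/hadoop</value>
  </property>
</configuration>
```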

`top` and intuition suggest that each machine is reading from its own /mnt1
mount point and that the computation is distributed. Thank you very much for
the advice, Allen.

Best,
--Chris

On Thu, Jul 1, 2010 at 12:15 PM, Allen Wittenauer
<awittena...@linkedin.com> wrote:

>
> On Jul 1, 2010, at 12:00 PM, Chris D wrote:
>
> > Yes, it is mountable on all machines simultaneously, and, for example,
> > works properly through file:///mnt/to/dfs in a single node cluster.
>
> Then file:// will likely work on a multi-node cluster as well.  So I doubt
> you'll need to write anything at all. :)
