Not sure if this will help, but in my 0.14.1 install there is this option in 
conf/hadoop-default.xml:

<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data</value>
  <description>Determines where on the local filesystem a DFS data node
  should store its blocks. If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
</property>

This suggests you should be able to list each of the paths there.
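
Untested, but with the mount points from your mail I'd expect the override 
in conf/hadoop-site.xml to look something like this (the /dfs/data 
subdirectories are just a guess at keeping the default layout; any writable 
directory on each mount should do):

<property>
  <name>dfs.data.dir</name>
  <value>/hdfs_a/dfs/data,/hdfs_b/dfs/data,/hdfs_c/dfs/data,/hdfs_d/dfs/data</value>
</property>

Hope this helps.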


"C G" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Hi All:
>
>  Two quick questions, thanks for any guidance...
>
>  I'd like to run nodes with around 2T of local disk set up as JBOD.  So I 
> would have 4 separate file systems per machine, for example /hdfs_a, 
> /hdfs_b, /hdfs_c, /hdfs_d .  Is it possible to configure things so that 
> HDFS knows about all 4 file systems? Since we're using HDFS replication I 
> see no point in using RAID-anything... to me that's the whole point of 
> replication. Comments?
>
> Is it possible to set things up in Hadoop to run multiple masters? Is 
> there any point or benefit in doing so? Can I run multiple namenodes 
> to guard against a single namenode going down or being wrecked?
>
>  If you can't run multiple namenodes, then that sort of implies the 
> machine which is hosting *the* namenode needs to do all the traditional 
> things to protect against data loss/corruption, including frequent 
> backups, RAID mirroring, etc.
>
>  Thanks,
>  C G


