[ http://issues.apache.org/jira/browse/HADOOP-340?page=comments#action_12419700 ]

Doug Cutting commented on HADOOP-340:
-------------------------------------

Disks listed in dfs.data.dir that do not exist on a host are ignored.  So, 
instead of a wildcard, you can simply list all possible names used in your 
cluster, and only those that actually exist on a host will be used.  Similarly 
for mapred.local.dir.  Does that suffice?
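For example, on a cluster where some hosts mount two disks and others four, a single shared hadoop-site.xml could list every candidate directory; hosts simply skip the entries that do not exist. A minimal sketch (the disk names are illustrative, not from the issue):

```xml
<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/disk1/dfs/data,/home/hadoop/disk2/dfs/data,/home/hadoop/disk3/dfs/data,/home/hadoop/disk4/dfs/data</value>
</property>
```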

> Using wildcards in config pathnames
> -----------------------------------
>
>          Key: HADOOP-340
>          URL: http://issues.apache.org/jira/browse/HADOOP-340
>      Project: Hadoop
>         Type: Improvement
>   Components: conf
>     Versions: 0.4.0
>  Environment: a cluster with different disk setups
>     Reporter: Johan Oskarson
>     Priority: Minor
>
> In our cluster there are machines with very different disk setups.
> I've solved this by not rsyncing hadoop-site.xml, but as you probably
> understand this means new settings will not get copied properly.
> I'd like to be able to use wildcards in the dfs.data.dir path, for example:
> <property>
>   <name>dfs.data.dir</name>
>   <value>/home/hadoop/disk*/dfs/data</value>
> </property>
> then every disk mounted under a matching directory would be used.
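Until such wildcard support exists, the expansion can be done outside Hadoop at startup time. A minimal sketch, assuming a POSIX shell and the illustrative /home/hadoop/disk* mount layout from the example above:

```shell
# Hypothetical workaround: let the shell expand the wildcard and join
# the matches into the comma-separated list that dfs.data.dir already
# accepts. The /home/hadoop/disk* layout is illustrative.
dirs=$(echo /home/hadoop/disk*/dfs/data | tr ' ' ',')
echo "dfs.data.dir=$dirs"
```

The resulting value could then be substituted into a per-host hadoop-site.xml by the start-up script, so the shared template itself never needs host-specific edits.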

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira