Hi, the value should be a comma-separated list inside a single property :) e.g. /foo/a,/foo/b etc.
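Something along these lines, with the paths taken from your mail (note the ${user.name} tokens got line-wrapped in your message and must not contain spaces), should work:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/mnt/san1/hdfs/${user.name}/dfs/data,/mnt/san2/hdfs/${user.name}/dfs/data,/mnt/san3/hdfs/${user.name}/dfs/data,/mnt/san4/hdfs/${user.name}/dfs/data</value>
</property>

Also, when the same property name appears in several <property> blocks, only the last definition takes effect, which is why your one-property-per-drive attempt ended up using just the last drive. Remember to restart the datanode after changing the value.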
Sent from my iPhone

On Dec 27, 2012, at 19:32, Mohit Vadhera <[email protected]> wrote:

> Hi,
>
> I have installed a Hadoop cluster. I tried to add other drives to HDFS but
> didn't succeed. I tried adding the following parameter in
> /etc/hadoop/conf/hdfs-site.xml, but it is not working; the service status is shown below.
>
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <!-- <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value> -->
>   <value>/mnt/san1/hdfs/${user.name}/dfs/data,/mnt/san2/hdfs/${user.name}/dfs/data,/mnt/san3/hdfs/${user.name}/dfs/data,/mnt/san4/hdfs/${user.name}/dfs/data</value>
> </property>
>
> ]# for service in /etc/init.d/hadoop-hdfs-* ; do $service status; done
> Hadoop datanode is running                                 [ OK ]
> Hadoop namenode is dead and pid file exists                [FAILED]
> Hadoop secondarynamenode is running                        [ OK ]
>
> Logs
>
> # tail -n 30 hadoop-hdfs-namenode-OPERA-MAST1.log
> 2012-12-27 13:04:42,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
> 2012-12-27 13:04:42,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = hadmin
> 2012-12-27 13:04:42,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
> 2012-12-27 13:04:42,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2012-12-27 13:04:42,528 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2012-12-27 13:04:42,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> 2012-12-27 13:04:42,559 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2012-12-27 13:04:42,559 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
> 2012-12-27 13:04:42,559 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
> 2012-12-27 13:04:42,566 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename [email protected]
> 2012-12-27 13:04:42,568 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2012-12-27 13:04:42,568 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2012-12-27 13:04:42,568 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2012-12-27 13:04:42,569 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.io.IOException: NameNode is not formatted.
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:211)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:534)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:424)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:386)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:398)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:432)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> 2012-12-27 13:04:42,571 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2012-12-27 13:04:42,574 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at OPERA-MAST1.ny.os.local/172.20.3.119
>
> ====================================================
>
> But if I add it in the following way, one property per drive, then it picks
> only the last drive:
>
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <!-- <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value> -->
>   <value>/mnt/san2/hdfs/${user.name}/dfs/data</value>
> </property>
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <!-- <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value> -->
>   <value>/mnt/san3/hdfs/${user.name}/dfs/data</value>
> </property>
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <!-- <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value> -->
>   <value>/mnt/san4/hdfs/${user.name}/dfs/data</value>
> </property>
>
> Can you please let me know what the issue is?
>
> Thanks,
> Mohit
