Hi guys,
I'm working on a cluster of 30 machines whose storage is shared over NFS, but I've
designated only 3 of the nodes as my Hadoop cluster. My problem is this: the
datanode won't start on one of the nodes because of the following error:
org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /cs/student/mark/tmp/hodhod/dfs/data. The directory is already locked
I think this happens because the data directory is on NFS and shared by all the
nodes: once one datanode locks it, the others can't lock the same directory. So
I had to change the following configuration:
dfs.data.dir to "/tmp/hadoop-user/dfs/data", so that each node uses its own local disk.
But this setting gets overridden by ${hadoop.tmp.dir}/dfs/data, where my
hadoop.tmp.dir is "/cs/student/mark/tmp", as you can see from the path in the error above.
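
For reference, this is roughly what the relevant part of my core-site.xml looks
like (the values are exactly as above, but I'm typing the layout from memory,
so take it as a sketch):

  <?xml version="1.0"?>
  <configuration>
    <!-- shared NFS path; the locked directory in the error lives under here -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/cs/student/mark/tmp</value>
    </property>
    <!-- node-local path I want the datanode to use instead -->
    <property>
      <name>dfs.data.dir</name>
      <value>/tmp/hadoop-user/dfs/data</value>
    </property>
  </configuration>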
Where is this configuration being overridden? I thought my core-site.xml had
the final configuration values.
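
From the docs, my understanding is that a property can be marked final so that
later-loaded configuration resources can't override it. This is the syntax I
mean (just a sketch; I haven't confirmed whether it applies to my case):

  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hadoop-user/dfs/data</value>
    <!-- 'final' should stop later config resources from overriding this value -->
    <final>true</final>
  </property>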
Thanks,
Mark