Hi, before doing that, I would run ls -ltR >
filename.txt on each disk and see if there are hints/references to the original
file system. That may help you work out a more meaningful path mapping for
hdfs-site.xml. Generally, your plan sounds pretty close.
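
For example, a quick sketch (mount points taken from your message; the
output filenames are just placeholders):

    for d in /hadoop/data/path/x /hadoop/data/path/y /hadoop/data/path/z; do
        # Save a recursive long listing of each disk for comparison.
        ls -ltR "$d" > "/tmp/$(basename "$d")-listing.txt"
    done

Timestamps and block pool contents in those listings may hint at which
mount was originally a, b, or c. Each storage dir should also contain a
current/VERSION file whose storageID you may be able to cross-reference
against your old datanode logs.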

Let us know how it goes

On Mon, Sep 13, 2021 at 5:59 PM, Andrew Chi <chi.and...@gmail.com> wrote:

> I've had a recent drive failure that resulted in the removal of several 
> drives from an HDFS datanode machine (Hadoop version 3.3.0). This caused 
> Linux to rename half of the drives in /dev/*, with the result that when we 
> mount the drives, the original directory mapping no longer exists. The data 
> on those drives still exists, so this is equivalent to a renaming of the 
> local filesystem directories.
>
> Originally, we had:
> /hadoop/data/path/a
> /hadoop/data/path/b
> /hadoop/data/path/c
>
> Now we have:
> /hadoop/data/path/x
> /hadoop/data/path/y
> /hadoop/data/path/z
>
> It's not clear how {a,b,c} map onto {x,y,z}. The blocks have been 
> preserved within the directories, but the directories have essentially been 
> randomly permuted.
>
> Can I simply go to hdfs-site.xml and change dfs.datanode.data.dir to the new 
> list of comma-separated directories /hadoop/data/path/{x,y,z}? Will the 
> datanode still work correctly when I start it back up?
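>
> I.e., something like this in hdfs-site.xml:
>
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>/hadoop/data/path/x,/hadoop/data/path/y,/hadoop/data/path/z</value>
>   </property>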
>
> Thanks!
> Andrew
