I think the question here is how to add a new HDD volume to an already
existing, formatted HDFS cluster.
I'm not sure that just adding the directory to dfs.data.dir alone would help.
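For what it's worth, the usual procedure is roughly the following. This is only
a sketch: the device name /dev/vdb and the hadoop-datastore path come from the
thread below, but the mount point (/mnt/hdfs2), the `hadoop` user, and the exact
data-directory layout are assumptions that will differ per install. The
dfs.data.dir property is the Hadoop 1.x name (it became dfs.datanode.data.dir
in 2.x).

```shell
# Sketch only -- /dev/vdb is from the thread; the mount point and
# ownership are assumptions, adjust to your setup.

# 1. Format the new disk (ext4, per the suggestion below)
mkfs.ext4 /dev/vdb

# 2. Mount it and make the mount survive reboots
mkdir -p /mnt/hdfs2
mount /dev/vdb /mnt/hdfs2
echo '/dev/vdb /mnt/hdfs2 ext4 defaults,noatime 0 0' >> /etc/fstab

# 3. Create a data directory owned by the user running the DataNode
mkdir -p /mnt/hdfs2/hadoop-datastore
chown -R hadoop:hadoop /mnt/hdfs2/hadoop-datastore

# 4. Add the new directory to dfs.data.dir in conf/hdfs-site.xml,
#    as a comma-separated list (no spaces around the comma):
#
#    <property>
#      <name>dfs.data.dir</name>
#      <value>/usr/local/hadoop-1.0.4/hadoop-datastore,/mnt/hdfs2/hadoop-datastore</value>
#    </property>

# 5. Restart the DataNode on that machine so it picks up the new volume
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh stop datanode
/usr/local/hadoop-1.0.4/bin/hadoop-daemon.sh start datanode
```

Note that HDFS only places *new* blocks round-robin across the configured
directories; it won't move existing blocks onto the new disk by itself.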


On Fri, May 3, 2013 at 3:28 PM, Håvard Wahl Kongsgård <
[email protected]> wrote:

> go for ext3 or ext4
>
>
> On Fri, May 3, 2013 at 8:32 AM, Joarder KAMAL <[email protected]> wrote:
>
>> Hi,
>>
>>  I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and the
>> initial hard disk (/dev/vda1) is only 10G in size. Now I have a second hard
>> drive, /dev/vdb, of 60GB, and I want to add it to my existing HDFS
>> cluster. How can I format the new hard disk (and in which format? XFS?) and
>> mount it to work with HDFS?
>>
>> The default HDFS directory is located at
>> /usr/local/hadoop-1.0.4/hadoop-datastore
>> and I followed this link for the installation.
>>
>> http://ankitasblogger.blogspot.com.au/2011/01/hadoop-cluster-setup.html
>>
>> Many thanks in advance :)
>>
>>
>> Regards,
>> Joarder Kamal
>>
>
>
>
> --
> Håvard Wahl Kongsgård
> Data Scientist
> Faculty of Medicine &
> Department of Mathematical Sciences
> NTNU
>
> http://havard.dbkeeping.com/
>