I've already added the new volume with dfs.data.dir, and it gets added without
any problem. My problem is that the volume I'm adding has 150 GB of free
space, but when I check namenode:50070 it only adds 5 GB to the total
capacity, of which 50% is reserved for non-DFS usage. I've set
dfs.datanode.du.reserved to zero as well, but it doesn't make any
difference.
How am I supposed to tell Hadoop to use the whole 150 GB for the datanode?
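
For reference, here is roughly what the relevant part of my hdfs-site.xml
looks like at the moment (a sketch of my setup; the mount point is the same
/media/newhard one mentioned below, and dfs.datanode.du.reserved is in bytes):

  <property>
    <name>dfs.data.dir</name>
    <value>/media/newhard</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>0</value>
  </property>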

On Sun, Jan 1, 2012 at 2:59 PM, Rajiv Chittajallu <raj...@yahoo-inc.com>wrote:

> dfsadmin -setSpaceQuota applies to the HDFS filesystem. It doesn't apply to
> datanode volumes.
>
>
> To add a volume, update dfs.data.dir (hdfs-site.xml on the datanode) and
> restart the datanode.
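>
> A minimal sketch of the restart, assuming a standard tarball install with
> the scripts under $HADOOP_HOME/bin (adjust the path for your layout):
>
>   $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
>   $HADOOP_HOME/bin/hadoop-daemon.sh start datanode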
>
>
> Check the datanode log to see if the new volume was activated. You should
> see additional space in
> namenode:50070/dfsnodelist.jsp?whatNodes=LIVE
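>
> You can also check each datanode's configured capacity from the command
> line, for example:
>
>   hadoop dfsadmin -report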
>
>
> >________________________________
> > From: Hamed Ghavamnia <ghavamni...@gmail.com>
> >To: hdfs-user@hadoop.apache.org; Rajiv Chittajallu <raj...@yahoo-inc.com>
> >Sent: Sunday, January 1, 2012 4:06 PM
> >Subject: Re: HDFS Datanode Capacity
> >
> >
> >Thanks for the help.
> >I checked the quotas; it seems they're used for setting the maximum size
> >of the files inside HDFS, not for the datanode itself. For example, if
> >I set my dfs.data.dir to /media/newhard (which I've mounted my new hard
> >disk to), I can't use dfsadmin -setSpaceQuota n /media/newhard to set the
> >size of this directory. I can change the sizes of the directories inside
> >HDFS (tmp, user, ...), but that doesn't have any effect on the capacity of
> >the datanode.
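> >To make the difference concrete (the HDFS path and size below are just an
> >illustration), setting a quota on a directory inside HDFS works, e.g.:
> >
> >  hadoop dfsadmin -setSpaceQuota 10737418240 /user
> >
> >but pointing the same command at /media/newhard doesn't help, since that's
> >a local path rather than a directory inside HDFS.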
> >I can set my new mounted volume as the datanode directory and it runs
> >without a problem, but the capacity is the default 5 GB.
> >
> >
> >On Sun, Jan 1, 2012 at 10:41 AM, Rajiv Chittajallu <raj...@yahoo-inc.com>
> wrote:
> >
> >Once you updated the configuration, did you restart the datanode? Check if
> >the datanode log indicates that it was able to set up the new volume.
> >>
> >>
> >>
> >>>________________________________
> >>> From: Hamed Ghavamnia <ghavamni...@gmail.com>
> >>>To: hdfs-user@hadoop.apache.org
> >>>Sent: Sunday, January 1, 2012 11:33 AM
> >>>Subject: HDFS Datanode Capacity
> >>
> >>>
> >>>
> >>>Hi,
> >>>I've been searching for how to configure the maximum capacity of a
> >>>datanode. I've added big volumes to one of my datanodes, but the
> >>>configured capacity doesn't get bigger than the default 5 GB. If I want a
> >>>datanode with 100 GB of capacity, I have to add 20 directories, each
> >>>contributing 5 GB, so the maximum capacity reaches 100 GB. Is there
> >>>anywhere this can be set? Can different datanodes have different
> >>>capacities?
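> >>>
> >>>Just to illustrate the workaround (the directory names here are made up),
> >>>I end up with something like this in hdfs-site.xml, with one entry per
> >>>5 GB directory:
> >>>
> >>>  <property>
> >>>    <name>dfs.data.dir</name>
> >>>    <value>/data/dir1,/data/dir2,/data/dir3</value>
> >>>  </property>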
> >>>
> >>>Also, it seems like dfs.datanode.du.reserved doesn't work either:
> >>>I've set it to zero, but it still leaves 50% of the free space for
> >>>non-DFS usage.
> >>>
> >>>Thanks,
> >>>Hamed
> >>>
> >>>P.S. This is my first message on the mailing list, so if there are any
> >>>rules I should follow for sending emails, I'd be thankful if you let me
> >>>know. :)
> >>>
> >>>
> >>>
> >>
> >
> >
> >
>
