Thanks, Harsh. I don't believe I have spaces in the list I specified in the
last mail.

Thanks,
V
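
For reference, a minimal hdfs-site.xml sketch of the space-free dfs.data.dir list Harsh suggests, using the mount points mentioned later in this thread (the exact paths on your cluster may differ):

```xml
<!-- Sketch only: dfs.data.dir with no spaces between the comma-separated paths.
     On older 1.x releases, a space after a comma can become part of the path. -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/hadoop0/hdfs,/mnt/hadoop1/hdfs,/mnt/hadoop2/hdfs,/mnt/hadoop3/hdfs</value>
</property>
```

Each listed directory should exist and be owned by the DataNode user before restarting the DataNode.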
On Sep 5, 2013 11:20 PM, "Harsh J" <[email protected]> wrote:

> The spaces may be a problem if you are using the older 1.x releases.
> Please try to specify the list without spaces, and also check if all
> of these paths exist and have some DN owned directories under them.
>
> Please also keep the lists in CC/TO when replying. Clicking
> Reply-to-all usually helps do this automatically.
>
> On Thu, Sep 5, 2013 at 11:16 PM, Viswanathan J
> <[email protected]> wrote:
> > Hi Harsh,
> >
> > For the dfs.data.dir property, we defined the values as a comma-separated
> > list:
> >
> > /mnt/hadoop0/hdfs, /mnt/hadoop1/hdfs, /mnt/hadoop2/hdfs, /mnt/hadoop3/hdfs
> >
> > The above values are different devices.
> >
> > Thanks,
> > V
> >
> > On Sep 5, 2013 10:53 PM, "Harsh J" <[email protected]> wrote:
> >>
> >> Please share your hdfs-site.xml. HDFS needs to be configured to use
> >> all 4 disk mounts - it does not auto-discover and use all drives
> >> today.
> >>
> >> On Thu, Sep 5, 2013 at 10:48 PM, Viswanathan J
> >> <[email protected]> wrote:
> >> > Hi,
> >> >
> >> > The data being stored on the data nodes is not spread equally across
> >> > all the data directories.
> >> >
> >> > We have 4x1 TB drives, but huge amounts of data are being stored on a
> >> > single disk only, on all the nodes. How do we balance the data so that
> >> > all the drives are utilized?
> >> >
> >> > This causes the reported HDFS storage usage to grow very quickly, even
> >> > though we have space available.
> >> >
> >> > Thanks,
> >> > Viswa.J
> >>
> >>
> >>
> >> --
> >> Harsh J
>
>
>
> --
> Harsh J
>
