https://reviews.apache.org/r/10661/
On Thu, Apr 18, 2013 at 1:55 AM, 王国栋 <[email protected]> wrote:

> Thanks for your explanation.
>
> I will try to send a patch later this week.
>
> Guodong
>
>
> On Thu, Apr 18, 2013 at 3:14 PM, Vinod Kone <[email protected]> wrote:
>
> > Aha. That makes sense now.
> >
> > When mesos calculates disk usage, it just does a 'statvfs' on the root
> > ("/") path. Since the root path is mounted on /dev/sda1 in your case, it
> > picks up that file system.
> >
> > The fix is for the slave to do the disk calculation on the file system on
> > which the "work_dir" is mounted, instead of "/".
> >
> > This is as simple as doing 'fs::usage(flags.work_dir)' in
> > Slave::checkDiskUsage() and 'fs::available(flags.work_dir)' in
> > Slave::Slave() (see src/slave/slave.cpp).
> >
> > I can get to it some time this week. But feel free to send a patch!
> >
> >
> > On Wed, Apr 17, 2013 at 8:37 PM, Vinod Kone <[email protected]> wrote:
> > >
> > > Hi Guodong,
> > >
> > > Could you explain a bit about what's not working with the "--work_dir"
> > > flag? If you mount a directory (say /mesos) in /data1 and give
> > > "--work_dir=/mesos", I would presume it would work?
> > >
> > > If it doesn't, can you send out some output?
> > >
> > > Thanks,
> > > Vinod
> > >
> > >
> > > On Wed, Apr 17, 2013 at 8:25 PM, 王国栋 <[email protected]> wrote:
> > >
> > >> Hi,
> > >>
> > >> It seems that I cannot set the disk device that provides the disk
> > >> resource on the slave. Here is my situation.
> > >>
> > >> I installed Mesos in /user/mesos/ on sda, but sda is the OS disk, so
> > >> it is small. When I start the slave, Mesos picks up the disk resource
> > >> from sda. But our machine has several large disks, which are mounted
> > >> on /data1 etc.
> > >>
> > >> So I am wondering how to make the Mesos slave use /data1 as the disk
> > >> resource provider. I tried the --work_dir flag, but it doesn't work on
> > >> our node.
> > >>
> > >> Thanks.
> > >>
> > >> Guodong
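
For illustration, here is a minimal, self-contained sketch of the statvfs-based check Vinod describes. It is not the actual Mesos fs::usage / fs::available code from src/slave/slave.cpp; the diskUsage and diskAvailable helpers below are hypothetical stand-ins that only show how the result depends on the path passed in, which is why checking "/" reports the small OS disk (/dev/sda1) while checking a path under /data1 reports the large data disk.

// Sketch only: illustrates that statvfs reports whichever filesystem the
// given path lives on. diskUsage/diskAvailable are hypothetical helpers,
// not the Mesos fs:: implementation.
#include <sys/statvfs.h>

#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

// Fraction of the filesystem containing 'path' that is already used.
static double diskUsage(const std::string& path)
{
  struct statvfs buf;
  if (statvfs(path.c_str(), &buf) != 0) {
    throw std::runtime_error("statvfs failed on " + path);
  }
  if (buf.f_blocks == 0) {
    return 0.0;
  }
  return static_cast<double>(buf.f_blocks - buf.f_bfree) / buf.f_blocks;
}

// Bytes available to unprivileged processes on that filesystem.
static uint64_t diskAvailable(const std::string& path)
{
  struct statvfs buf;
  if (statvfs(path.c_str(), &buf) != 0) {
    throw std::runtime_error("statvfs failed on " + path);
  }
  return static_cast<uint64_t>(buf.f_bavail) * buf.f_frsize;
}

int main()
{
  // Querying "/" sees the OS disk (/dev/sda1 above); querying a directory
  // under /data1 sees the data disk, which is what the proposed fix
  // (passing the slave's work_dir instead of "/") is after.
  std::cout << "usage(\"/\")          = " << diskUsage("/") << std::endl;
  std::cout << "available(\"/data1\") = " << diskAvailable("/data1") << std::endl;
  return 0;
}

Compiling and running this on the slave host, once with "/" and once with the slave's actual work_dir, would show the difference between the two filesystems directly.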
