Hi Vinod,

I don't think it is a good idea to use the whole disk space in the resource
offers. In my opinion, if the master offers the whole disk space to a
framework but the slave node cannot actually provide that much space, it will
cause some very bad results (e.g. executor crashes or task failures) that
could have been avoided.

I agree with you that dynamically adjusting the available resources would be
quite difficult to implement.

For now, I think using the free disk space as the default resource is better
than using the whole disk space.
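For anyone following along, the "free disk space at startup" value being
discussed can be sampled like this (a plain Python sketch, not Mesos code;
the path and the MB unit are just illustrative):

```python
import os

def free_disk_mb(path="/"):
    """Return the free disk space in MB at `path`, as a slave
    might sample it once at startup."""
    st = os.statvfs(path)
    # f_bavail: free blocks available to unprivileged users;
    # f_frsize: fundamental block size in bytes.
    return (st.f_bavail * st.f_frsize) // (1024 * 1024)
```

The command-line workaround Vinod mentions below is, if I understand it
correctly, the slave's --resources flag, which pins the offered value instead
of sampling it.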

Best.

Guodong


On Fri, May 31, 2013 at 1:11 AM, Vinod Kone <[email protected]> wrote:

> That's a good point. While dynamically adjusting available resources (e.g.,
> disk) is a good idea, it would be very complex to implement. Maybe an
> alternative solution is for the slave to offer the whole disk space (not
> just free disk space) when it starts up, similar to how it offers
> memory/cpu. Once we have disk isolation in place (currently we don't
> support it) then the slave should correctly offer available disk as soon as
> it gets free. Of course, if you clean up the disk out-of-band (not through
> Mesos) then Mesos wouldn't react to that.
>
> Feel free to create a ticket.
>
> In the meantime, you can specify the disk resource as a command-line flag
> to force the slave to use that value as the disk resource in its offers.
>
>
> On Thu, May 30, 2013 at 1:57 AM, 王国栋 <[email protected]> wrote:
>
> > Hi,
> >
> > I find that the mesos slave will not update its disk resource info when
> > disk usage changes. It seems that the slave takes the disk space
> > available when it starts as the disk resources; when the disk space
> > changes after that, the slave will not know about it.
> >
> > This seems like a bug in the slave. Say I have a slave node whose free
> > disk is only 100MB when it starts. No framework task will be launched on
> > this slave node due to lack of disk space. Then I remove some local files
> > on the node and free more disk, but the slave still thinks the disk
> > resource is 100MB.
> >
> > Guodong
> >
>
