Hi William,

Actually I don't quite get the need for:
2. Our JSV adds an environment variable to the job recording the amount
   of disk requested (you could try parsing it out of the job spool but
   this is easier).

If a user has specified the disk usage via a consumable complex (like
-l disk_requested=100G), can the prolog script simply use that value?
(I've put a rough sketch of what I mean at the bottom of this mail.)

Cheers,
D

On Thu, Sep 8, 2016 at 11:00 PM, William Hay <w....@ucl.ac.uk> wrote:

> On Thu, Sep 08, 2016 at 10:10:51AM +1000, Derrick Lin wrote:
> > Hi all,
> >
> > Each of our execution nodes has a scratch space mounted as /scratch_local.
> > I notice there is a tmpdir variable that can be changed in a queue's conf.
> > According to the docs, SGE will create a per-job dir under tmpdir and set
> > the path in the vars TMPDIR and TMP.
> >
> > I have set up a complex tmp_requested which a job can specify during
> > submission. I want to ensure a job cannot use more than what it claims in
> > tmp_requested. For example, I would like to set a quota on a job's TMPDIR
> > according to tmp_requested.
> >
> > What is the best way of doing that?
> >
> > Cheers,
> > Derrick
>
> We do something along those lines here and making some improvements to it
> is on my todo list. I'll outline my current plan:
>
> 1. Hand all the spare disk space on a node over to a nice btrfs filesystem.
>
> 2. Our JSV adds an environment variable to the job recording the amount
>    of disk requested (you could try parsing it out of the job spool but
>    this is easier).
>
> 3. The prolog creates a btrfs subvolume and assigns it an appropriate
>    quota.
>
> 4. In starter_method point TMPDIR and TMP at the created volume. We use
>    ssh for qlogin etc and ForceCommand can do the same job for it.
>
> 5. In the epilog delete the subvolume.
>
> At present we're using a huge swap partition and TMPFS instead of btrfs.
> You could probably do this with a volume manager and a regular
> filesystem as well, but it would be slower.
>
> We don't use the grid engine configured TMPDIR as this is also used
> for internal gridengine communication and mounting filesystems on
> it causes problems.
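Something like this is what I had in mind for the prolog - completely
untested, just a sketch: the script name, the tmp_requested complex, the
10G default and the /scratch_local path are placeholders for whatever is
actually configured. It also assumes the prolog runs as root (e.g.
"prolog root@/path/to/scratch_prolog.sh $job_owner" in the queue conf)
and that quotas are already enabled on the filesystem
("btrfs quota enable /scratch_local"):

#!/bin/bash
# scratch_prolog.sh - hypothetical sketch, not a production script.
# $1 is the job owner, passed in via the $job_owner pseudo variable
# in the queue's prolog setting.
JOB_OWNER=$1
SCRATCH_BASE=/scratch_local
JOB_SCRATCH=$SCRATCH_BASE/$JOB_ID.$SGE_TASK_ID

# This is the bit I'm asking about: pull the requested size straight out
# of qstat -j, assuming the request shows up as tmp_requested=<size>
# on the hard resource_list line.
REQUESTED=$(qstat -j "$JOB_ID" 2>/dev/null | \
            sed -n 's/.*tmp_requested=\([^, ]*\).*/\1/p' | head -n 1)
REQUESTED=${REQUESTED:-10G}   # arbitrary default if nothing was requested

# Per-job btrfs subvolume, owned by the job owner, capped at the request.
btrfs subvolume create "$JOB_SCRATCH" || exit 1
chown "$JOB_OWNER" "$JOB_SCRATCH"
chmod 700 "$JOB_SCRATCH"
btrfs qgroup limit "$REQUESTED" "$JOB_SCRATCH"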
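The starter_method and epilog halves (your steps 4 and 5) then look
fairly simple - again only a sketch with the same placeholder paths, and
assuming the epilog also runs as root so it is allowed to delete the
subvolume:

#!/bin/bash
# scratch_starter.sh - starter_method: point the job at the per-job
# subvolume, then exec the real job command that SGE passes in as "$@".
export TMPDIR=/scratch_local/$JOB_ID.$SGE_TASK_ID
export TMP=$TMPDIR
exec "$@"

#!/bin/bash
# scratch_epilog.sh - throw the per-job subvolume away after the job.
JOB_SCRATCH=/scratch_local/$JOB_ID.$SGE_TASK_ID
[ -d "$JOB_SCRATCH" ] && btrfs subvolume delete "$JOB_SCRATCH"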