Hi,

On 29.12.2011 at 20:39, Ciaran Wills wrote:
> I think in my case I can just set up a queue with a number of instances equal
> to the number of licenses I have. I'm not so much worried about maximum
> efficiency as avoiding having jobs fail because of license oversubscription.
>
> Dealing with host-based licensing is a feature Qube advertises - it comes up
> reasonably often with vfx software.
>
> Another feature which I couldn't figure out how to achieve is per-host
> dependencies (particularly in array dependencies) - if one task is going to
> generate a load of data for another task to consume then I could write it all
> to the local tmp drive if I know the dependent task will run on the same host
> rather than pushing it across the network. Instead I just wrap those tasks up
> into a single task submission which is fine for single dependencies but it
> would be nice to have something more flexible.

If you split the steps of your job into separate jobs, you can't use $TMPDIR any longer; you have to manage this scratch data on your own, as it needs to persist between the jobs (and eventually be removed - see the step B sketch below). It could also happen that too many instances of step A end up on one machine and you run out of local scratch space altogether.

If this happens often, another way to gain flexibility would be a parallel file system, so that any node can access the scratch data. Then you could split the jobs without any special setup.

==

For now: step A of the job could use `qalter` to add "-l h=$HOSTNAME" to step B of this chain of steps. (All exec hosts need to be submission hosts for this to work, though.) Besides a job number, `qalter` also accepts a job name, which can make it easier to target the following step when you name the jobs properly.
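As a minimal sketch of what I mean (untested; the job names "flow-stepA"/"flow-stepB", the /scratch path, and generate_data are made up for illustration - assuming step B was submitted with -hold_jid on step A and is still pending):

  #!/bin/sh
  # step_A.sh - submitted as "flow-stepA"; "flow-stepB" was submitted
  # with "-hold_jid flow-stepA" and is therefore still pending.
  OUT=/scratch/flow-stepA          # node-local scratch, must survive this job
  mkdir -p "$OUT"
  generate_data > "$OUT/data"      # placeholder for the real work

  # Pin the pending step B to this very host, so it finds the local data.
  # This exec host must also be a submission host for qalter to succeed.
  qalter -l h="$HOSTNAME" flow-stepB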
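Step B then consumes the data and has to do the cleanup itself, since $TMPDIR won't take care of it across jobs (again only a sketch with made-up names):

  #!/bin/sh
  # step_B.sh - submitted as "flow-stepB"; runs on the same host as step A
  # once the hold is released and the qalter above took effect.
  IN=/scratch/flow-stepA           # must match the path step A used
  consume_data < "$IN/data"        # placeholder for the real work
  rm -rf "$IN"                     # the scratch data persists between jobs,
                                   # so removing it is our own duty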
-- Reuti

_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users