Hi,

> Am 29.01.2015 um 20:15 schrieb Kevin Taylor <[email protected]>:
>
> They're generally the same, but some nodes are definitely newer than others.
> I think at some level it's just political; yes, they purchased nodeX, but if
> you're running on nodeY, do you really know the difference?
>
> Is share tree or functional share the best way to go forward on that?
The functional policy takes only the current workload into account. As long as all groups submit jobs continuously this will work, but any gap in submitting jobs won't be compensated later. That is exactly what the share tree policy does: it takes past usage into account and allows each group to reach its desired share of the cluster over time. As you say, it's political and maybe psychological: did I buy a physical box, or did I buy a certain amount of computing time? If you set up a share tree policy and collect the data for a month, you can show each group how much computing time it actually used, and check/adjust the settings if the desired distribution wasn't achieved.

-- Reuti

> > Subject: Re: [gridengine users] host sequence number per user
> > From: [email protected]
> > Date: Thu, 29 Jan 2015 14:50:59 +0100
> > CC: [email protected]
> > To: [email protected]
> >
> > Hi,
> >
> > > Am 29.01.2015 um 14:16 schrieb Kevin Taylor <[email protected]>:
> > >
> > > Is this a valid way to do what I'm thinking?
> > >
> > > Create two queues with the same nodes in them, where one has a different
> > > sequence of machines than the other.
> > > If all of the hosts have slots defined as a consumable resource, they
> > > shouldn't stomp on each other.
> > >
> > > The goal I'm trying to reach is:
> > >
> > > Several departments are contributing systems to a large grid.
> > > They've agreed to share systems for processing, but each department wants
> > > to use its own stuff before moving on to the other groups' systems.
> >
> > Are all nodes of the same type?
> >
> > > I'd imagine at some point I'm going to need to get the functional shares
> > > going, but for right now, I think if they launch jobs, each department
> > > wants to start on their own stuff first (best machines, worst machines,
> > > other group machines...).
> >
> > In principle it should work, but it may lead to the situation that you start a
> > long-running job on a node which belongs to another group while your own
> > nodes become idle after some time. Then the other group may have to use your
> > nodes, as you are still on theirs, and everyone ends up computing on the
> > other groups' machines.
> >
> > It would be worthwhile to try to see all machines as one cluster and use a
> > share tree policy, where the moving average per group tries to meet the
> > configured goal, i.e. its share of the complete cluster. If the groups shared
> > the costs 40/40/20, the scheduling policy can be set up to reflect this and
> > grant each group that amount of computing time within a specified timeframe.
> >
> > -- Reuti
> >
> > > > Subject: Re: [gridengine users] host sequence number per user
> > > > From: [email protected]
> > > > Date: Tue, 27 Jan 2015 17:14:38 +0100
> > > > CC: [email protected]
> > > > To: [email protected]
> > > >
> > > > Am 27.01.2015 um 16:45 schrieb Kevin Taylor <[email protected]>:
> > > > >
> > > > > I don't know if this is possible or not, but I've got a test queue
> > > > > set up with 5 hosts in it to test sequence number sorting of jobs.
> > > > >
> > > > > In the queue config, I have:
> > > > >
> > > > > seq_no 0,[hostA=110], \
> > > > >        [hostB=105], \
> > > > >        [hostC=105], \
> > > > >        [hostD=105], \
> > > > >        [hostE=105]
> > > > >
> > > > > and when submitting a job, hostA is the last one to get stuff on it,
> > > > > which is fine.
> > > > >
> > > > > Can I define this further, to say that only userA gets this type
> > > > > of sorting, and everyone else gets normal load-based sorting?
> > > >
> > > > No, but he can use a soft request to avoid hostA if possible: `qsub
> > > > -soft -l 'h=!hostA' ...`
> > > >
> > > > -- Reuti
> > > >
> > > > > This doesn't work, but something like [userA@hostA=110],
> > > > > [userA@hostB=105], etc...?
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
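For reference, the share tree setup described above could be sketched roughly as follows. This is only an untested sketch: the project names (deptA/deptB/deptC) and the slot of 168 hours for the decay halftime are placeholders, while the 40/40/20 shares follow the example in the thread. Adapt everything to your site before use.

```shell
# 1. Create one project per department (qconf -aprj opens an editor;
#    alternatively load a prepared file with qconf -Aprj):
qconf -aprj            # name: deptA  ... repeat for deptB and deptC

# 2. Define the share tree from a file (type=1 marks project nodes):
cat > sharetree.txt <<'EOF'
id=0
name=Root
type=0
shares=1
childnodes=1,2,3
id=1
name=deptA
type=1
shares=40
childnodes=NONE
id=2
name=deptB
type=1
shares=40
childnodes=NONE
id=3
name=deptC
type=1
shares=20
childnodes=NONE
EOF
qconf -Astree sharetree.txt

# 3. In the scheduler configuration (qconf -msconf), give share-tree
#    tickets a weight and choose how fast past usage decays, e.g.:
#       weight_tickets_share  10000
#       halftime              168     # hours (one week) - an assumption

# Users then submit against their department's project:
#   qsub -P deptA job.sh
```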

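The two-overlapping-queues idea from the quoted thread could look roughly like this. Again only a sketch under stated assumptions: the queue names dept1.q/dept2.q, the @allhosts host group, and the 16 slots per host are all placeholders, not values from the thread.

```shell
# Make slots a host-level consumable so two queues spanning the same
# hosts cannot oversubscribe a node (16 slots per host is an assumption):
qconf -me hostA        # in the editor: complex_values  slots=16
# ...repeat for hostB through hostE

# Two queues over the same hosts, each preferring its own machines
# via a different seq_no ordering:
qconf -aq dept1.q      # hostlist  @allhosts
                       # seq_no    0,[hostA=110],[hostB=105],...
qconf -aq dept2.q      # same hostlist, reversed seq_no preference

# Let the scheduler sort hosts by sequence number instead of load
# (qconf -msconf):
#   queue_sort_method  seqno
```

Note that, as pointed out above, this only controls where jobs start; it does not prevent a long-running job from occupying another group's node while your own nodes sit idle.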