Michelangelo,

I have not tried anything like what you are trying to do, but I will add
my two cents anyway, after having read the Maui Admin Manual and
followed the Maui mailing list.
My understanding of the ':ts' suffix in the nodes file is that it is
not used by Maui. If you want to run more jobs on a node than it has
processors, you simply declare more processors than you have. E.g., if
you think that a node should run at most ten jobs at the same time,
you write your nodes file like this:

cold1 np=10
cold2 np=10
cold3 np=10

As a NODEALLOCATIONPOLICY, I suggest that you try CPULOAD, which the
Admin Manual recommends for "timesharing" nodes. If you do not like
it, the Admin Manual says that you may set up an algorithm of your
own, either by using 'NODEALLOCATIONPOLICY LOCAL' and writing a
'contrib' node allocation algorithm, or by using
'NODEALLOCATIONPOLICY PRIORITY' and defining, e.g.,

NODECFG[DEFAULT] PRIORITYF='-JOBCOUNT - LOAD'

or, if you prefer to put tasks on already loaded nodes (you indicated
something like that):

NODECFG[DEFAULT] PRIORITYF=JOBCOUNT

It seems difficult to make everything run exactly according to your
specifications, but something like the above might move you in the
right direction.
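To make this concrete for your three dual-processor machines, here is
a minimal, untested sketch that puts the two suggestions together.
The np=6 value is only my assumption (at most six simultaneous jobs
per node); pick whatever oversubscription limit your machines can
stand.

TORQUE nodes file (server_priv/nodes):

cold1 np=6
cold2 np=6
cold3 np=6

maui.cfg:

# put each new job on the node with the lowest reported CPU load,
# and let jobs from different users share a node
NODEALLOCATIONPOLICY CPULOAD
NODEACCESSPOLICY SHARED

Together with the MAXPROC=6 lines you already have in your USERCFG
entries, this should cap each user at six single-processor jobs,
while a second user's jobs can start at once on the least loaded
nodes.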
Best wishes,

-- Lennart Karlsson <[EMAIL PROTECTED]>
National Supercomputer Centre in Linkoping, Sweden
http://www.nsc.liu.se

You wrote:
> Hey everybody. I'm running Maui 3.2.6p14 with Torque 2.0.0p7. I'm
> trying to set up a rather convoluted scheduling system, and I'm
> completely lost in the world of all the parameters that I can set
> with Maui.
>
> I have 3 nodes, each with two processors, and many users. What I'd
> like to be able to do is allow each user to run up to 6 jobs, one on
> each processor. If that same user submits a seventh job, it waits
> for one of the first six to finish. But if another user comes along
> and submits jobs (up to 6), I want those jobs to start running right
> away on some of the processors that the first user is using. (So I
> want to be able to run multiple jobs on the same processor.) So far,
> I've only been able to spread 6 jobs out on 6 processors, regardless
> of whose they are, and any more jobs wait until one of those is
> finished.
>
> Any help would be greatly appreciated. Here's the relevant
> configuration information (I think...)
>
> Michelangelo
>
> qmgr -c 'p s':
>
> #
> # Create queues and set their attributes.
> #
> #
> # Create and define queue batch
> #
> create queue batch
> set queue batch queue_type = Execution
> set queue batch resources_default.nodes = 1:ppn=1
> set queue batch enabled = True
> set queue batch started = True
> #
> # Set server attributes.
> #
> set server scheduling = True
> set server operators = [EMAIL PROTECTED]
> set server operators += [EMAIL PROTECTED]
> set server default_queue = batch
> set server log_events = 511
> set server mail_from = adm
> set server resources_default.nodes = 1
> set server scheduler_iteration = 600
> set server node_check_rate = 150
> set server tcp_timeout = 6
> set server default_node = 1
> set server node_pack = False
> set server pbs_version = 2.0.0p7
>
> # maui.cfg 3.2.6p14
>
> SERVERHOST icecube.berkeley.edu
> # primary admin must be first in list
> ADMIN1 root
>
> # Resource Manager Definition
>
> RMCFG[ICECUBE.BERKELEY.EDU] TYPE=PBS
>
> # Allocation Manager Definition
>
> AMCFG[bank] TYPE=NONE
>
> # full parameter docs at http://clusterresources.com/mauidocs/a.fparameters.html
> # use the 'schedctl -l' command to display current configuration
>
> RMPOLLINTERVAL 00:00:30
>
> #SERVERPORT 42559
> SERVERPORT 15004
> SERVERMODE NORMAL
>
> # Admin: http://clusterresources.com/mauidocs/a.esecurity.html
>
> LOGFILE maui.log
> LOGFILEMAXSIZE 10000000
> LOGLEVEL 1
>
> # Job Priority: http://clusterresources.com/mauidocs/5.1jobprioritization.html
>
> QUEUETIMEWEIGHT 0
> USERWEIGHT 1
> FSWEIGHT 1
> FSUSERWEIGHT 1
>
> # FairShare: http://clusterresources.com/mauidocs/6.3fairshare.html
>
> FSPOLICY UTILIZEDPS
> FSDEPTH 10
> FSINTERVAL 01:00:00
> FSDECAY 0.80
>
> # Throttling Policies: http://clusterresources.com/mauidocs/6.2throttlingpolicies.html
>
> # NONE SPECIFIED
>
> # Backfill: http://clusterresources.com/mauidocs/8.2backfill.html
>
> BACKFILLPOLICY FIRSTFIT
> RESERVATIONPOLICY CURRENTHIGHEST
>
> # Node Allocation: http://clusterresources.com/mauidocs/5.2nodeallocation.html
>
> NODEALLOCATIONPOLICY MINRESOURCE
> NODEACCESSPOLICY SHARED
>
> # QOS: http://clusterresources.com/mauidocs/7.3qos.html
>
> # QOSCFG[hi] PRIORITY=100 XFTARGET=100 FLAGS=PREEMPTOR:IGNMAXJOB
> # QOSCFG[low] PRIORITY=-1000 FLAGS=PREEMPTEE
>
> # Standing Reservations: http://clusterresources.com/mauidocs/7.1.3standingreservations.html
>
> # SRSTARTTIME[test] 8:00:00
> # SRENDTIME[test] 17:00:00
> # SRDAYS[test] MON TUE WED THU FRI
> # SRTASKCOUNT[test] 20
> # SRMAXTIME[test] 0:30:00
>
> # Creds: http://clusterresources.com/mauidocs/6.1fairnessoverview.html
>
> # USERCFG[DEFAULT] FSTARGET=25.0
>
> USERCFG[mdagost] FSTARGET=50.0+
> USERCFG[mdagost] MAXPROC=6
> USERCFG[hardtke] FSTARGET=50.0+
> USERCFG[hardtke] MAXPROC=6
> USERCFG[amorey] FSTARGET=50.0+
> USERCFG[amorey] MAXPROC=6
>
> # USERCFG[john] PRIORITY=100 FSTARGET=10.0-
> # GROUPCFG[staff] PRIORITY=1000 QLIST=hi:low QDEF=hi
> # CLASSCFG[batch] FLAGS=PREEMPTEE
> # CLASSCFG[interactive] FLAGS=PREEMPTOR
>
> nodes:
>
> cold1:ts np=2
> cold2:ts np=2
> cold3:ts np=2
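P.S. If you go the PRIORITY route instead, it should amount to just
two lines in maui.cfg. An untested sketch of the least-loaded
flavour:

NODEALLOCATIONPOLICY PRIORITY
# Maui allocates tasks to the nodes with the highest priority value,
# so negating JOBCOUNT and LOAD steers new jobs to the emptiest node
NODECFG[DEFAULT] PRIORITYF='-JOBCOUNT - LOAD'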
