If you are going to go to that much trouble, why not write a driver
that does four fork() calls and then a wait() for one of them to
terminate?  When the wait() returns, fork() another process, and keep
this going.

This wouldn't be but 20-30 lines of code, and you could make the number
of processors a command-line option so that your 'thing' would run
cleanly on 1-, 2-, 4- or N-processor machines...

I could help you if you need it.


Robert Hyatt                    Computer and Information Sciences
[EMAIL PROTECTED]               University of Alabama at Birmingham
(205) 934-2213                  115A Campbell Hall, UAB Station 
(205) 934-5473 FAX              Birmingham, AL 35294-1170

On 14 Sep 1999 [EMAIL PROTECTED] wrote:

> Terry Bullett <[EMAIL PROTECTED]> writes:
> 
> > Hi,
> >     I've been working with SMP from the hardware side for several years
> > now, but am somewhat new to the applications side.  Here's my situation.
> > 
> > I have a CPU intensive task that takes 2-3 min on a 300MHz PIII.  The
> > timing of each run is data dependent thus can't be predicted precisely. 
> > I need to run as many of these processes as possible during my repeat
> > interval, which is 30 minutes at the moment.  These processes are
> > launched by a shell script, which itself is called by cron.
> > 
> > Trouble comes when the compute time for all the processes exceeds the
> > repeat interval.  If I try to do 35 minutes of computation every 30
> > minutes, the number of processes increases and the system eventually
> > grinds to a halt.  To counter this on a uni-processor system, I had the
> > script touch a lockfile and then refuse to run if another process was
> > running.
> > 
> >     I tried this on a dual CPU system and had to make 2 separate scripts
> > with 2 lockfiles.  It seemed ugly, for example I had to maintain 2
> > separate lists of tasks, but it worked.
> > 
> >     Now I have a quad CPU system (VARserver3500 from VA Research) and I'm
> > looking for a more elegant way to approach the problem.  Given some
> > relatively large number of tasks (50-100?)  how can I make sure that
> > there are always N = 2, 4, or 8 of them running?
> 
> > Any advice on where to look for info on this type of problem is
> > appreciated,
> 
> yes.  this is a frustrating problem and i have it too with my quad
> box.  (i run multiple monte-carlo simulations.)  i suspect unix was
> never really designed with SMP in mind.  the shells, at least, have
> precious little in the way of SMP job control.
> 
> i have used three methods.
> 
> 1) make 4 shell scripts, each of which does 1/4 of the work.  this
>    leaves some slop at the end but is easy.
> 
> 2) use make.  make does a better job than shells at SMP.  have a
>    script generate a giant Makefile which simply lists all your jobs
>    as dependencies for a default dummy target.  make -j 4 and go.
> 
> 3) submit all the jobs to batch.  set batch to keep spawning jobs
>    while the load average is less than 4 (or whatever works for you).
>    this will keep all the processors going hot.  however, job control
>    via batch can get tricky.
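> a sketch of the kind of generated Makefile option 2 describes (the
> job commands here are made-up placeholders):
>
> ```make
> # dummy default target whose dependencies are all the jobs;
> # `make -j 4` then runs up to 4 of them at a time.
> all: job01 job02 job03
>
> job01:
> 	./simulate input01 > output01
> job02:
> 	./simulate input02 > output02
> job03:
> 	./simulate input03 > output03
> ```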
> 
> i think the second (make) is probably the best but your mileage may
> vary.
> 
> you could probably write your own small batch program that forks off
> 4 jobs, and spawns more as the children die until you are done.
> 
> hope this helps.
> 
> -- 
> J o h a n  K u l l s t a m
> [[EMAIL PROTECTED]]
> Don't Fear the Penguin!
> -
> Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/mentre/smp-faq/
> To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
> 
