Otto Moerbeek wrote:
>
> On Sun, Apr 18, 2010 at 09:35:42PM +0800, Aaron Lewis wrote:
>
> > Hi,
> >     I'm reading Operating System Concepts (7th Edition), written by
> > Abraham, Peter & Greg.
> >
> >     In chapter 5.3, it talks about a scheduling algorithm: SJF.
> >     SJF means the shortest job is scheduled first.
> >
> >     To compare different processes, they use each process's running time.
> >
> >     e.g.
> >             P1 takes 6 secs to run
> >             P2 takes 3 secs
> >             P3 takes 10 secs
> >
> >     Then we should put those tasks in an array like this:
> >     P2 => P1 => P3
> >
> >     That looks quite reasonable, but my question is, how does an OS
> > know that a process will take a longer time to finish its life?
> >     I think it's impossible for the OS to know exactly how long a
> > process will take to run.
> >
> >
> >     So far in my experience, I think there are a few ways to compare
> > process running times:
> >
> >     Forgive me if I have little experience with OSes ;-)
> >
> >     I) Number of loops in a program, which can be detected by the
> > compiler. As long as you have any loops, you are slower than any
> > straight-ahead program.
> >
> >     II) Length of the program; longer code sometimes takes longer to
> > run, so not a good way.
> >
> >
> >     Does anyone want to share some experience with me?
>
> You cannot tell in general; that's a basic result from CS (in essence,
> the halting problem). But you can measure previous runs and do
> predictions based on that, in some cases at least. I hope I'm not
> answering a homework assignment...
>
>       -Otto
>
In general you cannot predict, however there are many (long) jobs with
very predictable times to completion: sorts, merges, most anything that
processes thousands of records in one batch operation (and ties up
various resources for the duration --- therein is the gotcha).
I would not trust counting instructions, loops, or subroutine calls as
being usefully predictive of execution time.
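
Otto's measure-and-predict idea is in fact what that same textbook
suggests for approximating SJF: predict the next CPU burst as an
exponential average of the measured previous bursts. A minimal sketch
in C (the function name and the sample burst values are illustrative,
not taken from any real scheduler):

#include <stdio.h>

/*
 * Exponential-average prediction of the next CPU burst:
 *     tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
 * where t(n) is the most recent measured burst and tau(n) is the
 * previous prediction.  With alpha = 0.5, recent behaviour and
 * accumulated history are weighted equally.
 */
static double
predict_next_burst(double predicted, double measured, double alpha)
{
    return alpha * measured + (1.0 - alpha) * predicted;
}

int
main(void)
{
    /* Hypothetical measured bursts for one process; initial guess 10. */
    double bursts[] = { 6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0 };
    double tau = 10.0;
    size_t i;

    for (i = 0; i < sizeof(bursts) / sizeof(bursts[0]); i++) {
        printf("predicted %5.2f, measured %5.2f\n", tau, bursts[i]);
        tau = predict_next_burst(tau, bursts[i], 0.5);
    }
    return 0;
}

Note that a process whose bursts suddenly lengthen (the 13s above) is
only caught after the fact, which is exactly why this approximates SJF
rather than implementing it.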

The fun thing about scheduling algorithms is that just about any one
of them is theoretically capable of delivering the worst possible
overall performance on some workload.
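
To make that concrete with the P1/P2/P3 numbers from the original
question, here is a toy calculation (not a scheduler) of average
waiting time when the same three jobs run back to back in different
orders:

#include <stdio.h>

/* Average waiting time for jobs run back to back in a given order. */
static double
avg_wait(const int *burst, int n)
{
    int i, elapsed = 0, total_wait = 0;

    for (i = 0; i < n; i++) {
        total_wait += elapsed;  /* job i waits for all jobs before it */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int
main(void)
{
    int sjf[]   = { 3, 6, 10 };   /* P2, P1, P3: shortest job first */
    int fcfs[]  = { 6, 3, 10 };   /* P1, P2, P3: arrival order */
    int worst[] = { 10, 6, 3 };   /* P3, P1, P2: longest job first */

    printf("SJF:   %.2f\n", avg_wait(sjf, 3));   /* 4.00 */
    printf("FCFS:  %.2f\n", avg_wait(fcfs, 3));  /* 5.00 */
    printf("Worst: %.2f\n", avg_wait(worst, 3)); /* 8.67 */
    return 0;
}

Shortest-job-first is provably optimal for average waiting time when
all jobs are available at once, and running the longest job first is
the other extreme --- which is the sense in which an unlucky ordering
hands you the worst case.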
