> -----Original Message-----
> From: Sam Horrocks [mailto:[EMAIL PROTECTED]]
> Sent: 17 January 2001 23:37
> To: Gunther Birznieks
> Cc: [EMAIL PROTECTED]; mod_perl list; [EMAIL PROTECTED]
> Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl
> with scripts that contain un-shared memory
> 
> 
>  > With some modification, I guess I am thinking that the cook is
>  > really the OS and the CPU is really the oven. But the hamburgers
>  > on an Intel oven have to be timesliced instead of left to cook and
>  > then after it's done the next hamburger is put on.
>  > 
>  > So if we think of meals as Perl requests, the reality is that not
>  > all meals take the same amount of time to cook. A quarter pounder
>  > surely takes longer than your typical paper thin McDonald's Patty.

[snip]

> 
> I don't like your mods to the analogy, because they don't model how
> a CPU actually works.  Even if the cook == the OS and the oven == the
> CPU, the oven *must* work on tasks sequentially.  If you look at the
> assembly language for your Intel CPU you won't see anything about it
> doing multi-tasking.  It does adds, subtracts, stores, loads, 
> jumps, etc.
> It executes code sequentially.  You must model this somewhere in your
> analogy if it's going to be accurate.

(I think the analogies have lost their usefulness....)

This doesn't affect the argument, because the core of it is that:

a) the CPU will not completely process a single task all at once; instead,
it will divide its time _between_ the tasks
b) tasks do not arrive at regular intervals
c) tasks take varying amounts of time to complete

Now, if (a) were true but (b) and (c) were not, then, yes, it would have the
same effective result as sequential processing. Tasks that arrived first
would finish first. In the real world, however, (b) and (c) are usually true,
and it becomes practically impossible to predict which task handler (in this
case, a mod_perl process) will complete first.
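To make (a)-(c) concrete, here's a toy round-robin scheduler in plain
Perl. The arrival times and CPU costs are invented purely for
illustration -- they're not measurements from Apache or SpeedyCGI:

#!/usr/bin/perl
# Toy round-robin scheduler.  Tasks arrive at different ticks and
# need different numbers of timeslices -- values invented purely
# for illustration.
use strict;

my %task = (
    1 => { arrives => 0, needs => 6 },
    2 => { arrives => 1, needs => 6 },
    3 => { arrives => 2, needs => 2 },   # the "thin patty"
);

my (@queue, @done);
for (my $tick = 0; @done < keys %task; $tick++) {
    # queue anything that has just arrived
    push @queue, grep { $task{$_}{arrives} == $tick } sort keys %task;
    next unless @queue;
    my $t = shift @queue;                # head gets one timeslice
    if (--$task{$t}{needs} > 0) {
        push @queue, $t;                 # unfinished: back of the queue
    } else {
        push @done, $t;
    }
}
print "completion order: @done\n";       # prints "completion order: 3 1 2"

Task 3 arrives last and finishes first, which is exactly why you can't
predict which handler frees up first.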

Similarly, because of the non-deterministic nature of computer systems,
Apache doesn't service requests on an LRU basis; you're comparing SpeedyCGI
against a straw man. Apache's servicing algorithm approaches randomness, so
you need to build a comparison between forced-MRU and random choice.

(Note I'm not saying SpeedyCGI _won't_ win....just that the current
comparison doesn't make sense)
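To see what that comparison would measure, here's a crude model: 10
idle interpreters, requests handled one at a time (i.e. below
capacity), and two selection policies. "random" stands in for Apache's
roughly unpredictable pick, and the pool size and request count are,
again, invented:

#!/usr/bin/perl
# How many distinct interpreters get touched (and so kept paged in)
# under MRU vs. random selection?  Pool size and request count are
# invented; requests are handled one at a time (below capacity).
use strict;

for my $policy (qw(mru random)) {
    my @idle = (1 .. 10);
    my %touched;
    for (1 .. 1000) {
        my $i = $policy eq 'mru'
            ? pop @idle                            # most recently freed
            : splice(@idle, int(rand @idle), 1);   # any idle one
        $touched{$i}++;
        push @idle, $i;      # request done, interpreter freed again
    }
    printf "%-6s: touched %d of 10 interpreters\n",
        $policy, scalar keys %touched;
}
# typical output:
#   mru   : touched 1 of 10 interpreters
#   random: touched 10 of 10 interpreters

The MRU stack keeps re-using the same interpreter, so only one working
set stays paged in; random selection cycles through the whole pool.
That's the effect worth comparing, rather than LRU.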

Thinking about it, assuming you are, at some time, servicing requests
_below_ system capacity, SpeedyCGI will always win on memory usage, and
probably have an edge in response time. My concern would be: does it
offer _enough_ of an edge? Especially bearing in mind that, if I
understand correctly, you could end up running anywhere up to 2x as many
processes (n Apache handlers + n script handlers)?
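For what it's worth, 2x the processes needn't mean 2x the memory,
because the Apache handlers in the SpeedyCGI setup are plain
(non-mod_perl) children. A back-of-envelope, where every size is an
invented placeholder (substitute figures from ps/top on your own box):

#!/usr/bin/perl
# Back-of-envelope for the "2x as many processes" worry.
# Every size here is an invented placeholder.
use strict;

my $handlers = 50;   # n Apache handlers either way
my %unshared = (     # MB of unshared memory per process
    plain_apache => 1,
    mod_perl     => 8,
    speedy_be    => 8,
);
my $backends = 10;   # assumes MRU keeps the busy subset of backends small

my $mp = $handlers * $unshared{mod_perl};
my $sp = $handlers * $unshared{plain_apache}
       + $backends * $unshared{speedy_be};

printf "mod_perl: %d procs, ~%d MB unshared\n", $handlers, $mp;
printf "speedy:   %d procs, ~%d MB unshared\n", $handlers + $backends, $sp;
# with these made-up numbers: 50 procs/~400 MB vs. 60 procs/~130 MB

So the question is less the raw process count than how much of each
process is unshared.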

> No, homogeneity (or the lack of it) wouldn't make a difference.
> Those 3rd, 5th or 6th processes run only *after* the 1st and 2nd
> have finished using the CPU.  And at that point you could re-use
> those interpreters that 1 and 2 were using.

This, if you'll excuse me, is quite clearly wrong. Look at the argument
(and the round-robin sketch) above, imagine that tasks 1 and 2 happen to
take three times as long to complete as task 3, and you should see that
they could all end up in the scheduling queue together. Perhaps you're
considering tasks which are too small to take more than 1 or 2
timeslices, in which case you're much less likely to want to accelerate
them.


[snipping obscenely long quoted thread 8-)]


Stephen.
