> The other reason why we cache like crazy is simple - MySQL's query
> optimizer sucks shit.
>

Ah yes... You can tweak things a bit to give the query optimizer hints 
about what to prioritize, but I can see how you'd be in rough shape with 
that sort of surge load on complex queries. I guess what I was envisaging 
was building intermediate tables to reduce all the "hard" queries to 
simple ones (sketched below). I built a service very similar to the one 
you're describing a few years ago, and I know what you mean about the 
distance calculations. I ended up building a bunch of tables holding 
overlapping parts of the dataset, one per geographical region (basically 
each state plus the states adjacent to it). The distance queries then ran 
against a data set small enough that performance was pretty good.
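For concreteness, here's a minimal sketch of the intermediate-table idea. 
All table, column, and connection details are made up, and the state list 
is just Ohio and its neighbours as an example:

    use DBI;

    # Hypothetical connection details.
    my $dbh = DBI->connect('dbi:mysql:database=geo', 'user', 'pass',
                           { RaiseError => 1 });

    # One small table per region: a state plus the states adjacent to
    # it, so the "hard" distance query only ever scans a fraction of
    # the full dataset.
    $dbh->do(q{
        CREATE TABLE region_oh AS
        SELECT id, name, lat, lon
        FROM locations
        WHERE state IN ('OH', 'MI', 'IN', 'KY', 'WV', 'PA')
    });

    # Great-circle distance (miles, spherical law of cosines) against
    # the small regional table, nearest first.
    my $sth = $dbh->prepare(q{
        SELECT id, name,
               3959 * ACOS(
                   COS(RADIANS(?)) * COS(RADIANS(lat)) *
                   COS(RADIANS(lon) - RADIANS(?)) +
                   SIN(RADIANS(?)) * SIN(RADIANS(lat))
               ) AS miles
        FROM region_oh
        ORDER BY miles
        LIMIT 20
    });
    $sth->execute(41.5, -81.7, 41.5);

    while (my ($id, $name, $miles) = $sth->fetchrow_array) {
        printf "%s  %.1f miles\n", $name, $miles;
    }

The overlap (each table carries the adjacent states too) is what lets a 
search near a state border stay inside a single small table.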

> OK, the above situation happens when you're hitting your webservers at
> silly rates, but the nice man from Sky does have some serious testing
> tools. I think we had 1,000 simultaneous users hitting us every 20
> seconds at that point, on two servers. That's 25 rps per box, off
> dynamically generated pages backed by a SQL db.

Yeah, well I've seen hit rates like that in real life too. It even has a 
name: the "slashdot effect". hehe. In your case, the fact that the 
serialized data can be fetched in predictable time is probably worth far 
more than better best-case performance.
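Something like this is what I imagine you're doing - a sketch only, with 
a made-up cache path, using Storable to freeze the result set (and 
reusing the $dbh handle from the sketch above):

    use Storable qw(store retrieve);

    my $cache = '/var/cache/app/region_oh.stor';   # hypothetical path

    # Cache miss: pay the query cost once and freeze the rows to disk.
    unless (-e $cache) {
        my $rows = $dbh->selectall_arrayref(
            'SELECT id, name, lat, lon FROM region_oh');
        store($rows, $cache);
    }

    # Cache hit: one flat-file read, so the fetch time is the same
    # whether the db is idle or melting down.
    my $rows = retrieve($cache);

The worst case and the best case collapse to roughly the same number, 
which is exactly what you want under a surge.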
>
> Those figures aren't from AxKit though, they were from our previous
> mod_perl app. AxKit's figures are close to that, though.
>
> Mike.
