> The searching has me more concerned. That one is a lot more difficult to
> anticipate and control. Initially I think I'm just going to have to take
> the hit on servers and let it bang on the database directly. I just don't
> see any other way around it. As the site gets bigger, I think it may work
> to move searches to another machine and handle them out of a "nearly-live"
> database -- perhaps even served directly out of ram (...?) and implemented
> with a lighter-weight, faster-responding database platform (mysql or dbm
> perhaps?)
>

One common technique I use for handling searches against a database is to
acquire an exclusive lock, so that no more than one search can run at
a time on the system.  For a system that does more than searching, this
ensures that no matter how many searches have queued up, the rest of the
site will still perform reasonably, and on a multiprocessor database it
further ensures that searching never uses more than one CPU at a time.
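The serializing effect can be sketched with an in-process lock standing in for the database-level lock; the names and timings below are illustrative assumptions, not part of any real setup:

```python
import threading
import time

# A single lock standing in for the database-level exclusive lock:
# only one search may hold it at a time, the rest queue behind it.
search_lock = threading.Lock()

active = 0          # searches currently executing
max_active = 0      # peak concurrency observed
counter_guard = threading.Lock()

def run_search(query):
    """Run one (simulated) expensive search, serialized by the lock."""
    global active, max_active
    with search_lock:              # queue here until it is our turn
        with counter_guard:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)           # stand-in for the expensive query
        with counter_guard:
            active -= 1

threads = [threading.Thread(target=run_search, args=(f"q{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# However many searches queued up, only one ever ran at a time.
print(max_active)  # 1
```

The point of the pattern is visible in `max_active`: eight searches were submitted concurrently, but the lock guaranteed they executed one by one.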

In MySQL, I typically do this with GET_LOCK() user locks.  In Oracle I
tend to do this by acquiring a row-level lock somewhere relevant with
a SELECT ... FOR UPDATE, and then COMMIT to allow another search to begin.

With this kind of queue forming, the next thing to worry about is
having enough mod_perl servers to wait in the queue.  Each queued
search ties up a whole backend while it waits, so you cannot get away
with just 5 backend mod_perl servers in a dual mod_proxy/mod_perl
type of config; you would probably need 20+ mod_perl servers at the
very minimum so that when the queue builds up it does not starve your
entire site.
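The sizing arithmetic behind that is simple; the numbers below are illustrative assumptions only:

```python
# Each search waiting on the lock occupies one mod_perl backend for the
# whole time it queues, so those backends are unavailable to the rest of
# the site.  All figures here are assumptions, not measurements.
max_queued_searches = 10      # worst-case searches waiting on the lock
other_traffic_backends = 10   # backends needed for non-search requests

# Backends must cover the queued searches *plus* normal traffic.
min_backends = max_queued_searches + other_traffic_backends
print(min_backends)  # 20
```

With only 5 backends, a queue of 5 searches would leave zero processes for everything else, which is exactly the starvation scenario above.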

--Josh

_________________________________________________________________
Joshua Chamas                           Chamas Enterprises Inc.
NodeWorks Founder                       Huntington Beach, CA  USA 
http://www.nodeworks.com                1-714-625-4051
