On 2/28/07, Kirk Haines <[EMAIL PROTECTED]> wrote:

> On 2/28/07, Jeremy Kemper <[EMAIL PROTECTED]> wrote:
>
> > This is true. However, his assertion is valid: it's a must for any web
> > app that uses blocking API calls, like executing queries using the
> > native mysql and postgres clients. That's just life with Ruby threads.

> I have oodles of dynamic web sites and applications that make blocking
> API calls yet can still sustain hit rates in the 50-200/second range
> (depending on the site) under a single mongrel.  They bear up just
> fine to bursty traffic when people are checking their fund prices in
> the evenings.


Great anecdote. I've had similar experience with Mongrel.

> There are situations where clustering multiple backends is necessary,
> for sure, but it's possible to handle an exceptional amount of
> dynamic, db-interactive traffic in a single ruby process.


"It's a must" is too strong. I meant to illuminate that it's not just a
"must for Rails." That's true for other reasons.

For example: you have an operation that obtains a write lock on a db row and
does some work. Concurrent requests ought to just wait on the lock and
proceed when it's released, but Ruby threads will deadlock on the blocking API
call: the native call stalls the whole process, so the worker thread that
obtained the lock never gets a chance to finish and release it.
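
To make that concrete, here's a rough sketch of the deadlock, assuming the old
native mysql gem and a local MySQL server; the credentials, the accounts
table, and the timings are all made up for illustration:

  require 'mysql'   # MySQL/Ruby native driver; its C calls block every green thread

  # Two "requests" served by threads in one Ruby process.
  conn_a = Mysql.real_connect('localhost', 'app', 'secret', 'app_db')
  conn_b = Mysql.real_connect('localhost', 'app', 'secret', 'app_db')

  worker_a = Thread.new do
    conn_a.query("BEGIN")
    conn_a.query("SELECT * FROM accounts WHERE id = 1 FOR UPDATE")  # takes the row lock
    sleep 0.1                   # "does some work"; yields to worker_b
    conn_a.query("COMMIT")      # not reached while worker_b is stuck below
  end

  worker_b = Thread.new do
    conn_b.query("BEGIN")
    # Waits on the row lock inside the C extension. That stalls the whole
    # process, including worker_a, which holds the lock, so nothing moves
    # until the database's lock wait timeout gives up.
    conn_b.query("SELECT * FROM accounts WHERE id = 1 FOR UPDATE")
    conn_b.query("COMMIT")
  end

  [worker_a, worker_b].each { |t| t.join }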

Whether this is an edge case or a common case is up to your application.

I think a preforking Mongrel would be the biggest positive change for Ruby
web app deployment since its introduction. Nearly zero config; just gem
install and go.
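
Roughly what I have in mind, not Mongrel code, just the classic preforking
shape (worker count and port are arbitrary):

  require 'socket'

  WORKERS = 4
  server  = TCPServer.new('0.0.0.0', 3000)   # parent opens the one listening socket

  WORKERS.times do
    fork do
      loop do
        client = server.accept               # kernel hands each connection to one worker
        client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        client.close
      end
    end
  end

  Process.waitall                            # parent just supervises its children

Each worker is its own process, so a blocking native call in one never stalls
the others, and they all inherit the socket from a single bind, which is why
the config stays near zero.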

jeremy
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
