> btw, I'm not surprised highwire isn't talked about more.

A response from the Highwire guy:

I'm afraid that we haven't been using Highwire for a while now. That
doesn't mean that the problem Highwire was designed to address
doesn't still exist (it very definitely does), but we have decided to
follow a different solution. We have divided our mongrel cluster into
two halves - a "normal" cluster on which the common "fast" operations
take place and an "admin" cluster on which occasional "slow" things
take place. Very soon, we plan to take this further and move the
"slow" cluster onto an entirely separate server.
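
The split Paul describes could be wired up in mod_proxy_balancer along these lines. This is only a sketch: the ports, pool names, and the /admin path are assumptions for illustration, not 82ASK's actual config.

```apache
# Two balancer pools: "fast" mongrels for the common quick requests,
# "slow" mongrels for occasional long-running admin work.
<Proxy balancer://fast>
    BalancerMember http://127.0.0.1:8000
    BalancerMember http://127.0.0.1:8001
</Proxy>
<Proxy balancer://slow>
    BalancerMember http://127.0.0.1:9000
</Proxy>

# Route the slow operations to their own pool so they can never
# queue behind (or hold up) the fast requests.
ProxyPass /admin balancer://slow/admin
ProxyPass /      balancer://fast/
```

Moving the slow pool to a separate server then only changes the BalancerMember addresses.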

That means, I'm afraid, that we haven't got a patch for the problem
you mention because we've not developed it any further. Having said
that, Highwire really couldn't be simpler, so if you want to take on
creating a patch, or even ownership of the Highwire project, please
be my guest! Let me know if you're interested.

BTW - you might be interested in a couple of blog articles we've
recently written on Ruby on Rails:

http://about.82ask.com/news/wizardry/

------------------------------------------------
Paul Butcher
CTO
82ASK
Mobile: +44 (0) 7740 857648
Main: +44 (0) 1223 309080
Fax: +44(0) 1223 309082
Email: [EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
AIM: paulrabutcher
Skype: paulrabutcher
LinkedIn: http://www.linkedin.com/in/paulbutcher
------------------------------------------------

Luis Lavena wrote:
> On 5/5/07, Eddy <[EMAIL PROTECTED]> wrote:
> [...]
> 
>>       sleep 10
>>   end_time = 1.minute.from_now
>>
>>   puts "Fast: #{fast_count}"
>>   puts "Slow: #{slow_count}"
>>
> 
> Ok, first things first:
> 
> sleep isn't a good idea in "threaded" Ruby applications. Long sleep
> durations can freeze the whole VM, not just the thread involved.
> 
> Also, a Rails app is locked inside a big mutex to work around the
> thread-safety (better: thread-unsafety) issues of Rails. So any
> incoming connection that needs to be served by the Rails dispatcher
> gets put into the queue.
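
A minimal sketch of that big-mutex effect in plain Ruby (not Rails itself; the request durations are arbitrary). Every request, fast or slow, must take the same lock, so one slow request delays everything queued behind it:

```ruby
require 'thread'

# The single lock the dispatcher takes around every request,
# mirroring the mutex Mongrel wraps around the Rails dispatcher.
DISPATCH_LOCK = Mutex.new

def dispatch(duration)
  DISPATCH_LOCK.synchronize { sleep duration }
end

start = Time.now
threads = []
threads << Thread.new { dispatch(0.5) }               # one "slow" request
5.times { threads << Thread.new { dispatch(0.01) } }  # five "fast" requests
threads.each(&:join)
elapsed = Time.now - start

# Even with six threads, total time is roughly the *sum* of the
# request durations (about 0.55s), not the longest single one.
puts elapsed.round(2)
```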
> 
>>
> Most of the so-called "load balancers" behave like that: round-robin
> balancing. Even if you can weight the members, the weights are static
> and don't adapt over time.
> 
>> We've experimented with various different configurations for
>> mod_proxy_balancer without successfully solving this issue. As far as we
>> can
>> tell, all other popular load balancers (Pound, Pen, balance) behave in
>> roughly the same way.
> 
> From my point of view, they should learn the response timings of each
> member of the cluster and recalculate the weight each one can handle.
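
A hypothetical sketch of the adaptive weighting Luis describes (names and the smoothing factor are invented for illustration): track an exponential moving average of each backend's response time and dispatch to the currently fastest one, instead of rotating round-robin.

```ruby
# Toy adaptive balancer: prefers the backend with the lowest
# recent average latency.
class AdaptiveBalancer
  def initialize(backends, alpha = 0.3)
    @alpha = alpha                        # weight given to the newest sample
    @avg = {}
    backends.each { |b| @avg[b] = 0.0 }   # start everyone as "fast"
  end

  # Choose the backend with the lowest average latency so far.
  def pick
    @avg.min_by { |_, latency| latency }.first
  end

  # Fold an observed response time into that backend's moving average.
  def record(backend, seconds)
    @avg[backend] = @alpha * seconds + (1 - @alpha) * @avg[backend]
  end
end

lb = AdaptiveBalancer.new(%w[mongrel1 mongrel2])
lb.record("mongrel1", 2.0)   # mongrel1 just served a slow request
lb.pick                      # => "mongrel2"
```

A real balancer would also age the averages so a backend isn't punished forever for one slow request.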
> 
>> unnecessary timeouts because requests are queuing up on one server.
>>
> 
> Maybe you could switch over to a lightweight solution that partially
> covers these problems (Mongrel + Erb; do a Google search for it) ;-)
> 
>> The real solution to the problem would be to remove Rails' inability to
>> handle more than one thread.
> 
> That is a real problem: a lot of parts of Rails aren't thread-safe,
> and adapting them will require a huge amount of work, but I agree
> it's worth it.
> 
>>   svn checkout svn://rubyforge.org/var/svn/highwire
>>
> 
> Haven't checked the code (yet), but it sounds interesting. It would
> also be good if the loading strategy were configurable (maybe via
> callbacks or something), allowing you to change how loading works.
> 
>> Please check it out and let us know what you think.
>>
> 
> Excellent news, thanks for sharing it with us.
> 
> --
> Luis Lavena
> Multimedia systems
> -
> Leaders are made, they are not born. They are made by hard effort,
> which is the price which all of us must pay to achieve any goal that
> is worthwhile.
> Vince Lombardi


-- 
Posted via http://www.ruby-forum.com/.

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Deploying Rails" group.
To post to this group, send email to rubyonrails-deployment@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/rubyonrails-deployment?hl=en
-~----------~----~----~----~------~----~------~--~---
