Wow! This is really cool stuff.

But it's indeed not without problems. Problems that are solvable, of
course, but solving them changes a lot of the current Rails deployment model.

If I've understood Comet correctly, the web browser opens an HTTP
connection to the server and keeps it open. When something happens, the
server can use the open connection to push data to the browser.

Please correct me if my understanding of Comet is wrong; if it is, you
can safely skip the rest of this email, as it assumes the above is
correct.
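
If that picture is right, the mechanics are basically this. A
bare-socket sketch of my own, nothing Rails- or Comet-library-specific,
just to show the shape of it (the sleep stands in for "wait until
something happens"):

  require 'socket'

  # The Comet idea in its rawest form: accept the browser's connection
  # once, keep it open, and write to it whenever something happens,
  # instead of answering and hanging up.
  server = TCPServer.new(8000)
  client = server.accept    # the browser connects and we hold on to it

  client.write("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n")

  loop do
    sleep 2                             # stand-in for "wait until something happens"
    client.write("ping #{Time.now}\n")  # push it down the still-open connection
  end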

The problem, of course, is that with the current Rails deployment model
each open connection allocates one FastCGI process and keeps it
allocated until the connection is closed. So with a typical Rails
deployment of about twenty FastCGI processes you can have, let's
think.... about twenty concurrent users! Sorry for saying the s-word
here but.... this ehm... doesn't scale. :-)
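
Just to spell out why: the obvious way to write that as a Rails action
is something like the sketch below (MessageQueue.blocking_pop is a
made-up name, the only point being that the action blocks), and that
block holds the whole FastCGI process hostage until the browser gets
its message:

  class CometController < ApplicationController
    # Naive Comet/long-poll endpoint: the action simply doesn't return
    # until there's something to send, so the FastCGI process serving
    # this request can't serve anybody else in the meantime.
    def stream
      message = MessageQueue.blocking_pop  # made up; blocks until a message arrives
      render :text => message
    end
  end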

The solution here is of course non-blocking I/O. You configure Apache
to reverse proxy to a process that can hold as many open connections
as it likes with just a single thread, and then use non-blocking I/O
to read from and write to those connections. The problem with
non-blocking I/O, of course, is that it completely changes the way you
write your programs. Unless, of course.... you use Ruby! Because Ruby
has this wonderful little invention called continuations. Rarely used,
but in some situations it's just a perfect fit! With continuations you
can make non-blocking I/O code look exactly like blocking code,
completely hiding the plumbing. The interested reader can work out the
details themselves. :-)
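
To give at least a flavour of what I mean, though, here's a very rough
single-threaded sketch. comet_gets, event_loop and the $waiting global
are names I just made up, and I'm ignoring writes, errors and timeouts
entirely:

  require 'socket'
  # (callcc is built into Ruby 1.8; on later Rubies you'd require 'continuation')

  $waiting = {}   # socket => continuation to resume when it's readable

  # Looks like a plain blocking read to the caller, but actually parks
  # the caller as a continuation and drops back into the select loop.
  def comet_gets(socket)
    callcc do |cont|
      $waiting[socket] = cont
      event_loop
    end
  end

  def event_loop
    loop do
      readable, = IO.select($waiting.keys)
      readable.each do |socket|
        parked = $waiting.delete(socket)
        parked.call(socket.gets)   # select said readable, so this shouldn't block
      end
    end
  end

With that in place a connection handler reads like straight-line code,
line = comet_gets(conn), even though there's only ever one thread
running. Lots of hand-waving here (how handlers get started, writes,
half-closed sockets), but that's the trick.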

Another option is to only allow browser-bound messages (server to
browser) on the Comet connection, and only of a limited size. You
still set up the reverse proxy configuration, but you configure the
write buffers of the connections to be at least as big as the biggest
possible message. When you want to send a message, you first make sure
there's enough space in the write buffer so that the write doesn't
block. This is trickier to get right and is going to be very OS
dependent. I know Linux and BSD do things differently here, and I have
no idea how Windows handles it (if it does at all).
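
Measuring how much room is left in the buffer is exactly the
OS-dependent part, so I won't pretend to show that, but a cruder
version of the same idea in Ruby would be: ask for a send buffer at
least as big as the biggest message, then do a non-blocking write and
treat "would block" as "no room right now". MAX_MESSAGE, prepare and
push_message below are made-up names, and this is a sketch, not
something I've battle-tested:

  require 'socket'

  MAX_MESSAGE = 4 * 1024   # made-up cap on a single browser-bound message

  # Ask the kernel for a send buffer at least as big as the biggest
  # message we'll ever push (it may round the value, or ignore it).
  def prepare(sock)
    sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, MAX_MESSAGE)
  end

  # Instead of a plain write (which could block when the buffer is
  # full), attempt a non-blocking write and let the caller decide what
  # to do with a client that's too far behind.
  # (Partial writes are ignored for the sake of the sketch.)
  def push_message(sock, msg)
    raise "message too big" if msg.size > MAX_MESSAGE
    sock.write_nonblock(msg)
    true
  rescue Errno::EAGAIN
    false   # buffer full; queue it, retry later, or drop the client
  end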

Another, simpler problem is that proxies and other infrastructure are
going to close HTTP connections that have been open for too long. In
this case you simply have to have a client that's good at reopening
the connections. But then: what if stuff has happened while the
connection was closed? So there needs to be a way of queueing up
outgoing messages and timing them out if the client doesn't
reconnect.
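
Nothing fancy needed for that part. Something shaped roughly like this
per client would do (Outbox and the 30-second grace period are
entirely made up):

  # A rough sketch of a per-client outbox: queue browser-bound messages
  # while the connection is down and expire anything the client hasn't
  # come back for within some grace period.
  class Outbox
    GRACE_PERIOD = 30   # seconds; pick your own number

    def initialize
      @messages = []    # [enqueued_at, message] pairs
    end

    def push(message)
      @messages << [Time.now, message]
    end

    # Called when the client reconnects: hand back everything that's
    # still fresh, oldest first, and forget the rest.
    def drain
      cutoff = Time.now - GRACE_PERIOD
      pending, @messages = @messages, []
      pending.select { |at, _| at >= cutoff }.map { |_, msg| msg }
    end
  end

Push to it while the client is away, drain it when the client comes
back, and anything older than the grace period simply disappears.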

Indeed very interesting stuff.

Are we rewriting a complete messaging infrastructure in Rails here, or
is there a way of doing this that's so incredibly clever and simple
that I've completely missed it? Mind you, I'm definitely not ruling
that second option out.

Cheers,
Jon

On 4/21/06, Kyle Maxwell <[EMAIL PROTECTED]> wrote:
> Is there anything that the community can do to jumpstart/accelerate
> this project?
>
> For those that missed Canada on Rails Armageddon is roughly Comet
> (http://www.irishdev.com/NewsArticle.aspx?id=2166) on Rails.
>
> --
> Kyle Maxwell
> Chief Technologist
> E Factor Media // FN Interactive
> [EMAIL PROTECTED]
> 1-866-263-3261