On Mon, Nov 9, 2009 at 18:47, Graham Leggett <minf...@sharp.fm> wrote:
>...
>> When you read from a serf bucket, it will return however much you ask
>> for, or as much as it has without blocking. When it gives you that
>> data, it can say "I have more", "I'm done", or "This is what I had
>> without blocking".
>
> Who is "you"?

Anybody who reads from a bucket. In this case, the core network loop
when a client connection is ready for writing.
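
To make that concrete, here is a rough sketch of what the write side of
such a loop might look like. serf_bucket_read(), SERF_READ_ALL_AVAIL and
the APR status codes are serf's actual API; the function name and the
socket glue around it are invented for illustration:

  #include <apr_network_io.h>
  #include <serf.h>

  static apr_status_t on_client_writable(serf_bucket_t *resp_bkt,
                                         apr_socket_t *client)
  {
      const char *data;
      apr_size_t len;
      apr_status_t status;

      do {
          /* Ask for whatever the bucket can hand over without blocking. */
          status = serf_bucket_read(resp_bkt, SERF_READ_ALL_AVAIL,
                                    &data, &len);
          if (SERF_BUCKET_READ_ERROR(status))
              return status;

          if (len > 0) {
              apr_size_t written = len;
              apr_socket_send(client, data, &written);
              /* (a real loop would handle short writes here) */
          }
      } while (status == APR_SUCCESS);       /* "I have more" */

      if (APR_STATUS_IS_EAGAIN(status))
          return APR_SUCCESS;   /* "this is what I had without blocking" */

      serf_bucket_destroy(resp_bkt);         /* APR_EOF: "I'm done" */
      return APR_EOF;
  }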

> Up till now, my understanding is that "you" is the core, and therefore
> not under control of a module writer.
>
> Let me put it another way. Imagine I am a cache module. I want to read
> as much as possible as fast as possible from a backend, and I want to
> write this data to two places simultaneously: the cache, and the
> downstream network. I know the cache is always writable, but the
> downstream network I am not sure of, I only want to write to the
> downstream network when the downstream network is ready for me.
>
> How would I do this in a serf model?

No module *anywhere* ever writes to the network.

The core loop reads/pulls from a bucket when it needs more data (for
writing to the network).

When your cache bucket reads from its interior bucket, it can also
drop the content into a file, off to the side. Think of this bucket as
a filter. All content that is read through it will be dumped into a
file, too.
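
Something like this, say. The read function is the interesting part;
cache_ctx_t and the names are invented, and a real bucket would fill out
the whole serf_bucket_type_t vtable, but serf_bucket_read() and
apr_file_write() are the real calls:

  #include <apr_file_io.h>
  #include <serf.h>

  typedef struct {
      serf_bucket_t *interior;    /* the wrapped response bucket */
      apr_file_t *cache_file;     /* the side channel: a cache entry */
  } cache_ctx_t;

  static apr_status_t cache_bucket_read(serf_bucket_t *bucket,
                                        apr_size_t requested,
                                        const char **data, apr_size_t *len)
  {
      cache_ctx_t *ctx = bucket->data;

      /* Pull from the interior bucket on behalf of the caller... */
      apr_status_t status = serf_bucket_read(ctx->interior, requested,
                                             data, len);

      /* ...and drop a copy into the cache file as it flows past. */
      if (!SERF_BUCKET_READ_ERROR(status) && *len > 0) {
          apr_size_t written = *len;
          apr_file_write(ctx->cache_file, *data, &written);
      }
      return status;
  }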

>...
> That I understand, but it makes no difference as I see it - your loop
> only reads from the bucket and jams it into the client socket if the
> client socket is good and ready to accept data.
>
> If the client socket isn't good and ready, the bucket doesn't get pulled
> from, and resources used by the bucket are left in limbo until the
> client is done. If the bucket wants to do something clever, like cache,
> or release resources early, it can't - because as soon as it returns the
> data it has to wait for the client socket to be good and ready all over
> again. The server runs as slow as the browser, which in computing terms
> is glacially slow.

I'm not sure that I understand you, or that you're familiar with the
serf bucket model.

The bucket can certainly cache data as it flows through. No problem
there. Once the bucket has returned all of its data, it can close its
file handle, its socket, or whatever other resources it may hold.

Buckets are one-time use, so once a bucket has returned all of its
data, it can throw out any resources it holds.
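
Continuing the hypothetical cache bucket from above, its destroy
function is where anything still held gets released (say, when the
client goes away before EOF). serf_default_destroy_and_data() is serf's
real helper; the rest is still a sketch:

  #include <serf_bucket_util.h>

  static void cache_bucket_destroy(serf_bucket_t *bucket)
  {
      cache_ctx_t *ctx = bucket->data;

      if (ctx->cache_file)
          apr_file_close(ctx->cache_file);  /* finish/abandon the entry */
      serf_bucket_destroy(ctx->interior);   /* tear down what we wrapped */
      serf_default_destroy_and_data(bucket);
  }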

And no... the server does NOT run as slow as the browser. There are N
browsers connected, and the server is processing ALL of them. One
single response bucket is running as fast as its client, sure, but the
server certainly is not idle.

>...
> One event loop handling many requests each == event MPM (speed and
> resource efficient, but we'd better be bug free).
> Many event loops handling many requests each == worker MPM (compromise).
> Many event loops handling one request each == prefork (reliable old
> workhorse).

These have no bearing. The current MPM model is based on
content-generators writing/pushing data into the network.

A serf-based model reads from content-generators.

> In theory if we turn the content handler into a filter and bootstrap the
> filter stack with a bucket of some kind, this may work.
>
> In fact, using both "push" and "pull" at the same time might also make
> some sense - your event loop creates a bucket from which data is
> "pulled" (serf model), which is in turn "pulled" by a filter stack
> (existing filter stack model) and "pushed" upstream.

That is NOT the design that Paul, Justin, and I envision. The
core is serf. So *everything* is read/pull-based.

The old-style handlers and filters get their own thread and push into
a pipe, or an in-memory data queue. The core loop uses a bucket which
reads out of that pipe.
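
In sketch form: apr_file_pipe_create() and serf_bucket_file_create()
are real APR/serf calls, while the thread function and the wiring
around them are invented to show the shape:

  #include <string.h>
  #include <apr_file_io.h>
  #include <apr_thread_proc.h>
  #include <serf.h>

  static void * APR_THREAD_FUNC legacy_handler_thread(apr_thread_t *t,
                                                      void *data)
  {
      apr_file_t *pipe_write = data;

      /* The old-style handler "writes to the network" as before,
         except the writes actually land in the pipe. */
      const char *body = "Hello from a push-model handler\n";
      apr_size_t len = strlen(body);
      apr_file_write(pipe_write, body, &len);

      apr_file_close(pipe_write);   /* EOF for the reading side */
      apr_thread_exit(t, APR_SUCCESS);
      return NULL;
  }

  static serf_bucket_t *bridge_legacy_handler(apr_pool_t *pool,
                                              serf_bucket_alloc_t *alloc)
  {
      apr_file_t *pipe_read, *pipe_write;
      apr_thread_t *thread;

      apr_file_pipe_create(&pipe_read, &pipe_write, pool);
      /* (a real bridge would put the read end in non-blocking mode) */
      apr_thread_create(&thread, NULL, legacy_handler_thread,
                        pipe_write, pool);

      /* To the core loop this is just another bucket to pull from. */
      return serf_bucket_file_create(pipe_read, alloc);
  }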

>...

Cheers,
-g
