Hi,

I am reviewing this email and thinking it over a little; I will reply
later.

Note: about stage 30, did you check sr->stage30_blocked?

Regards,


On Thu, Feb 7, 2013 at 4:11 PM, Sonny Karlsson <[email protected]> wrote:

> Hi,
>
> In commit f80af9e2, edsiper mentioned a pending buffer interface in a
> 'Fixme' comment. I've been thinking about ways to implement something
> like that as a way to enable gzip compression of HTTP data. Basically,
> the current direct socket I/O would be replaced with queued, malloc'd
> output buffers.
>
> This would make it possible not only to gzip data, but also to chunk
> the response of any request without support from the plugin. Also, the
> next queued request could begin processing before all of the current
> response's data has been sent, decreasing latency.
>
> Currently the plugins I'm familiar with implement their own buffering.
> Why not do it inside the core instead?
>
> I can implement this, unless someone else has already started something
> similar or has arguments against it. Below I'll describe how I would
> do it. Comments and suggestions are welcome.
>
>
>
> By adding something like an mk_iov to the session_request struct, all
> buffers could be kept there. Buffers need to be malloc'd by the caller
> and freed after the data has been sent. However, adding a destructor
> callback would allow buffer reuse. The default destructor should be
> free().
>
>         typedef void (*destructor)(void *);
>         mk_send_buffer(struct session_request *, void *, size_t,
>                        destructor);
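A minimal sketch of how such a queueing call could behave. The pending-buffer list, its field names, and the struct layout here are invented for illustration; only the mk_send_buffer name and signature come from the proposal above.

```c
#include <stdlib.h>

/* Hypothetical pending-buffer node; not existing Monkey code. */
typedef void (*destructor)(void *);

struct pending_buf {
    void *data;                /* malloc'd payload, owned by the queue */
    size_t len;                /* bytes to send                        */
    destructor dtor;           /* called once the data has been sent   */
    struct pending_buf *next;
};

struct session_request {
    struct pending_buf *head;  /* FIFO of queued output buffers        */
    struct pending_buf *tail;
};

/* Queue a buffer for later transmission; a NULL dtor defaults to free(). */
int mk_send_buffer(struct session_request *sr, void *data, size_t len,
                   destructor dtor)
{
    struct pending_buf *pb = malloc(sizeof(*pb));
    if (!pb)
        return -1;

    pb->data = data;
    pb->len  = len;
    pb->dtor = dtor ? dtor : free;
    pb->next = NULL;

    if (sr->tail)
        sr->tail->next = pb;   /* append to keep FIFO order */
    else
        sr->head = pb;
    sr->tail = pb;
    return 0;
}
```

The caller hands over ownership of `data`; the core would invoke the destructor only after the last byte has been written to the socket, which is what makes buffer reuse possible.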
>
> The behavior of mk_header_send and mk_http_request_end needs to
> change. The first should prepend headers to the session_request's
> mk_iov, and the second should mark the request as done, allowing the
> next pending request to be processed. Neither should do any I/O.
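A sketch of that split, with the function names borrowed from the email but the bodies and struct fields purely illustrative: headers are prepended to the queue, and ending a request only flips a flag.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative pending-buffer node and request state; not Monkey's
 * actual structs. */
struct pending_buf {
    void *data;
    size_t len;
    struct pending_buf *next;
};

struct session_request {
    struct pending_buf *head;  /* queued response data */
    int done;                  /* set when the response is complete */
};

/* Prepend the serialized headers so they go out before any body data
 * that was queued first; no socket I/O happens here. */
int mk_header_send(struct session_request *sr, const char *hdrs, size_t len)
{
    struct pending_buf *pb = malloc(sizeof(*pb));
    if (!pb)
        return -1;
    pb->data = malloc(len);
    if (!pb->data) {
        free(pb);
        return -1;
    }
    memcpy(pb->data, hdrs, len);
    pb->len = len;
    pb->next = sr->head;       /* prepend, not append */
    sr->head = pb;
    return 0;
}

/* Only mark completion; the core's write handler does the sending and
 * then moves on to the next pipelined request. */
void mk_http_request_end(struct session_request *sr)
{
    sr->done = 1;
}
```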
>
> Currently the stage30 callback is invoked multiple times, something I
> think isn't needed. After stage30, either the full response should be
> queued, or the plugin should be waiting for some other socket's
> read/write event.
>
> Sending the pending buffers could then be done inside a write event
> handler in the core. That handler could easily chunk and/or gzip the
> content. To keep the write event efficient, the HTTP socket could be
> put to sleep when there is no pending data; this is what both the CGI
> and the FastCGI plugins currently do.
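A sketch of such a write-event handler, draining the queue until the kernel pushes back. The function name and the pending-buffer struct are illustrative, not Monkey's actual event API; the EAGAIN/sleep behavior is the point being proposed.

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

typedef void (*destructor)(void *);

/* Illustrative per-request output queue node. */
struct pending_buf {
    void *data;
    size_t off;                /* bytes already written */
    size_t len;
    destructor dtor;
    struct pending_buf *next;
};

/* Write-event handler: drain as much as the socket accepts.
 * Returns 1 when the queue is empty (the socket can be put to sleep),
 * 0 on EAGAIN (wait for the next write event), -1 on error. */
int mk_event_write(int fd, struct pending_buf **head)
{
    while (*head) {
        struct pending_buf *pb = *head;
        ssize_t n = write(fd, (char *)pb->data + pb->off,
                          pb->len - pb->off);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;
            return -1;
        }
        pb->off += (size_t)n;
        if (pb->off == pb->len) {  /* fully sent: run dtor, advance */
            *head = pb->next;
            pb->dtor(pb->data);
            free(pb);
        }
    }
    return 1;                      /* nothing pending: sleep the socket */
}
```

Chunking or gzipping would happen at this single point, transparently to the plugin that queued the data.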
>
> This way of sending data would almost exclusively be used by plugins,
> and it could coexist with the current socket interface as long as only
> one of them is used at a time. However, I do think we should deprecate
> the raw socket interface once all features and plugins are confirmed
> working with buffers.
>
> Fixing the issue in f80af9e2 would be trivial, as mk_lib already uses
> a malloc'd content buffer. The mk_iov struct can be modified to handle
> partial writes, and using an event handler will allow writev() to fail
> with EAGAIN without problems.
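The partial-write bookkeeping the modified mk_iov would need can be sketched with the standard iovec-advancing pattern below; the function name is hypothetical.

```c
#include <stddef.h>
#include <sys/uio.h>

/* After a partial writev() that sent `sent` bytes, advance the iovec
 * array so a later writev() resumes exactly where the kernel stopped.
 * Returns the index of the first unfinished entry. */
size_t mk_iov_consume(struct iovec *iov, size_t count, size_t sent)
{
    size_t i = 0;
    while (i < count && sent >= iov[i].iov_len) {
        sent -= iov[i].iov_len;     /* this entry was fully written */
        i++;
    }
    if (i < count && sent > 0) {    /* partially written entry */
        iov[i].iov_base = (char *)iov[i].iov_base + sent;
        iov[i].iov_len -= sent;
    }
    return i;
}
```

On EAGAIN nothing is consumed; the event loop simply retries `writev(fd, iov + i, count - i)` on the next write event.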
>
> A problem with this approach is that a large amount of memory could be
> occupied by these buffers. There should therefore be some kind of limit
> on the amount of memory used at any time, blocking pipelined requests
> or serving a 503 Service Unavailable when the memory limit is reached.
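That limit could be a simple accounting gate consulted before queueing. The names, the global counter, and the 8 MiB figure below are all assumptions for illustration; a real server would make the cap a config option and account per worker.

```c
#include <stddef.h>

/* Illustrative accounting; cap and policy would come from config. */
static size_t pending_total;
static size_t pending_limit = 8 * 1024 * 1024;  /* assumed 8 MiB cap */

/* Returns 1 if `len` more bytes may be queued, 0 if the caller should
 * block the pipelined request or answer 503 Service Unavailable. */
int mk_buffer_reserve(size_t len)
{
    if (pending_total + len > pending_limit)
        return 0;
    pending_total += len;
    return 1;
}

/* Called from the buffer destructor path once data has been sent. */
void mk_buffer_release(size_t len)
{
    pending_total -= len;
}
```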
>
> --
> Sonny Karlsson
>



-- 
Eduardo Silva
http://edsiper.linuxchile.cl
http://www.monkey-project.com
_______________________________________________
Monkey mailing list
[email protected]
http://lists.monkey-project.com/listinfo/monkey
