> -----Original Message-----
> From: Ivan Zhakov [mailto:i...@visualsvn.com]
> Sent: dinsdag 10 november 2015 20:43
> To: rhuij...@apache.org
> Cc: dev@serf.apache.org
> Subject: Re: svn commit: r1713489 - in /serf/trunk: buckets/event_buckets.c
> outgoing.c serf_private.h
> 
> On 9 November 2015 at 20:49,  <rhuij...@apache.org> wrote:
> > Author: rhuijben
> > Date: Mon Nov  9 17:49:59 2015
> > New Revision: 1713489
> >
> > URL: http://svn.apache.org/viewvc?rev=1713489&view=rev
> > Log:
> > Replace the track bucket in the request writing with a tiny bit more
> > advanced event bucket that tracks both writing done and destroyed. The
> > timings of these callbacks will allow simplifying some logic introduced
> > in r1712776.
> >
> > For now declare the event bucket as a private type.
> >
> > * buckets/event_buckets.c
> >   New file.
> >
> > * outgoing.c
> >   (request_writing_done): New function.
> >   (request_writing_finished): Tweak to implement
> serf_bucket_event_callback_t.
> >   (write_to_connection): Add event bucket directly after the request
> bucket,
> >     instead of an aggregate when the writing is done.
> >
> Hi Bert,
> 
> What do you think about an alternative design for the event buckets:
> make the event bucket wrap any other bucket (the request bucket in our
> case)? I think it would be more flexible, since we could add a callback
> to be called before reading from the wrapped bucket, to track pending
> requests.

That might work, but it would have different characteristics unless you add 
some special handling.

Currently the 'done' handler is called after the request has been read from 
the bucket, while the 'finished' handler is called after the bucket has been 
destroyed.
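
Roughly, with made-up names (the real serf_bucket_event_callback_t is a
private type and may well differ), the two moments look like this:

    #include <apr_errno.h>

    /* Hypothetical callback type, for illustration only; the private
       serf_bucket_event_callback_t in serf_private.h may differ. */
    typedef apr_status_t (*event_cb_t)(void *baton);

    /* 'done': everything queued before the event bucket has been read,
       i.e. the request has been written to the connection. */
    static apr_status_t request_done(void *baton)
    {
        return APR_SUCCESS;
    }

    /* 'finished': the event bucket itself is being destroyed; only now
       do we know that the request bucket no longer exists. */
    static apr_status_t request_finished(void *baton)
    {
        return APR_SUCCESS;
    }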

The 'done' handler could be rescheduled a bit, but that *after destroy* 
moment is really important. The old 'stream' implementation, which didn't 
destroy the buckets inside it, was a workaround for the problem that we 
didn't know when the bucket was destroyed while it lived in a different 
allocator.
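
The allocator problem in a nutshell (a hypothetical sketch, not actual serf
code):

    #include "serf.h"

    /* The request bucket is created in the *request's* allocator, but it
       is read, and eventually destroyed, by the connection while queued
       in its output stream. */
    static serf_bucket_t *make_request(serf_bucket_alloc_t *req_alloc)
    {
        /* If the request's allocator is cleaned up before this bucket is
           destroyed, the connection ends up reading freed memory. The
           'destroyed' event tells us exactly when cleanup becomes safe. */
        return serf_bucket_simple_create("GET / HTTP/1.1\r\n\r\n", 18,
                                         NULL, NULL, req_alloc);
    }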

I'm not sure wrapping the request really improves things there... The 
aggregate bucket's guarantees work really nicely here: the event bucket's 
callbacks fire exactly once, and reading from the request bucket is not 
slowed down in any way by introducing another layer of callbacks.
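
Concretely, the wiring in write_to_connection amounts to something like this
(identifiers simplified; see outgoing.c for the real code):

    #include "serf.h"

    /* Simplified sketch of the aggregate wiring. */
    static void queue_request(serf_bucket_t *ostream_tail,
                              serf_bucket_t *request_bkt,
                              serf_bucket_t *event_bkt)
    {
        /* Reads drain request_bkt directly; no wrapper in between. */
        serf_bucket_aggregate_append(ostream_tail, request_bkt);

        /* Only reached after request_bkt has been fully read and
           destroyed, so the callbacks fire exactly once and at the right
           moments. A wrapping design, say a hypothetical
           event_wrap_create(request_bkt, cb, baton), would instead put
           an extra layer in front of every read of request data. */
        serf_bucket_aggregate_append(ostream_tail, event_bkt);
    }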


It might make the event bucket more generic, but I'm not sure we really need 
that... This scheduling of requests is mostly an implementation detail of our 
HTTP/1.1 stack, tied to its queues of written and unwritten requests. I don't 
think the event bucket is a bucket that we want to expose in our public API.

The 2.0 stack and other protocols will need different systems, as they have 
to apply a layer of framing (and, in the case of HTTP/2, also windowing) over 
the request buckets. I'm not sure which HTTP/2-specific buckets we want to 
expose as reusable APIs either.

        Bert
