Hi,
Yes… and I think I found why (after >6 hours of trying to get things working for the ssl bucket; mostly succeeding via a specific 'mark unread' function that removes the previous read from memory).

The whole idea of checking whether buckets are completely drained falls down for buckets that we 'write' to the socket and then find that the network buffers are full. At that point we have to stop draining, and there is no way to mark all buckets in the chain (aggregate within aggregate, inside a custom wrapper, etc.) as 'ok not to empty', because there is no way to walk the entire chain up to the inner bucket that was last read.

I was thinking about two possible solutions:

* We might check 'a peek operation' in a different way, in a debug-specific marking step, tracking which buckets are peeked downwards. All of those would then be ok not to read to EAGAIN/EOF.
* We might use different allocators for the input and the output (during debugging), to tag their use case.

Neither solution looks good/generic to me…

That hold-open system is used all through the connection/request infrastructure and even in the newer http/2 code… And that ugly patch I committed already handled that for the destroy-bucket case. I don't want to recommend deprecating this useful system just to make the debugging easier.

	Bert

From: Greg Stein [mailto:gst...@gmail.com]
Sent: donderdag 12 november 2015 17:29
To: Bert Huijben <b...@qqmail.nl>
Cc: Bert Huijben <rhuij...@apache.org>; dev@serf.apache.org
Subject: Re: svn commit: r1713936 - in /serf/trunk: buckets/allocator.c buckets/log_wrapper_buckets.c outgoing.c test/mock_buckets.c

On Thu, Nov 12, 2015 at 6:37 AM, Bert Huijben <b...@qqmail.nl <mailto:b...@qqmail.nl> > wrote:
>... This assumed somebody calls serf_debug__entered_loop()... Which we
> -as far as I can tell- never did. No call in trunk nor in
> branches/[01].[0-9].x.

Weird. I could swear it was in there. Maybe it caused too much trouble, as you're finding.
> I have a patch that adds this call on the connection and the
> allocators of (partially) written requests. But adding that call
> caused new test failures, especially around ssl buckets, where
> substreams are commonly not read further after APR_SUCCESS... as we
> have to wait for the server to come with more data first, before we
> can write something else. (Trying to work out a solution)

Well, it seems fine to exclude certain buckets, if comments explain why. Maybe one day, we'll figure out a solution.

That code is there to try and help app programmers with "proper" use of the buckets. Getting 90% of the buckets, doing 80% of the tests would go a long way towards that goal.

>... Aggregate buckets can have a hold open callback, causing them to
> return something other than EOF (e.g. EAGAIN) until <some condition>.

I've never really looked at that "hold open" thing. Any way we can deprecate that? Change code to avoid using it? (and avoid these kinds of problems)

Cheers,
-g