We haven't tested the extremes of using a thread per connection and
"blocking" 10MB at a time, but is there an obvious reason why that
wouldn't work (assuming we trickle enough data to prevent socket
timeouts)?  It seems like the MHD event loop might pull as much as it can
off the socket, which would mean that an error could occur if we haven't
completely emptied the buffer.  Or does MHD only pull as much data off the
socket as there is space available in the buffer?
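For reference, here is a rough, untested sketch of the thread-per-connection
approach we have in mind.  process_chunk() and CHUNK_SIZE are just
placeholders for our own internal processing, not anything from MHD, and I
may well be misreading the *upload_data_size contract:

#include <microhttpd.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE (10 * 1024 * 1024)   /* process the upload 10MB at a time */

struct upload_ctx
{
  char *buf;      /* accumulates up to CHUNK_SIZE bytes of upload data */
  size_t filled;  /* how much of buf is currently used */
};

/* Placeholder for our internal processing; this is the part that may
   block the connection's thread for a while. */
static void
process_chunk (const char *data, size_t len)
{
  (void) data;
  (void) len;
}

static int
handle_request (void *cls, struct MHD_Connection *connection,
                const char *url, const char *method, const char *version,
                const char *upload_data, size_t *upload_data_size,
                void **con_cls)
{
  struct upload_ctx *ctx = *con_cls;
  struct MHD_Response *resp;
  int ret;

  if (NULL == ctx)
  {
    /* first call for this request: headers only, no upload data yet */
    ctx = calloc (1, sizeof (*ctx));
    ctx->buf = malloc (CHUNK_SIZE);     /* error handling omitted */
    *con_cls = ctx;
    return MHD_YES;
  }
  if (0 != *upload_data_size)
  {
    /* copy as much as fits into our 10MB buffer ... */
    size_t take = CHUNK_SIZE - ctx->filled;
    if (take > *upload_data_size)
      take = *upload_data_size;
    memcpy (ctx->buf + ctx->filled, upload_data, take);
    ctx->filled += take;
    if (CHUNK_SIZE == ctx->filled)
    {
      process_chunk (ctx->buf, ctx->filled);  /* blocks only this thread */
      ctx->filled = 0;
    }
    /* ... and tell MHD how many bytes we did NOT consume */
    *upload_data_size -= take;
    return MHD_YES;
  }
  /* end of upload: flush the remainder and answer the request */
  if (0 != ctx->filled)
    process_chunk (ctx->buf, ctx->filled);
  resp = MHD_create_response_from_buffer (2, (void *) "OK",
                                          MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response (connection, MHD_HTTP_OK, resp);
  MHD_destroy_response (resp);
  return ret;
}

The daemon would be started with MHD_USE_THREAD_PER_CONNECTION, and we'd
free the context from an MHD_OPTION_NOTIFY_COMPLETED callback.  The open
question is whether MHD only hands us what our callback actually consumes,
or whether it keeps reading more off the socket than that.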


On Wed, Oct 23, 2013 at 11:28 AM, Christian Grothoff <[email protected]> wrote:

> This is not possible right now.  I agree that we should fix this,
> I'll try to find the time to implement this soon. (We will need a
> new API call to suspend/resume processing of a connection.)
>
> Happy hacking!
>
> Christian
>
> On 10/23/2013 04:27 PM, Jared Cantwell wrote:
> > Is it possible to use libmicrohttpd to transfer large amounts of binary
> > data (e.g. 10GB) incrementally using a thread pool model?  By
> incrementally
> > I mean that after we receive 10MB we'd like to process the 10MB
> internally
> > before allowing more data to be transferred to the web server-- we don't
> > have enough memory to receive the whole 10GB first.  Using
> > MHD_USE_THREAD_PER_CONNECTION we can achieve this by blocking until we
> > process each 10MB chunk.  However, for scaling reasons we'd prefer to use
> > the thread pool model, but I can't find a way to "block" acknowledgement
> of
> > processing some data without synchronously blocking one of the threads in
> > the pool.  It looks like the POST request handlers are close to what I
> > need, except my data is not form-encoded.
> >
> > Any suggestions would be greatly appreciated.
> >
> > Thanks,
> > Jared
> >
>
>
