[ 
https://issues.apache.org/jira/browse/SOLR-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17877918#comment-17877918
 ] 

Chris M. Hostetter commented on SOLR-17430:
-------------------------------------------

 

Without getting too bogged down in the details, I'd like to propose a high 
level strawman replacement for the current {{ExportBuffers}} logic:
 * Eliminate the double buffers and use of CyclicBarrier for swapping
 ** Replace them with a simple producer->BlockingQueue->consumer model
 * The (filler) producer should:
 ** Be implemented as a {{Callable}} (that can throw exceptions) 
 ** "put" items into the queue – ie: block forever, or until interrupted, if 
the queue is full
 *** NOTE: It may still make sense from an "index reading efficiency" 
standpoint for it to read large blocks of documents at a time into its own 
buffer
 ** On any type of error (including any InterruptedException from trying to 
"put" to the queue) it should throw its exception
 * The "writer" (request thread) consumer should:
 ** Hold a {{Future}} object backed by the "producer"
 ** Repeatedly "poll" from the queue in a loop (w/a short time limit)
 *** If "poll" returns null: break out of the loop if {{true == 
Future.isDone()}} 
 ** Regardless of how we exit our loop, a {{finally}} block(s) should ensure:
 *** {{Future.get()}} is called (so any Exceptions from the producer can be 
propagated up)
 *** {{Future.cancel(true)}} is called (to interrupt the producer if the 
consumer is failing for its own reasons before the producer is done)
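
A minimal sketch of that queue-based model (the class name, the use of plain 
ints as stand-in "documents", the queue capacity, and the 100ms poll timeout 
are all illustrative placeholders, not Solr APIs):

{code:java}
import java.util.concurrent.*;

/** Sketch of a producer -> BlockingQueue -> consumer replacement for
 *  ExportBuffers.  Everything here is illustrative, not actual Solr code. */
public class QueueExportSketch {

  /** Returns the number of "documents" the writer consumed. */
  static int runExport(int totalDocs, int queueCapacity) throws Exception {
    final BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(queueCapacity);
    final ExecutorService exec = Executors.newSingleThreadExecutor();

    // "filler" producer: a Callable, so any exception surfaces via Future.get()
    final Future<?> filler = exec.submit(() -> {
      for (int doc = 0; doc < totalDocs; doc++) {
        queue.put(doc); // blocks until space frees up, or until interrupted
      }
      return null;
    });

    int written = 0;
    try {
      while (true) {
        final Integer doc = queue.poll(100, TimeUnit.MILLISECONDS);
        if (doc != null) {
          written++; // "write" the document downstream
        } else if (filler.isDone() && queue.isEmpty()) {
          break; // producer finished and nothing left to drain
        }
      }
    } finally {
      // interrupt the producer if the writer is bailing out early...
      filler.cancel(true);
      try {
        // ...and propagate any exception the producer threw
        filler.get();
      } catch (CancellationException expectedIfCancelled) {
        // cancel(true) above won the race against a still-running producer;
        // there is no producer exception to propagate
      } finally {
        exec.shutdownNow();
      }
    }
    return written;
  }

  public static void main(String[] args) throws Exception {
    System.out.println("written=" + runExport(10, 4));
  }
}
{code}

One subtlety in this sketch: after a null "poll", checking {{Future.isDone()}} 
alone can race with a producer that finished just after putting its last 
items, so the sketch also verifies the queue is empty before breaking.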
 

> Redesign ExportWriter / ExportBuffers to work better with large batchSizes 
> and slow consumption
> -----------------------------------------------------------------------------------------------
>
>                 Key: SOLR-17430
>                 URL: https://issues.apache.org/jira/browse/SOLR-17430
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Chris M. Hostetter
>            Priority: Major
>
> As mentioned in SOLR-17416, the design of the {{ExportBuffers}} class used by 
> the {{ExportHandler}} is brittle and the absolute time limit on how long 
> the buffer swapping threads will wait for each other isn't suitable for very 
> long running streaming expressions...
> {quote}The problem however is that this 600 second timeout may not be enough 
> to account for really slow downstream consumption of the data.  With really 
> large collections, and really complicated streaming expressions, this can 
> happen even with well-behaved clients that are actively trying to consume 
> the data.
> {quote}
> ...but another sub-optimal aspect of this buffer swapping design is that the 
> "writer" thread is initially completely blocked, and can't write out a single 
> document, until the "filler" thread has read the full {{batchSize}} of 
> documents into its buffer and opted to swap.  Likewise, after buffer 
> swapping has occurred at least once, any document in the {{outputBuffer}} that 
> the writer has already processed hangs around, taking up RAM, until the next 
> swap, while one of the threads is idle.  If {{batchSize=30000}}, the 
> "filler" thread may be ready to go with a full {{fillBuffer}} while the 
> "writer" thread, blocked by the downstream consumer of the output bytes, has 
> only been able to emit 29999 of the 30000 documents in its {{outputBuffer}} – 
> that means both the "writer" thread and the "filler" thread are stalled, 
> taking up 2x the batchSize of RAM, even though half of that is data that is 
> no longer needed.
> The bigger the {{batchSize}} the worse the initial delay (and steady state 
> wasted RAM) is.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
