I concur with Rob.  I occasionally deal with large datasets that I
don't want in memory at once.  Leave it up to the app developer to put
them in a collection if they desire (although this might be a useful
optional mode in jena-client).

-Stephen


On Mon, Aug 27, 2012 at 9:17 AM, Rob Vesse <[email protected]> wrote:
> On 8/26/12 11:18 AM, "Andy Seaborne" <[email protected]> wrote:
>
>
>>On 24/08/12 18:53, Stephen Allen wrote:
>>
>>> Currently it is a).  You MUST close the QueryExecution object.
>>> Especially in the case that you are using QueryEngineHTTP (if you do
>>> not close this, then it leaves a connection open to the remote
>>> endpoint, and can end up exhausting the Fuseki thread pool over time).
>>>
>>> It does not close automatically after you finish iterating the
>>> ResultSet returned from execSelect() and possibly
>>> execConstructTriples().  I feel like this is a bug.  As a back-up for
>>> the developer not closing the QueryExecution, the code should close it
>>> when they've iterated through the entire ResultSet.  Tracked in
>>> JENA-305.
>>
>>Agreed.
>>
>>And on a related note, I wonder if execSelect or even deeper in
>>HttpQuery.exec should read the entire response, and not try to do
>>end-to-end streaming.  That way, a slow/bad application can't affect the
>>remote server by holding connections open for too long.  The obvious
>>downside is that things become resource-limited.
>
> I would disagree; while this is a useful idea in principle, in some use
> cases it quickly falls down as soon as you have moderately large results,
> ending in an OOM exception.
>
> If this behavior were to change then it would need to change in a
> backwards-compatible way, i.e. you can opt to either pre-parse the full
> results or to stream them.  My strong preference would be to continue to
> stream them.
>
> Rob
>
>>
>>SERVICE already does this because it may loop back to the same server.
>>
>>       Andy
>>
>
