>>>>> - When reading, if the DbDataStore property copyWhenReading is enabled.
>>> the setting is "Enabled by default to support concurrent reads".
>> Does this mean concurrent reads are not possible with the property set to
>> false? Are requests for the content pipelined automatically or will there be
>> threading issues?
>
> I improved the documentation at http://wiki.apache.org/jackrabbit/DataStore
>
> copyWhenReading: The copy setting, enabled by default. If enabled,
> a stream is always copied to a temporary file when reading a stream,
> so that reads can be concurrent. If disabled, reads are serialized.

Okay, thanks. Out of curiosity, why is a file necessary for concurrent access? 
I thought Jackrabbit's architecture was fundamentally copy-on-write with 
respect to concurrent modification.
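Just to make sure I follow the copy behavior: my mental model is something like the sketch below (hypothetical code, not the actual DbDataStore implementation; the method names are mine). Once the content is in a temp file, each reader can open its own independent stream over it, instead of contending for the single database stream.

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical sketch of the copyWhenReading idea: drain the (single,
// non-shareable) source stream into a temp file, which can then be
// opened by any number of concurrent readers.
public class CopyWhenReadingSketch {

    static File copyToTempFile(InputStream in) throws IOException {
        File temp = File.createTempFile("datastore", ".tmp");
        try (OutputStream out = new FileOutputStream(temp)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
        }
        return temp;
    }

    public static void main(String[] args) throws IOException {
        InputStream source = new ByteArrayInputStream("hello".getBytes("UTF-8"));
        File temp = copyToTempFile(source);
        // Two readers can now read independently; neither blocks the other.
        try (InputStream a = new FileInputStream(temp);
             InputStream b = new FileInputStream(temp)) {
            System.out.println((char) a.read()); // prints "h"
            System.out.println((char) b.read()); // prints "h"
        }
        temp.delete();
    }
}
```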

>> when an error occurs the file lingers because the stream was never closed. I
>> think closing the stream in a finally block should be a best practice anyway.
>
> Yes, I think that's the best solution.
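For the archives, the pattern we're agreeing on looks like this (getStream() is just a stand-in name for whatever call hands back the data store stream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the "close in finally" practice: the stream is closed even
// if an exception is thrown mid-read, so the temp file backing it is
// released rather than lingering.
public class FinallyCloseSketch {

    // Stand-in for the real data store call (hypothetical name).
    static InputStream getStream() {
        return new ByteArrayInputStream(new byte[] {1, 2, 3});
    }

    static int readFirstByte() throws IOException {
        InputStream in = getStream();
        try {
            return in.read();
        } finally {
            // Runs on both the normal and the exceptional path.
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstByte()); // prints "1"
    }
}
```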
>
>>  Perhaps an enhancement to the datastore code would be to mark the temp file
>> for delete when the JVM terminates as a fail-safe against deficient client
>> code?
>
> I was thinking about that as well. The problem is that
> File.deleteOnExit is problematic. See
> http://www.bobcongdon.net/blog/2005/07/filedeleteonexit-is-evil.html
> (that's just the first link I found).

Interesting. I've never seen that issue before. Definitely good to know. Sounds 
like you have something in an InputStream implementation that detects when the 
last byte is reached or close is called. How about some code in the finalize 
method to delete the temp file if the stream is garbage-collected?
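Roughly what I have in mind (a hypothetical sketch, and with the caveat that finalize timing is not guaranteed by the JVM, so it's only a fail-safe, not a replacement for closing properly):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FilterInputStream;
import java.io.IOException;

// Hypothetical wrapper stream: close() deletes the backing temp file,
// and finalize() calls close() as a fail-safe if client code forgot.
// Note: finalize() is deprecated in modern Java and runs at an
// unpredictable time, if at all.
public class TempFileInputStream extends FilterInputStream {

    private final File temp;
    private boolean closed;

    public TempFileInputStream(File temp) throws IOException {
        super(new FileInputStream(temp));
        this.temp = temp;
    }

    @Override
    public void close() throws IOException {
        if (!closed) {
            closed = true;
            super.close();
            temp.delete(); // temp file no longer needed once closed
        }
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            close(); // fail-safe against deficient client code
        } finally {
            super.finalize();
        }
    }

    public static void main(String[] args) throws IOException {
        File temp = File.createTempFile("datastore", ".tmp");
        TempFileInputStream in = new TempFileInputStream(temp);
        in.close();
        System.out.println(temp.exists()); // prints "false"
    }
}
```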

--
Erik
