2015-01-05 15:12 GMT+01:00 sebb <seb...@gmail.com>:
> On 5 January 2015 at 13:43, Stefan Bodewig <bode...@apache.org> wrote:
>> On 2015-01-04, Kristian Rosenvold wrote:
>>
>>> Most surprising to me is that it seems like the overhead of lots of
>>> small calls to RandomAccessFile.write seems to be a lot costlier than
>>> I thought it would be. It seems like consolidating to a larger byte
>>> array before calling write is a *lot* faster.
>>
>> This surprises me as well.
>
> Could be due to the need to lock data in memory in native code.
> This usually means the data has to be copied to a safe buffer.
> A single large copy will be faster than lots of small ones.

All of this disappears into native code pretty quickly, so there might
be OS-specific badness happening on OSX for all I know. I'll check on
Linux to see if there's a difference. But one thing is quite clear: if
I do 1000 writes of 100 bytes each (for a grand total of 100K of data),
the calls are so slow that I can probably /copy/ the data at least 10
times in memory and still come out ahead.

Kristian
