[
https://issues.apache.org/jira/browse/IO-468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311504#comment-14311504
]
Thomas Neidhart commented on IO-468:
------------------------------------
One drawback of ThreadLocals is that they can leak memory in web application
servers; see, for example, this blog post:
http://niklasschlimm.blogspot.be/2012/04/threading-stories-threadlocal-in-web.html
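To illustrate the pattern that post describes, here is a minimal sketch,
assuming container-pooled worker threads (the class and field names are made
up for the example):

    // Minimal sketch of the leak pattern on container-pooled threads
    // (hypothetical class, not proposed for Commons IO):
    public final class SharedBuffer {
        // Each worker thread that ever calls get() keeps its own 4 KB array
        // strongly referenced in its ThreadLocalMap. Application-server
        // threads are pooled and long-lived, so without an explicit remove()
        // the arrays survive the request, and can even survive undeployment
        // of the web application.
        private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>() {
            @Override
            protected byte[] initialValue() {
                return new byte[4096];
            }
        };

        public static byte[] get() {
            return BUFFER.get(); // no matching BUFFER.remove() anywhere
        }
    }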
As a developer of a general-purpose library, one has to keep exactly that in
mind: general-purpose use. We cannot optimize for one specific use case, i.e.
maximum performance, when the utility method is mostly used in completely
different setups.
Furthermore, there is already a way to get *maximum* performance: not using a
ThreadLocal, but passing a locally allocated byte array when calling the
method. This should be faster than going through a ThreadLocal. Did you cover
this in your performance test? Also, the ideal buffer size depends heavily on
the use case; to get maximum performance you will want to tune the buffer
size, which is not possible with the ThreadLocal solution either.
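As a rough sketch of what the caller-supplied-buffer approach looks like with
the existing copyLarge overload that takes an explicit buffer (file handling
and buffer size here are just placeholders for a concrete workload):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.commons.io.IOUtils;

    public final class ConcatFiles {
        public static void concat(final File[] inputs, final File target) throws IOException {
            // One caller-owned buffer, sized for this particular workload and
            // reused across all copy calls; no ThreadLocal involved.
            final byte[] buffer = new byte[64 * 1024];
            try (OutputStream out = new FileOutputStream(target)) {
                for (final File in : inputs) {
                    try (InputStream input = new FileInputStream(in)) {
                        IOUtils.copyLarge(input, out, 0, in.length(), buffer);
                    }
                }
            }
        }

        private ConcatFiles() {
        }
    }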
I can understand that someone is annoyed when their performance patch is not
accepted, but if you take a step back you will realize that it really does not
make sense here.
> Avoid allocating memory for method internal buffers, use threadlocal memory
> instead
> -----------------------------------------------------------------------------------
>
> Key: IO-468
> URL: https://issues.apache.org/jira/browse/IO-468
> Project: Commons IO
> Issue Type: Improvement
> Components: Utilities
> Affects Versions: 2.4
> Environment: all environments
> Reporter: Bernd Hopp
> Priority: Minor
> Labels: newbie, performance
> Fix For: 2.5
>
> Attachments: PerfTest.java, monitoring_with_threadlocals.png,
> monitoring_without_threadlocals.png, performancetest.ods
>
> Original Estimate: 12h
> Remaining Estimate: 12h
>
> In a lot of places, we allocate new buffers dynamically via new byte[]. This
> is a performance drawback, since many of these allocations could be avoided
> if we used thread-local buffers that can be reused. For example, consider
> the following code from IOUtils.java, ln 2177:
> return copyLarge(input, output, inputOffset, length, new byte[DEFAULT_BUFFER_SIZE]);
> This code allocates new memory for every copy process; that memory is not
> used outside of the method and could easily and safely be reused, as long as
> it is thread-local. So instead of allocating new memory, a new utility class
> could provide a thread-local byte array like this:
> byte[] buffer = ThreadLocalByteArray.ofSize(DEFAULT_BUFFER_SIZE);
> return copyLarge(input, output, inputOffset, length, buffer);
> I have not measured the performance benefits yet, but I would expect them to
> be significant, especially when the streams themselves are not the
> performance bottleneck.
> Git PR is at https://github.com/apache/commons-io/pull/6/files
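For context, a minimal sketch of what a ThreadLocalByteArray helper along the
lines proposed above could look like; the class and its ofSize method are
hypothetical, such a class does not exist in Commons IO, and the leak caveat
discussed above applies to it:

    // Hypothetical helper, roughly as proposed in the description above;
    // not part of Commons IO.
    final class ThreadLocalByteArray {
        private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>();

        static byte[] ofSize(final int size) {
            byte[] buffer = BUFFER.get();
            if (buffer == null || buffer.length < size) {
                buffer = new byte[size]; // grow (or create) the per-thread buffer
                BUFFER.set(buffer);
            }
            return buffer;
        }

        private ThreadLocalByteArray() {
        }
    }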
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)