[
https://issues.apache.org/jira/browse/IO-468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311580#comment-14311580
]
Mark Thomas commented on IO-468:
--------------------------------
The patch as it currently stands would trigger a class loader memory leak if
deployed in a Servlet container. Tomcat would report this as an application /
library bug, and in this case it would be viewed as a bug in Commons IO.
You might find this a useful reference:
http://people.apache.org/~markt/presentations/2010-11-04-Memory-Leaks-60mins.pdf
You could avoid the class loader leak if:
- ThreadLocal was used directly, without sub-classing
- the type placed into the thread local was one provided by the JVM, not the
application (e.g. ByteBuffer).
You would still have the issue of it being non-trivial (i.e. it would require
some nasty reflection code) to ensure that all the ThreadLocals were cleaned up
when the web application stopped. Or you could just live with the extra memory
used by not cleaning up the ByteBuffers.
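A minimal sketch of that pattern, assuming a hypothetical holder class (the
CopyBuffers name and ofSize method below are made up for illustration, not
taken from the attached patch): a plain ThreadLocal whose value is a
JVM-provided ByteBuffer keeps application classes out of the thread-local maps
of container-managed threads.

import java.nio.ByteBuffer;

public final class CopyBuffers {

    // A plain java.lang.ThreadLocal (no application subclass) whose value type,
    // java.nio.ByteBuffer, is also provided by the JVM, so neither the map key
    // nor the value pins the web application's class loader.
    private static final ThreadLocal<ByteBuffer> BUFFER = new ThreadLocal<ByteBuffer>();

    private CopyBuffers() {
    }

    // Returns a per-thread buffer with a capacity of at least 'size' bytes,
    // allocating or growing it lazily for each thread.
    public static ByteBuffer ofSize(final int size) {
        ByteBuffer buffer = BUFFER.get();
        if (buffer == null || buffer.capacity() < size) {
            buffer = ByteBuffer.allocate(size);
            BUFFER.set(buffer);
        }
        buffer.clear();
        return buffer;
    }
}

The remaining cost is the one described above: stale ByteBuffer values stay in
the thread-local maps of pooled container threads until they are expunged or
the threads die.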
Note: I haven't looked at the performance benefits or the validity of the tests.
> Avoid allocating memory for method internal buffers, use threadlocal memory
> instead
> -----------------------------------------------------------------------------------
>
> Key: IO-468
> URL: https://issues.apache.org/jira/browse/IO-468
> Project: Commons IO
> Issue Type: Improvement
> Components: Utilities
> Affects Versions: 2.4
> Environment: all environments
> Reporter: Bernd Hopp
> Priority: Minor
> Labels: newbie, performance
> Fix For: 2.5
>
> Attachments: PerfTest.java, monitoring_with_threadlocals.png,
> monitoring_without_threadlocals.png, performancetest.ods,
> performancetest_weakreference.ods
>
> Original Estimate: 12h
> Remaining Estimate: 12h
>
> In a lot of places, we allocate new buffers dynamically via new byte[]. This
> is a performance drawback, since many of these allocations could be avoided if
> we used thread-local buffers that can be reused. For example, consider the
> following code from IOUtils.java, ln 2177:
> return copyLarge(input, output, inputOffset, length, new byte[DEFAULT_BUFFER_SIZE]);
> This code allocates new memory for every copy operation, even though the
> buffer is not used outside of the method and could easily and safely be
> reused, as long as it is thread-local. So instead of allocating new memory,
> a new utility class could provide a thread-local byte array like this:
> byte[] buffer = ThreadLocalByteArray.ofSize(DEFAULT_BUFFER_SIZE);
> return copyLarge(input, output, inputOffset, length, buffer);
> I have not measured the performance benefits yet, but I would expect them to
> be significant, especially when the streams themselves are not the performance
> bottleneck.
> Git PR is at https://github.com/apache/commons-io/pull/6/files
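For illustration only, a thread-local byte[] utility of the kind the
description above proposes might look roughly like this (this is a sketch, not
the code in the linked pull request; the ThreadLocalByteArray name and ofSize
method are taken from the example usage quoted above):

public final class ThreadLocalByteArray {

    // A plain ThreadLocal holding a byte[]; both are JVM types, which also
    // follows the class-loader advice in the comment above.
    private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>();

    private ThreadLocalByteArray() {
    }

    // Returns a per-thread byte[] of at least 'size' bytes, reusing the
    // previously allocated array whenever it is already large enough.
    public static byte[] ofSize(final int size) {
        byte[] buffer = BUFFER.get();
        if (buffer == null || buffer.length < size) {
            buffer = new byte[size];
            BUFFER.set(buffer);
        }
        return buffer;
    }
}

Callers must treat the returned array as scratch space: its contents are
undefined, it may be longer than the requested size, and it must not be passed
to code that may itself request a thread-local buffer.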
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)