[ https://issues.apache.org/jira/browse/IO-468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311562#comment-14311562 ]

Thomas Neidhart commented on IO-468:
------------------------------------

You did not fully understand the article: it is not about ThreadLocals in 
ClassLoaders, but about how ThreadLocals can cause memory leaks because of the 
way web applications are loaded and unloaded in an application server.

There is another popular entry about ThreadLocals here: 
https://plumbr.eu/blog/how-to-shoot-yourself-in-foot-with-threadlocals
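
To illustrate the pattern with a minimal, made-up sketch (the class names here 
are hypothetical, not code from Commons IO): a static ThreadLocal whose value 
is an instance of a class loaded by the web application's class loader. The 
application server's worker threads are pooled and outlive the web application, 
so unless remove() is called the value stays attached to the thread, and 
through it the web application's class loader can never be collected after 
undeployment.

// Hypothetical sketch of the leak pattern; illustration only.
public class LeakyService {

    // WebAppValue is a class loaded by the web application's class loader.
    private static final ThreadLocal<WebAppValue> CACHE = new ThreadLocal<WebAppValue>() {
        @Override
        protected WebAppValue initialValue() {
            return new WebAppValue();
        }
    };

    public void handleRequest() {
        CACHE.get().doWork();
        // Missing CACHE.remove(): the value stays referenced from the pooled
        // worker thread even after the web application is undeployed, keeping
        // WebAppValue's class and its class loader reachable.
    }
}

class WebAppValue {
    void doWork() {
        // placeholder workload
    }
}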

Regarding your test: I have seen many such micro-benchmarks that try to show 
some improvement, but I am not convinced by this one. The JVM might even detect 
that your TestRunnable does not actually do anything and optimize the loop 
away, who knows. Also, only the total duration is captured, not the individual 
runs. If you collect the individual runs and provide some statistical 
properties like mean and standard deviation, you at least have a way to 
determine whether the JIT kicked in in the middle of the evaluation or before 
it. Right now we are just guessing.

Take a look here to get some inspiration: 
http://www.javaspecialists.eu/archive/Issue124.html
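
As a rough sketch of the kind of per-run bookkeeping I mean (plain 
System.nanoTime timing, not a complete harness, and workload() is only a 
placeholder for the code under test):

// Rough sketch: time each run individually and report mean and standard
// deviation, so a JIT warm-up in the middle of the measurement becomes visible.
public class PerRunTiming {

    public static void main(String[] args) {
        final int runs = 50;
        final long[] nanos = new long[runs];

        for (int i = 0; i < runs; i++) {
            final long start = System.nanoTime();
            workload(); // placeholder for the code under test
            nanos[i] = System.nanoTime() - start;
        }

        double mean = 0;
        for (long n : nanos) {
            mean += n;
        }
        mean /= runs;

        double variance = 0;
        for (long n : nanos) {
            variance += (n - mean) * (n - mean);
        }
        variance /= runs;

        System.out.printf("mean = %.0f ns, stddev = %.0f ns%n", mean, Math.sqrt(variance));
        // Printing the individual values as well makes it easy to spot the point
        // where compilation kicked in (a sudden drop in the per-run times).
    }

    private static void workload() {
        // Placeholder; in the real test this would be the buffer-allocation code path.
        byte[] buffer = new byte[4096];
        if (buffer.length == 0) {
            throw new AssertionError();
        }
    }
}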



> Avoid allocating memory for method internal buffers, use threadlocal memory 
> instead
> -----------------------------------------------------------------------------------
>
>                 Key: IO-468
>                 URL: https://issues.apache.org/jira/browse/IO-468
>             Project: Commons IO
>          Issue Type: Improvement
>          Components: Utilities
>    Affects Versions: 2.4
>         Environment: all environments
>            Reporter: Bernd Hopp
>            Priority: Minor
>              Labels: newbie, performance
>             Fix For: 2.5
>
>         Attachments: PerfTest.java, monitoring_with_threadlocals.png, 
> monitoring_without_threadlocals.png, performancetest.ods, 
> performancetest_weakreference.ods
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> In a lot of places, we allocate new buffers dynamically via new byte[]. This 
> is a performance drawback, since many of these allocations could be avoided 
> if we used thread-local buffers that can be reused. For example, consider 
> the following code from IOUtils.java, line 2177:
> return copyLarge(input, output, inputOffset, length, new byte[DEFAULT_BUFFER_SIZE]);
> This code allocates new memory for every copy process. The buffer is not used 
> outside of the method and could easily and safely be reused, as long as it is 
> thread-local. So instead of allocating new memory, a new utility class could 
> provide a thread-local byte array like this:
> byte[] buffer = ThreadLocalByteArray.ofSize(DEFAULT_BUFFER_SIZE);
> return copyLarge(input, output, inputOffset, length, buffer);
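> A minimal sketch of how such a ThreadLocalByteArray helper could look (this 
> class does not exist in Commons IO; it is only meant to illustrate the idea):
>
> // Sketch of the proposed helper; illustration only, not existing Commons IO API.
> public final class ThreadLocalByteArray {
>
>     // One buffer per thread, created lazily on first use.
>     private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>();
>
>     private ThreadLocalByteArray() {
>     }
>
>     // Returns the calling thread's buffer, growing it if a larger size is requested.
>     public static byte[] ofSize(final int size) {
>         byte[] buffer = BUFFER.get();
>         if (buffer == null || buffer.length < size) {
>             buffer = new byte[size];
>             BUFFER.set(buffer);
>         }
>         return buffer;
>     }
> }
>
> Note that the reused buffer is not zero-filled; that is fine for copyLarge, 
> which only writes out the bytes it has just read into the buffer.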
> I have not measured the performance benefits yet, but I would expect them to 
> be significant, especially when the streams themselves are not the 
> performance bottleneck. 
> Git PR is at https://github.com/apache/commons-io/pull/6/files



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
