[ 
https://issues.apache.org/jira/browse/HADOOP-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048098#comment-14048098
 ] 

Gopal V commented on HADOOP-10681:
----------------------------------

Since this bug has caused some alarm among those who looked at it, let me 
clarify the scope a little.

The core outer loop which needs per-stream sync is (in its correct form):

{code}
synchronized (compressor) {
    // Per-stream lock: callers on the same stream must not interleave
    // setInput/compress, but distinct streams never contend on each other.
    compressor.setInput(input, 0, input.length);
    compressor.finish();  // signal end of input so finished() can become true
    while (!compressor.finished()) {
        int n = compressor.compress(buffer, 0, buffer.length);
        // write the n compressed bytes downstream
    }
}
{code}

I will make sure this correct loop still follows the required fast-path.
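As a runnable illustration of the same per-stream pattern, here is a sketch using the JDK's java.util.zip.Deflater instead of Hadoop's Compressor interface (so method names differ slightly; deflate() plays the role of compress()):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class PerStreamSync {
    public static void main(String[] args) {
        byte[] input = "hello hello hello hello".getBytes();
        Deflater compressor = new Deflater();
        byte[] buffer = new byte[64];
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // One monitor per compressor, i.e. per stream: this guards against
        // interleaved calls on the SAME stream, while two different streams
        // never touch each other's monitor.
        synchronized (compressor) {
            compressor.setInput(input);
            compressor.finish();  // so finished() can become true
            while (!compressor.finished()) {
                int n = compressor.deflate(buffer, 0, buffer.length);
                out.write(buffer, 0, n);  // drain compressed bytes
            }
        }
        System.out.println("compressed " + input.length + " -> "
                + out.size() + " bytes");
    }
}
```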

> Remove synchronized blocks from SnappyCodec and ZlibCodec buffering
> -------------------------------------------------------------------
>
>                 Key: HADOOP-10681
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10681
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: performance
>    Affects Versions: 2.2.0, 2.4.0, 2.5.0
>            Reporter: Gopal V
>            Assignee: Gopal V
>              Labels: performance
>         Attachments: HADOOP-10681.1.patch, compress-cmpxchg-small.png, 
> perf-top-spill-merge.png, snappy-perf-unsync.png
>
>
> The current implementation of SnappyCompressor spends more time in the Java 
> loop that copies from the user buffer into the direct buffer allocated to the 
> compressor impl than it takes to compress the buffers.
> !perf-top-spill-merge.png!
> The bottleneck was found to be the Java monitor code inside SnappyCompressor.
> The methods are neatly inlined by the JIT into the parent caller 
> (BlockCompressorStream::write), which unfortunately does not flatten out the 
> synchronized blocks.
> !compress-cmpxchg-small.png!
> The loop does a write of small byte[] buffers (each IFile key+value). 
> I counted approximately 6 monitor enter/exit blocks per k-v pair written.
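To make the quoted count concrete, here is a hypothetical sketch (not Hadoop code; helper names are invented) of a write path where each small key/value write enters the compressor's monitor several times, because each per-method synchronized block that the JIT inlines still remains a separate monitor enter/exit:

```java
import java.util.zip.Deflater;

public class MonitorCount {
    static final Deflater compressor = new Deflater();
    static int monitorOps = 0;  // counts monitor entries (illustrative only)

    // Each helper synchronizes separately, mimicking SnappyCompressor's
    // per-method synchronized blocks: inlining does not merge the locks.
    static void setInput(byte[] b) {
        synchronized (compressor) { monitorOps++; compressor.setInput(b); }
    }
    static boolean needsInput() {
        synchronized (compressor) { monitorOps++; return compressor.needsInput(); }
    }
    static void compress(byte[] out) {
        synchronized (compressor) { monitorOps++; compressor.deflate(out, 0, out.length); }
    }

    public static void main(String[] args) {
        byte[] out = new byte[4096];
        for (int kv = 0; kv < 100; kv++) {  // 100 small key/value writes
            setInput(("k" + kv + "=v" + kv).getBytes());
            while (!needsInput()) {
                compress(out);  // drain this write's input
            }
        }
        // Even uncontended, N writes cost several lock acquisitions each.
        System.out.println("monitor entries for 100 writes: " + monitorOps);
    }
}
```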



--
This message was sent by Atlassian JIRA
(v6.2#6252)
