[
https://issues.apache.org/jira/browse/HADOOP-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472997#comment-15472997
]
Junegunn Choi commented on HADOOP-10681:
----------------------------------------
Can we do the same for {{Lz4Compressor}} and {{Bzip2Compressor}}? I noticed a
significant performance overhead using {{Lz4Compressor}} for
{{hbase.client.rpc.compressor}} compared to {{SnappyCompressor}} due to the
same problem.
> Remove synchronized blocks from SnappyCodec and ZlibCodec buffering inner loop
> ------------------------------------------------------------------------------
>
> Key: HADOOP-10681
> URL: https://issues.apache.org/jira/browse/HADOOP-10681
> Project: Hadoop Common
> Issue Type: Bug
> Components: performance
> Affects Versions: 2.2.0, 2.4.0, 2.5.0
> Reporter: Gopal V
> Assignee: Gopal V
> Labels: performance
> Fix For: 2.6.0
>
> Attachments: HADOOP-10681.1.patch, HADOOP-10681.2.patch,
> HADOOP-10681.3.patch, HADOOP-10681.4.patch, compress-cmpxchg-small.png,
> perf-top-spill-merge.png, snappy-perf-unsync.png
>
>
> The current implementation of SnappyCompressor spends more time in the Java
> loop that copies from the user buffer into the direct buffer allocated to the
> compressor implementation than it takes to compress the buffers.
> !perf-top-spill-merge.png!
> The bottleneck was found to be the Java monitor (lock) code inside
> SnappyCompressor. The methods are neatly inlined by the JIT into the parent
> caller (BlockCompressorStream::write), which unfortunately does not flatten
> out the synchronized blocks.
> !compress-cmpxchg-small.png!
> The loop writes small byte[] buffers (one per IFile key+value pair); I
> counted approximately six monitor enter/exit operations per key-value pair
> written.
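The pattern described above can be sketched with a toy class. This is a hypothetical minimal example, not the actual Hadoop source: the class name, buffer size, and write size are invented for illustration. It shows how a synchronized setInput forces a monitor enter/exit on every small buffer copy, even when only one thread ever writes.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the synchronized-buffering pattern described
// in the issue; not the real SnappyCompressor code.
class ToyCompressor {
    private final ByteBuffer directBuf = ByteBuffer.allocateDirect(64 * 1024);

    // Every call acquires the compressor's monitor just to copy bytes
    // into the direct buffer, so a stream of small key+value writes
    // pays one lock round-trip per call even with a single writer.
    public synchronized void setInput(byte[] b, int off, int len) {
        directBuf.put(b, off, len);
    }

    public synchronized int bytesBuffered() {
        return directBuf.position();
    }
}

public class Demo {
    public static void main(String[] args) {
        ToyCompressor c = new ToyCompressor();
        // Simulate many small IFile-style key+value writes; each one
        // crosses a synchronized boundary that the JIT cannot flatten
        // away after inlining the caller's loop.
        byte[] kv = new byte[16];
        for (int i = 0; i < 1000; i++) {
            c.setInput(kv, 0, kv.length);
        }
        System.out.println(c.bytesBuffered());
    }
}
```

Removing the synchronized keyword (or replacing it with documented single-threaded ownership, as the patches here do) eliminates that per-write monitor traffic without changing what bytes land in the buffer.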
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)