[ https://issues.apache.org/jira/browse/KAFKA-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16028559#comment-16028559 ]
ASF GitHub Bot commented on KAFKA-5150:
---------------------------------------
GitHub user ijuma opened a pull request:
https://github.com/apache/kafka/pull/3164
KAFKA-5150: Reduce lz4 decompression overhead (without thread local buffers)
Temporary PR that has additional changes over
https://github.com/apache/kafka/pull/2967 for comparison.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ijuma/kafka kafka-5150-reduce-lz4-decompression-overhead
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/3164.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #3164
----
commit 950858e7fae838aecbf31c1ea201c3dbcd67a91d
Author: Xavier Léauté <[email protected]>
Date: 2017-05-03T17:01:07Z
small batch decompression benchmark
commit 0177665f3321e101ccbb2e95ec724125bf784e1c
Author: Xavier Léauté <[email protected]>
Date: 2017-05-03T20:40:45Z
KAFKA-5150 reduce lz4 decompression overhead
- reuse decompression buffers, keeping one per thread
- switch lz4 input stream to operate directly on ByteBuffers
- more tests with both compressible / incompressible data, multiple blocks, and various other combinations to increase code coverage
- fixes a bug that caused an EOFException instead of an invalid block size error for invalid incompressible blocks
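The first two bullets above are the core of the change: instead of allocating a fresh 64 KB buffer for every block, each thread keeps one decompression buffer and reuses it across input stream instantiations. A minimal sketch of that idea, using illustrative names and the lz4-java byte-array API rather than the actual KafkaLZ4BlockInputStream code:
{code}
import java.nio.ByteBuffer;

import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4SafeDecompressor;

// Sketch only: names are illustrative and do not mirror KafkaLZ4BlockInputStream.
public class ReusableLz4BlockDecoder {

    // Kafka's LZ4 frames default to 64 KB blocks, so one buffer of that size
    // per thread is enough to decode any single block.
    private static final int MAX_BLOCK_SIZE = 64 * 1024;

    // One decompression buffer per thread, allocated lazily and then reused
    // across stream instantiations instead of being re-allocated each time.
    private static final ThreadLocal<byte[]> DECOMPRESSION_BUFFER =
        ThreadLocal.withInitial(() -> new byte[MAX_BLOCK_SIZE]);

    private final LZ4SafeDecompressor decompressor =
        LZ4Factory.fastestInstance().safeDecompressor();

    // Decompresses a single compressed block into the thread-local buffer and
    // returns a read-only view over the decompressed bytes.
    public ByteBuffer decodeBlock(byte[] compressed, int offset, int length) {
        byte[] decompressed = DECOMPRESSION_BUFFER.get();
        int decompressedLength = decompressor.decompress(
            compressed, offset, length, decompressed, 0, decompressed.length);
        return ByteBuffer.wrap(decompressed, 0, decompressedLength).asReadOnlyBuffer();
    }
}
{code}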
commit 7b553afdd7a6a7d39b122c503cd643b915b9f556
Author: Xavier Léauté <[email protected]>
Date: 2017-05-04T17:26:52Z
remove unnecessary synchronized on reset/mark
commit b4c46ac15aa25e2c8ea6bb9392d700b02b61fccd
Author: Xavier Léauté <[email protected]>
Date: 2017-05-05T16:00:49Z
avoid exception when reaching end of batch
commit 77e1a1d47f9060430257821704045353ec77a8d0
Author: Xavier Léauté <[email protected]>
Date: 2017-05-18T16:47:01Z
remove reflection for LZ4 and add comments
commit e3b68668b6b2e0057bcd9a3de24ab8fce774d8d5
Author: Ismael Juma <[email protected]>
Date: 2017-05-26T15:45:13Z
Simplify DataLogInputStream.nextBatch
commit 213bb77b8a3862a325118492d658a4e58ffd3c29
Author: Ismael Juma <[email protected]>
Date: 2017-05-26T15:56:16Z
Minor comment improvement
commit 9bd10361d70de837ccb58a82b356b402d28bb94f
Author: Ismael Juma <[email protected]>
Date: 2017-05-29T14:18:13Z
Minor tweaks in `DefaultRecord.readFrom`
commit 178d4900a6c848a4f1b0aa0ae68aaa24885f36bc
Author: Ismael Juma <[email protected]>
Date: 2017-05-29T15:22:01Z
Cache decompression buffers in Fetcher instead of thread-locals
This means that only the consumer benefits for now, which is the
most important case. For the server, we should consider how this
fits with KIP-72.
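A rough sketch of what caching buffers at the Fetcher level can look like; the class and method names are assumptions for illustration, not the exact BufferSupplier API introduced by this PR:
{code}
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-Fetcher buffer cache. The consumer's Fetcher is effectively
// single-threaded, so this sketch skips synchronization: released buffers go
// onto a free list and are handed back out on the next decompression.
public class CachingBufferSupplier {

    // Free buffers grouped by capacity so requests for different block sizes
    // never receive an undersized buffer.
    private final Map<Integer, Deque<ByteBuffer>> freeBuffers = new HashMap<>();

    public ByteBuffer get(int capacity) {
        Deque<ByteBuffer> free = freeBuffers.get(capacity);
        if (free == null || free.isEmpty())
            return ByteBuffer.allocate(capacity);
        return free.pollFirst();
    }

    public void release(ByteBuffer buffer) {
        buffer.clear();
        freeBuffers.computeIfAbsent(buffer.capacity(), k -> new ArrayDeque<>()).addFirst(buffer);
    }
}
{code}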
commit c10b310cc13f5ec110cbaed8fb72f24774c2a2cd
Author: Ismael Juma <[email protected]>
Date: 2017-05-29T15:23:19Z
Tweaks to `KafkaLZ4*Stream` classes and `RecordBatchIterationBenchmark`
commit d93444c147430a62f5e9d16492ad14d2c6a0dd38
Author: Ismael Juma <[email protected]>
Date: 2017-05-29T18:18:23Z
Trivial style tweaks to KafkaLZ4Test
commit 419500e848b943f20d9bce1790fe40e64080ae29
Author: Ismael Juma <[email protected]>
Date: 2017-05-29T18:38:55Z
Provide a `NO_CACHING` BufferSupplier
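For call sites that should keep the old allocate-per-use behaviour, a NO_CACHING supplier is essentially a pass-through; a minimal sketch with illustrative naming:
{code}
import java.nio.ByteBuffer;

// Sketch of a NO_CACHING supplier: get() always allocates a fresh buffer and
// release() is a no-op, leaving reclamation to the garbage collector.
public class NoCachingBufferSupplier {

    public ByteBuffer get(int capacity) {
        return ByteBuffer.allocate(capacity);
    }

    public void release(ByteBuffer buffer) {
        // intentionally empty: nothing is pooled, so there is nothing to return
    }
}
{code}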
----
> LZ4 decompression is 4-5x slower than Snappy on small batches / messages
> ------------------------------------------------------------------------
>
> Key: KAFKA-5150
> URL: https://issues.apache.org/jira/browse/KAFKA-5150
> Project: Kafka
> Issue Type: Bug
> Components: consumer
> Affects Versions: 0.8.2.2, 0.9.0.1, 0.11.0.0, 0.10.2.1
> Reporter: Xavier Léauté
> Assignee: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> I benchmarked RecordsIterator.DeepRecordsIterator instantiation on small batch
> sizes with small messages after observing some performance bottlenecks in the
> consumer.
> For batch sizes of 1 with messages of 100 bytes, LZ4 heavily underperforms
> compared to Snappy (see benchmark below). Most of our time is currently spent
> allocating memory blocks in KafkaLZ4BlockInputStream, due to the fact that we
> default to larger 64kB block sizes. Some quick testing shows we could improve
> performance by almost an order of magnitude for small batches and messages if
> we re-used buffers between instantiations of the input stream.
> [Benchmark Code|https://github.com/xvrl/kafka/blob/small-batch-lz4-benchmark/clients/src/test/java/org/apache/kafka/common/record/DeepRecordsIteratorBenchmark.java#L86]
> {code}
> Benchmark                                           (compressionType)  (messageSize)   Mode  Cnt       Score       Error  Units
> DeepRecordsIteratorBenchmark.measureSingleMessage                 LZ4            100  thrpt   20   84802.279 ±  1983.847  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage              SNAPPY            100  thrpt   20  407585.747 ±  9877.073  ops/s
> DeepRecordsIteratorBenchmark.measureSingleMessage                NONE            100  thrpt   20  579141.634 ± 18482.093  ops/s
> {code}