[ https://issues.apache.org/jira/browse/KAFKA-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526624#comment-13526624 ]
David Arthur edited comment on KAFKA-374 at 12/7/12 6:38 PM:
-------------------------------------------------------------
Not sure how you guys feel about having Java in the source tree, but I attached
a patch with the pure Java implementation (and the other stuff from [~jkreps]'s
original patch).
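
For reference, the pure-Java approach is typically a table-driven CRC-32 over
the reflected polynomial 0xEDB88320. A minimal sketch of that idea follows;
this is illustrative only, not the code in the attached patch, and the class
name is made up:

{code:java}
// Minimal table-driven CRC-32 sketch (reflected polynomial 0xEDB88320).
// Illustrates the general pure-Java approach; not the attached patch.
public class PureJavaCrc32Sketch {
    private static final int[] TABLE = new int[256];
    static {
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int j = 0; j < 8; j++) {
                c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
            }
            TABLE[i] = c;
        }
    }

    private int crc = 0xFFFFFFFF;

    public void update(byte[] buf, int off, int len) {
        // One table lookup per input byte, no JNI crossing.
        for (int i = off; i < off + len; i++) {
            crc = TABLE[(crc ^ buf[i]) & 0xFF] ^ (crc >>> 8);
        }
    }

    public long getValue() {
        // Invert and mask to an unsigned 32-bit value, matching the
        // contract of java.util.zip.CRC32#getValue().
        return (~crc) & 0xFFFFFFFFL;
    }
}
{code}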
> Move to java CRC32 implementation
> ---------------------------------
>
> Key: KAFKA-374
> URL: https://issues.apache.org/jira/browse/KAFKA-374
> Project: Kafka
> Issue Type: New Feature
> Components: core
> Affects Versions: 0.8
> Reporter: Jay Kreps
> Priority: Minor
> Labels: newbie
> Attachments: KAFKA-374-draft.patch, KAFKA-374.patch
>
>
> We keep a per-record CRC32. This is a fairly cheap algorithm, but the Java
> implementation uses JNI and it seems to be a bit expensive for small records.
> I have seen this before in Kafka profiles, and I noticed it in another
> application I was working on. Basically, with small records the native
> implementation can only checksum at < 100 MB/sec. Hadoop has done some
> analysis of this and replaced it with a Java implementation that is 2x faster
> for large values and 5-10x faster for small values. Details are in
> HADOOP-6148. We should do a quick read/write benchmark on log and message set
> iteration and see if this improves things.
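
A rough microbenchmark along the lines suggested above might look like the
following sketch; the 64-byte record size, iteration count, and class name are
arbitrary illustrative choices:

{code:java}
import java.util.Random;
import java.util.zip.CRC32;

// Checksums many small payloads with the JNI-backed java.util.zip.CRC32 and
// reports throughput, to reproduce the small-record case described above.
public class Crc32SmallRecordBench {
    public static void main(String[] args) {
        byte[] record = new byte[64];       // one small record
        new Random(42).nextBytes(record);
        long iterations = 5000000L;
        long sink = 0;                      // keep the JIT from eliding the work
        long start = System.nanoTime();
        for (long i = 0; i < iterations; i++) {
            CRC32 crc = new CRC32();
            crc.update(record, 0, record.length);
            sink += crc.getValue();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mbPerSec = (iterations * record.length) / (seconds * 1024 * 1024);
        System.out.printf("%.1f MB/sec (sink=%d)%n", mbPerSec, sink);
    }
}
{code}

Swapping a pure-Java implementation into the same loop would give a direct
small-record comparison between the two.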