GitHub user onurkaraman opened a pull request:
https://github.com/apache/kafka/pull/1394
KAFKA-3718: propagate all KafkaConfig __consumer_offsets configs to
OffsetConfig instantiation
Kafka has two configurable compression codecs: the one used by the client
(source codec) and the one ultimately used when storing into the log (target
codec). The target codec defaults to KafkaConfig.compressionType and can be
dynamically overridden per topic through ZooKeeper.
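For illustration only, here is a rough self-contained sketch (not the actual
broker code) of how the target codec gets picked: a compression.type of
"producer" keeps the client's codec, while any concrete codec recompresses
the batch.

    object TargetCodecSketch {
      sealed trait Codec
      case object NoCompressionCodec extends Codec
      case object GZIPCompressionCodec extends Codec
      case object ProducerCompressionCodec extends Codec // "compression.type=producer"

      // configuredCodec comes from KafkaConfig.compressionType or the dynamic
      // per-topic override; sourceCodec is whatever the client used.
      def targetCodec(configuredCodec: Codec, sourceCodec: Codec): Codec =
        configuredCodec match {
          case ProducerCompressionCodec => sourceCodec // keep the client's codec
          case other                    => other       // recompress with the configured codec
        }
    }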
The GroupCoordinator appends group membership information into the
__consumer_offsets topic by (sketched below):
1. building a message with the group membership information
2. building a MessageSet containing that single message, compressed with the
source codec
3. calling log.append on the MessageSet
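Roughly, using stand-in types rather than the real Message/MessageSet classes:

    object GroupMetadataAppendSketch {
      sealed trait CompressionCodec
      case object NoCompressionCodec extends CompressionCodec

      case class Message(key: Array[Byte], value: Array[Byte])
      case class MessageSet(codec: CompressionCodec, messages: Seq[Message])

      def append(log: MessageSet => Unit,
                 key: Array[Byte],
                 value: Array[Byte],
                 sourceCodec: CompressionCodec): Unit = {
        val message    = Message(key, value)                    // step 1
        val messageSet = MessageSet(sourceCodec, Seq(message))  // step 2
        log(messageSet)                                         // step 3: log.append
      }
    }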
Without this patch, KafkaConfig.offsetsTopicCompressionCodec is not propagated
to the OffsetConfig instantiation, so GroupMetadataManager uses
NoCompressionCodec as the source codec when building the MessageSet. Suppose a
group has enough membership information that the resulting message exceeds
KafkaConfig.messageMaxBytes before compression but would fall below that
threshold after compression with the intended source codec. Even if
__consumer_offsets has been dynamically configured with a compression codec,
log.append throws RecordTooLargeException during analyzeAndValidateMessageSet
because the message is unexpectedly uncompressed instead of compressed with
the source codec defined by KafkaConfig.offsetsTopicCompressionCodec.
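The fix is to pass the codec through when OffsetConfig is built. A minimal
sketch with stand-in case classes (the real KafkaConfig and OffsetConfig carry
many more settings):

    object OffsetConfigPropagationSketch {
      sealed trait CompressionCodec
      case object NoCompressionCodec extends CompressionCodec
      case object GZIPCompressionCodec extends CompressionCodec

      case class KafkaConfig(offsetsTopicCompressionCodec: CompressionCodec)
      case class OffsetConfig(offsetsTopicCompressionCodec: CompressionCodec = NoCompressionCodec)

      // Before: the codec was never passed, so the default (NoCompressionCodec)
      // was always used as the source codec.
      def offsetConfigBefore(config: KafkaConfig): OffsetConfig =
        OffsetConfig()

      // After: the value configured on the broker is propagated explicitly.
      def offsetConfigAfter(config: KafkaConfig): OffsetConfig =
        OffsetConfig(offsetsTopicCompressionCodec = config.offsetsTopicCompressionCodec)
    }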
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/onurkaraman/kafka KAFKA-3718
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/1394.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1394
----
commit d65726de2e4a196837de7bb2e11f397482826dca
Author: Onur Karaman <[email protected]>
Date: 2016-05-13T06:43:02Z
propagate all KafkaConfig __consumer_offsets configs to OffsetConfig
instantiation
----