That's great news. Will be keeping an eye on the release.
On Mon, Aug 8, 2016 at 10:12 AM Jacob Maes wrote:
Hey David,
I think that behavior is meant to prevent an issue on the Kafka 0.8
Brokers. Samza 10.1 allows compression on log-compacted topics, but you'll
need to make sure you're using Kafka 0.9 or higher on the Brokers.
-Jake
On Fri, Aug 5, 2016 at 10:57 PM, David Yu wrote:
I'm reporting back my observations after enabling compression.
Looks like compression is not doing anything. I'm still seeing
"compression-rate-avg=1.0" and the same "record-size-avg" from JMX
"kafka.producer" metrics.
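For reference, the producer's compression-rate-avg metric is the average ratio of compressed batch size to uncompressed batch size, so a value of 1.0 means the batches are not shrinking at all. A quick illustration of that ratio (using zlib for the demonstration rather than snappy, so it runs with only the standard library):

```python
import zlib

# A repetitive payload, like many changelog values, compresses well.
payload = b"some repetitive changelog value " * 100

compressed = zlib.compress(payload)

# Kafka reports compression rate as compressed size / uncompressed size,
# so lower is better and 1.0 means no effective compression.
rate = len(compressed) / len(payload)
print(f"compression rate: {rate:.2f}")
```

If the metric stays pinned at 1.0 after setting compression.type, that suggests the setting is not reaching the underlying Kafka producer at all.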
I did set the following:
systems.kafka.producer.compression.type=snappy
Am I
Great. Thx.
On Wed, Aug 3, 2016 at 1:42 PM Jacob Maes wrote:
Hey David,
what gets written to the changelog topic
The changelog gets the same value as the store, which is the serialized
form of the key and value. The serdes for the store are configured with the
properties:
stores.store-name.key.serde
stores.store-name.msg.serde
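Putting those properties together, a minimal store configuration might look like the sketch below. The store name, serde names, and the choice of RocksDB and JSON serdes are illustrative assumptions, not taken from this thread:

```properties
# Hypothetical store definition -- names and serde choices are placeholders.
stores.my-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
stores.my-store.changelog=kafka.my-store-changelog
stores.my-store.key.serde=string
stores.my-store.msg.serde=json

# Register the serdes referenced above.
serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory
serializers.registry.json.class=org.apache.samza.serializers.JsonSerdeFactory
```

Whatever bytes the configured serde produces for a key and value are exactly what land on the changelog topic.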
I'm trying to understand what gets written to the changelog topic. Is it
just the serialized value of the particular state store entry? If I want to
compress the changelog topic, do I enable that from the producer?
The reason I'm asking is that we are seeing producer throughput issues and