I checked the latest .log files and the earliest. They all look the same, with human-readable message payloads.
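A way to double-check, besides less, is the DumpLogSegments tool that ships with Kafka; it should print the compression codec of each batch. The segment path and topic directory below are just examples:

    # Dump one segment and print batch metadata plus payloads;
    # the "compresscodec" field in the output shows whether lz4 was applied.
    bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
        --files /var/lib/kafka/measurements-0/00000000000000000000.log \
        --print-data-log --deep-iteration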
I tried setting LZ4, but that leads to a fatal error on startup:

[2017-12-29 13:55:15,393] FATAL [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.config.ConfigException: Invalid value LZ4 for configuration compression.type: String must be one of: uncompressed, snappy, lz4, gzip, producer
        at org.apache.kafka.common.config.ConfigDef$ValidString.ensureValid(ConfigDef.java:897)
        at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:469)
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
        at kafka.log.LogConfig.<init>(LogConfig.scala:68)
        at kafka.log.LogManager$.apply(LogManager.scala:783)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
        at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:112)
        at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:58)

So I switched back to lz4 immediately. No solution so far.

If broker-side compression is applied correctly, does the broker then recompress new incoming messages/batches no matter what compression the producer applies? Or are the producer settings still relevant even then?

Cheers,
Sven

----------------
Sent: Friday, 29 December 2017 at 14:45
From: Manikumar <manikumar.re...@gmail.com>
To: users@kafka.apache.org
Subject: Re: Problem to apply Broker-side lz4 compression even in fresh setup

Was this config added after sending some data? Can you verify the latest logs?
This won't recompress existing messages; it only applies to new messages.

On Fri, Dec 29, 2017 at 6:59 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> Looking at https://issues.apache.org/jira/browse/KAFKA-5686 , it seems you
> should have specified LZ4.
>
> FYI
>
> On Fri, Dec 29, 2017 at 5:00 AM, Sven Ludwig <s_lud...@gmx.de> wrote:
>
> > Hi,
> >
> > we thought we had lz4 applied as broker-side compression on our Kafka
> > cluster for storing measurements, but today I looked into the individual
> > .log files and I was able to read all the measurements in plain text
> > just by using less on the command line. For me this is an indicator that
> > the batches of messages are actually not compressed with lz4. Our setup
> > was started from scratch, i.e. without pre-existing topics, there are no
> > topic-level overrides, and it is based on Confluent Platform 4.0.0.
> >
> > 1. Do we perhaps need to make sure in every producer that it does not
> > already compress batches when sending them to the broker? Up to today I
> > thought that if the producer compresses, but the broker has
> > compression.type lz4, the broker would recompress as lz4.
> >
> > 2. When starting the broker, its log statements show the line:
> > compression.type = lz4
> > Is this correct, or does the value need to be 'lz4' with apostrophes?
> >
> > 3. Any other hints or ideas about what could be wrong?
> >
> > Generally we would like to enforce lz4 broker-side compression. We do
> > not need to compress data coming from producers, since the network link
> > is not the problem. We just need to save disk space.
> >
> > Please help us if you can, and have a good New Year's Eve :-)
> >
> > Thanks,
> > Sven
> >
>
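For completeness, a minimal sketch of the broker-side setting, assuming a plain Confluent Platform 4.0.0 / Apache Kafka 1.0 installation; the topic name and ZooKeeper address are examples only. As the stack trace above shows, the broker config only accepts the lowercase spelling:

    # server.properties -- broker-wide default for all topics; lowercase value required
    compression.type=lz4

    # Optional per-topic override (topic configs in Kafka 1.0 go through ZooKeeper);
    # "measurements" is an example topic name
    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
        --entity-type topics --entity-name measurements \
        --add-config compression.type=lz4

As far as I understand, with compression.type set to a concrete codec rather than "producer", the broker is expected to rewrite newly arriving batches to that codec regardless of what the producer used, so producer-side compression can stay at none if only disk space matters.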