Is running /opt/confluent-4.1.1/bin/kafka-run-class kafka.tools.DumpLogSegments 
--files <filename.log> --print-data-log the correct way to verify that the 
Kafka logs are compressed when using compression.type=snappy?

I have set compression.type=snappy in the properties file for my Kafka Streams 
application. When I run /opt/confluent-4.1.1/bin/kafka-run-class 
kafka.tools.DumpLogSegments … the output shows compresscodec: SNAPPY, yet the 
message payloads are printed in readable form. If I 'head' or 'tail' the .log 
file directly, I see unreadable binary characters, which suggests the file 
really is compressed on disk.
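
For context, this is roughly how the compression setting is applied in my 
Streams configuration (a simplified sketch, not my actual code; the class 
name, application id, and broker address are placeholders):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsCompressionConfig {
        public static Properties buildConfig() {
            Properties props = new Properties();
            // Placeholder values -- substitute the real application id and brokers.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Forward compression.type=snappy to the internal producer via the
            // "producer." prefix, so records written by the Streams app are
            // snappy-compressed in the topic's log segments.
            props.put(StreamsConfig.producerPrefix(ProducerConfig.COMPRESSION_TYPE_CONFIG),
                      "snappy");
            return props;
        }
    }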

My understanding is that when reading compressed logs through the consumer 
API, the records are decompressed automatically, without any additional work 
in my code. Does kafka-run-class kafka.tools.DumpLogSegments decompress the 
records as well, or is it a direct dump of the compressed log?
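
For completeness, the consumer I have in mind is just a plain one like the 
sketch below (topic name, group id, and broker address are placeholders); 
there is no explicit decompression step anywhere in it, the client handles 
snappy-compressed batches transparently:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PlainConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // placeholder
                // No decompression code: the consumer client decompresses
                // compressed record batches before handing them to us.
                ConsumerRecords<String, String> records = consumer.poll(5000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }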

Meeiling Bradley


