[ 
https://issues.apache.org/jira/browse/KAFKA-17484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879394#comment-17879394
 ] 

Greg Harris commented on KAFKA-17484:
-------------------------------------

Hi [~georgios], thanks for the ticket!

Are you able to reproduce this issue on the latest release, 3.8.0? Version 1.1.1 
is well out of support at this point, and a fix for this could only be applied 
to a modern version of Kafka.

Thanks!

> Logs are deleted in Kafka 1.1.1 if logs.segment.bytes is changed
> ----------------------------------------------------------------
>
>                 Key: KAFKA-17484
>                 URL: https://issues.apache.org/jira/browse/KAFKA-17484
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 1.1.1
>         Environment: Kafka 1.1.1
>            Reporter: Georgios Kousouris
>            Priority: Major
>             Fix For: 2.0.0
>
>
> On Kafka 1.1.1 we have observed the following unexpected behaviour: if there is a 
> topic with {{cleanup.policy=compact}} and the server-wide 
> {{log.segment.bytes}} setting is modified, the compacted topic's log segments are deleted.
> Steps to reproduce the bug:
>  
> {code:bash}
> 1. Download the Kafka 1.1.1 release archive
> $ curl -O https://archive.apache.org/dist/kafka/1.1.1/kafka_2.11-1.1.1.tgz
> 2. Extract it
> $ tar -xzf kafka_2.11-1.1.1.tgz
> 3. Start Zookeeper
> $ bin/zookeeper-server-start.sh config/zookeeper.properties
> 4. Adjust config/server.properties with these properties
> > log.retention.ms=10000
> > log.segment.bytes=100
> 5. Start the Kafka broker
> $ bin/kafka-server-start.sh config/server.properties
> 6. Create a test-topic
> $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
> --replication-factor 1 --partitions 1 --topic test-topic --config 
> cleanup.policy=compact
> 7. Produce some dummy data to the topic
> $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 
> test-topic --property "parse.key=true" --property "key.separator=:"
> >key1:this is a test
> >key2:another test
> 8. Verify that the logs have rolled over:
> $ ls -l /tmp/kafka-logs/test-topic-0
> 9. Wait for 10 seconds (our log.retention.ms value) and change 
> log.segment.bytes
> $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type 
> brokers --entity-default --alter --add-config log.segment.bytes=95
> 10. Consume the data from the topic:
> $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
> test-topic --from-beginning
> # Output should be:
> > this is a test
> > another test
> 11. Wait for at least 10 seconds and change it again:
> $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type 
> brokers --entity-default --alter --add-config log.segment.bytes=105
> 12. Consume the data from the topic:
> $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
> test-topic --from-beginning
> # Output is empty now
> {code}
> Even though this leads to *loss of data*, I am not marking it as Critical 
> because the Kafka version it affects is quite old.
>  
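For what it's worth, the reported behaviour is what one would expect if, after the dynamic config change, the broker applied time-based retention to the topic as though its policy were {{delete}} rather than {{compact}}. The sketch below is not Kafka source code, just a toy Python model of that distinction; all names ({{segments_to_delete}}, the segment dicts) are hypothetical:

```python
def segments_to_delete(segments, retention_ms, now_ms, cleanup_policy):
    """Toy model of time-based retention (not Kafka code).

    A segment is eligible for deletion only under the 'delete' policy,
    once its last-modified timestamp is older than retention_ms.
    Under 'compact', retention must never delete segments.
    """
    if cleanup_policy != "delete":
        return []
    return [s for s in segments if now_ms - s["mtime_ms"] > retention_ms]


# A segment last modified 60s ago, with a 10s retention window:
now = 1_000_000
old_segment = {"name": "00000000000000000000.log", "mtime_ms": now - 60_000}

# Correct behaviour for a compacted topic: nothing is deleted.
assert segments_to_delete([old_segment], 10_000, now, "compact") == []

# The bug report is consistent with the policy being treated as 'delete':
assert segments_to_delete([old_segment], 10_000, now, "delete") == [old_segment]
```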



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
