[jira] [Commented] (KAFKA-5770) AdminClient.deleteTopics future complete but topic is still here

2017-09-15 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167530#comment-16167530
 ] 

Vincent Maurin commented on KAFKA-5770:
---

The behavior hasn't changed with version 0.11.0.1.

> AdminClient.deleteTopics future complete but topic is still here
> 
>
> Key: KAFKA-5770
> URL: https://issues.apache.org/jira/browse/KAFKA-5770
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Vincent Maurin
> Attachments: Main.java
>
>
> After running some tests, it appears that the futures returned by a 
> deleteTopics command are completed even though the topic is still present on 
> the broker.
> If this is the expected behavior, it should be documented accordingly, but it 
> is not very convenient for integration tests, for example when we create and 
> delete topics in each test.
> I am attaching an example Java file that creates and deletes a bunch of 
> topics in a loop. Usually I get an error on the second loop saying that the 
> topic already exists.
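Topic deletion is asynchronous on the broker side, so the futures completing only means the request was accepted, not that the topic is gone. For integration tests, one workaround is to poll until the topic really disappears before re-creating it. A minimal sketch of that retry loop (`FakeAdmin` is a hypothetical stand-in for an admin client; against a real cluster you would poll `AdminClient.listTopics()` instead):

```python
import time

def wait_until_absent(list_topics, topic, timeout=10.0, interval=0.1):
    """Poll list_topics() until `topic` no longer appears, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if topic not in list_topics():
            return True
        time.sleep(interval)
    return False

class FakeAdmin:
    """Hypothetical stand-in for a broker that deletes topics asynchronously."""
    def __init__(self):
        self._topics = {"test-topic"}
        self._calls = 0

    def list_topics(self):
        self._calls += 1
        if self._calls >= 3:  # the topic disappears after a short delay
            self._topics.discard("test-topic")
        return set(self._topics)

admin = FakeAdmin()
assert wait_until_absent(admin.list_topics, "test-topic")
```

With a real AdminClient the same loop lets a test suite safely delete and re-create topics between tests instead of trusting the future alone.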



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5770) AdminClient.deleteTopics future complete but topic is still here

2017-08-23 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138120#comment-16138120
 ] 

Vincent Maurin commented on KAFKA-5770:
---

Also, listing topics after deleting them shows an empty list.
Maybe it is related to a broker-side bug like this one:
https://issues.apache.org/jira/browse/KAFKA-5752






[jira] [Created] (KAFKA-5770) AdminClient.deleteTopics future complete but topic is still here

2017-08-23 Thread Vincent Maurin (JIRA)
Vincent Maurin created KAFKA-5770:
-

 Summary: AdminClient.deleteTopics future complete but topic is 
still here
 Key: KAFKA-5770
 URL: https://issues.apache.org/jira/browse/KAFKA-5770
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Vincent Maurin
 Attachments: Main.java

After running some tests, it appears that the futures returned by a deleteTopics 
command are completed even though the topic is still present on the broker.
If this is the expected behavior, it should be documented accordingly, but it is 
not very convenient for integration tests, for example when we create and delete 
topics in each test.

I am attaching an example Java file that creates and deletes a bunch of topics 
in a loop. Usually I get an error on the second loop saying that the topic 
already exists.






[jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2017-07-24 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098276#comment-16098276
 ] 

Vincent Maurin commented on KAFKA-5630:
---

And I haven't noticed any other issues so far.
After a check with the DumpLogSegments tool, it appears that two partitions were 
impacted, both on the same topic. I had log cleaner errors for these two 
partitions (the same as the consumer one).

> Consumer poll loop over the same record after a CorruptRecordException
> --
>
> Key: KAFKA-5630
> URL: https://issues.apache.org/jira/browse/KAFKA-5630
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: Vincent Maurin
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite 
> consumption loop of the same record, i.e., each call to poll returns the same 
> record to me 500 times (500 is my max.poll.records). I am using the Java 
> client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from 
> `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get an `org.apache.kafka.common.errors.CorruptRecordException: Record 
> size is less than the minimum record overhead (14)`.
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test 
> block in `Fetcher.PartitionRecords.nextFetchedRecord()` to always return the 
> last record.
> I guess the corruption problem is similar to 
> https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the 
> client is probably not the expected one.





[jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2017-07-24 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098263#comment-16098263
 ] 

Vincent Maurin commented on KAFKA-5630:
---

[~ijuma] thank you for your feedback. Regarding the consumer, I have tested with 
version 0.10.2.1 and it actually throws the error when calling "poll". Then it 
sounds fair enough to skip the record with seek. But with 0.11, I don't get any 
error; a call to poll just returns the same record duplicated max.poll.records 
times. The logic to seek to the next offset is then more complicated than 
reacting to the exception: it seems I have to compare the records returned by 
poll and advance my offset if they are all equal? Or am I misusing the client? 
(It is a manually assigned partition use case, without committing offsets to 
Kafka; I have tried to follow the recommendations in the KafkaConsumer javadoc 
for that.)
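For reference, the 0.10.2-style workaround described above (poll throws, then seek one offset past the bad record) can be sketched like this. `FakeConsumer` and `CorruptRecordError` are hypothetical stand-ins simulating a partition with one corrupt offset, not the real KafkaConsumer API:

```python
class CorruptRecordError(Exception):
    """Stand-in for org.apache.kafka.common.errors.CorruptRecordException."""

class FakeConsumer:
    """Hypothetical consumer over a log with one corrupt record."""
    def __init__(self, log, corrupt_offset):
        self.log = log                    # list of record values, index = offset
        self.corrupt_offset = corrupt_offset
        self.offset = 0

    def position(self):
        return self.offset

    def seek(self, offset):
        self.offset = offset

    def poll(self):
        if self.offset == self.corrupt_offset:
            raise CorruptRecordError(
                "Record size is less than the minimum record overhead (14)")
        if self.offset >= len(self.log):
            return []                     # end of partition
        record = self.log[self.offset]
        self.offset += 1
        return [record]

def consume_all(consumer):
    """Drain the consumer, seeking one offset past any corrupt record."""
    out = []
    while True:
        try:
            batch = consumer.poll()
        except CorruptRecordError:
            consumer.seek(consumer.position() + 1)  # skip the corrupt record
            continue
        if not batch:
            return out
        out.extend(batch)

records = consume_all(FakeConsumer(["a", "b", "c", "d"], corrupt_offset=2))
# -> ['a', 'b', 'd']  (the record at offset 2 is skipped)
```

The point of the report is that under 0.11 the exception never surfaces from poll, so this catch-and-seek pattern has nothing to react to.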






[jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2017-07-24 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098154#comment-16098154
 ] 

Vincent Maurin commented on KAFKA-5630:
---

It is:
```
offset: 210648 position: 172156054 CreateTime: 1499416798791 isvalid: true 
size: 610 magic: 1 compresscodec: NONE crc: 1846714374
offset: 210649 position: 172156664 CreateTime: 1499416798796 isvalid: true 
size: 586 magic: 1 compresscodec: NONE crc: 3995473502
offset: 210650 position: 172157250 CreateTime: 1499416798798 isvalid: true 
size: 641 magic: 1 compresscodec: NONE crc: 2352501239
Exception in thread "main" 
org.apache.kafka.common.errors.CorruptRecordException: Record size is smaller 
than minimum record overhead (14).
```






[jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

2017-07-24 Thread Vincent Maurin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098129#comment-16098129
 ] 

Vincent Maurin commented on KAFKA-5630:
---

A rolling upgrade from 0.10.2.0 was also performed a couple of weeks ago. 
Could that be a reason for the corruption problem?



