[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-09-27 Thread Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939861#comment-16939861
 ] 

Sophie Blee-Goldman commented on KAFKA-8649:


[~ferbncode] [~guozhang] [~mjsax] I think I happened across the bug responsible 
for this: consider that during the rolling bounce, some members are still on the 
old bytecode (2.0) and subscription version (v3) while others have been upgraded 
to 2.1 and v4. If the leader is on the higher version, everyone gets an 
assignment encoded using the min version (v3) but containing the leader's 
version as v4. The members still on 2.0 will see that their used version is 
less than the leader's, and blindly bump it to v4 in 
`upgradeSubscriptionVersionIfNeeded`; then, when they try to encode their 
subscription at the start of the next rebalance, this exception is thrown 
because they don't yet know what v4 is.

Two ideas to fix this:
 # Don't upgrade beyond the version you support, and in `onAssignment` do not 
set the version probing code if you were not a "future consumer", i.e., did not 
send a subscription version higher than what the leader supports (this part is 
necessary to avoid getting stuck in a rebalancing loop).
 # Keep track of which consumers sent which versions, and send back an 
assignment encoded with min(consumerVersion, leaderVersion) (see the sketch 
below).
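For illustration, here is a minimal sketch of idea 2, as a hypothetical helper 
(the names are illustrative, not the actual StreamsPartitionAssignor code):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class VersionedAssignmentSketch {

    // The leader remembers which subscription version each member used and
    // encodes every member's assignment with min(memberVersion, leaderVersion),
    // so a 2.0 (v3) member is never told to bump itself to v4.
    static Map<String, Integer> assignmentVersions(final Map<String, Integer> usedVersions,
                                                   final int leaderVersion) {
        final Map<String, Integer> versions = new HashMap<>();
        for (final Map.Entry<String, Integer> member : usedVersions.entrySet()) {
            versions.put(member.getKey(), Math.min(member.getValue(), leaderVersion));
        }
        return versions;
    }
}
{code}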

> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Priority: Major
>
> While doing a rolling update of a cluster of nodes running Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0), fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.<init>(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8700) Flaky Test QueryableStateIntegrationTest#queryOnRebalance

2019-09-27 Thread xujianhai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939833#comment-16939833
 ] 

xujianhai commented on KAFKA-8700:
--

If this is indeed a problem, maybe I can try to fix it.

> Flaky Test QueryableStateIntegrationTest#queryOnRebalance
> -
>
> Key: KAFKA-8700
> URL: https://issues.apache.org/jira/browse/KAFKA-8700
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3807/tests]
> {quote}java.lang.AssertionError: Condition not met within timeout 12. 
> waiting for metadata, store and value to be non null
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:353)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:292)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:382){quote}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-09-27 Thread xujianhai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939832#comment-16939832
 ] 

xujianhai commented on KAFKA-8059:
--

Maybe I can try to fix this problem.

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8958) Fix Kafka Streams JavaDocs with regard to used Serdes

2019-09-27 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-8958:
--

 Summary: Fix Kafka Streams JavaDocs with regard to used Serdes
 Key: KAFKA-8958
 URL: https://issues.apache.org/jira/browse/KAFKA-8958
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


In older releases, Kafka Streams applied operator-specific Serde overwrites 
only in place. In newer releases, Kafka Streams tries to re-use Serdes more 
"aggressively" by pushing serde information downstream if the key and/or value 
did not change.

However, we never updated the JavaDocs accordingly. For example 
`KStream#through(String topic)` JavaDocs say:
{code:java}
Materialize this stream to a topic and creates a new {@code KStream} from the 
topic using default serializers, deserializers, and producer's {@link 
DefaultPartitioner}.
{code}
The JavaDocs don't take into account that Serdes might have been set further 
upstream, in which case the defaults from the config would not be used.

`KStream#through()` is just one example. We should revise the JavaDocs of all 
operators accordingly (i.e., KStream, KGroupedStream, TimeWindowedKStream, 
SessionWindowedKStream, KTable, and KGroupedTable). A sketch of the behavior is 
shown below.
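For illustration, a minimal sketch of the serde propagation (topic names and 
serdes are placeholders, not from the ticket):
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class SerdePropagationSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Explicit serdes are set upstream...
        final KStream<String, Long> stream = builder.stream(
            "input-topic",
            Consumed.with(Serdes.String(), Serdes.Long()));
        stream
            // ...filter() changes neither key nor value, so newer releases
            // push the String/Long serdes downstream...
            .filter((key, value) -> value != null)
            // ...and through() then uses them, not the defaults from the
            // config, despite what the current JavaDocs claim.
            .through("repartition-topic")
            .to("output-topic");
    }
}
{code}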



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7052) ExtractField SMT throws NPE - needs clearer error message

2019-09-27 Thread Gunnar Morling (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939744#comment-16939744
 ] 

Gunnar Morling commented on KAFKA-7052:
---

Looking into this; it's an interesting case. The problem arises when the SMT 
gets one of the [schema change 
events|https://debezium.io/documentation/reference/0.9/connectors/mysql.html#schema-change-topic]
 from the Debezium MySQL connector, which don't have the same primary key 
structure as, for instance, your customers table. So I think it's actually not 
your intention to apply the SMT to these messages to begin with.

The question is how to deal with this case. I could see these options when 
encountering a message that doesn't have the given field:

* raise a more meaningful exception than an NPE (pretty disruptive)
* return null
* leave the key/value unchanged

I think for your use case, the last option is the most useful one, but 
returning null might also make sense in other cases. Perhaps this should be 
controlled by a configuration option? A sketch of the last option is shown 
below.
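For illustration, a sketch of the last option as a hypothetical helper (not the 
actual ExtractField code):
{code:java}
import java.util.Map;

public class ExtractFieldSketch {

    // Extract the field if it is present; otherwise leave the original
    // key/value unchanged instead of hitting an NPE.
    static Object extractOrKeep(final Object value, final String fieldName) {
        if (value instanceof Map) {
            final Map<?, ?> map = (Map<?, ?>) value;
            if (map.containsKey(fieldName)) {
                return map.get(fieldName); // normal extraction path
            }
        }
        return value; // field absent: pass the record through unchanged
    }
}
{code}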

> ExtractField SMT throws NPE - needs clearer error message
> -
>
> Key: KAFKA-7052
> URL: https://issues.apache.org/jira/browse/KAFKA-7052
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Robin Moffatt
>Priority: Major
>
> With the following Single Message Transform: 
> {code:java}
> "transforms.ExtractId.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
> "transforms.ExtractId.field":"id"{code}
> Kafka Connect errors with : 
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:61)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38){code}
> There should be a better error message here, identifying the reason for the 
> NPE.
> Version: Confluent Platform 4.1.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7263) Container exception java.lang.IllegalStateException: Coordinator selected invalid assignment protocol: null

2019-09-27 Thread Larry Li (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939738#comment-16939738
 ] 

Larry Li commented on KAFKA-7263:
-

We are experiencing exactly the same exception when polling from a topic. Does 
anyone have a workaround? Thanks.

Caused by: java.lang.IllegalStateException: Coordinator selected invalid 
assignment protocol: null
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:217)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
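One thing worth checking (an assumption based on where the exception is thrown, 
not a confirmed root cause) is that every consumer in the group is configured 
with at least one common assignor, since the coordinator's chosen protocol must 
be supported by each member:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RangeAssignor;

public class AssignorConfigSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        // All members of the group should share at least one assignment
        // strategy; a mismatch can leave the coordinator selecting a protocol
        // that some member does not recognize.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  RangeAssignor.class.getName());
    }
}
{code}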

 

> Container exception java.lang.IllegalStateException: Coordinator selected 
> invalid assignment protocol: null
> ---
>
> Key: KAFKA-7263
> URL: https://issues.apache.org/jira/browse/KAFKA-7263
> Project: Kafka
>  Issue Type: Bug
>Reporter: laomei
>Priority: Major
>
> We are using spring-kafka and we get an infinite loop error in 
> ConsumerCoordinator.java:
> kafka cluster version: 1.0.0
> kafka-client version: 1.0.0
>  
> 2018-08-08 15:24:46,120 ERROR 
> [org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer]
>  - Container exception
>  java.lang.IllegalStateException: Coordinator selected invalid assignment 
> protocol: null
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:217)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1138)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1103)
>   at 
> org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:556)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
>  2018-08-08 15:24:46,132 INFO 
> [org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer]
>  - Consumer stopped
>  2018-08-08 15:24:46,230 INFO 
> [org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer]
>  - Consumer stopped
>  2018-08-08 15:24:46,234 INFO [org.springfram



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8523) InsertField transformation fails when encountering tombstone event

2019-09-27 Thread Gunnar Morling (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunnar Morling updated KAFKA-8523:
--
Description: 
When applying the {{InsertField}} transformation to a tombstone event, an 
exception is raised:

{code}
org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
absence of schema for [field insertion], found: null
at 
org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
at 
org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
at 
org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
at 
org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
{code}

~~AFAICS, the transform can still be made to work in this case by simply 
building up a new value map from scratch.~~

Update as per the discussion in the comments: tombstones should be left as-is 
by this SMT, as any insertion would defeat their purpose of enabling log 
compaction.

  was:
When applying the {{InsertField}} transformation to a tombstone event, an 
exception is raised:

{code}
org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
absence of schema for [field insertion], found: null
at 
org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
at 
org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
at 
org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
at 
org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
{code}

AFAICS, the transform can still be made to work in this case by simply 
building up a new value map from scratch.


> InsertField transformation fails when encountering tombstone event
> --
>
> Key: KAFKA-8523
> URL: https://issues.apache.org/jira/browse/KAFKA-8523
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gunnar Morling
>Priority: Major
> Attachments: image-2019-09-17-15-53-44-038.png
>
>
> When applying the {{InsertField}} transformation to a tombstone event, an 
> exception is raised:
> {code}
> org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
> absence of schema for [field insertion], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
>   at 
> org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
>   at 
> org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
> {code}
> ~~AFAICS, the transform can still be made to work in this case by simply 
> building up a new value map from scratch.~~
> Update as per the discussion in the comments: tombstones should be left as-is 
> by this SMT, as any insertion would defeat their purpose of enabling log 
> compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8523) InsertField transformation fails when encountering tombstone event

2019-09-27 Thread Gunnar Morling (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunnar Morling updated KAFKA-8523:
--
Description: 
When applying the {{InsertField}} transformation to a tombstone event, an 
exception is raised:

{code}
org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
absence of schema for [field insertion], found: null
at 
org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
at 
org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
at 
org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
at 
org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
{code}

-AFAICS, the transform can still be made to work in this case by simply 
building up a new value map from scratch.-

Update as per the discussion in the comments: tombstones should be left as-is 
by this SMT, as any insertion would defeat their purpose of enabling log 
compaction.

  was:
When applying the {{InsertField}} transformation to a tombstone event, an 
exception is raised:

{code}
org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
absence of schema for [field insertion], found: null
at 
org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
at 
org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
at 
org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
at 
org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
{code}

~~AFAICS, the transform can still be made to work in this case by simply 
building up a new value map from scratch.~~

Update as per the discussion in the comments: tombstones should be left as-is 
by this SMT, as any insertion would defeat their purpose of enabling log 
compaction.


> InsertField transformation fails when encountering tombstone event
> --
>
> Key: KAFKA-8523
> URL: https://issues.apache.org/jira/browse/KAFKA-8523
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gunnar Morling
>Priority: Major
> Attachments: image-2019-09-17-15-53-44-038.png
>
>
> When applying the {{InsertField}} transformation to a tombstone event, an 
> exception is raised:
> {code}
> org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
> absence of schema for [field insertion], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
>   at 
> org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
>   at 
> org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
> {code}
> -AFAICS, the transform can still be made to work in this case by simply 
> building up a new value map from scratch.-
> Update as per the discussion in the comments: tombstones should be left as-is 
> by this SMT, as any insertion would defeat their purpose of enabling log 
> compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7895) Ktable supress operator emitting more than one record for the same key per window

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939732#comment-16939732
 ] 

ASF GitHub Bot commented on KAFKA-7895:
---

mjsax commented on pull request #7373: KAFKA-7895: Revert suppress changelog 
bugfix for 2.1
URL: https://github.com/apache/kafka/pull/7373
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Ktable supress operator emitting more than one record for the same key per 
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.2.1
>
>
> Hi, we are using Kafka Streams to get the aggregated counts per vendor (key) 
> within a specified window.
> Here's how we configured the suppress operator to emit one final record per 
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks
>  
>  
> Added by John:
> Acceptance Criteria:
>  * add suppress to system tests, such that it's exercised with crash/shutdown 
> recovery, rebalance, etc.
>  ** [https://github.com/apache/kafka/pull/6278]
>  * make sure that there's some system test coverage with caching disabled.
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7943
>  * test with tighter time bounds with windows of say 30 seconds and use 
> system time without adding any extra time for verification
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7944



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8523) InsertField transformation fails when encountering tombstone event

2019-09-27 Thread Gunnar Morling (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939730#comment-16939730
 ] 

Gunnar Morling commented on KAFKA-8523:
---

Hey [~rhauch], [~frederic.tardif], yes, we're all on the same page: tombstones 
shouldn't be modified at all by this SMT. I've updated and force-pushed the PR 
accordingly. It's good to go from my PoV now.
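For illustration, a minimal sketch of the tombstone guard (not the actual PR 
code):
{code:java}
import org.apache.kafka.connect.connector.ConnectRecord;

public abstract class TombstoneAwareTransform<R extends ConnectRecord<R>> {

    // Tombstones (records with a null value) pass through untouched so they
    // can still trigger log compaction deletes; everything else is transformed.
    public R apply(final R record) {
        if (record.value() == null) {
            return record; // leave tombstones as-is
        }
        return doTransform(record);
    }

    protected abstract R doTransform(R record);
}
{code}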

> InsertField transformation fails when encountering tombstone event
> --
>
> Key: KAFKA-8523
> URL: https://issues.apache.org/jira/browse/KAFKA-8523
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Gunnar Morling
>Priority: Major
> Attachments: image-2019-09-17-15-53-44-038.png
>
>
> When applying the {{InsertField}} transformation to a tombstone event, an 
> exception is raised:
> {code}
> org.apache.kafka.connect.errors.DataException: Only Map objects supported in 
> absence of schema for [field insertion], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.InsertField.applySchemaless(InsertField.java:138)
>   at 
> org.apache.kafka.connect.transforms.InsertField.apply(InsertField.java:131)
>   at 
> org.apache.kafka.connect.transforms.InsertFieldTest.tombstone(InsertFieldTest.java:128)
> {code}
> AFAICS, the transform can still be made to work in this case by simply 
> building up a new value map from scratch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8934) Introduce Instance-level Metrics

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939700#comment-16939700
 ] 

ASF GitHub Bot commented on KAFKA-8934:
---

guozhangwang commented on pull request #7397: KAFKA-8934: Create version file 
during build for Streams
URL: https://github.com/apache/kafka/pull/7397
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Introduce Instance-level Metrics
> 
>
> Key: KAFKA-8934
> URL: https://issues.apache.org/jira/browse/KAFKA-8934
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> Introduce instance-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8427) Error while cleanup under windows for EmbeddedKafkaCluster

2019-09-27 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8427.
--
Fix Version/s: 2.4.0
 Assignee: Guozhang Wang
   Resolution: Fixed

Should have been fixed via https://github.com/apache/kafka/pull/7382

> Error while cleanup under windows for EmbeddedKafkaCluster
> --
>
> Key: KAFKA-8427
> URL: https://issues.apache.org/jira/browse/KAFKA-8427
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0, 2.2.0
>Reporter: Sukumaar Mane
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: kafka, testing, win10, windows
> Fix For: 2.4.0
>
>
> Unable to run a simple test case using an EmbeddedKafkaCluster with 1 broker.
>  Running the simple code below (which is actually a code snippet from the 
> *org.apache.kafka.streams.KafkaStreamsTest* class):
> {code:java}
> public class KTest {
> private static final int NUM_BROKERS = 1;
> // We need this to avoid the KafkaConsumer hanging on poll
> // (this may occur if the test doesn't complete quickly enough)
> @ClassRule
> public static final EmbeddedKafkaCluster CLUSTER = new 
> EmbeddedKafkaCluster(NUM_BROKERS);
> private static final int NUM_THREADS = 2;
> private final StreamsBuilder builder = new StreamsBuilder();
> @Rule
> public TestName testName = new TestName();
> private KafkaStreams globalStreams;
> private Properties props;
> @Before
> public void before() {
> props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
> props.put(StreamsConfig.CLIENT_ID_CONFIG, "clientId");
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
> CLUSTER.bootstrapServers());
> props.put(StreamsConfig.METRIC_REPORTER_CLASSES_CONFIG, 
> MockMetricsReporter.class.getName());
> props.put(StreamsConfig.STATE_DIR_CONFIG, 
> TestUtils.tempDirectory().getPath());
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, NUM_THREADS);
> globalStreams = new KafkaStreams(builder.build(), props);
> }
> @After
> public void cleanup() {
> if (globalStreams != null) {
> globalStreams.close();
> }
> }
> @Test
> public void thisIsFirstFakeTest() {
> assert true;
> }
> }
> {code}
> But I get this error at cleanup time:
> {code:java}
> java.nio.file.FileSystemException: 
> C:\Users\Sukumaar\AppData\Local\Temp\kafka-3445189010908127083\version-2\log.1:
>  The process cannot access the file because it is being used by another 
> process.
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:753)
>   at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:742)
>   at java.nio.file.Files.walkFileTree(Files.java:2670)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at org.apache.kafka.common.utils.Utils.delete(Utils.java:742)
>   at kafka.zk.EmbeddedZookeeper.shutdown(EmbeddedZookeeper.scala:65)
>   at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:122)
>   at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.after(EmbeddedKafkaCluster.java:151)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {code}
> One similar issue (KAFKA-6075) had been reported and marked as resolved, but 
> I am still getting the error during cleanup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7990) Flaky Test KafkaStreamsTest#shouldCleanupOldStateDirs

2019-09-27 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-7990:
-
Fix Version/s: (was: 2.2.1)
   (was: 2.3.0)
   2.4.0

> Flaky Test KafkaStreamsTest#shouldCleanupOldStateDirs
> -
>
> Key: KAFKA-7990
> URL: https://issues.apache.org/jira/browse/KAFKA-7990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/229/tests]
>  
> {quote}Exception in thread 
> "appId-78a5ef7e-0f4d-47bd-af2e-54f4606fb19e-StreamThread-189" 
> java.lang.IllegalArgumentException: Assigned partition input-0 for 
> non-subscribed topic regex pattern; subscription pattern is topic
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignFromSubscribed(SubscriptionState.java:187)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:244)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:422)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:343)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:810)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7990) Flaky Test KafkaStreamsTest#shouldCleanupOldStateDirs

2019-09-27 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939698#comment-16939698
 ] 

Guozhang Wang commented on KAFKA-7990:
--

Should have been fixed via https://github.com/apache/kafka/pull/7382

> Flaky Test KafkaStreamsTest#shouldCleanupOldStateDirs
> -
>
> Key: KAFKA-7990
> URL: https://issues.apache.org/jira/browse/KAFKA-7990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.3.0, 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/229/tests]
>  
> {quote}Exception in thread 
> "appId-78a5ef7e-0f4d-47bd-af2e-54f4606fb19e-StreamThread-189" 
> java.lang.IllegalArgumentException: Assigned partition input-0 for 
> non-subscribed topic regex pattern; subscription pattern is topic
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignFromSubscribed(SubscriptionState.java:187)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:244)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:422)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:343)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:810)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7921) Instable KafkaStreamsTest

2019-09-27 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939697#comment-16939697
 ] 

Guozhang Wang commented on KAFKA-7921:
--

Should have been fixed via https://github.com/apache/kafka/pull/7382

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {{KafkaStreamsTest}} failed multiple times, e.g.,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] 

[jira] [Commented] (KAFKA-6215) KafkaStreamsTest fails in trunk

2019-09-27 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939696#comment-16939696
 ] 

Guozhang Wang commented on KAFKA-6215:
--

Should have been fixed via https://github.com/apache/kafka/pull/7382

> KafkaStreamsTest fails in trunk
> ---
>
> Key: KAFKA-6215
> URL: https://issues.apache.org/jira/browse/KAFKA-6215
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Matthias J. Sax
>Priority: Major
> Fix For: 1.0.1, 1.1.0
>
>
> Two subtests fail.
> https://builds.apache.org/job/kafka-trunk-jdk9/193/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/testCannotCleanupWhileRunning/
> {code}
> org.apache.kafka.streams.errors.StreamsException: 
> org.apache.kafka.streams.errors.ProcessorStateException: state directory 
> [/tmp/kafka-streams/testCannotCleanupWhileRunning] doesn't exist and couldn't 
> be created
>   at org.apache.kafka.streams.KafkaStreams.(KafkaStreams.java:618)
>   at org.apache.kafka.streams.KafkaStreams.(KafkaStreams.java:505)
>   at 
> org.apache.kafka.streams.KafkaStreamsTest.testCannotCleanupWhileRunning(KafkaStreamsTest.java:462)
> {code}
> testCleanup fails in similar manner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7921) Instable KafkaStreamsTest

2019-09-27 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-7921:
-
Fix Version/s: (was: 2.3.0)
   2.4.0

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> {{KafkaStreamsTest}} failed multiple times, e.g.,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group 

[jira] [Resolved] (KAFKA-8319) Flaky Test KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory

2019-09-27 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8319.
--
Fix Version/s: 2.4.0
 Assignee: Guozhang Wang  (was: Bill Bejeck)
   Resolution: Fixed

Should have been fixed via https://github.com/apache/kafka/pull/7382

> Flaky Test KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory
> --
>
> Key: KAFKA-8319
> URL: https://issues.apache.org/jira/browse/KAFKA-8319
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8319) Flaky Test KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939694#comment-16939694
 ] 

ASF GitHub Bot commented on KAFKA-8319:
---

guozhangwang commented on pull request #7382: KAFKA-8319: Make KafkaStreamsTest 
a non-integration test class
URL: https://github.com/apache/kafka/pull/7382
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test KafkaStreamsTest.statefulTopologyShouldCreateStateDirectory
> --
>
> Key: KAFKA-8319
> URL: https://issues.apache.org/jira/browse/KAFKA-8319
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: flaky-test
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8957) Improve docs about `min.isr` and `acks=all`

2019-09-27 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8957:
---
Summary: Improve docs about `min.isr` and `acks=all`  (was: Improve docs 
about `min.isr.` and `acks=all`)

> Improve docs about `min.isr` and `acks=all`
> ---
>
> Key: KAFKA-8957
> URL: https://issues.apache.org/jira/browse/KAFKA-8957
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Reporter: Matthias J. Sax
>Priority: Minor
>
> The current docs are as follows:
> {code:java}
> acks=all
> This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record. This guarantees that the record will not be lost as 
> long as at least one in-sync replica remains alive. This is the strongest 
> available guarantee.{code}
> {code:java}
> min.in.sync.replicas
> When a producer sets acks to "all" (or -1), this configuration specifies the 
> minimum number of replicas that must acknowledge a write for the write to be 
> considered successful. If this minimum cannot be met, then the producer will 
> raise an exception (either NotEnoughReplicas or 
> NotEnoughReplicasAfterAppend). When used together, `min.insync.replicas` and 
> `acks` allow you to enforce greater durability guarantees. A typical scenario 
> would be to create a topic with a replication factor of 3, set 
> min.insync.replicas to 2, and produce with acks of "all". This will ensure 
> that the producer raises an exception if a majority of replicas do not 
> receive a write.
> {code}
> The misleading part seems to be:
>  
> {noformat}
> the minimum number of replicas that must acknowledge the write
> {noformat}
> That could be interpreted to mean that the producer request can return 
> *_before_* all replicas acknowledge the write. However, min.isr is a 
> configuration that aims to specify how many replicas must be online for a 
> partition to be considered available.
> The actual behavior is the following (with replication factor = 3 and min.isr 
> = 2):
>  * If all three replicas are in-sync, brokers only ack to the producer after 
> all three replicas got the data (i.e., both followers need to ack).
>  * However, if one replica lags (is not in-sync any longer), it is also fine 
> to ack to the producer after the remaining in-sync follower acked.
> It's *_not_* the case that, if all three replicas are in-sync, brokers ack 
> to the producer after only one follower acked to the leader.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8957) Improve docs about `min.isr.` and `acks=all`

2019-09-27 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-8957:
--

 Summary: Improve docs about `min.isr.` and `acks=all`
 Key: KAFKA-8957
 URL: https://issues.apache.org/jira/browse/KAFKA-8957
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Reporter: Matthias J. Sax


The current docs are as follows:
{code:java}
acks=all

This means the leader will wait for the full set of in-sync replicas to 
acknowledge the record. This guarantees that the record will not be lost as 
long as at least one in-sync replica remains alive. This is the strongest 
available guarantee.{code}
{code:java}
min.in.sync.replicas
When a producer sets acks to "all" (or -1), this configuration specifies the 
minimum number of replicas that must acknowledge a write for the write to be 
considered successful. If this minimum cannot be met, then the producer will 
raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). 
When used together, `min.insync.replicas` and `acks` allow you to enforce 
greater durability guarantees. A typical scenario would be to create a topic 
with a replication factor of 3, set min.insync.replicas to 2, and produce with 
acks of "all". This will ensure that the producer raises an exception if a 
majority of replicas do not receive a write.
{code}
The misleading part seems to be:

 
{noformat}
the minimum number of replicas that must acknowledge the write
{noformat}
That could be interpreted to mean that the producer request can return 
*_before_* all replicas acknowledge the write. However, min.isr is a 
configuration that aims to specify how many replicas must be online for a 
partition to be considered available.

The actual behavior is the following (with replication factor = 3 and min.isr = 
2):
 * If all three replicas are in-sync, brokers only ack to the producer after 
all three replicas got the data (i.e., both followers need to ack).
 * However, if one replica lags (is not in-sync any longer), it is also fine to 
ack to the producer after the remaining in-sync follower acked.

It's *_not_* the case that, if all three replicas are in-sync, brokers ack to 
the producer after only one follower acked to the leader.
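For illustration, a minimal sketch of the typical scenario (topic name, 
bootstrap server, and the topic-level settings in the comments are 
placeholders):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllExample {
    public static void main(final String[] args) {
        // Assumes a topic created with replication.factor=3 and
        // min.insync.replicas=2.
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the leader waits for ALL current in-sync replicas, not
        // just min.isr of them; min.insync.replicas only sets the floor below
        // which the write is rejected.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}
{code}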

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6883) KafkaShortnamer should allow to convert Kerberos principal name to upper case user name

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939681#comment-16939681
 ] 

ASF GitHub Bot commented on KAFKA-6883:
---

omkreddy commented on pull request #7375: KAFKA-6883: Add toUpperCase support 
to sasl.kerberos.principal.to.local rule (KIP-309)
URL: https://github.com/apache/kafka/pull/7375
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KafkaShortnamer should allow to convert Kerberos principal name to upper case 
> user name
> ---
>
> Key: KAFKA-6883
> URL: https://issues.apache.org/jira/browse/KAFKA-6883
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Attila Sasvári
>Assignee: Manikumar
>Priority: Major
>
> KAFKA-5764 implemented support to convert Kerberos principal name to lower 
> case Linux user name via auth_to_local rules. 
> As a follow-up, KafkaShortnamer could be further extended to allow converting 
> principal names to uppercase by appending /U to the rule.
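For illustration, a sketch of such a rule per KIP-309 (the realm and the 
matching pattern are placeholders):
{code}
# Short names derived from EXAMPLE.COM principals are converted to upper
# case via the trailing /U modifier.
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//U,DEFAULT
{code}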



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6883) KafkaShortnamer should allow to convert Kerberos principal name to upper case user name

2019-09-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6883.
--
Fix Version/s: 2.4.0
   Resolution: Fixed

Issue resolved by pull request 7375
[https://github.com/apache/kafka/pull/7375]

> KafkaShortnamer should allow to convert Kerberos principal name to upper case 
> user name
> ---
>
> Key: KAFKA-6883
> URL: https://issues.apache.org/jira/browse/KAFKA-6883
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Attila Sasvári
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.4.0
>
>
> KAFKA-5764 implemented support to convert Kerberos principal name to lower 
> case Linux user name via auth_to_local rules. 
> As a follow-up, KafkaShortnamer could be further extended to allow converting 
> principal names to uppercase by appending /U to the rule.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8956) Refactor DelayedCreatePartitions#updateWaiting to avoid modifying collection in foreach

2019-09-27 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-8956:
---

 Summary: Refactor DelayedCreatePartitions#updateWaiting to avoid 
modifying collection in foreach
 Key: KAFKA-8956
 URL: https://issues.apache.org/jira/browse/KAFKA-8956
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe


We should refactor {{DelayedCreatePartitions#updateWaiting}} to avoid modifying 
the {{waitingPartitions}} collection in its own foreach.
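For illustration, a minimal sketch of the safer pattern (in Java with 
illustrative names; the actual code is Scala):
{code:java}
import java.util.Iterator;
import java.util.List;

public class UpdateWaitingSketch {

    // Removing elements inside a collection's own foreach risks a
    // ConcurrentModificationException or silently skipped elements; an
    // explicit Iterator makes the removal well-defined.
    static void updateWaiting(final List<String> waitingPartitions,
                              final List<String> completedPartitions) {
        final Iterator<String> it = waitingPartitions.iterator();
        while (it.hasNext()) {
            if (completedPartitions.contains(it.next())) {
                it.remove(); // safe removal via the iterator
            }
        }
    }
}
{code}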



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8955) Add an AbstractResponse#errorCounts method that takes a stream or iterable

2019-09-27 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-8955:
---

 Summary: Add an AbstractResponse#errorCounts method that takes a 
stream or iterable
 Key: KAFKA-8955
 URL: https://issues.apache.org/jira/browse/KAFKA-8955
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe


We should have an AbstractResponse#errorCounts method that takes a stream or 
iterable.  This would allow us to avoid copying data in many cases.
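A minimal sketch of such an overload (the signature here is hypothetical, not 
the final API):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;
import org.apache.kafka.common.protocol.Errors;

public final class ErrorCountsSketch {
    // Tally errors straight from a Stream so callers need not copy them
    // into an intermediate collection first.
    public static Map<Errors, Integer> errorCounts(Stream<Errors> errors) {
        Map<Errors, Integer> counts = new HashMap<>();
        errors.forEach(error -> counts.merge(error, 1, Integer::sum));
        return counts;
    }
}
{code}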



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-09-27 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939601#comment-16939601
 ] 

Matthias J. Sax commented on KAFKA-8953:


Thanks for your interest! I added you to the list of contributors; you should 
be able to assign the ticket to yourself now.

For this ticket, we need to do a KIP. Details about the KIP process are 
described in the wiki: 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]

Let us know if you have any questions.

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`: we 
> now have a fixed definition of partition time, and we also pass partition 
> time into the `#extract(...)` method instead of some ill-defined "previous 
> timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-8950) KafkaConsumer stops fetching

2019-09-27 Thread Will James (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939538#comment-16939538
 ] 

Will James edited comment on KAFKA-8950 at 9/27/19 3:24 PM:


Here are the relevant logs from before one occurrence of the issue. 

Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.consumer.KafkaConsumer [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Subscribed to topic(s): [redacted] #012 
Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
org.apache.kafka.clients.Metadata [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Cluster ID: 5hzjksKUQJSFOa84oTBWdw #012 
Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Discovered group coordinator 
kafka-6-us-west-2.instana.io:9092 (id: 2147483646 rack: null) #012 
Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Revoking previously assigned partitions [] #012 
Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] (Re-)joining group #012 
Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Successfully joined group with generation 2890 #012 
Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting newly assigned partitions: [redacted]-0 #012 
Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting offset for partition [redacted]-0 to the committed 
offset FetchPosition\{offset=8999851092, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=kafka-13-us-west-2.instana.io:9092 (id: 8 
rack: null), epoch=-1}} #012 
Sep 24 04:23:33 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.SubscriptionState [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Seeking to LATEST offset of partition [redacted]-0 #012 
Sep 24 04:23:33 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.SubscriptionState [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Resetting offset for partition [redacted]-0 to offset 
8999872779. #012

The consumer stopped consuming around 07:00 on the same day (although in some 
cases the error occurs days after the consumer starts). I don't see anything 
particularly interesting in here; basically it is just seeking to the end of 
the log.


was (Author: wtjames):
Here are the relevant logs from before one occurrence of the issue. 

{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.consumer.KafkaConsumer [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Subscribed to topic(s): [redacted] #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
org.apache.kafka.clients.Metadata [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Cluster ID: 5hzjksKUQJSFOa84oTBWdw #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Discovered group coordinator 
kafka-6-us-west-2.instana.io:9092 (id: 2147483646 rack: null) #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Revoking previously assigned partitions [] #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] (Re-)joining group #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Successfully joined group with generation 2890 #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting newly assigned partitions: [redacted]-0 #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting offset for partition [redacted]-0 to the committed 
offset 

[jira] [Commented] (KAFKA-8950) KafkaConsumer stops fetching

2019-09-27 Thread Will James (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939538#comment-16939538
 ] 

Will James commented on KAFKA-8950:
---

Here are the relevant logs from before one occurrence of the issue. 

{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.consumer.KafkaConsumer [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Subscribed to topic(s): [redacted] #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
org.apache.kafka.clients.Metadata [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Cluster ID: 5hzjksKUQJSFOa84oTBWdw #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Discovered group coordinator 
kafka-6-us-west-2.instana.io:9092 (id: 2147483646 rack: null) #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Revoking previously assigned partitions [] #012 }}
{{Sep 24 04:23:29 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] (Re-)joining group #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Successfully joined group with generation 2890 #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting newly assigned partitions: [redacted]-0 #012 }}
{{Sep 24 04:23:32 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.ConsumerCoordinator [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Setting offset for partition [redacted]-0 to the committed 
offset FetchPosition\{offset=8999851092, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=kafka-13-us-west-2.instana.io:9092 (id: 8 
rack: null), epoch=-1}} #012 }}
{{Sep 24 04:23:33 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.SubscriptionState [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Seeking to LATEST offset of partition [redacted]-0 #012 }}
{{Sep 24 04:23:33 fleet-worker-highperf-17-us-west-2.instana.io [redacted] info 
o.a.k.c.c.i.SubscriptionState [Consumer clientId=[redacted]_[redacted], 
groupId=[redacted]] Resetting offset for partition [redacted]-0 to offset 
8999872779. #012 }}

The consumer stopped consuming around 07:00 on the same day (although in some 
cases the error occurs days after the consumer starts). I don't see anything 
particularly interesting in here; basically it is just seeking to the end of 
the log.

> KafkaConsumer stops fetching
> 
>
> Key: KAFKA-8950
> URL: https://issues.apache.org/jira/browse/KAFKA-8950
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0
> Environment: linux
>Reporter: Will James
>Priority: Major
>
> We have a KafkaConsumer consuming from a single partition with 
> enable.auto.commit set to true.
> Very occasionally, the consumer goes into a broken state. It returns no 
> records from the broker with every poll, and from most of the Kafka metrics 
> in the consumer it looks like it is fully caught up to the end of the log. 
> We see that we are long polling for the max poll timeout, and that there is 
> zero lag. In addition, we see that the heartbeat rate stays unchanged from 
> before the issue begins (so the consumer stays a part of the consumer group).
> In addition, from looking at the __consumer_offsets topic, it is possible to 
> see that the consumer is committing the same offset on the auto-commit 
> interval; however, the offset does not move, and the lag from the broker's 
> perspective continues to increase.
> The issue is only resolved by restarting our application (which restarts the 
> KafkaConsumer instance).
> From a heap dump of an application in this state, I can see that the Fetcher 
> is in a state where it believes there are nodesWithPendingFetchRequests.
> However, I can see the state of the fetch latency sensor, specifically, the 
> fetch rate, and see that the samples were not updated for a long period of 
> time (actually, precisely the amount of time that the problem in our 
> application was occurring, around 50 hours - we have alerting on other 
> metrics but not the fetch rate, so we didn't notice the problem until a 
> customer complained).
> In this example, the consumer was processing around 40 messages 

[jira] [Commented] (KAFKA-8954) Topic existence check is wrongly implemented in the DeleteOffset API

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939495#comment-16939495
 ] 

ASF GitHub Bot commented on KAFKA-8954:
---

dajac commented on pull request #7406: KAFKA-8954; Topic existence check is 
wrongly implemented in the DeleteOffset API
URL: https://github.com/apache/kafka/pull/7406
 
 
   This patch changes the way topic existence is checked in the DeleteOffset 
API. Previously, it relied on committed offsets. Now, it relies on the 
metadata cache, which is more accurate.
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Topic existence check is wrongly implemented in the DeleteOffset API
> 
>
> Key: KAFKA-8954
> URL: https://issues.apache.org/jira/browse/KAFKA-8954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> The current DeleteOffset API check relies on the consumer group's committed 
> offsets to decide whether a topic exists. While this works in most cases, it 
> fails when a topic exists but does not have any committed offsets yet. 
> Moreover, it is inconsistent with other APIs, which rely on the metadata 
> cache, so it must be updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8907) Return topic configs in CreateTopics response

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939386#comment-16939386
 ] 

ASF GitHub Bot commented on KAFKA-8907:
---

rajinisivaram commented on pull request #7380: KAFKA-8907; Return topic configs 
in CreateTopics response (KIP-525)
URL: https://github.com/apache/kafka/pull/7380
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Return topic configs in CreateTopics response 
> --
>
> Key: KAFKA-8907
> URL: https://issues.apache.org/jira/browse/KAFKA-8907
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> See 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-525+-+Return+topic+metadata+and+configs+in+CreateTopics+response]
>  for details
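As a hedged sketch of client-side usage once KIP-525 lands (the config() 
accessor is taken from the KIP and may differ in the final API; broker address 
and topic name are placeholders):

{code:java}
import java.util.Collections;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicsConfigSketch {
    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(
                Collections.<String, Object>singletonMap("bootstrap.servers", "localhost:9092"))) {
            CreateTopicsResult result = admin.createTopics(
                    Collections.singleton(new NewTopic("demo-topic", 3, (short) 2)));
            // Read back the effective (broker-defaulted) configs of the new topic.
            System.out.println(result.config("demo-topic").get());
        }
    }
}
{code}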



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8887) Use purgatory for CreateAcls and DeleteAcls if implementation is async

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939382#comment-16939382
 ] 

ASF GitHub Bot commented on KAFKA-8887:
---

rajinisivaram commented on pull request #7404: KAFKA-8887; Use purgatory for 
ACL updates using async authorizers
URL: https://github.com/apache/kafka/pull/7404
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use purgatory for CreateAcls and DeleteAcls if implementation is async
> --
>
> Key: KAFKA-8887
> URL: https://issues.apache.org/jira/browse/KAFKA-8887
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> KAFKA-8886 is updating the Authorizer.createAcls and Authorizer.deleteAcls 
> APIs to be asynchronous, to avoid blocking request threads during ACL updates 
> when implementations use external stores like databases where updates may 
> block for a long time. This Jira is to implement the async updates using a 
> purgatory in KafkaApis.
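A generic illustration of the idea (this is not the actual Authorizer 
interface; the names are invented): the ACL update returns a future instead of 
blocking the request thread, and the broker can park the pending request in a 
purgatory until the future completes.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncAclUpdateSketch {
    private static final ExecutorService storeExecutor = Executors.newSingleThreadExecutor();

    // The slow external-store write happens off the request thread.
    static CompletableFuture<Void> createAcl(String aclBinding) {
        return CompletableFuture.runAsync(() -> {
            // simulate a slow database write here
        }, storeExecutor);
    }

    public static void main(String[] args) {
        createAcl("User:alice, Topic:orders, READ")
                .thenRun(() -> System.out.println("ACL created; complete the delayed response"));
        storeExecutor.shutdown();
    }
}
{code}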



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8954) Topic existence check is wrongly implemented in the DeleteOffset API

2019-09-27 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-8954:
--

Assignee: David Jacot

> Topic existence check is wrongly implemented in the DeleteOffset API
> 
>
> Key: KAFKA-8954
> URL: https://issues.apache.org/jira/browse/KAFKA-8954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> The current DeleteOffset API check relies on the consumer group's committed 
> offsets to decide whether a topic exists. While this works in most cases, it 
> fails when a topic exists but does not have any committed offsets yet. 
> Moreover, it is inconsistent with other APIs, which rely on the metadata 
> cache, so it must be updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8954) Topic existence check is wrongly implemented in the DeleteOffset API

2019-09-27 Thread David Jacot (Jira)
David Jacot created KAFKA-8954:
--

 Summary: Topic existence check is wrongly implemented in the 
DeleteOffset API
 Key: KAFKA-8954
 URL: https://issues.apache.org/jira/browse/KAFKA-8954
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot


The current DeleteOffset API check relies on the consumer group's committed 
offsets to decide whether a topic exists. While this works in most cases, it 
fails when a topic exists but does not have any committed offsets yet. 
Moreover, it is inconsistent with other APIs, which rely on the metadata 
cache, so it must be updated.
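A toy model of the two checks (the names are illustrative, not the broker's 
actual Scala internals):

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TopicExistenceSketch {
    public static void main(String[] args) {
        Set<String> metadataCacheTopics = new HashSet<>(Arrays.asList("orders", "payments"));
        Map<String, Long> committedOffsets = new HashMap<>();
        committedOffsets.put("orders", 42L); // "payments" exists but has no commits yet

        String topic = "payments";
        boolean existsViaOffsets = committedOffsets.containsKey(topic);  // false: wrongly reported missing
        boolean existsViaMetadata = metadataCacheTopics.contains(topic); // true: correct
        System.out.println(existsViaOffsets + " vs " + existsViaMetadata);
    }
}
{code}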



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-09-27 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939179#comment-16939179
 ] 

Rabi Kumar K C commented on KAFKA-8953:
---

Hi [~mjsax], I am new to Kafka. Is it okay if I take this up? As I am new, I 
cannot assign the ticket to myself. 

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`: we 
> now have a fixed definition of partition time, and we also pass partition 
> time into the `#extract(...)` method instead of some ill-defined "previous 
> timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-09-27 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-8953:
--

 Summary: Consider renaming `UsePreviousTimeOnInvalidTimestamp` 
timestamp extractor
 Key: KAFKA-8953
 URL: https://issues.apache.org/jira/browse/KAFKA-8953
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Kafka Streams ships a couple of different timestamp extractors, one named 
`UsePreviousTimeOnInvalidTimestamp`.

Given the latest improvements with regard to time tracking, it seems 
appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`: we 
now have a fixed definition of partition time, and we also pass partition time 
into the `#extract(...)` method instead of some ill-defined "previous 
timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7772) Dynamically adjust log level in Connect workers

2019-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939174#comment-16939174
 ] 

ASF GitHub Bot commented on KAFKA-7772:
---

wicknicks commented on pull request #7403: KAFKA-7772: Dynamically Adjust Log 
Levels in Connect
URL: https://github.com/apache/kafka/pull/7403
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Dynamically adjust log level in Connect workers
> ---
>
> Key: KAFKA-7772
> URL: https://issues.apache.org/jira/browse/KAFKA-7772
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
>  Labels: needs-kip
>
> Currently, Kafka provides a JMX interface to dynamically modify log levels of 
> different active loggers. It would be good to have a similar interface for 
> Connect as well. 
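For reference, a minimal sketch of what dynamic adjustment means with plain 
Log4j 1.x (the mechanism the broker's JMX hook builds on; the logger name is a 
placeholder):

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

public class AdjustLogLevel {
    public static void main(String[] args) {
        // Raise the verbosity of a subtree of loggers at runtime, no restart needed.
        LogManager.getLogger("org.apache.kafka.connect").setLevel(Level.DEBUG);
    }
}
{code}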



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8952) Vulnerabilities found for jackson-databind-2.9.9.jar and guava-20.0.jar in latest Apache Kafka version 2.3.0

2019-09-27 Thread Namrata Kokate (Jira)
Namrata Kokate created KAFKA-8952:
-

 Summary: Vulnerabilities found for jackson-databind-2.9.9.jar and 
guava-20.0.jar in latest Apache Kafka version 2.3.0
 Key: KAFKA-8952
 URL: https://issues.apache.org/jira/browse/KAFKA-8952
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 2.3.0
Reporter: Namrata Kokate


I am currently using the latest Apache Kafka version, 2.3.0. However, when I 
deployed the binary on containers, vulnerabilities were reported for two 
jars: jackson-databind-2.9.9.jar and guava-20.0.jar.

These vulnerabilities have been fixed in jackson-databind-2.9.10.jar and 
guava-24.1.1-jre.jar, but Apache Kafka 2.3.0 does not include these newer 
jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Vulnerabilities found for jackson-databind-2.9.9.jar and guava-20.0.jar in latest Apache Kafka version 2.3.0

2019-09-27 Thread namrata kokate
I am currently using the latest Apache Kafka version, 2.3.0, from the official
site https://kafka.apache.org/downloads. However, when I deployed the binary
on containers, vulnerabilities were reported for two jars:
jackson-databind-2.9.9.jar and guava-20.0.jar.

These vulnerabilities have been fixed in jackson-databind-2.9.10.jar and
guava-24.1.1-jre.jar, but the Apache Kafka 2.3.0 distribution does not include
these newer jars. Can you help me with this?

Please let me know the right procedure (i.e. whether I should create a Jira
ticket or do something else) in case I am going about this the wrong way.

Regards,
Namrata Kokate