[jira] [Created] (KAFKA-8970) StateDirectory creation fails with Exception

2019-10-02 Thread Nishkam Ravi (Jira)
Nishkam Ravi created KAFKA-8970:
---

 Summary: StateDirectory creation fails with Exception
 Key: KAFKA-8970
 URL: https://issues.apache.org/jira/browse/KAFKA-8970
 Project: Kafka
  Issue Type: Bug
Reporter: Nishkam Ravi


When two threads try to create KafkaStreams simultaneously, one of them 
succeeds while the other fails with the following exception:

org.apache.kafka.streams.errors.StreamsException: 
org.apache.kafka.streams.errors.ProcessorStateException: base state directory 
[/tmp/kafka-streams] doesn't exist and couldn't be created

Quick investigation suggests that this is because the code at/around:

[https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StateDirectory.java#L82]

is not synchronized and can lead to race conditions.

Specifying different values for state.dir can work around this issue, but it is 
a bit cumbersome. Can we just make this code synchronized?
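For illustration, here is a minimal sketch of the suspected race and one possible guard. This is an assumption about the failure mode, not the actual StateDirectory code: two threads both see the base directory as missing, one creates it, and the other's mkdirs() then returns false and is treated as a fatal error. An idempotent create-if-absent call (or synchronizing the check-and-create) avoids it:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StateDirSketch {

    // Hypothetical racy variant: the loser of the exists()/mkdirs() race gets
    // 'false' from mkdirs() because the directory now exists, and fails.
    static void ensureStateDirRacy(final String baseDir) {
        final File dir = new File(baseDir);
        if (!dir.exists() && !dir.mkdirs()) {
            throw new RuntimeException("base state directory [" + baseDir
                + "] doesn't exist and couldn't be created");
        }
    }

    // Hypothetical fix: Files.createDirectories is a no-op if the directory
    // already exists, so concurrent callers both succeed; alternatively the
    // check-and-create could be wrapped in a synchronized block.
    static void ensureStateDirSafe(final String baseDir) {
        try {
            Files.createDirectories(Paths.get(baseDir));
        } catch (final IOException e) {
            throw new RuntimeException("base state directory [" + baseDir
                + "] couldn't be created", e);
        }
    }
}
{code}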



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-10-02 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943308#comment-16943308
 ] 

Matthias J. Sax commented on KAFKA-8953:


Thanks [~rabikumar.kc] – I was at Kafka Summit the last two days... On the 
mailing list it can take some time until you get a response – it's a pretty 
busy list :) – but it seems you are all set up.

The feature-freeze deadline for the 2.4 release is on Friday, so it might take a 
few days until I find time to read the KIP – I will follow up on the mailing 
list as soon as I do. Thanks.

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and we also pass the 
> partition time into the `#extract(...)` method instead of some not-well-defined 
> "previous timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8558) KIP-479 - Add Materialized Overload to KStream#Join

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943303#comment-16943303
 ] 

ASF GitHub Bot commented on KAFKA-8558:
---

mjsax commented on pull request #7285: KAFKA-8558:  Add StreamJoined config 
object to join
URL: https://github.com/apache/kafka/pull/7285
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KIP-479 - Add Materialized Overload to KStream#Join 
> 
>
> Key: KAFKA-8558
> URL: https://issues.apache.org/jira/browse/KAFKA-8558
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 2.4.0
>
>
> To prevent a topology incompatibility with the release of 2.4 and the naming 
> of Join operations, we'll add an overloaded KStream#join method accepting a 
> Materialized parameter. This will allow users to explicitly name state stores 
> created by Kafka Streams in the join operation.
>  
> The overloads will apply to all flavors of KStream#join (inner, left, and 
> right). 
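For reference, a sketch of what explicitly naming the join's state stores can look like; the merged PR uses a StreamJoined config object rather than a plain Materialized overload, and topic names, serdes, and store names below are placeholders:

{code:java}
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

public class NamedJoinExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> left =
            builder.stream("left-topic", Consumed.with(Serdes.String(), Serdes.String()));
        final KStream<String, String> right =
            builder.stream("right-topic", Consumed.with(Serdes.String(), Serdes.String()));

        // Name the join and its underlying state stores explicitly so that the
        // topology stays compatible across upgrades.
        left.join(
            right,
            (l, r) -> l + "/" + r,
            JoinWindows.of(Duration.ofMinutes(5)),
            StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String())
                .withName("my-join")
                .withStoreName("my-join-store"));
    }
}
{code}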



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8558) KIP-479 - Add Materialized Overload to KStream#Join

2019-10-02 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8558.

Resolution: Fixed

> KIP-479 - Add Materialized Overload to KStream#Join 
> 
>
> Key: KAFKA-8558
> URL: https://issues.apache.org/jira/browse/KAFKA-8558
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 2.4.0
>
>
> To prevent a topology incompatibility with the release of 2.4 and the naming 
> of Join operations, we'll add an overloaded KStream#join method accepting a 
> Materialized parameter. This will allow users to explicitly name state stores 
> created by Kafka Streams in the join operation.
>  
> The overloads will apply to all flavors of KStream#join (inner, left, and 
> right). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8957) Improve docs about `min.isr` and `acks=all`

2019-10-02 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943275#comment-16943275
 ] 

Guozhang Wang commented on KAFKA-8957:
--

I’d rather just document min.isr as the minimum number of ISRs required when a 
produce request with acks=all is received.
The acknowledgement mechanism is based solely on acks=all itself and has 
nothing to do with min.isr.

> Improve docs about `min.isr` and `acks=all`
> ---
>
> Key: KAFKA-8957
> URL: https://issues.apache.org/jira/browse/KAFKA-8957
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core
>Reporter: Matthias J. Sax
>Priority: Minor
>
> The current docs are as follows:
> {code:java}
> acks=all
> This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record. This guarantees that the record will not be lost as 
> long as at least one in-sync replica remains alive. This is the strongest 
> available guarantee.{code}
> {code:java}
> min.in.sync.replicas
> When a producer sets acks to "all" (or -1), this configuration specifies the 
> minimum number of replicas that must acknowledge a write for the write to be 
> considered successful. If this minimum cannot be met, then the producer will 
> raise an exception (either NotEnoughReplicas or 
> NotEnoughReplicasAfterAppend). When used together, `min.insync.replicas` and 
> `acks` allow you to enforce greater durability guarantees. A typical scenario 
> would be to create a topic with a replication factor of 3, set 
> min.insync.replicas to 2, and produce with acks of "all". This will ensure 
> that the producer raises an exception if a majority of replicas do not 
> receive a write.
> {code}
> The misleading part seems to be:
>  
> {noformat}
> the minimum number of replicas that must acknowledge the write
> {noformat}
> That could be interpreted to mean that the producer request can return 
> *_before_* all replicas acknowledge the write. However, min.isr is a 
> configuration that aims to specify how many replicas must be online for a 
> partition to be considered available.
> The actual behavior is the following (with replication factor = 3 and min.isr 
> = 2):
>  * If all three replicas are in-sync, brokers only ack to the producer after 
> all three replicas got the data (i.e., both followers need to ack).
>  * However, if one replica lags (is not in-sync any longer), it is also ok to 
> ack to the producer after the remaining in-sync follower has acked.
> It's *_not_* the case that, if all three replicas are in-sync, brokers ack 
> to the producer after only one follower has acked to the leader.
>  
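To make the setup above concrete, this is the configuration combination the description talks about (topic name and bootstrap address are placeholders):

{code}
# Topic with replication factor 3 and a minimum of 2 in-sync replicas:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic my-topic --partitions 3 --replication-factor 3 \
  --config min.insync.replicas=2

# Producer config: wait for every *current* in-sync replica to acknowledge.
# With all 3 replicas in sync the ack waits for all 3; if one falls out of
# sync, the write is acked once the remaining 2 (>= min.insync.replicas)
# have it; with fewer than 2 ISRs the produce fails with NotEnoughReplicas.
acks=all
{code}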



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943225#comment-16943225
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

guozhangwang commented on pull request #7426: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7426
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running a Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0) fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}
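As an operational side note (separate from the fix in the PRs above), the documented way to avoid this during a rolling upgrade is a two-bounce procedure: set `upgrade.from` on the upgraded instances for the first bounce so they keep using the old subscription format, then remove it for the second bounce. A sketch, with application id and bootstrap servers as placeholders:

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class RollingUpgradeConfig {
    public static Properties firstBounceConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // First rolling bounce: older 2.0.x members are still in the group,
        // so the upgraded instances stick to the old subscription format.
        // Remove this setting for the second bounce.
        props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "2.0");
        return props;
    }
}
{code}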



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8969) Log the partition being made online due to unclean leader election.

2019-10-02 Thread Sean McCauliff (Jira)
Sean McCauliff created KAFKA-8969:
-

 Summary: Log the partition being made online due to unclean leader 
election.
 Key: KAFKA-8969
 URL: https://issues.apache.org/jira/browse/KAFKA-8969
 Project: Kafka
  Issue Type: Bug
Reporter: Sean McCauliff
Assignee: Sean McCauliff


When unclean leader election happens, it's difficult to find out which partitions 
were affected. Knowledge of the affected partitions is sometimes needed when 
users are doing root-cause investigations and want to narrow down the source of 
data loss. Without logging this information somewhere, it's not possible to 
know which partitions were affected by ULE.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7772) Dynamically adjust log level in Connect workers

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943202#comment-16943202
 ] 

ASF GitHub Bot commented on KAFKA-7772:
---

wicknicks commented on pull request #6069: KAFKA-7772: Adjust log levels of 
classes in Connect via JMX
URL: https://github.com/apache/kafka/pull/6069
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Dynamically adjust log level in Connect workers
> ---
>
> Key: KAFKA-7772
> URL: https://issues.apache.org/jira/browse/KAFKA-7772
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
>  Labels: needs-kip
> Fix For: 2.4.0
>
>
> Currently, Kafka provides a JMX interface to dynamically modify log levels of 
> different active loggers. It would be good to have a similar interface for 
> Connect as well. 
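For reference, a sketch of driving the existing broker-side JMX interface from a client; the `kafka:type=kafka.Log4jController` MBean name, the logger name, and the JMX endpoint below are assumptions for illustration:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetLogLevelViaJmx {
    public static void main(final String[] args) throws Exception {
        // Placeholder endpoint of a broker started with JMX_PORT=9999.
        final JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        final JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            final MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Assumed name of the log4j controller MBean exposed by the broker.
            final ObjectName logController = new ObjectName("kafka:type=kafka.Log4jController");
            // Switch one logger to DEBUG at runtime, without a restart.
            mbsc.invoke(logController, "setLogLevel",
                new Object[]{"kafka.request.logger", "DEBUG"},
                new String[]{"java.lang.String", "java.lang.String"});
        } finally {
            connector.close();
        }
    }
}
{code}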



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8804) Internal Connect REST endpoints are insecure

2019-10-02 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8804.
--
Fix Version/s: 2.4.0
 Reviewer: Randall Hauch
   Resolution: Fixed

[KIP-507|https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints]
 passed and was merged into AK 2.4.0.

> Internal Connect REST endpoints are insecure
> 
>
> Key: KAFKA-8804
> URL: https://issues.apache.org/jira/browse/KAFKA-8804
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.4.0
>
>
> This covers 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8804) Internal Connect REST endpoints are insecure

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943198#comment-16943198
 ] 

ASF GitHub Bot commented on KAFKA-8804:
---

rhauch commented on pull request #7310: KAFKA-8804: Secure internal Connect 
REST endpoints
URL: https://github.com/apache/kafka/pull/7310
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Internal Connect REST endpoints are insecure
> 
>
> Key: KAFKA-8804
> URL: https://issues.apache.org/jira/browse/KAFKA-8804
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>
> This covers 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-507%3A+Securing+Internal+Connect+REST+Endpoints]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5609) Connect log4j should log to file by default

2019-10-02 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5609.
--
Fix Version/s: 2.4.0
 Reviewer: Randall Hauch
   Resolution: Fixed

This required a KIP (see 
[KIP-521|https://cwiki.apache.org/confluence/display/KAFKA/KIP-521%3A+Enable+redirection+of+Connect%27s+log4j+messages+to+a+file+by+default]),
 which passed and was merged into the 2.4.0 release.

> Connect log4j should log to file by default
> ---
>
> Key: KAFKA-5609
> URL: https://issues.apache.org/jira/browse/KAFKA-5609
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Assignee: Konstantine Karantasis
>Priority: Minor
>  Labels: easyfix
> Fix For: 2.4.0
>
>
> {{https://github.com/apache/kafka/blob/trunk/config/connect-log4j.properties}}
> Currently logs to stdout.  It should also log to a file by default, otherwise 
> it just writes to console and messages can be lost
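A sketch of what redirecting Connect's log4j output to a file could look like in connect-log4j.properties; the appender name, date pattern, and file path are illustrative, not the exact change that was merged:

{code}
log4j.rootLogger=INFO, stdout, connectAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

# Also write to a rolling file so messages survive beyond the console.
log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log
log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.connectAppender.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
{code}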



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-5609) Connect log4j should log to file by default

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943195#comment-16943195
 ] 

ASF GitHub Bot commented on KAFKA-5609:
---

rhauch commented on pull request #7430: KAFKA-5609: Connect log4j should also 
log to a file by default (KIP-521)
URL: https://github.com/apache/kafka/pull/7430
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Connect log4j should log to file by default
> ---
>
> Key: KAFKA-5609
> URL: https://issues.apache.org/jira/browse/KAFKA-5609
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Assignee: Kaufman Ng
>Priority: Minor
>  Labels: easyfix
>
> {{https://github.com/apache/kafka/blob/trunk/config/connect-log4j.properties}}
> Currently logs to stdout.  It should also log to a file by default, otherwise 
> it just writes to console and messages can be lost



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-5609) Connect log4j should log to file by default

2019-10-02 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reassigned KAFKA-5609:


Assignee: Konstantine Karantasis  (was: Kaufman Ng)

> Connect log4j should log to file by default
> ---
>
> Key: KAFKA-5609
> URL: https://issues.apache.org/jira/browse/KAFKA-5609
> Project: Kafka
>  Issue Type: Improvement
>  Components: config, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Yeva Byzek
>Assignee: Konstantine Karantasis
>Priority: Minor
>  Labels: easyfix
>
> {{https://github.com/apache/kafka/blob/trunk/config/connect-log4j.properties}}
> Currently logs to stdout.  It should also log to a file by default, otherwise 
> it just writes to console and messages can be lost



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7772) Dynamically adjust log level in Connect workers

2019-10-02 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7772.
--
  Reviewer: Randall Hauch
Resolution: Fixed

> Dynamically adjust log level in Connect workers
> ---
>
> Key: KAFKA-7772
> URL: https://issues.apache.org/jira/browse/KAFKA-7772
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
>  Labels: needs-kip
> Fix For: 2.4.0
>
>
> Currently, Kafka provides a JMX interface to dynamically modify log levels of 
> different active loggers. It would be good to have a similar interface for 
> Connect as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-7772) Dynamically adjust log level in Connect workers

2019-10-02 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-7772:
-
Fix Version/s: 2.4.0

> Dynamically adjust log level in Connect workers
> ---
>
> Key: KAFKA-7772
> URL: https://issues.apache.org/jira/browse/KAFKA-7772
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
>  Labels: needs-kip
> Fix For: 2.4.0
>
>
> Currently, Kafka provides a JMX interface to dynamically modify log levels of 
> different active loggers. It would be good to have a similar interface for 
> Connect as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7772) Dynamically adjust log level in Connect workers

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943190#comment-16943190
 ] 

ASF GitHub Bot commented on KAFKA-7772:
---

rhauch commented on pull request #7403: KAFKA-7772: Dynamically Adjust Log 
Levels in Connect
URL: https://github.com/apache/kafka/pull/7403
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Dynamically adjust log level in Connect workers
> ---
>
> Key: KAFKA-7772
> URL: https://issues.apache.org/jira/browse/KAFKA-7772
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
>  Labels: needs-kip
>
> Currently, Kafka provides a JMX interface to dynamically modify log levels of 
> different active loggers. It would be good to have a similar interface for 
> Connect as well. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8671) NullPointerException occurs if topic associated with GlobalKTable changes

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943181#comment-16943181
 ] 

ASF GitHub Bot commented on KAFKA-8671:
---

amleung21 commented on pull request #7437: KAFKA-8671: NullPointerException 
occurs if topic associated with GlobalKTable changes
URL: https://github.com/apache/kafka/pull/7437
 
 
   A NullPointerException occurs when the global/.checkpoint file contains a 
line with an obsolete (but valid) topic. Log an error and throw an exception 
when non-relevant topic-partitions from the checkpoint file are encountered.
   
   Added a unit test to verify that non-relevant topics are detected and an 
exception is thrown. Also, manually ran a streams application with a modified 
global/.checkpoint file containing an obsolete topic partition and verified 
that the error is logged and initialization fails.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NullPointerException occurs if topic associated with GlobalKTable changes
> -
>
> Key: KAFKA-8671
> URL: https://issues.apache.org/jira/browse/KAFKA-8671
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Alex Leung
>Assignee: Alex Leung
>Priority: Critical
>
> The following NullPointerException occurs when the global/.checkpoint file 
> contains a line with a topic previously associated with (but no longer 
> configured for) a GlobalKTable:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.update(GlobalStateUpdateTask.java:85)
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.pollAndUpdate(GlobalStreamThread.java:241)
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:290){code}
>  
> After line 84 
> ([https://github.com/apache/kafka/blob/2.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateUpdateTask.java#L84]),
>  `sourceNodeAndDeserializer` is null for the old, but still valid, topic. 
> This can be reproduced with the following sequence:
>  # create a GlobalKTable associated with topic, 'global-topic1'
>  # change the topic associated with the GlobalKTable to 'global-topic2' 
>  ##  at this point, the global/.checkpoint file will contain lines for both 
> topics
>  # produce messages to previous topic ('global-topic1')
>  # the consumer will attempt to consume from global-topic1, but no 
> deserializer associated with global-topic1 will be found and the NPE will 
> occur
> It looks like the following recent commit has included checkpoint validations 
> that may prevent this issue: 
> https://github.com/apache/kafka/commit/53b4ce5c00d61be87962f603682873665155cec4#diff-cc98a6c20f2a8483e1849aea6921c34dR425
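A minimal sketch of the topology change behind the reproduction steps above; topic names and serdes are placeholders:

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;

public class GlobalTableTopicChange {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // Step 1 (original deployment): the GlobalKTable reads 'global-topic1'.
        // final GlobalKTable<String, String> table = builder.globalTable(
        //     "global-topic1", Consumed.with(Serdes.String(), Serdes.String()));

        // Step 2 (redeployed): the source topic is switched to 'global-topic2'.
        // The existing global/.checkpoint file still holds an offset line for
        // 'global-topic1', and new records on that topic trigger the NPE above.
        final GlobalKTable<String, String> table = builder.globalTable(
            "global-topic2", Consumed.with(Serdes.String(), Serdes.String()));
    }
}
{code}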



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8943) Move SecurityProviderCreator to a public package

2019-10-02 Thread Sai Sandeep (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943053#comment-16943053
 ] 

Sai Sandeep commented on KAFKA-8943:


Hi [~rsivaram], shall I go ahead with the change?

> Move SecurityProviderCreator to a public package
> 
>
> Key: KAFKA-8943
> URL: https://issues.apache.org/jira/browse/KAFKA-8943
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.4.0
>
>
> The public interface `SecurityProviderCreator` added under KAFKA-8669 
> (KIP-492) is currently in the internal package 
> `org.apache.kafka.common.security` along with other internal classes. Since 
> this is a public interface, we should move it to a public package. We should 
> also add the `@InterfaceStability.Evolving` annotation.
>  
> Marked as blocker for 2.4.0 since we should do this before the release.
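For illustration, a rough sketch of what the relocated interface could look like; the target package and exact shape are assumptions based on KIP-492's description, not the final code:

{code:java}
package org.apache.kafka.common.security.auth; // assumed public target package

import java.security.Provider;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.annotation.InterfaceStability;

// Sketch only: a pluggable factory for a JCA security Provider, configured
// through the standard Configurable hook.
@InterfaceStability.Evolving
public interface SecurityProviderCreator extends Configurable {

    /** Return the security Provider to be registered with the JVM. */
    Provider getProvider();
}
{code}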



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-10-02 Thread Suyash Garg (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943050#comment-16943050
 ] 

Suyash Garg commented on KAFKA-8649:


Awesome :) Thank you everyone for following up on this so quickly. 

> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running a Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0) fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8835) Update documentation for URP changes in KIP-352

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943048#comment-16943048
 ] 

ASF GitHub Bot commented on KAFKA-8835:
---

viktorsomogyi commented on pull request #7434: KAFKA-8835: KIP-352 docs update
URL: https://github.com/apache/kafka/pull/7434
 
 
   Doc updates for KIP-352
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update documentation for URP changes in KIP-352
> ---
>
> Key: KAFKA-8835
> URL: https://issues.apache.org/jira/browse/KAFKA-8835
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> This Jira covers any doc changes needed for the changes to URP semantics in 
> KIP-352: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-352%3A+Distinguish+URPs+caused+by+reassignment].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8671) NullPointerException occurs if topic associated with GlobalKTable changes

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943007#comment-16943007
 ] 

ASF GitHub Bot commented on KAFKA-8671:
---

amleung21 commented on pull request #7188: KAFKA-8671: NullPointerException 
occurs if topic associated with GlobalKTable changes
URL: https://github.com/apache/kafka/pull/7188
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> NullPointerException occurs if topic associated with GlobalKTable changes
> -
>
> Key: KAFKA-8671
> URL: https://issues.apache.org/jira/browse/KAFKA-8671
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Alex Leung
>Assignee: Alex Leung
>Priority: Critical
>
> The following NullPointerException occurs when the global/.checkpoint file 
> contains a line with a topic previously associated with (but no longer 
> configured for) a GlobalKTable:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.update(GlobalStateUpdateTask.java:85)
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.pollAndUpdate(GlobalStreamThread.java:241)
> at 
> org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:290){code}
>  
> After line 84 
> ([https://github.com/apache/kafka/blob/2.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/GlobalStateUpdateTask.java#L84]),
>  `sourceNodeAndDeserializer` is null for the old, but still valid, topic. 
> This can be reproduced with the following sequence:
>  # create a GlobalKTable associated with topic, 'global-topic1'
>  # change the topic associated with the GlobalKTable to 'global-topic2' 
>  ##  at this point, the global/.checkpoint file will contain lines for both 
> topics
>  # produce messages to previous topic ('global-topic1')
>  # the consumer will attempt to consume from global-topic1, but no 
> deserializer associated with global-topic1 will be found and the NPE will 
> occur
> It looks like the following recent commit has included checkpoint validations 
> that may prevent this issue: 
> https://github.com/apache/kafka/commit/53b4ce5c00d61be87962f603682873665155cec4#diff-cc98a6c20f2a8483e1849aea6921c34dR425



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-10-02 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942927#comment-16942927
 ] 

Guozhang Wang commented on KAFKA-8649:
--

The corresponding fix has been merged to trunk; [~ableegoldman], please go ahead 
and resolve this ticket once the cherry-picking to older branches is complete.

[~ferbncode] The upcoming bug-fix releases (2.1.2 and later) should have this fix.

> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running a Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0) fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8649) Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942926#comment-16942926
 ] 

ASF GitHub Bot commented on KAFKA-8649:
---

guozhangwang commented on pull request #7423: KAFKA-8649: send latest commonly 
supported version in assignment
URL: https://github.com/apache/kafka/pull/7423
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Error while rolling update from Kafka Streams 2.0.0 -> Kafka Streams 2.1.0
> --
>
> Key: KAFKA-8649
> URL: https://issues.apache.org/jira/browse/KAFKA-8649
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Suyash Garg
>Assignee: Sophie Blee-Goldman
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> While doing a rolling update of a cluster of nodes running a Kafka Streams 
> application, the stream threads in the nodes running the old version of the 
> library (2.0.0) fail with the following error: 
> {code:java}
> [ERROR] [application-existing-StreamThread-336] 
> [o.a.k.s.p.internals.StreamThread] - stream-thread 
> [application-existing-StreamThread-336] Encountered the following error 
> during processing:
> java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
> #011at 
> org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.(SubscriptionInfo.java:67)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
> #011at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
> #011at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
> #011at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
> #011at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942923#comment-16942923
 ] 

Kristian Aurlien commented on KAFKA-8965:
-

OK, great! 

Thanks for the quick feedback. Will look for another ticket :)

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.
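As a practical note, the metric only shows up if the recording level is raised accordingly; a sketch of the relevant Streams setting (other properties are placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class MetricsLevelConfig {
    public static Properties props() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // record-lateness-[avg|max] is recorded at DEBUG level, so it is only
        // collected when the recording level is set to DEBUG.
        props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");
        return props;
    }
}
{code}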



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942920#comment-16942920
 ] 

Bruno Cadonna commented on KAFKA-8965:
--

I have already issued a ticket with Confluent.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kristian Aurlien updated KAFKA-8965:

Comment: was deleted

(was: Only related to [Confluent documentation|https://docs.confluent.io], and 
therefore not a Kafka issue.)

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942915#comment-16942915
 ] 

Kristian Aurlien edited comment on KAFKA-8965 at 10/2/19 3:43 PM:
--

Only related to [Confluent documentation|https://docs.confluent.io], and 
therefore not a Kafka issue.


was (Author: aurlien):
Only related to [docs.confluent.io|docs.confluent.io], and therefore not a 
Kafka issue.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942916#comment-16942916
 ] 

Kristian Aurlien commented on KAFKA-8965:
-

Ok, thanks! 

FYI: I will report the issue to confluent as well. 

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kristian Aurlien resolved KAFKA-8965.
-
  Reviewer: Bruno Cadonna
Resolution: Invalid

Only related to [docs.confluent.io|docs.confluent.io], and therefore not a 
Kafka issue.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942914#comment-16942914
 ] 

Bruno Cadonna edited comment on KAFKA-8965 at 10/2/19 3:37 PM:
---

Oh, I see. Then it is not an Apache Kafka issue and, as you said, out of scope. 
Could you close the ticket as invalid? I hope you will find another ticket to 
work on. Look for the {{newbie}} label.


was (Author: cadonna):
Oh, I see. Then it is not an Apache Kafka issue and as you said out of scope. 
Could you close the ticket as invalid. I hope you will find another ticket to 
work on. Look after the {{newbie}} label.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942914#comment-16942914
 ] 

Bruno Cadonna commented on KAFKA-8965:
--

Oh, I see. Then it is not an Apache Kafka issue and as you said out of scope. 
Could you close the ticket as invalid. I hope you will find another ticket to 
work on. Look after the {{newbie}} label.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942907#comment-16942907
 ] 

Kristian Aurlien commented on KAFKA-8965:
-

The documentation seems to be correct on 
[kafka.apache.org|https://kafka.apache.org/documentation/#kafka_streams_task_monitoring]
 (debug), but wrong on 
[docs.confluent.io|https://docs.confluent.io/current/streams/monitoring.html#task-metrics]
 (info).


If this is the case, is this ticket out of scope? I can't find the Confluent 
docs available as open source anywhere.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Assignee: Kristian Aurlien
>Priority: Major
>
> The document says the metrics are at INFO level but they are actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8901) Extend consumer group command to use the new Admin API to delete consumer offsets

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942896#comment-16942896
 ] 

ASF GitHub Bot commented on KAFKA-8901:
---

gwenshap commented on pull request #7362: KAFKA-8901; Extend consumer group 
command to use the new Admin API to delete consumer offsets (KIP-496)
URL: https://github.com/apache/kafka/pull/7362
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Extend consumer group command to use the new Admin API to delete consumer 
> offsets
> -
>
> Key: KAFKA-8901
> URL: https://issues.apache.org/jira/browse/KAFKA-8901
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> KIP: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-496%3A+Administrative+API+to+delete+consumer+offsets].
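For context, a sketch of the Admin API call (per KIP-496) that the extended consumer group command would drive; group, topic, and bootstrap values are placeholders:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartition;

public class DeleteOffsetsExample {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Delete the committed offset of my-topic partition 0 for group 'my-group'.
            admin.deleteConsumerGroupOffsets(
                    "my-group",
                    Collections.singleton(new TopicPartition("my-topic", 0)))
                .all()
                .get();
        }
    }
}
{code}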



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8260) Flaky test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-10-02 Thread Randall Hauch (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942894#comment-16942894
 ] 

Randall Hauch commented on KAFKA-8260:
--

Saw this test failing locally on the `2.2` branch:
{code}
kafka.api.ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup FAILED
java.lang.AssertionError: Received 0, expected at least 68
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557)
at 
kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
at 
kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at 
kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319)
{code}

> Flaky test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-8260
> URL: https://issues.apache.org/jira/browse/KAFKA-8260
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.2.0, 2.3.0
>Reporter: John Roesler
>Priority: Major
>
> I have seen this fail again just now. See also KAFKA-7965 and KAFKA-7936.
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3874/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/
> {noformat}
> Error Message
> org.scalatest.junit.JUnitTestFailedError: Should have received an class 
> org.apache.kafka.common.errors.GroupMaxSizeReachedException during the 
> cluster roll
> Stacktrace
> org.scalatest.junit.JUnitTestFailedError: Should have received an class 
> org.apache.kafka.common.errors.GroupMaxSizeReachedException during the 
> cluster roll
>   at 
> org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException(AssertionsForJUnit.scala:100)
>   at 
> org.scalatest.junit.AssertionsForJUnit.newAssertionFailedException$(AssertionsForJUnit.scala:99)
>   at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
>   at org.scalatest.Assertions.fail(Assertions.scala:1089)
>   at org.scalatest.Assertions.fail$(Assertions.scala:1085)
>   at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:71)
>   at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:350)
> {noformat}
> {noformat}
> Standard Output
> [2019-04-18 18:26:47,487] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 18:26:47,676] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 18:26:47,677] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 18:26:47,698] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 18:26:48,023] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition closetest-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 18:26:48,023] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition closetest-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-18 

[jira] [Commented] (KAFKA-8264) Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition

2019-10-02 Thread Gwen Shapira (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942891#comment-16942891
 ] 

Gwen Shapira commented on KAFKA-8264:
-

https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25413/

> Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition
> --
>
> Key: KAFKA-8264
> URL: https://issues.apache.org/jira/browse/KAFKA-8264
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/252/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 'topic3' 
> already exists.{quote}
> STDOUT
>  
> {quote}[2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,312] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,313] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,994] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:21,727] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,696] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,699] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,246] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,247] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,287] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:33,408] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 

[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-10-02 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942881#comment-16942881
 ] 

Rabi Kumar K C commented on KAFKA-8953:
---

[~mjsax] I have created the KIP 
([https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=130028807]). 
Please do let me know if I missed something.

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and we also pass partition 
> time into the `#extract(...)` method, instead of some non-well-defined 
> "previous timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8264) Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition

2019-10-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942819#comment-16942819
 ] 

ASF GitHub Bot commented on KAFKA-8264:
---

stanislavkozlovski commented on pull request #7433: KAFKA-8264: Log more 
information on ConsumerTest consume timeout
URL: https://github.com/apache/kafka/pull/7433
 
 
   This patch adds additional logging on timeout of the 
AbstractConsumerTest#consumeRecords method. It is difficult to pinpoint the 
exact cause of timeouts in this method because the flaky failures are hard to 
reproduce. Logging more information will help us diagnose whether the 
consumer was still polling for records or whether it was stuck rebalancing.
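
As a rough illustration of the idea (written in Java rather than the Scala test code, with hypothetical names), such a consume helper could track how many records and how many empty polls it saw before the deadline and include that in the failure message:

{code:java}
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public final class ConsumeWithDiagnostics {

    // Hypothetical helper: poll until numRecords are received or the deadline passes,
    // and report progress on timeout so flaky failures are easier to diagnose.
    public static <K, V> List<ConsumerRecord<K, V>> consumeRecords(final Consumer<K, V> consumer,
                                                                   final int numRecords,
                                                                   final long timeoutMs) {
        final List<ConsumerRecord<K, V>> received = new ArrayList<>();
        final long deadline = System.currentTimeMillis() + timeoutMs;
        int emptyPolls = 0;
        while (received.size() < numRecords && System.currentTimeMillis() < deadline) {
            final ConsumerRecords<K, V> batch = consumer.poll(Duration.ofMillis(100));
            if (batch.isEmpty()) {
                emptyPolls++;
            }
            batch.forEach(received::add);
        }
        if (received.size() < numRecords) {
            throw new AssertionError("Timed out after " + timeoutMs + " ms: received "
                + received.size() + "/" + numRecords + " records (" + emptyPolls
                + " empty polls); the consumer may have been stuck rebalancing.");
        }
        return received;
    }
}
{code}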
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition
> --
>
> Key: KAFKA-8264
> URL: https://issues.apache.org/jira/browse/KAFKA-8264
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/252/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 'topic3' 
> already exists.{quote}
> STDOUT
>  
> {quote}[2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,080] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,312] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,313] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:20,994] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:21,727] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,696] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:28,699] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,246] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition topic-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,247] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-19 03:54:29,287] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error 

[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942795#comment-16942795
 ] 

Bruno Cadonna commented on KAFKA-8965:
--

[~aurlien] Thank you for your interest. Please go ahead.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Priority: Major
>
> The documentation says the metric is at INFO level, but it is actually at 
> DEBUG level in the code.
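
As a practical note, a minimal sketch (with a hypothetical application id and bootstrap server) of how a DEBUG-level metric such as record-lateness-[avg|max] is surfaced: it only shows up once the metrics recording level is raised from the default.

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public final class DebugMetricsConfig {

    public static Properties streamsProps() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lateness-metrics-demo"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // hypothetical broker
        // record-lateness-[avg|max] is recorded at DEBUG level in the code, so it only
        // appears when metrics.recording.level is raised from the default (INFO) to DEBUG.
        props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");
        return props;
    }
}
{code}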



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8379) Flaky test KafkaAdminClientTest.testUnreachableBootstrapServer

2019-10-02 Thread Randall Hauch (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942789#comment-16942789
 ] 

Randall Hauch commented on KAFKA-8379:
--

[~rsivaram] and [~ijuma], this is happening frequently when trying to build on 
the `2.2` branch:
{code}
org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testUnreachableBootstrapServer FAILED
org.junit.runners.model.TestTimedOutException: test timed out after 12 
milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at 
org.apache.kafka.clients.admin.KafkaAdminClientTest.testUnreachableBootstrapServer(KafkaAdminClientTest.java:276)
{code}

Can this be backported to the `2.2` branch?

> Flaky test KafkaAdminClientTest.testUnreachableBootstrapServer
> --
>
> Key: KAFKA-8379
> URL: https://issues.apache.org/jira/browse/KAFKA-8379
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0
>
>
> Test failed with:
> {code:java}
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testUnreachableBootstrapServer(KafkaAdminClientTest.java:303)
> {code}
> Standard output shows:
> {code}
> [2019-05-17 06:38:01,854] ERROR Uncaught exception in thread 
> 'kafka-admin-client-thread | adminclient-35': 
> (org.apache.kafka.common.utils.KafkaThread:51)
> java.lang.IllegalStateException: Cannot send 
> ClientRequest(expectResponse=true, callback=null, destination=-1, 
> correlationId=0, clientId=mockClientId, createdTimeMs=1558075081853, 
> requestBuilder=MetadataRequestData(topics=[], allowAutoTopicCreation=true, 
> includeClusterAuthorizedOperations=false, 
> includeTopicAuthorizedOperations=false)) since the destination is not ready
>   at org.apache.kafka.clients.MockClient.send(MockClient.java:186)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:943)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1140)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-10-02 Thread Rabi Kumar K C (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rabi Kumar K C updated KAFKA-8953:
--
Comment: was deleted

(was: Hi [~mjsax], I wrote emails to 
[d...@kafka.apache.org|mailto:d...@kafka.apache.org] but didn't get any 
response regarding permission. Would you be able to help in this scenario? 
Confluence id: rabikumar.kc)

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and we also pass partition 
> time into the `#extract(...)` method, instead of some non-well-defined 
> "previous timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8953) Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor

2019-10-02 Thread Rabi Kumar K C (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942777#comment-16942777
 ] 

Rabi Kumar K C commented on KAFKA-8953:
---

Hi [~mjsax], I wrote emails to 
[d...@kafka.apache.org|mailto:d...@kafka.apache.org] but didn't get any 
response regarding permission. Would you be able to help in this scenario? 
Confluence id: rabikumar.kc

> Consider renaming `UsePreviousTimeOnInvalidTimestamp` timestamp extractor
> -
>
> Key: KAFKA-8953
> URL: https://issues.apache.org/jira/browse/KAFKA-8953
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Rabi Kumar K C
>Priority: Trivial
>  Labels: beginner, needs-kip, newbie
>
> Kafka Streams ships a couple of different timestamp extractors, one named 
> `UsePreviousTimeOnInvalidTimestamp`.
> Given the latest improvements with regard to time tracking, it seems 
> appropriate to rename this class to `UsePartitionTimeOnInvalidTimestamp`, as 
> we now have a fixed definition of partition time, and we also pass partition 
> time into the `#extract(...)` method, instead of some non-well-defined 
> "previous timestamp".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Kristian Aurlien (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942756#comment-16942756
 ] 

Kristian Aurlien commented on KAFKA-8965:
-

This seems like a good ticket for a newbie; can I assign myself to this task?

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Priority: Major
>
> The documentation says the metric is at INFO level, but it is actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8968) Refactor Task-level Metrics

2019-10-02 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-8968:


 Summary: Refactor Task-level Metrics
 Key: KAFKA-8968
 URL: https://issues.apache.org/jira/browse/KAFKA-8968
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


Refactor task-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8968) Refactor Task-level Metrics

2019-10-02 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna reassigned KAFKA-8968:


Assignee: Bruno Cadonna

> Refactor Task-level Metrics
> ---
>
> Key: KAFKA-8968
> URL: https://issues.apache.org/jira/browse/KAFKA-8968
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> Refactor task-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8958) Fix Kafka Streams JavaDocs with regard to used Serdes

2019-10-02 Thread bibin sebastian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942600#comment-16942600
 ] 

bibin sebastian commented on KAFKA-8958:


[~mjsax] I will work on this.

> Fix Kafka Streams JavaDocs with regard to used Serdes
> -
>
> Key: KAFKA-8958
> URL: https://issues.apache.org/jira/browse/KAFKA-8958
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: bibin sebastian
>Priority: Minor
>  Labels: beginner, newbie
>
> In older releases, Kafka Streams applied operator-specific overwrites of 
> Serdes as in-place overwrites. In newer releases, Kafka Streams tries to 
> re-use Serdes more "aggressively" by pushing serde information downstream if 
> the key and/or value did not change.
> However, we never updated the JavaDocs accordingly. For example, the 
> `KStream#through(String topic)` JavaDocs say:
> {code:java}
> Materialize this stream to a topic and creates a new {@code KStream} from the 
> topic using default serializers, deserializers, and producer's {@link 
> DefaultPartitioner}.
> {code}
> The JavaDocs don't take into account that Serdes might have been set further 
> upstream, in which case the defaults from the config would not be used.
> `KStream#through()` is just one example. We should address this across all 
> JavaDocs for all operators (i.e., KStream, KGroupedStream, 
> TimeWindowedKStream, SessionWindowedKStream, KTable, and KGroupedTable).
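
For illustration, a minimal sketch (with hypothetical topic names) of the workaround readers often reach for: being explicit about serdes on each operator sidesteps the ambiguity the current JavaDocs create about when the configured defaults actually apply.

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public final class ExplicitSerdesExample {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // Serdes set on the source may be pushed downstream by newer releases, so
        // later operators do not necessarily fall back to the configured defaults.
        final KStream<String, Long> stream =
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long())); // hypothetical topic

        // Passing serdes explicitly removes any doubt about which serializers the
        // sink uses, independent of what the JavaDocs say about defaults.
        stream.to("output-topic", Produced.with(Serdes.String(), Serdes.Long())); // hypothetical topic
    }
}
{code}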



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8958) Fix Kafka Streams JavaDocs with regard to used Serdes

2019-10-02 Thread bibin sebastian (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bibin sebastian reassigned KAFKA-8958:
--

Assignee: bibin sebastian

> Fix Kafka Streams JavaDocs with regard to used Serdes
> -
>
> Key: KAFKA-8958
> URL: https://issues.apache.org/jira/browse/KAFKA-8958
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: bibin sebastian
>Priority: Minor
>  Labels: beginner, newbie
>
> In older releases, Kafka Streams applied operator-specific overwrites of 
> Serdes as in-place overwrites. In newer releases, Kafka Streams tries to 
> re-use Serdes more "aggressively" by pushing serde information downstream if 
> the key and/or value did not change.
> However, we never updated the JavaDocs accordingly. For example, the 
> `KStream#through(String topic)` JavaDocs say:
> {code:java}
> Materialize this stream to a topic and creates a new {@code KStream} from the 
> topic using default serializers, deserializers, and producer's {@link 
> DefaultPartitioner}.
> {code}
> The JavaDocs don't take into account that Serdes might have been set further 
> upstream, in which case the defaults from the config would not be used.
> `KStream#through()` is just one example. We should address this across all 
> JavaDocs for all operators (i.e., KStream, KGroupedStream, 
> TimeWindowedKStream, SessionWindowedKStream, KTable, and KGroupedTable).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8887) Use purgatory for CreateAcls and DeleteAcls if implementation is async

2019-10-02 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8887.
---
  Reviewer: Manikumar
Resolution: Fixed

> Use purgatory for CreateAcls and DeleteAcls if implementation is async
> --
>
> Key: KAFKA-8887
> URL: https://issues.apache.org/jira/browse/KAFKA-8887
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> KAFKA-8886 is updating the Authorizer.createAcls and Authorizer.deleteAcls APIs 
> to be asynchronous, to avoid blocking request threads during ACL updates when 
> implementations use external stores like databases where updates may block 
> for a long time. This Jira is to handle those async updates using a purgatory 
> in KafkaApis.
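
As a sketch of the general pattern only (the types below are hypothetical and are not the broker's Authorizer interface or the KafkaApis purgatory), the request thread registers a completion callback on the asynchronous store call instead of blocking on it:

{code:java}
import java.util.concurrent.CompletionStage;

// Purely illustrative types; the real broker wires the asynchronous Authorizer
// result into a request purgatory inside KafkaApis rather than using these classes.
public final class AsyncAclUpdateSketch {

    interface AclStore {
        CompletionStage<Void> createAcl(String aclBinding); // hypothetical async store call
    }

    // The request thread only schedules the update and registers a callback; it
    // never blocks while a slow external store (e.g. a database) applies the ACL.
    static void handleCreateAcls(final AclStore store, final String aclBinding,
                                 final Runnable sendResponse) {
        store.createAcl(aclBinding)
             .whenComplete((ignored, error) -> {
                 // In the broker this would complete a delayed operation in the purgatory;
                 // here we simply send the response once the update has finished.
                 sendResponse.run();
             });
    }
}
{code}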



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942550#comment-16942550
 ] 

Bruno Cadonna commented on KAFKA-8965:
--

In KIP-444, {{record-lateness}} is at DEBUG level and it is not marked as 
changed. Thus, I guess the docs are wrong.

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Priority: Major
>
> The documentation says the metric is at INFO level, but it is actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942547#comment-16942547
 ] 

Matthias J. Sax commented on KAFKA-8965:


\cc [~vvcephei] [~cadonna] – is the code wrong or the docs?

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Priority: Major
>
> The documentation says the metric is at INFO level, but it is actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8965) the recording level of record-lateness-[avg|max] is wrong

2019-10-02 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8965:
---
Component/s: streams

> the recording level of record-lateness-[avg|max] is wrong
> -
>
> Key: KAFKA-8965
> URL: https://issues.apache.org/jira/browse/KAFKA-8965
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Junze Bao
>Priority: Major
>
> The documentation says the metric is at INFO level, but it is actually at 
> DEBUG level in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8104) Consumer cannot rejoin to the group after rebalancing

2019-10-02 Thread Nikita Koryabkin (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942526#comment-16942526
 ] 

Nikita Koryabkin commented on KAFKA-8104:
-

[~kgn] , [~nizhikov]

*Version 1.1.0 is also affected*, thanks.

 

[2019-10-01 17:40:33,995] INFO [GroupCoordinator 1001]: Member 
-2af431fd-60e4-4dd7-a4fd-8dd85d4a5620 in group main has failed, 
removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2019-10-01 17:40:33,995] INFO [GroupCoordinator 1001]: Preparing to rebalance 
group main with old generation 15 (__consumer_offsets-1) 
(kafka.coordinator.group.GroupCoordinator)
[2019-10-01 17:40:33,995] INFO [GroupCoordinator 1001]: Group main with 
generation 16 is now empty (__consumer_offsets-1) 
(kafka.coordinator.group.GroupCoordinator)

> Consumer cannot rejoin to the group after rebalancing
> -
>
> Key: KAFKA-8104
> URL: https://issues.apache.org/jira/browse/KAFKA-8104
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Gregory Koshelev
>Assignee: Nikolay Izhikov
>Priority: Critical
> Attachments: consumer-rejoin-fail.log
>
>
> TL;DR: {{KafkaConsumer}} cannot rejoin the group due to an inconsistent 
> {{AbstractCoordinator.generation}} (which is {{NO_GENERATION}}) and 
> {{AbstractCoordinator.joinFuture}} (which is a succeeded {{RequestFuture}}). 
> See explanation below.
> There are 16 consumers in a single process (threads pool-4-thread-1 to 
> pool-4-thread-16). All of them belong to the single consumer group 
> {{hercules.sink.elastic.legacy_logs_elk_c2}}. Rebalancing has been triggered, 
> and the consumers got {{CommitFailedException}} as expected:
> {noformat}
> 2019-03-10T03:16:37.023Z [pool-4-thread-10] WARN  
> r.k.vostok.hercules.sink.SimpleSink - Commit failed due to rebalancing
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed since the group has already rebalanced and assigned the partitions 
> to another member. This means that the time between subsequent calls to 
> poll() was longer than the configured max.poll.interval.ms, which typically 
> implies that the poll loop is spending too much time message processing. You 
> can address this either by increasing the session timeout or by reducing the 
> maximum size of batches returned in poll() with max.poll.records.
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:798)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:681)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1334)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1298)
>   at ru.kontur.vostok.hercules.sink.Sink.commit(Sink.java:156)
>   at ru.kontur.vostok.hercules.sink.SimpleSink.run(SimpleSink.java:104)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> After that, most of them successfully rejoined the group with generation 
> 10699:
> {noformat}
> 2019-03-10T03:16:39.208Z [pool-4-thread-13] INFO  
> o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-13, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Successfully joined group 
> with generation 10699
> 2019-03-10T03:16:39.209Z [pool-4-thread-13] INFO  
> o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-13, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Setting newly assigned 
> partitions [legacy_logs_elk_c2-18]
> ...
> 2019-03-10T03:16:39.216Z [pool-4-thread-11] INFO  
> o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-11, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Successfully joined group 
> with generation 10699
> 2019-03-10T03:16:39.217Z [pool-4-thread-11] INFO  
> o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-11, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Setting newly assigned 
> partitions [legacy_logs_elk_c2-10, legacy_logs_elk_c2-11]
> ...
> 2019-03-10T03:16:39.218Z [pool-4-thread-15] INFO  
> o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-15, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Setting newly assigned 
> partitions [legacy_logs_elk_c2-24]
> 2019-03-10T03:16:42.320Z [kafka-coordinator-heartbeat-thread | 
> hercules.sink.elastic.legacy_logs_elk_c2] INFO  
> o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-6, 
> groupId=hercules.sink.elastic.legacy_logs_elk_c2] Attempt to heartbeat failed