[jira] [Resolved] (KAFKA-15126) Change range queries to accept null lower and upper bounds

2023-08-08 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-15126.
-
Resolution: Fixed

[Merged to trunk|https://github.com/apache/kafka/pull/14137]

> Change range queries to accept null lower and upper bounds
> --
>
> Key: KAFKA-15126
> URL: https://issues.apache.org/jira/browse/KAFKA-15126
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Lucia Cerchie
>Assignee: Lucia Cerchie
>Priority: Minor
> Fix For: 3.6.0
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> When web client requests come in with query params, it's common for those 
> params to be null. We want developers to be able to simply pass in the 
> upper/lower bounds they have, instead of implementing their own logic to 
> avoid getting the whole range (which will happen if they leave the params 
> null).
> An example of the logic they can avoid once this KIP is implemented is below:
> {code:java}
> private RangeQuery<String, ValueAndTimestamp<String>> 
> createRangeQuery(String lower, String upper) {
>     if (isBlank(lower) && isBlank(upper)) {
>         return RangeQuery.withNoBounds();
>     } else if (!isBlank(lower) && isBlank(upper)) {
>         return RangeQuery.withLowerBound(lower);
>     } else if (isBlank(lower) && !isBlank(upper)) {
>         return RangeQuery.withUpperBound(upper);
>     } else {
>         return RangeQuery.withRange(lower, upper);
>     }
> } {code}
>  
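> For illustration, a sketch of what the call site could collapse to once null 
> bounds are accepted (assuming {{withRange}} treats a null bound as unbounded 
> on that side, as this ticket proposes):
> {code:java}
> private RangeQuery<String, ValueAndTimestamp<String>> 
> createRangeQuery(String lower, String upper) {
>     // a null lower and/or upper bound now means "unbounded" on that side
>     return RangeQuery.withRange(lower, upper);
> }
> {code}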



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15126) Change range queries to accept null lower and upper bounds

2023-08-08 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-15126:

Fix Version/s: 3.6.0

> Change range queries to accept null lower and upper bounds
> --
>
> Key: KAFKA-15126
> URL: https://issues.apache.org/jira/browse/KAFKA-15126
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Lucia Cerchie
>Assignee: Lucia Cerchie
>Priority: Minor
> Fix For: 3.6.0
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> When web client requests come in with query params, it's common for those 
> params to be null. We want developers to be able to simply pass in the 
> upper/lower bounds they have, instead of implementing their own logic to 
> avoid getting the whole range (which will happen if they leave the params 
> null).
> An example of the logic they can avoid once this KIP is implemented is below:
> {code:java}
> private RangeQuery<String, ValueAndTimestamp<String>> 
> createRangeQuery(String lower, String upper) {
>     if (isBlank(lower) && isBlank(upper)) {
>         return RangeQuery.withNoBounds();
>     } else if (!isBlank(lower) && isBlank(upper)) {
>         return RangeQuery.withLowerBound(lower);
>     } else if (isBlank(lower) && !isBlank(upper)) {
>         return RangeQuery.withUpperBound(upper);
>     } else {
>         return RangeQuery.withRange(lower, upper);
>     }
> } {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14539) Simplify StreamsMetadataState by replacing the Cluster metadata with partition info map

2023-06-07 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-14539.
-
Resolution: Fixed

> Simplify StreamsMetadataState by replacing the Cluster metadata with 
> partition info map
> ---
>
> Key: KAFKA-14539
> URL: https://issues.apache.org/jira/browse/KAFKA-14539
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Danica Fine
>Priority: Major
>
> We can clean up the StreamsMetadataState class a bit by removing the 
> #onChange invocation that currently occurs within 
> StreamsPartitionAssignor#assign, which then lets us remove the `Cluster` 
> parameter in that callback. Instead of building a fake Cluster object from 
> the map of partition info when we invoke #onChange inside the 
> StreamsPartitionAssignor#onAssignment method, we can just directly pass in 
> the `Map<TopicPartition, PartitionInfo>` and replace the usage of `Cluster` 
> everywhere in StreamsMetadataState.
> (I believe the current system is a historical artifact from when we used to 
> require passing in a {{Cluster}} for the default partitioning strategy, which 
> the StreamsMetadataState needs to compute the partition for a key. At some 
> point in the past we provided a better way to get the default partition, so 
> we no longer need a {{Cluster}} parameter/field at all.)
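> A small sketch (helper and method names assumed, not from the ticket) of how 
> a per-topic lookup could work against the map directly, with no fake Cluster:
> {code:java}
> import java.util.Map;
> import org.apache.kafka.common.PartitionInfo;
> import org.apache.kafka.common.TopicPartition;
> 
> // Hypothetical helper: with the partition-info map passed in directly,
> // per-topic lookups no longer require building a Cluster object.
> final class PartitionInfoLookup {
>     static int numPartitionsForTopic(final Map<TopicPartition, PartitionInfo> partitionInfo,
>                                      final String topic) {
>         return (int) partitionInfo.keySet().stream()
>                 .filter(tp -> tp.topic().equals(topic))
>                 .count();
>     }
> }
> {code}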



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14539) Simplify StreamsMetadataState by replacing the Cluster metadata with partition info map

2023-06-07 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-14539:

Fix Version/s: 3.6.0

> Simplify StreamsMetadataState by replacing the Cluster metadata with 
> partition info map
> ---
>
> Key: KAFKA-14539
> URL: https://issues.apache.org/jira/browse/KAFKA-14539
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Danica Fine
>Priority: Major
> Fix For: 3.6.0
>
>
> We can clean up the StreamsMetadataState class a bit by removing the 
> #onChange invocation that currently occurs within 
> StreamsPartitionAssignor#assign, which then lets us remove the `Cluster` 
> parameter in that callback. Instead of building a fake Cluster object from 
> the map of partition info when we invoke #onChange inside the 
> StreamsPartitionAssignor#onAssignment method, we can just directly pass in 
> the `Map<TopicPartition, PartitionInfo>` and replace the usage of `Cluster` 
> everywhere in StreamsMetadataState.
> (I believe the current system is a historical artifact from when we used to 
> require passing in a {{Cluster}} for the default partitioning strategy, which 
> the StreamsMetadataState needs to compute the partition for a key. At some 
> point in the past we provided a better way to get the default partition, so 
> we no longer need a {{Cluster}} parameter/field at all.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14609) Kafka Streams Processor API cannot use state stores

2023-01-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656218#comment-17656218
 ] 

Bill Bejeck commented on KAFKA-14609:
-

It just received approval on the dev list, so I'd say within a week.

> Kafka Streams Processor API cannot use state stores
> ---
>
> Key: KAFKA-14609
> URL: https://issues.apache.org/jira/browse/KAFKA-14609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.3.0
>Reporter: Philipp Schirmer
>Priority: Major
>
> The recently introduced Kafka Streams Processor API (since 3.3, 
> https://issues.apache.org/jira/browse/KAFKA-13654) likely has a bug with 
> regard to using state stores. The 
> [getStateStore|https://javadoc.io/static/org.apache.kafka/kafka-streams/3.3.1/org/apache/kafka/streams/processor/api/ProcessingContext.html#getStateStore-java.lang.String-]
>  method returns null, even though the store has been registered according to 
> the docs. The old transformer API still works. I created a small project that 
> demonstrates the behavior, registering a store both for the transformer and 
> via the Processor API: 
> https://github.com/bakdata/kafka-streams-state-store-demo/blob/main/src/test/java/com/bakdata/kafka/StreamsStateStoreTest.java
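> For reference, a minimal sketch (class and store names assumed) of the 
> KIP-820 pattern described above, where the store is registered via 
> {{ConnectedStoreProvider#stores()}} and then looked up in {{init}}:
> {code:java}
> import java.util.Collections;
> import java.util.Set;
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.processor.api.Processor;
> import org.apache.kafka.streams.processor.api.ProcessorContext;
> import org.apache.kafka.streams.processor.api.ProcessorSupplier;
> import org.apache.kafka.streams.processor.api.Record;
> import org.apache.kafka.streams.state.KeyValueStore;
> import org.apache.kafka.streams.state.StoreBuilder;
> import org.apache.kafka.streams.state.Stores;
> 
> public class CountingSupplier implements ProcessorSupplier<String, String, String, Long> {
> 
>     @Override
>     public Set<StoreBuilder<?>> stores() {
>         return Collections.singleton(
>                 Stores.keyValueStoreBuilder(
>                         Stores.inMemoryKeyValueStore("count-store"),
>                         Serdes.String(), Serdes.Long()));
>     }
> 
>     @Override
>     public Processor<String, String, String, Long> get() {
>         return new Processor<String, String, String, Long>() {
>             private KeyValueStore<String, Long> store;
> 
>             @Override
>             public void init(final ProcessorContext<String, Long> context) {
>                 // On the affected versions this returned null even though
>                 // stores() registered the store with the processor.
>                 store = context.getStore("count-store");
>             }
> 
>             @Override
>             public void process(final Record<String, String> record) {
>                 final Long current = store.get(record.key());
>                 store.put(record.key(), current == null ? 1L : current + 1L);
>             }
>         };
>     }
> }
> {code}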



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14609) Kafka Streams Processor API cannot use state stores

2023-01-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-14609.
-
Resolution: Fixed

Fixed by https://issues.apache.org/jira/browse/KAFKA-14388

> Kafka Streams Processor API cannot use state stores
> ---
>
> Key: KAFKA-14609
> URL: https://issues.apache.org/jira/browse/KAFKA-14609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.3.0
>Reporter: Philipp Schirmer
>Priority: Major
>
> The recently introduced Kafka Streams Processor API (since 3.3, 
> https://issues.apache.org/jira/browse/KAFKA-13654) likely has a bug with 
> regard to using state stores. The 
> [getStateStore|https://javadoc.io/static/org.apache.kafka/kafka-streams/3.3.1/org/apache/kafka/streams/processor/api/ProcessingContext.html#getStateStore-java.lang.String-]
>  method returns null, even though the store has been registered according to 
> the docs. The old transformer API still works. I created a small project that 
> demonstrates the behavior, registering a store both for the transformer and 
> via the Processor API: 
> https://github.com/bakdata/kafka-streams-state-store-demo/blob/main/src/test/java/com/bakdata/kafka/StreamsStateStoreTest.java



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14609) Kafka Streams Processor API cannot use state stores

2023-01-09 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656166#comment-17656166
 ] 

Bill Bejeck commented on KAFKA-14609:
-

Hi [~philipp94831] 

Thanks for reporting this issue.  I believe it's been fixed with 
https://issues.apache.org/jira/browse/KAFKA-14388.

You could pull down either the 3.4.0 branch or 3.3.2 and build from source and 
test it.  For now, I'm going to mark this as fixed.  

> Kafka Streams Processor API cannot use state stores
> ---
>
> Key: KAFKA-14609
> URL: https://issues.apache.org/jira/browse/KAFKA-14609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.3.0
>Reporter: Philipp Schirmer
>Priority: Major
>
> The recently introduced Kafka Streams Processor API (since 3.3, 
> https://issues.apache.org/jira/browse/KAFKA-13654) likely has a bug with 
> regard to using state stores. The 
> [getStateStore|https://javadoc.io/static/org.apache.kafka/kafka-streams/3.3.1/org/apache/kafka/streams/processor/api/ProcessingContext.html#getStateStore-java.lang.String-]
>  method returns null, even though the store has been registered according to 
> the docs. The old transformer API still works. I created a small project that 
> demonstrates the behavior, registering a store both for the transformer and 
> via the Processor API: 
> https://github.com/bakdata/kafka-streams-state-store-demo/blob/main/src/test/java/com/bakdata/kafka/StreamsStateStoreTest.java



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-14388:

Component/s: streams

> NPE When Retrieving StateStore with new Processor API
> -
>
> Key: KAFKA-14388
> URL: https://issues.apache.org/jira/browse/KAFKA-14388
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.4.0, 3.3.2
>
>
> When using the new Processor API introduced with KIP-820 and adding a state 
> store to the Processor, executing `context().getStore("store-name")` always 
> returns `null`, as the store is not in the `stores` `HashMap` in the 
> `ProcessorStateManager`. This occurs even when using the 
> `ConnectedStoreProvider.stores()` method.
> I've confirmed the store is associated with the processor by viewing the 
> `Topology` description.
> From some initial triage, it looks like the store is never registered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-16 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17635035#comment-17635035
 ] 

Bill Bejeck commented on KAFKA-14388:
-

Cherry-picked to 3.3

> NPE When Retrieving StateStore with new Processor API
> -
>
> Key: KAFKA-14388
> URL: https://issues.apache.org/jira/browse/KAFKA-14388
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.4.0, 3.3.2
>
>
> When using the new Processor API introduced with KIP-820 and adding a state 
> store to the Processor, executing `context().getStore("store-name")` always 
> returns `null`, as the store is not in the `stores` `HashMap` in the 
> `ProcessorStateManager`. This occurs even when using the 
> `ConnectedStoreProvider.stores()` method.
> I've confirmed the store is associated with the processor by viewing the 
> `Topology` description.
> From some initial triage, it looks like the store is never registered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-14388.
-
Resolution: Fixed

Merged PR to trunk

> NPE When Retrieving StateStore with new Processor API
> -
>
> Key: KAFKA-14388
> URL: https://issues.apache.org/jira/browse/KAFKA-14388
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.4.0, 3.3.2
>
>
> When using the new Processor API introduced with KIP-820 and adding a state 
> store to the Processor, executing `context().getStore("store-name")` always 
> returns `null`, as the store is not in the `stores` `HashMap` in the 
> `ProcessorStateManager`. This occurs even when using the 
> `ConnectedStoreProvider.stores()` method.
> I've confirmed the store is associated with the processor by viewing the 
> `Topology` description.
> From some initial triage, it looks like the store is never registered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-14388:

Fix Version/s: 3.3.2

> NPE When Retrieving StateStore with new Processor API
> -
>
> Key: KAFKA-14388
> URL: https://issues.apache.org/jira/browse/KAFKA-14388
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.4.0, 3.3.2
>
>
> When using the new Processor API introduced with KIP-820 and adding a state 
> store to the Processor, executing `context().getStore("store-name")` always 
> returns `null`, as the store is not in the `stores` `HashMap` in the 
> `ProcessorStateManager`. This occurs even when using the 
> `ConnectedStoreProvider.stores()` method.
> I've confirmed the store is associated with the processor by viewing the 
> `Topology` description.
> From some initial triage, it looks like the store is never registered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-15 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634526#comment-17634526
 ] 

Bill Bejeck commented on KAFKA-14388:
-

PR with details of the bug https://github.com/apache/kafka/pull/12861

> NPE When Retrieving StateStore with new Processor API
> -
>
> Key: KAFKA-14388
> URL: https://issues.apache.org/jira/browse/KAFKA-14388
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.4.0
>
>
> When using the new Processor API introduced with KIP-820 and adding a state 
> store to the Processor, executing `context().getStore("store-name")` always 
> returns `null`, as the store is not in the `stores` `HashMap` in the 
> `ProcessorStateManager`. This occurs even when using the 
> `ConnectedStoreProvider.stores()` method.
> I've confirmed the store is associated with the processor by viewing the 
> `Topology` description.
> From some initial triage, it looks like the store is never registered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14388) NPE When Retrieving StateStore with new Processor API

2022-11-14 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-14388:
---

 Summary: NPE When Retrieving StateStore with new Processor API
 Key: KAFKA-14388
 URL: https://issues.apache.org/jira/browse/KAFKA-14388
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.3.1, 3.3.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 3.4.0


When using the new Processor API introduced with KIP-820 and adding a state 
store to the Processor, executing `context().getStore("store-name")` always 
returns `null`, as the store is not in the `stores` `HashMap` in the 
`ProcessorStateManager`. This occurs even when using the 
`ConnectedStoreProvider.stores()` method.

I've confirmed the store is associated with the processor by viewing the 
`Topology` description.

From some initial triage, it looks like the store is never registered.
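For reference, a hedged reproduction sketch using TopologyTestDriver (topic and 
class names assumed; CountingSupplier is the hypothetical supplier sketched 
under KAFKA-14609 above). On the affected versions the store lookup in init() 
returns null, so driving a single record through should surface the NPE:

{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TopologyTestDriver;

public class StoreNpeReproSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Attach the KIP-820 processor (and its connected store) to a source stream.
        builder.<String, String>stream("input-topic").process(new CountingSupplier());

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "store-npe-repro");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            final TestInputTopic<String, String> input = driver.createInputTopic(
                    "input-topic", new StringSerializer(), new StringSerializer());
            input.pipeInput("key", "value"); // NPE from process() before the fix
        }
    }
}
{code}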



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-12950) Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2022-10-27 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17625246#comment-17625246
 ] 

Bill Bejeck commented on KAFKA-12950:
-

Merged into trunk

> Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest
> 
>
> Key: KAFKA-12950
> URL: https://issues.apache.org/jira/browse/KAFKA-12950
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Josep Prat
>Assignee: Divij Vaidya
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-13739) Sliding window without grace not working

2022-03-30 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514700#comment-17514700
 ] 

Bill Bejeck commented on KAFKA-13739:
-

[~tiboun] I've assigned the ticket to you and added you as a contributor so you 
can self-assign tickets in the future.

> Sliding window without grace not working
> 
>
> Key: KAFKA-13739
> URL: https://issues.apache.org/jira/browse/KAFKA-13739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.1.0
>Reporter: bounkong khamphousone
>Assignee: bounkong khamphousone
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.2.0
>
>
> Hi everyone! I would like to understand why the KafkaStreams DSL offers the 
> ability to express a SlidingWindow with no grace period when it seems that it 
> doesn't work. [Confluent's 
> site|https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#sliding-time-windows]
>  states that a grace period is required and that, with the deprecated method, 
> it defaults to 24 hours.
> Doing a basic sliding window with a count, if I set the grace period to 1 ms, 
> the expected output is produced. Based on the sliding window documentation, 
> lower and upper bounds are inclusive.
> If I set the grace period to 0 ms, I can see that the record is not skipped 
> at KStreamSlidingWindowAggregate (l.126), but when we try to create the 
> window and push the event in KStreamSlidingWindowAggregate#createWindows, we 
> call the method updateWindowAndForward (l.417). This method (l.468) checks 
> that {{windowEnd > closeTime}}.
> closeTime is defined as {{observedStreamTime - window.gracePeriodMs}} 
> (sliding window configuration).
> windowEnd is defined as {{inputRecordTimestamp}}.
>  
> For the first event with a record timestamp, we can assume that 
> observedStreamTime is equal to inputRecordTimestamp.
>  
> Therefore, closeTime is {{inputRecordTimestamp - 0}} (gracePeriodMs), which 
> results in {{inputRecordTimestamp}}.
> If we go back to the check done in the {{updateWindowAndForward}} method, we 
> have inputRecordTimestamp > inputRecordTimestamp, which is always false. The 
> record is then skipped for its own window.
> Given that lower and upper bounds are inclusive, I would have expected the 
> event to be pushed into the store and forwarded. Hence, the check would be 
> {{windowEnd >= closeTime}}.
>  
> Is it a bug or is it intended?
> Thanks in advance for your explanations!
> Best regards!
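> A tiny worked sketch of the arithmetic above (values illustrative):
> {code:java}
> public class GraceBoundarySketch {
>     public static void main(final String[] args) {
>         final long inputRecordTimestamp = 1_000L;
>         final long observedStreamTime = inputRecordTimestamp; // first record seen
>         final long gracePeriodMs = 0L;
>         final long closeTime = observedStreamTime - gracePeriodMs; // 1000
>         final long windowEnd = inputRecordTimestamp;               // 1000
>         System.out.println(windowEnd > closeTime);  // false -> record skipped
>         System.out.println(windowEnd >= closeTime); // true  -> record kept
>     }
> }
> {code}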



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-13739) Sliding window without grace not working

2022-03-30 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-13739:
---

Assignee: bounkong khamphousone

> Sliding window without grace not working
> 
>
> Key: KAFKA-13739
> URL: https://issues.apache.org/jira/browse/KAFKA-13739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.1.0
>Reporter: bounkong khamphousone
>Assignee: bounkong khamphousone
>Priority: Minor
>  Labels: beginner, newbie
>
> Hi everyone! I would like to understand why the KafkaStreams DSL offers the 
> ability to express a SlidingWindow with no grace period when it seems that it 
> doesn't work. [Confluent's 
> site|https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#sliding-time-windows]
>  states that a grace period is required and that, with the deprecated method, 
> it defaults to 24 hours.
> Doing a basic sliding window with a count, if I set the grace period to 1 ms, 
> the expected output is produced. Based on the sliding window documentation, 
> lower and upper bounds are inclusive.
> If I set the grace period to 0 ms, I can see that the record is not skipped 
> at KStreamSlidingWindowAggregate (l.126), but when we try to create the 
> window and push the event in KStreamSlidingWindowAggregate#createWindows, we 
> call the method updateWindowAndForward (l.417). This method (l.468) checks 
> that {{windowEnd > closeTime}}.
> closeTime is defined as {{observedStreamTime - window.gracePeriodMs}} 
> (sliding window configuration).
> windowEnd is defined as {{inputRecordTimestamp}}.
>  
> For the first event with a record timestamp, we can assume that 
> observedStreamTime is equal to inputRecordTimestamp.
>  
> Therefore, closeTime is {{inputRecordTimestamp - 0}} (gracePeriodMs), which 
> results in {{inputRecordTimestamp}}.
> If we go back to the check done in the {{updateWindowAndForward}} method, we 
> have inputRecordTimestamp > inputRecordTimestamp, which is always false. The 
> record is then skipped for its own window.
> Given that lower and upper bounds are inclusive, I would have expected the 
> event to be pushed into the store and forwarded. Hence, the check would be 
> {{windowEnd >= closeTime}}.
>  
> Is it a bug or is it intended?
> Thanks in advance for your explanations!
> Best regards!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-13739) Sliding window without grace not working

2022-03-30 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-13739:
---

Assignee: (was: Bounkong Khamphousone)

> Sliding window without grace not working
> 
>
> Key: KAFKA-13739
> URL: https://issues.apache.org/jira/browse/KAFKA-13739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.1.0
>Reporter: bounkong khamphousone
>Priority: Minor
>  Labels: beginner, newbie
>
> Hi everyone! I would like to understand why the KafkaStreams DSL offers the 
> ability to express a SlidingWindow with no grace period when it seems that it 
> doesn't work. [Confluent's 
> site|https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#sliding-time-windows]
>  states that a grace period is required and that, with the deprecated method, 
> it defaults to 24 hours.
> Doing a basic sliding window with a count, if I set the grace period to 1 ms, 
> the expected output is produced. Based on the sliding window documentation, 
> lower and upper bounds are inclusive.
> If I set the grace period to 0 ms, I can see that the record is not skipped 
> at KStreamSlidingWindowAggregate (l.126), but when we try to create the 
> window and push the event in KStreamSlidingWindowAggregate#createWindows, we 
> call the method updateWindowAndForward (l.417). This method (l.468) checks 
> that {{windowEnd > closeTime}}.
> closeTime is defined as {{observedStreamTime - window.gracePeriodMs}} 
> (sliding window configuration).
> windowEnd is defined as {{inputRecordTimestamp}}.
>  
> For the first event with a record timestamp, we can assume that 
> observedStreamTime is equal to inputRecordTimestamp.
>  
> Therefore, closeTime is {{inputRecordTimestamp - 0}} (gracePeriodMs), which 
> results in {{inputRecordTimestamp}}.
> If we go back to the check done in the {{updateWindowAndForward}} method, we 
> have inputRecordTimestamp > inputRecordTimestamp, which is always false. The 
> record is then skipped for its own window.
> Given that lower and upper bounds are inclusive, I would have expected the 
> event to be pushed into the store and forwarded. Hence, the check would be 
> {{windowEnd >= closeTime}}.
>  
> Is it a bug or is it intended?
> Thanks in advance for your explanations!
> Best regards!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-13739) Sliding window without grace not working

2022-03-30 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-13739:
---

Assignee: Bounkong Khamphousone

> Sliding window without grace not working
> 
>
> Key: KAFKA-13739
> URL: https://issues.apache.org/jira/browse/KAFKA-13739
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.1.0
>Reporter: bounkong khamphousone
>Assignee: Bounkong Khamphousone
>Priority: Minor
>  Labels: beginner, newbie
>
> Hi everyone! I would like to understand why the KafkaStreams DSL offers the 
> ability to express a SlidingWindow with no grace period when it seems that it 
> doesn't work. [Confluent's 
> site|https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#sliding-time-windows]
>  states that a grace period is required and that, with the deprecated method, 
> it defaults to 24 hours.
> Doing a basic sliding window with a count, if I set the grace period to 1 ms, 
> the expected output is produced. Based on the sliding window documentation, 
> lower and upper bounds are inclusive.
> If I set the grace period to 0 ms, I can see that the record is not skipped 
> at KStreamSlidingWindowAggregate (l.126), but when we try to create the 
> window and push the event in KStreamSlidingWindowAggregate#createWindows, we 
> call the method updateWindowAndForward (l.417). This method (l.468) checks 
> that {{windowEnd > closeTime}}.
> closeTime is defined as {{observedStreamTime - window.gracePeriodMs}} 
> (sliding window configuration).
> windowEnd is defined as {{inputRecordTimestamp}}.
>  
> For the first event with a record timestamp, we can assume that 
> observedStreamTime is equal to inputRecordTimestamp.
>  
> Therefore, closeTime is {{inputRecordTimestamp - 0}} (gracePeriodMs), which 
> results in {{inputRecordTimestamp}}.
> If we go back to the check done in the {{updateWindowAndForward}} method, we 
> have inputRecordTimestamp > inputRecordTimestamp, which is always false. The 
> record is then skipped for its own window.
> Given that lower and upper bounds are inclusive, I would have expected the 
> event to be pushed into the store and forwarded. Hence, the check would be 
> {{windowEnd >= closeTime}}.
>  
> Is it a bug or is it intended?
> Thanks in advance for your explanations!
> Best regards!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17499697#comment-17499697
 ] 

Bill Bejeck commented on KAFKA-8659:


Merged to trunk, 3.1, and 3.0 branches

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Assignee: Marc Löhe
>Priority: Minor
> Fix For: 3.0.1, 3.2.0, 3.1.1
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.
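> A standalone sketch of that null-passthrough idea (illustrative only, not the 
> actual SetSchemaMetadata patch; class and method names are assumed):
> {code:java}
> import java.util.Map;
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.connect.connector.ConnectRecord;
> import org.apache.kafka.connect.transforms.Transformation;
> 
> // Hypothetical base transform: tombstone records (null value and schema)
> // are passed through untouched instead of triggering the schema check.
> public abstract class NullSafeTransform<R extends ConnectRecord<R>> implements Transformation<R> {
> 
>     @Override
>     public R apply(final R record) {
>         if (record.value() == null && record.valueSchema() == null) {
>             return record; // tombstone: nothing to update
>         }
>         return applyWithSchema(record);
>     }
> 
>     protected abstract R applyWithSchema(R record);
> 
>     @Override
>     public ConfigDef config() {
>         return new ConfigDef();
>     }
> 
>     @Override
>     public void configure(final Map<String, ?> configs) { }
> 
>     @Override
>     public void close() { }
> }
> {code}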



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8659:
--

Assignee: Marc Löhe

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Assignee: Marc Löhe
>Priority: Minor
> Fix For: 3.0.1, 3.2.0, 3.1.1
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8659:
---
Fix Version/s: 3.0.1
   3.1.1

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 3.0.1, 3.2.0, 3.1.1
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8659:
--

Assignee: Bill Bejeck

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 3.2.0
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8659:
--

Assignee: (was: Bill Bejeck)

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
> Fix For: 3.0.1, 3.2.0, 3.1.1
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8659.

Resolution: Fixed

Resolved via https://github.com/apache/kafka/pull/7082

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
> Fix For: 3.2.0
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-8659) SetSchemaMetadata SMT fails on records with null value and schema

2022-03-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8659:
---
Fix Version/s: 3.2.0

> SetSchemaMetadata SMT fails on records with null value and schema
> -
>
> Key: KAFKA-8659
> URL: https://issues.apache.org/jira/browse/KAFKA-8659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Marc Löhe
>Priority: Minor
> Fix For: 3.2.0
>
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for 
> [updating schema metadata]
> at 
> org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
> at 
> org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
> ... 11 more
> {code}
>  
> I don't see any problem with passing those records through as is instead of 
> failing, and will shortly add this in a PR.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-13507) GlobalProcessor ignores user specified names

2021-12-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-13507:

Affects Version/s: 2.8.0
   2.7.0
   2.6.0
   2.5.0

> GlobalProcessor ignores user specified names
> 
>
> Key: KAFKA-13507
> URL: https://issues.apache.org/jira/browse/KAFKA-13507
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.8.0
>Reporter: Matthias J. Sax
>Assignee: Tamara Skokova
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.2.0
>
>
> Using `StreamsBuilder.addGlobalStore`, users can specify a name via the 
> `Consumed` parameter. However, the specified name is ignored, and the created 
> source and global processor get generated names assigned.
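> A short sketch (store, topic, and processor names assumed) of the report: the 
> name passed via {{Consumed#withName}} should show up in the topology 
> description, but on the affected versions a generated name appears instead:
> {code:java}
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.kstream.Consumed;
> import org.apache.kafka.streams.processor.api.Processor;
> import org.apache.kafka.streams.state.Stores;
> 
> public class GlobalStoreNamingSketch {
>     public static void main(final String[] args) {
>         final StreamsBuilder builder = new StreamsBuilder();
>         builder.addGlobalStore(
>                 Stores.keyValueStoreBuilder(
>                         Stores.inMemoryKeyValueStore("global-store"),
>                         Serdes.String(), Serdes.String()).withLoggingDisabled(),
>                 "global-topic",
>                 Consumed.with(Serdes.String(), Serdes.String()).withName("my-global-source"),
>                 () -> (Processor<String, String, Void, Void>) rec -> { /* state update elided */ });
>         // Affected versions print a generated source/processor name here
>         // instead of "my-global-source".
>         System.out.println(builder.build().describe());
>     }
> }
> {code}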



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-13507) GlobalProcessor ignores user specified names

2021-12-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-13507:

Fix Version/s: 3.2.0

> GlobalProcessor ignores user specified names
> 
>
> Key: KAFKA-13507
> URL: https://issues.apache.org/jira/browse/KAFKA-13507
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Tamara Skokova
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 3.2.0
>
>
> Using `StreamsBuilder.addGlobalStore`, users can specify a name via the 
> `Consumed` parameter. However, the specified name is ignored, and the created 
> source and global processor get generated names assigned.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter

2021-06-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12336:

Fix Version/s: 2.8.1

> custom stream naming does not work while calling stream[K, V](topicPattern: 
> Pattern) API with named Consumed parameter 
> ---
>
> Key: KAFKA-12336
> URL: https://issues.apache.org/jira/browse/KAFKA-12336
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Ramil Israfilov
>Assignee: GeordieMai
>Priority: Minor
>  Labels: easy-fix, newbie
> Fix For: 3.0.0, 2.8.1
>
>
> In our Scala application I am trying to implement custom naming for Kafka 
> Streams application nodes.
> We are using topicPattern for our stream source.
> Here is an API which I am calling:
>  
> {code:java}
> val topicsPattern="t-[A-Za-z0-9-].suffix"
> val operations: KStream[MyKey, MyValue] =
>   builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))(
> Consumed.`with`[MyKey, MyValue].withName("my-fancy-name")
>   )
> {code}
>  Despite the fact that I am providing Consumed with a custom name, the 
> topology description still shows "KSTREAM-SOURCE-00" as the name for our 
> stream source.
> It is not a problem if I just use a topic name, but our application needs 
> to get messages from a set of topics based on topic-name pattern matching.
> After checking the Kafka code I see that
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder (on line 
> 103) has a bug:
> {code:java}
> public <K, V> KStream<K, V> stream(final Pattern topicPattern,
>                                    final ConsumedInternal<K, V> consumed) {
>     final String name = newProcessorName(KStreamImpl.SOURCE_NAME);
>     final StreamSourceNode<K, V> streamPatternSourceNode =
>             new StreamSourceNode<>(name, topicPattern, consumed);
> {code}
> The node name construction does not take into account the name on the 
> consumed parameter.
> For example, the code for another stream API call with a topic name does it 
> correctly:
> {code:java}
> final String name = new 
> NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, 
> KStreamImpl.SOURCE_NAME);
> {code}
>  
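> For illustration, a user-facing sketch of the same report with the pattern 
> overload (topic pattern and name taken from the example above):
> {code:java}
> import java.util.regex.Pattern;
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.kstream.Consumed;
> 
> public class PatternNamingSketch {
>     public static void main(final String[] args) {
>         final StreamsBuilder builder = new StreamsBuilder();
>         builder.stream(
>                 Pattern.compile("t-[A-Za-z0-9-].suffix"),
>                 Consumed.with(Serdes.String(), Serdes.String()).withName("my-fancy-name"));
>         // On the affected version this prints a generated KSTREAM-SOURCE-... name
>         // for the source node instead of "my-fancy-name".
>         System.out.println(builder.build().describe());
>     }
> }
> {code}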



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter

2021-06-24 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17368968#comment-17368968
 ] 

Bill Bejeck commented on KAFKA-12336:
-

cherry-picked to 2.8

> custom stream naming does not work while calling stream[K, V](topicPattern: 
> Pattern) API with named Consumed parameter 
> ---
>
> Key: KAFKA-12336
> URL: https://issues.apache.org/jira/browse/KAFKA-12336
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Ramil Israfilov
>Assignee: GeordieMai
>Priority: Minor
>  Labels: easy-fix, newbie
> Fix For: 3.0.0
>
>
> In our Scala application I am trying to implement custom naming for Kafka 
> Streams application nodes.
> We are using topicPattern for our stream source.
> Here is an API which I am calling:
>  
> {code:java}
> val topicsPattern="t-[A-Za-z0-9-].suffix"
> val operations: KStream[MyKey, MyValue] =
>   builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))(
> Consumed.`with`[MyKey, MyValue].withName("my-fancy-name")
>   )
> {code}
>  Despite the fact that I am providing Consumed with a custom name, the 
> topology description still shows "KSTREAM-SOURCE-00" as the name for our 
> stream source.
> It is not a problem if I just use a topic name, but our application needs 
> to get messages from a set of topics based on topic-name pattern matching.
> After checking the Kafka code I see that
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder (on line 
> 103) has a bug:
> {code:java}
> public <K, V> KStream<K, V> stream(final Pattern topicPattern,
>                                    final ConsumedInternal<K, V> consumed) {
>     final String name = newProcessorName(KStreamImpl.SOURCE_NAME);
>     final StreamSourceNode<K, V> streamPatternSourceNode =
>             new StreamSourceNode<>(name, topicPattern, consumed);
> {code}
> The node name construction does not take into account the name on the 
> consumed parameter.
> For example, the code for another stream API call with a topic name does it 
> correctly:
> {code:java}
> final String name = new 
> NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, 
> KStreamImpl.SOURCE_NAME);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter

2021-06-24 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17368949#comment-17368949
 ] 

Bill Bejeck commented on KAFKA-12336:
-

merged into trunk

> custom stream naming does not work while calling stream[K, V](topicPattern: 
> Pattern) API with named Consumed parameter 
> ---
>
> Key: KAFKA-12336
> URL: https://issues.apache.org/jira/browse/KAFKA-12336
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Ramil Israfilov
>Assignee: GeordieMai
>Priority: Minor
>  Labels: easy-fix, newbie
> Fix For: 3.0.0
>
>
> In our Scala application I am trying to implement custom naming for Kafka 
> Streams application nodes.
> We are using topicPattern for our stream source.
> Here is an API which I am calling:
>  
> {code:java}
> val topicsPattern="t-[A-Za-z0-9-].suffix"
> val operations: KStream[MyKey, MyValue] =
>   builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))(
> Consumed.`with`[MyKey, MyValue].withName("my-fancy-name")
>   )
> {code}
>  Despite the fact that I am providing Consumed with a custom name, the topology 
> description still shows "KSTREAM-SOURCE-00" as the name for our stream source.
> It is not a problem if I just use a topic name, but our application needs 
> to get messages from a set of topics based on topic-name pattern matching.
> After checking the Kafka code I see that
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder (on line 
> 103) has a bug:
> {code:java}
> public <K, V> KStream<K, V> stream(final Pattern topicPattern,
>                                    final ConsumedInternal<K, V> consumed) {
>     final String name = newProcessorName(KStreamImpl.SOURCE_NAME);
>     final StreamSourceNode<K, V> streamPatternSourceNode =
>         new StreamSourceNode<>(name, topicPattern, consumed);
> {code}
> The node name construction does not take into account the name of the 
> Consumed parameter.
> For example, the code for another stream API call with a topic name does it 
> correctly:
> {code:java}
> final String name = new 
> NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, 
> KStreamImpl.SOURCE_NAME);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter

2021-06-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12336:

Fix Version/s: 3.0.0

> custom stream naming does not work while calling stream[K, V](topicPattern: 
> Pattern) API with named Consumed parameter 
> ---
>
> Key: KAFKA-12336
> URL: https://issues.apache.org/jira/browse/KAFKA-12336
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Ramil Israfilov
>Assignee: GeordieMai
>Priority: Minor
>  Labels: easy-fix, newbie
> Fix For: 3.0.0
>
>
> In our Scala application I am trying to implement custom naming for Kafka 
> Streams application nodes.
> We are using topicPattern for our stream source.
> Here is an API which I am calling:
>  
> {code:java}
> val topicsPattern="t-[A-Za-z0-9-].suffix"
> val operations: KStream[MyKey, MyValue] =
>   builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))(
> Consumed.`with`[MyKey, MyValue].withName("my-fancy-name")
>   )
> {code}
>  Despite the fact that I am providing Consumed with a custom name, the topology 
> description still shows "KSTREAM-SOURCE-00" as the name for our stream source.
> It is not a problem if I just use a topic name, but our application needs 
> to get messages from a set of topics based on topic-name pattern matching.
> After checking the Kafka code I see that
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder (on line 
> 103) has a bug:
> {code:java}
> public <K, V> KStream<K, V> stream(final Pattern topicPattern,
>                                    final ConsumedInternal<K, V> consumed) {
>     final String name = newProcessorName(KStreamImpl.SOURCE_NAME);
>     final StreamSourceNode<K, V> streamPatternSourceNode =
>         new StreamSourceNode<>(name, topicPattern, consumed);
> {code}
> The node name construction does not take into account the name of the 
> Consumed parameter.
> For example, the code for another stream API call with a topic name does it 
> correctly:
> {code:java}
> final String name = new 
> NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, 
> KStreamImpl.SOURCE_NAME);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12336) custom stream naming does not work while calling stream[K, V](topicPattern: Pattern) API with named Consumed parameter

2021-06-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-12336.
-
Resolution: Fixed

> custom stream naming does not work while calling stream[K, V](topicPattern: 
> Pattern) API with named Consumed parameter 
> ---
>
> Key: KAFKA-12336
> URL: https://issues.apache.org/jira/browse/KAFKA-12336
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Ramil Israfilov
>Assignee: GeordieMai
>Priority: Minor
>  Labels: easy-fix, newbie
>
> In our Scala application I am trying to implement custom naming for Kafka 
> Streams application nodes.
> We are using topicPattern for our stream source.
> Here is an API which I am calling:
>  
> {code:java}
> val topicsPattern="t-[A-Za-z0-9-].suffix"
> val operations: KStream[MyKey, MyValue] =
>   builder.stream[MyKey, MyValue](Pattern.compile(topicsPattern))(
> Consumed.`with`[MyKey, MyValue].withName("my-fancy-name")
>   )
> {code}
>  Despite the fact that I am providing Consumed with a custom name, the topology 
> description still shows "KSTREAM-SOURCE-00" as the name for our stream source.
> It is not a problem if I just use a topic name, but our application needs 
> to get messages from a set of topics based on topic-name pattern matching.
> After checking the Kafka code I see that
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder (on line 
> 103) has a bug:
> {code:java}
> public <K, V> KStream<K, V> stream(final Pattern topicPattern,
>                                    final ConsumedInternal<K, V> consumed) {
>     final String name = newProcessorName(KStreamImpl.SOURCE_NAME);
>     final StreamSourceNode<K, V> streamPatternSourceNode =
>         new StreamSourceNode<>(name, topicPattern, consumed);
> {code}
> The node name construction does not take into account the name of the 
> Consumed parameter.
> For example, the code for another stream API call with a topic name does it 
> correctly:
> {code:java}
> final String name = new 
> NamedInternal(consumed.name()).orElseGenerateWithPrefix(this, 
> KStreamImpl.SOURCE_NAME);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12672:

Affects Version/s: (was: 2.8.0)
   3.0.0

> Running test-kraft-server-start results in error
> 
>
> Key: KAFKA-12672
> URL: https://issues.apache.org/jira/browse/KAFKA-12672
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.0.0, 2.8.1
>
>
> Running the {{test-kraft-server-start}} script in the {{raft}} module results 
> in this error
>  
> {code:java}
> ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.security.auth.SecurityProtocol.
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
>   at 
> kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
>   at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
>   at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
>   at 
> kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
>   at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
>   at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
>   at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
>   at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
> {code}
> It looks like the listener property in the config is not getting picked up, as an 
> empty string gets passed to {{SecurityProtocol.forName}}
> EDIT: The issue is that the properties file needs to have a 
> {{controller.listener.names}} property containing just the listener names.
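As a concrete illustration of the EDIT above, here is a hypothetical minimal properties sketch; the listener name, port, and protocol mapping are assumed for illustration and are not taken from the ticket:

{code}
# controller.listener.names takes just the listener *names*, not full
# listener definitions (all values below are assumed):
listeners=CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
{code}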



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12672:

Affects Version/s: 2.8.0

> Running test-kraft-server-start results in error
> 
>
> Key: KAFKA-12672
> URL: https://issues.apache.org/jira/browse/KAFKA-12672
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> Running the {{test-kraft-server-start}} script in the {{raft}} module results 
> in this error
>  
> {code:java}
> ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.security.auth.SecurityProtocol.
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
>   at 
> kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
>   at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
>   at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
>   at 
> kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
>   at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
>   at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
>   at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
>   at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
> {code}
> It looks like the listener property in the config is not getting picked up, as an 
> empty string gets passed to {{SecurityProtocol.forName}}
> EDIT: The issue is that the properties file needs to have a 
> {{controller.listener.names}} property containing just the listener names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12672:

Fix Version/s: 2.8.1
   3.0.0

> Running test-kraft-server-start results in error
> 
>
> Key: KAFKA-12672
> URL: https://issues.apache.org/jira/browse/KAFKA-12672
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 3.0.0, 2.8.1
>
>
> Running the {{test-kraft-server-start}} script in the {{raft}} module results 
> in this error
>  
> {code:java}
> ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.security.auth.SecurityProtocol.
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
>   at 
> kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
>   at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
>   at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
>   at 
> kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
>   at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
>   at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
>   at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
>   at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
> {code}
> It looks like the listener property in the config is not getting picked up, as an 
> empty string gets passed to {{SecurityProtocol.forName}}
> EDIT: The issue is that the properties file needs to have a 
> {{controller.listener.names}} property containing just the listener names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-15 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12672:

Description: 
Running the {{test-kraft-server-start}} script in the {{raft}} module results 
in this error

 
{code:java}
ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
java.lang.IllegalArgumentException: No enum constant 
org.apache.kafka.common.security.auth.SecurityProtocol.
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
at 
kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
at 
kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
{code}
It looks like the listener property in the config is not getting picked up, as an 
empty string gets passed to {{SecurityProtocol.forName}}

EDIT: The issue is that the properties file needs to have a 
{{controller.listener.names}} property containing just the listener names.

  was:
Running the {{test-kraft-server-start}} script in the {{raft}} module results 
in this error

 
{code:java}
ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
java.lang.IllegalArgumentException: No enum constant 
org.apache.kafka.common.security.auth.SecurityProtocol.
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
at 
kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
at 
kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
{code}
It looks like the listener property in the config is not getting picked up, as an 
empty string gets passed to {{SecurityProtocol.forName}}


> Running test-kraft-server-start results in error
> 
>
> Key: KAFKA-12672
> URL: https://issues.apache.org/jira/browse/KAFKA-12672
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> Running the {{test-kraft-server-start}} script in the {{raft}} module results 
> in this error
>  
> {code:java}
> ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.security.auth.SecurityProtocol.
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
>   at 
> kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
>   at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
>   at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
>   at 
> kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
>   at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
>   at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
>   at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
>   at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
> {code}
> It looks like the listener property in the config is not getting picked up, as an 
> empty string gets passed to {{SecurityProtocol.forName}}
> EDIT: The issue is that the properties file needs to have a 
> {{controller.listener.names}} property containing just the listener names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-15 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12672:

Description: 
Running the {{test-kraft-server-start}} script in the {{raft}} module results 
in this error

 
{code:java}
ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
java.lang.IllegalArgumentException: No enum constant 
org.apache.kafka.common.security.auth.SecurityProtocol.
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
at 
kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
at 
kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
{code}
It looks like the listener property in the config is not getting picked up, as an 
empty string gets passed to {{SecurityProtocol.forName}}

  was:
Running the {{test-kraft-server-start}} script in the {{raft}} module results 
in this error

 
{code:java}
ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
java.lang.IllegalArgumentException: No enum constant 
org.apache.kafka.common.security.auth.SecurityProtocol.
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
at 
kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
at 
kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
{code}
It looks like the listener property in the config is not getting picked up


> Running test-kraft-server-start results in error
> 
>
> Key: KAFKA-12672
> URL: https://issues.apache.org/jira/browse/KAFKA-12672
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> Running the {{test-kraft-server-start}} script in the {{raft}} module results 
> in this error
>  
> {code:java}
> ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.security.auth.SecurityProtocol.
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
>   at 
> org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
>   at 
> kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
>   at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
>   at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
>   at 
> kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
>   at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
>   at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
>   at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
>   at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
> {code}
> It looks like the listener property in the config is not getting picked up, as an 
> empty string gets passed to {{SecurityProtocol.forName}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12672) Running test-kraft-server-start results in error

2021-04-15 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-12672:
---

 Summary: Running test-kraft-server-start results in error
 Key: KAFKA-12672
 URL: https://issues.apache.org/jira/browse/KAFKA-12672
 Project: Kafka
  Issue Type: Bug
Reporter: Bill Bejeck
Assignee: Bill Bejeck


Running the {{test-kraft-server-start}} script in the {{raft}} module results 
in this error

 
{code:java}
ERROR Exiting raft server due to fatal exception (kafka.tools.TestRaftServer$)
java.lang.IllegalArgumentException: No enum constant 
org.apache.kafka.common.security.auth.SecurityProtocol.
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.valueOf(SecurityProtocol.java:26)
at 
org.apache.kafka.common.security.auth.SecurityProtocol.forName(SecurityProtocol.java:72)
at 
kafka.raft.KafkaRaftManager.$anonfun$buildNetworkClient$1(RaftManager.scala:256)
at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:530)
at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:256)
at 
kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:234)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:126)
at kafka.tools.TestRaftServer.startup(TestRaftServer.scala:88)
at kafka.tools.TestRaftServer$.main(TestRaftServer.scala:442)
at kafka.tools.TestRaftServer.main(TestRaftServer.scala)
{code}
It looks like the listener property in the config is not getting picked up



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-3745) Consider adding join key to ValueJoiner interface

2021-03-20 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3745:
---
Fix Version/s: 3.0.0

> Consider adding join key to ValueJoiner interface
> -
>
> Key: KAFKA-3745
> URL: https://issues.apache.org/jira/browse/KAFKA-3745
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: api, kip
> Fix For: 3.0.0
>
>
> In working with Kafka Streams joins, it's sometimes the case that a join key 
> is not actually present in the values of the joins themselves (if, for 
> example, a previous transform generated an ephemeral join key). In such 
> cases, the actual key of the join is not available in the ValueJoiner 
> implementation to be used to construct the final joined value. This can be 
> worked around by explicitly threading the join key into the value if needed, 
> but it seems like extending the interface to pass the join key along as well 
> would be helpful.
> KIP-149: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner]
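As a concrete illustration of the interface extension requested here, the following is a minimal sketch using the ValueJoinerWithKey shape proposed by KIP-149; the type parameters and the joiner body are assumptions for illustration:

{code:java}
import org.apache.kafka.streams.kstream.ValueJoinerWithKey;

// Sketch only: the read-only join key arrives as the first argument, so the
// caller no longer has to copy the key into the value before joining.
ValueJoinerWithKey<String, Long, Long, String> joiner =
    (readOnlyKey, leftValue, rightValue) ->
        readOnlyKey + ":" + (leftValue + rightValue);

// e.g. leftStream.join(rightStream, joiner, joinWindows, streamJoined);
{code}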



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-12393.
-
Resolution: Fixed

Merged to trunk and cherry-picked to 2.8

> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12393:

Fix Version/s: 2.8.0
   3.0.0

> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-3745) Consider adding join key to ValueJoiner interface

2021-02-18 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286622#comment-17286622
 ] 

Bill Bejeck commented on KAFKA-3745:


Taking the overloaded method approach with the Scala API causes an 
"Ambiguous reference to overloaded definition" error.  I'll investigate and 
provide a follow-on PR.  The current PR changes to the Java API 
[https://github.com/apache/kafka/pull/10150] do not cause any issue with the 
Scala API.  I think, considering this is a new addition, it's acceptable to take 
an approach where we add this new functionality in steps.

> Consider adding join key to ValueJoiner interface
> -
>
> Key: KAFKA-3745
> URL: https://issues.apache.org/jira/browse/KAFKA-3745
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: api, kip
>
> In working with Kafka Streams joins, it's sometimes the case that a join key 
> is not actually present in the values of the joins themselves (if, for 
> example, a previous transform generated an ephemeral join key). In such 
> cases, the actual key of the join is not available in the ValueJoiner 
> implementation to be used to construct the final joined value. This can be 
> worked around by explicitly threading the join key into the value if needed, 
> but it seems like extending the interface to pass the join key along as well 
> would be helpful.
> KIP-149: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-12213) Kafka Streams aggregation Initializer to accept record key

2021-02-18 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286114#comment-17286114
 ] 

Bill Bejeck edited comment on KAFKA-12213 at 2/18/21, 5:26 PM:
---

Hi, [~MonCalamari]. I have an implementation for the ValueJoinerWithKey access, 
and I'll push a PR soon.  As [~mjsax] pointed out, KIP-149 started before we 
had the Scala API, -and overloading  the `join` method with a `ValueJoiner` and 
a `ValueJoinerWithKey` does cause issues with the Scala API.  I'm working on 
proposed solutions, and I'll send out something on the dev mailing list soon.- 

 

EDIT: What I meant to say is that taking the same approach with the Scala API 
causes an "Ambiguous reference to overloaded definition" error.  
I'll investigate an approach and provide a follow-on PR.  The current PR 
changes to the Java API [https://github.com/apache/kafka/pull/10150] do not 
cause any issue with the Scala API.  I think, considering this is a new addition, 
it's acceptable to take an approach where we add this new functionality in steps.

 

But I don't have any plans at the moment to work on key access in the 
`Initializer` or `Reducer` parts of KIP-149 so feel free to move ahead with 
that part of the KIP-149

 

Thanks,

Bill Bejeck


was (Author: bbejeck):
Hi, [~MonCalamari]. I have an implementation for the ValueJoinerWithKey access, 
and I'll push a PR soon.  As [~mjsax] pointed out, KIP-149 started before we 
had the Scala API, -and overloading  the `join` method with a `ValueJoiner` and 
a `ValueJoinerWithKey` does cause issues with the Scala API.  I'm working on 
proposed solutions, and I'll send out something on the dev mailing list soon.- 

 

EDIT: What I meant to say is that taking the same approach with the Scala API 
causes an issue.  I'll investigate an approach and provide a follow-on PR.  The 
current PR changes to the Java API [https://github.com/apache/kafka/pull/10150] 
do not cause any issue with the Scala API.  I think considering this is a new 
addition it's acceptable to take an approach where we add this new 
functionality in steps

 

But I don't have any plans at the moment to work on key access in the 
`Initializer` or `Reducer` parts of KIP-149 so feel free to move ahead with 
that part of the KIP-149

 

Thanks,

Bill Bejeck

> Kafka Streams aggregation Initializer to accept record key
> --
>
> Key: KAFKA-12213
> URL: https://issues.apache.org/jira/browse/KAFKA-12213
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Piotr Fras
>Assignee: Piotr Fras
>Priority: Minor
>  Labels: needs-kip
>
> Sometimes the Kafka record key contains useful information for creating a zero 
> object in an aggregation Initializer. This feature is to add the Kafka record 
> key to the Initializer.
> There were two approaches I considered to implement this feature: one 
> respecting backwards compatibility for internal and external APIs, and the 
> other not. I chose the latter as it was more straightforward. 
> We may want to validate this approach, though.
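To make the request concrete, below is a hypothetical sketch of the kind of key-aware initializer the ticket describes; {{InitializerWithKey}} is an invented name and does not exist in the Kafka Streams API:

{code:java}
// Hypothetical sketch only; InitializerWithKey is not an existing Kafka API.
@FunctionalInterface
interface InitializerWithKey<K, VAgg> {
    VAgg apply(K readOnlyKey);
}

// e.g. seeding the zero object for an aggregation with information from the key:
InitializerWithKey<String, StringBuilder> initializer =
    key -> new StringBuilder(key).append(':');
{code}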



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-12213) Kafka Streams aggregation Initializer to accept record key

2021-02-18 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286114#comment-17286114
 ] 

Bill Bejeck edited comment on KAFKA-12213 at 2/18/21, 5:23 PM:
---

Hi, [~MonCalamari]. I have an implementation for the ValueJoinerWithKey access, 
and I'll push a PR soon.  As [~mjsax] pointed out, KIP-149 started before we 
had the Scala API, -and overloading  the `join` method with a `ValueJoiner` and 
a `ValueJoinerWithKey` does cause issues with the Scala API.  I'm working on 
proposed solutions, and I'll send out something on the dev mailing list soon.- 

 

EDIT: What I meant to say is that taking the same approach with the Scala API 
causes an issue.  I'll investigate an approach and provide a follow-on PR.  The 
current PR changes to the Java API [https://github.com/apache/kafka/pull/10150] 
do not cause any issue with the Scala API.  I think considering this is a new 
addition it's acceptable to take an approach where we add this new 
functionality in steps

 

But I don't have any plans at the moment to work on key access in the 
`Initializer` or `Reducer` parts of KIP-149 so feel free to move ahead with 
that part of the KIP-149

 

Thanks,

Bill Bejeck


was (Author: bbejeck):
Hi, [~MonCalamari]. I have an implementation for the ValueJoinerWithKey access, 
and I'll push a PR soon.  As [~mjsax] pointed out, KIP-149 started before we 
had the Scala API, and overloading  the `join` method with a `ValueJoiner` and 
a `ValueJoinerWithKey` does cause issues with the Scala API.  I'm working on 
proposed solutions, and I'll send out something on the dev mailing list soon. 

 

But I don't have any plans at the moment to work on key access in the 
`Initializer` or `Reducer` parts of KIP-149 so feel free to move ahead with 
that part of the KIP-149

 

Thanks,

Bill Bejeck

> Kafka Streams aggregation Initializer to accept record key
> --
>
> Key: KAFKA-12213
> URL: https://issues.apache.org/jira/browse/KAFKA-12213
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Piotr Fras
>Assignee: Piotr Fras
>Priority: Minor
>  Labels: needs-kip
>
> Sometimes the Kafka record key contains useful information for creating a zero 
> object in an aggregation Initializer. This feature is to add the Kafka record 
> key to the Initializer.
> There were two approaches I considered to implement this feature: one 
> respecting backwards compatibility for internal and external APIs, and the 
> other not. I chose the latter as it was more straightforward. 
> We may want to validate this approach, though.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12213) Kafka Streams aggregation Initializer to accept record key

2021-02-17 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286114#comment-17286114
 ] 

Bill Bejeck commented on KAFKA-12213:
-

Hi, [~MonCalamari]. I have an implementation for the ValueJoinerWithKey access, 
and I'll push a PR soon.  As [~mjsax] pointed out, KIP-149 started before we 
had the Scala API, and overloading  the `join` method with a `ValueJoiner` and 
a `ValueJoinerWithKey` does cause issues with the Scala API.  I'm working on 
proposed solutions, and I'll send out something on the dev mailing list soon. 

 

But I don't have any plans at the moment to work on key access in the 
`Initializer` or `Reducer` parts of KIP-149 so feel free to move ahead with 
that part of the KIP-149

 

Thanks,

Bill Bejeck

> Kafka Streams aggregation Initializer to accept record key
> --
>
> Key: KAFKA-12213
> URL: https://issues.apache.org/jira/browse/KAFKA-12213
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Piotr Fras
>Assignee: Piotr Fras
>Priority: Minor
>  Labels: needs-kip
>
> Sometimes the Kafka record key contains useful information for creating a zero 
> object in an aggregation Initializer. This feature is to add the Kafka record 
> key to the Initializer.
> There were two approaches I considered to implement this feature: one 
> respecting backwards compatibility for internal and external APIs, and the 
> other not. I chose the latter as it was more straightforward. 
> We may want to validate this approach, though.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8744) Add Support to Scala API for KIP-307 and KIP-479

2021-01-28 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-8744:
---
Fix Version/s: 2.8.0

> Add Support to Scala API for KIP-307 and KIP-479
> 
>
> Key: KAFKA-8744
> URL: https://issues.apache.org/jira/browse/KAFKA-8744
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bill Bejeck
>Assignee: Mathieu DESPRIEE
>Priority: Major
> Fix For: 2.8.0
>
>
> With the ability to provide names for all operators in a Kafka Streams 
> topology 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL])
>  coming in the 2.4 release, we also need to add this new feature to the 
> Streams Scala API.
> KIP-307 was refined via KIP-479 
> ([https://cwiki.apache.org/confluence/display/KAFKA/KIP-479%3A+Add+StreamJoined+config+object+to+Join])
>  and this ticket should cover both cases.
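For reference, a minimal Java sketch of the operator naming that KIP-307/KIP-479 introduce and that this ticket wants mirrored in the Scala API; the topic names, join parameters, and surrounding imports are assumed for illustration:

{code:java}
// Sketch only: each operator gets an explicit name instead of a generated one.
KStream<String, String> left = builder.stream("left-topic", Consumed.as("left-source"));
KStream<String, String> right = builder.stream("right-topic", Consumed.as("right-source"));

left.join(right,
          (l, r) -> l + r,
          JoinWindows.of(Duration.ofMinutes(5)),
          StreamJoined.as("my-join"));   // KIP-479: StreamJoined config object
{code}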



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-3745) Consider adding join key to ValueJoiner interface

2021-01-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-3745:
--

Assignee: Bill Bejeck

> Consider adding join key to ValueJoiner interface
> -
>
> Key: KAFKA-3745
> URL: https://issues.apache.org/jira/browse/KAFKA-3745
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Greg Fodor
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: api, kip
>
> In working with Kafka Streams joins, it's sometimes the case that a join key 
> is not actually present in the values of the joins themselves (if, for 
> example, a previous transform generated an ephemeral join key). In such 
> cases, the actual key of the join is not available in the ValueJoiner 
> implementation to be used to construct the final joined value. This can be 
> worked around by explicitly threading the join key into the value if needed, 
> but it seems like extending the interface to pass the join key along as well 
> would be helpful.
> KIP-149: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-149%3A+Enabling+key+access+in+ValueTransformer%2C+ValueMapper%2C+and+ValueJoiner]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-12-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-10140:
-

I resolved this by mistake, reopening now

> Incremental config api excludes plugin config changes
> -
>
> Key: KAFKA-10140
> URL: https://issues.apache.org/jira/browse/KAFKA-10140
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Critical
> Fix For: 2.7.0
>
>
> I was trying to alter the jmx metric filters using the incremental alter 
> config api and hit this error:
> {code:java}
> java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
>   at scala.collection.MapLike.default(MapLike.scala:235)
>   at scala.collection.MapLike.default$(MapLike.scala:234)
>   at scala.collection.AbstractMap.default(Map.scala:65)
>   at scala.collection.MapLike.apply(MapLike.scala:144)
>   at scala.collection.MapLike.apply$(MapLike.scala:143)
>   at scala.collection.AbstractMap.apply(Map.scala:65)
>   at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
>   at 
> kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
>   at 
> kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
>   at 
> kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:273)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
>   at 
> kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> It looks like we are only allowing changes to the keys defined in 
> `KafkaConfig` through this API. This excludes config changes to any plugin 
> components such as `JmxReporter`. 
> Note that I was able to use the regular `alterConfig` API to change this 
> config.
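For context, here is a minimal sketch of the admin-client call that hits this error; the broker id, bootstrap address, and config value are assumed, and exception handling is omitted:

{code:java}
// Sketch only: incrementally setting a plugin-provided config on broker 0.
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (Admin admin = Admin.create(props)) {
    ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
    AlterConfigOp setFilter = new AlterConfigOp(
        new ConfigEntry("metrics.jmx.blacklist", "kafka.network.*"),
        AlterConfigOp.OpType.SET);
    // Fails as described above, because only KafkaConfig keys are accepted.
    admin.incrementalAlterConfigs(
             Collections.singletonMap(broker, Collections.singletonList(setFilter)))
         .all().get();
}
{code}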



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-12-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10140:

Fix Version/s: (was: 2.7.0)

> Incremental config api excludes plugin config changes
> -
>
> Key: KAFKA-10140
> URL: https://issues.apache.org/jira/browse/KAFKA-10140
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Critical
>
> I was trying to alter the jmx metric filters using the incremental alter 
> config api and hit this error:
> {code:java}
> java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
>   at scala.collection.MapLike.default(MapLike.scala:235)
>   at scala.collection.MapLike.default$(MapLike.scala:234)
>   at scala.collection.AbstractMap.default(Map.scala:65)
>   at scala.collection.MapLike.apply(MapLike.scala:144)
>   at scala.collection.MapLike.apply$(MapLike.scala:143)
>   at scala.collection.AbstractMap.apply(Map.scala:65)
>   at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
>   at 
> kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
>   at 
> kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
>   at 
> kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:273)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
>   at 
> kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> It looks like we are only allowing changes to the keys defined in 
> `KafkaConfig` through this API. This excludes config changes to any plugin 
> components such as `JmxReporter`. 
> Note that I was able to use the regular `alterConfig` API to change this 
> config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10292) fix flaky streams/streams_broker_bounce_test.py

2020-12-16 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250330#comment-17250330
 ] 

Bill Bejeck commented on KAFKA-10292:
-

Since this is an outstanding issue, I'm changing the blocker status.  Also, 
I've updated the fix version to 2.8.0.

 

Thanks,

Bill

> fix flaky streams/streams_broker_bounce_test.py
> ---
>
> Key: KAFKA-10292
> URL: https://issues.apache.org/jira/browse/KAFKA-10292
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.8.0
>
>
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "leader",
>   "failure_mode": "clean_bounce",
>   "num_threads": 1,
>   "sleep_time_secs": 120
> \}
> {quote}
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "controller",
>   "failure_mode": "hard_shutdown",
>   "num_threads": 3,
>   "sleep_time_secs": 120
> \}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10292) fix flaky streams/streams_broker_bounce_test.py

2020-12-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10292:

Fix Version/s: 2.8.0

> fix flaky streams/streams_broker_bounce_test.py
> ---
>
> Key: KAFKA-10292
> URL: https://issues.apache.org/jira/browse/KAFKA-10292
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.8.0
>
>
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "leader",
>   "failure_mode": "clean_bounce",
>   "num_threads": 1,
>   "sleep_time_secs": 120
> \}
> {quote}
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "controller",
>   "failure_mode": "hard_shutdown",
>   "num_threads": 3,
>   "sleep_time_secs": 120
> \}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10292) fix flaky streams/streams_broker_bounce_test.py

2020-12-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10292:

Priority: Major  (was: Blocker)

> fix flaky streams/streams_broker_bounce_test.py
> ---
>
> Key: KAFKA-10292
> URL: https://issues.apache.org/jira/browse/KAFKA-10292
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.7.0
>
>
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "leader",
>   "failure_mode": "clean_bounce",
>   "num_threads": 1,
>   "sleep_time_secs": 120
> \}
> {quote}
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "controller",
>   "failure_mode": "hard_shutdown",
>   "num_threads": 3,
>   "sleep_time_secs": 120
> \}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10292) fix flaky streams/streams_broker_bounce_test.py

2020-12-16 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10292:

Fix Version/s: (was: 2.7.0)

> fix flaky streams/streams_broker_bounce_test.py
> ---
>
> Key: KAFKA-10292
> URL: https://issues.apache.org/jira/browse/KAFKA-10292
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, system tests
>Reporter: Chia-Ping Tsai
>Assignee: Bruno Cadonna
>Priority: Major
>
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "leader",
>   "failure_mode": "clean_bounce",
>   "num_threads": 1,
>   "sleep_time_secs": 120
> \}
> {quote}
> {quote}
> Module: kafkatest.tests.streams.streams_broker_bounce_test
> Class:  StreamsBrokerBounceTest
> Method: test_broker_type_bounce
> Arguments:
> \{
>   "broker_type": "controller",
>   "failure_mode": "hard_shutdown",
>   "num_threads": 3,
>   "sleep_time_secs": 120
> \}
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10802) Spurious log message when starting consumers

2020-12-10 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247511#comment-17247511
 ] 

Bill Bejeck commented on KAFKA-10802:
-

cherry-picked to 2.7

> Spurious log message when starting consumers
> 
>
> Key: KAFKA-10802
> URL: https://issues.apache.org/jira/browse/KAFKA-10802
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.7.0, 2.6.1, 2.8.0
>
>
> Reported by Gary Russell in the [2.6.1 RC3 vote 
> thread|https://lists.apache.org/thread.html/r13d2c687b2fafbe9907fceb3d4f3cc6d5b34f9f36a7fcc985c38b506%40%3Cdev.kafka.apache.org%3E]
> I am seeing this on every consumer start:
> 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1] 
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
> clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
> needs to have a valid member id before actually entering a consumer group.
> Due to this change 
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> I understand what a MemberIdRequiredException is, but the previous (2.6.0) 
> log (with exception.getMessage()) didn't stand out like the new one does 
> because it was all on one line.
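For illustration, the two logging styles being contrasted, sketched with a generic SLF4J logger (not the exact Kafka source):

{code:java}
// 2.6.0 style: message only, one line, easy to skim past.
log.info("Rebalance failed: {}", exception.getMessage());

// New style: full stack trace, which makes a routine
// MemberIdRequiredException stand out on every consumer start.
log.info("Rebalance failed.", exception);
{code}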



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10802) Spurious log message when starting consumers

2020-12-10 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10802:

Fix Version/s: 2.7.0

> Spurious log message when starting consumers
> 
>
> Key: KAFKA-10802
> URL: https://issues.apache.org/jira/browse/KAFKA-10802
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.7.0, 2.6.1, 2.8.0
>
>
> Reported by Gary Russell in the [2.6.1 RC3 vote 
> thread|https://lists.apache.org/thread.html/r13d2c687b2fafbe9907fceb3d4f3cc6d5b34f9f36a7fcc985c38b506%40%3Cdev.kafka.apache.org%3E]
> I am seeing this on every consumer start:
> 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1] 
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
> clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
> needs to have a valid member id before actually entering a consumer group.
> Due to this change 
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> I understand what a MemberIdRequiredException is, but the previous (2.6.0) 
> log (with exception.getMessage()) didn't stand out like the new one does 
> because it was all on one line.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-10802) Spurious log message when starting consumers

2020-12-10 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247431#comment-17247431
 ] 

Bill Bejeck edited comment on KAFKA-10802 at 12/10/20, 7:07 PM:


Thanks for the heads up, [~mimaison]. I need to do a new RC for 2.7.0 anyway, so 
I'll include this as well.  -Is there a merged PR already?-  NM, I failed to 
read all the comments and I see that [~guozhang] has merged something into 
trunk.  


was (Author: bbejeck):
Thanks for the heads up [~mimaison] I need to do a new RC for 2.7.0 anyway so 
I'll include this.  Is there a merged PR already?

> Spurious log message when starting consumers
> 
>
> Key: KAFKA-10802
> URL: https://issues.apache.org/jira/browse/KAFKA-10802
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Priority: Major
>
> Reported by Gary Russell in the [2.6.1 RC3 vote 
> thread|https://lists.apache.org/thread.html/r13d2c687b2fafbe9907fceb3d4f3cc6d5b34f9f36a7fcc985c38b506%40%3Cdev.kafka.apache.org%3E]
> I am seeing this on every consumer start:
> 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1] 
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
> clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
> needs to have a valid member id before actually entering a consumer group.
> Due to this change 
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> I understand what a MemberIdRequiredException is, but the previous (2.6.0) 
> log (with exception.getMessage()) didn't stand out like the new one does 
> because it was all on one line.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10802) Spurious log message when starting consumers

2020-12-10 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17247431#comment-17247431
 ] 

Bill Bejeck commented on KAFKA-10802:
-

Thanks for the heads up [~mimaison] I need to do a new RC for 2.7.0 anyway so 
I'll include this.  Is there a merged PR already?

> Spurious log message when starting consumers
> 
>
> Key: KAFKA-10802
> URL: https://issues.apache.org/jira/browse/KAFKA-10802
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Priority: Major
>
> Reported by Gary Russell in the [2.6.1 RC3 vote 
> thread|https://lists.apache.org/thread.html/r13d2c687b2fafbe9907fceb3d4f3cc6d5b34f9f36a7fcc985c38b506%40%3Cdev.kafka.apache.org%3E]
> I am seeing this on every consumer start:
> 2020-11-25 13:54:34.858  INFO 42250 --- [ntainer#0-0-C-1] 
> o.a.k.c.c.internals.AbstractCoordinator  : [Consumer 
> clientId=consumer-ktest26int-1, groupId=ktest26int] Rebalance failed.
> org.apache.kafka.common.errors.MemberIdRequiredException: The group member 
> needs to have a valid member id before actually entering a consumer group.
> Due to this change 
> https://github.com/apache/kafka/commit/16ec1793d53700623c9cb43e711f585aafd44dd4#diff-15efe9b844f78b686393b6c2e2ad61306c3473225742caed05c7edab9a138832R468
> I understand what a MemberIdRequiredException is, but the previous (2.6.0) 
> log (with exception.getMessage()) didn't stand out like the new one does 
> because it was all on one line.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10772) java.lang.IllegalStateException: There are insufficient bytes available to read assignment from the sync-group response (actual byte size 0)

2020-12-07 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10772:

Fix Version/s: (was: 2.7.0)

> java.lang.IllegalStateException: There are insufficient bytes available to 
> read assignment from the sync-group response (actual byte size 0)
> 
>
> Key: KAFKA-10772
> URL: https://issues.apache.org/jira/browse/KAFKA-10772
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Levani Kokhreidze
>Priority: Blocker
> Attachments: KAFKA-10772.log
>
>
> From time to time we encounter the following exception that results in Kafka 
> Streams threads dying.
> Broker version 2.4.1, Client version 2.6.0
> {code:java}
> Nov 27 00:59:53.681 streaming-app service: prod | streaming-app-2 | 
> stream-client [cluster1-profile-stats-pipeline-client-id] State transition 
> from REBALANCING to ERROR Nov 27 00:59:53.681 streaming-app service: prod | 
> streaming-app-2 | stream-client [cluster1-profile-stats-pipeline-client-id] 
> State transition from REBALANCING to ERROR Nov 27 00:59:53.682 streaming-app 
> service: prod | streaming-app-2 | 2020-11-27 00:59:53.681 ERROR 105 --- 
> [-StreamThread-1] .KafkaStreamsBasedStreamProcessingEngine : Stream 
> processing pipeline: [profile-stats] encountered unrecoverable exception. 
> Thread: [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is 
> completely dead. If all worker threads die, Kafka Streams will be moved to 
> permanent ERROR state. Nov 27 00:59:53.682 streaming-app service: prod | 
> streaming-app-2 | Stream processing pipeline: [profile-stats] encountered 
> unrecoverable exception. Thread: 
> [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is completely 
> dead. If all worker threads die, Kafka Streams will be moved to permanent 
> ERROR state. java.lang.IllegalStateException: There are insufficient bytes 
> available to read assignment from the sync-group response (actual byte size 
> 0) , this is not expected; it is possible that the leader's assign function 
> is buggy and did not return any assignment for this member, or because static 
> member is configured and the protocol is buggy hence did not get the 
> assignment for this member at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:367)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:440)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:513)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1268)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230) 
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:766)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:624)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10772) java.lang.IllegalStateException: There are insufficient bytes available to read assignment from the sync-group response (actual byte size 0)

2020-12-07 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245460#comment-17245460
 ] 

Bill Bejeck commented on KAFKA-10772:
-

Hi [~vvcephei], 

Since this appears to be an outstanding issue with either static membership or 
something specific to 2.6, I'm going to say that it's not a blocker for 2.7.  
Additionally, there may be a workaround if static membership can be avoided. 

I'll update the fix version accordingly.
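
For reference, a minimal sketch of that workaround: static membership is enabled solely by setting {{group.instance.id}}, so leaving it unset keeps the member on the dynamic-membership path (illustrative consumer configuration; a Streams app would pass the same key through its consumer configs):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

static Properties dynamicMembershipConfig() {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "profile-stats");
    // Workaround: do NOT set group.instance.id; without it the member joins
    // dynamically and the static-membership code path is never exercised.
    // props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "instance-1");
    return props;
}
{code}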

> java.lang.IllegalStateException: There are insufficient bytes available to 
> read assignment from the sync-group response (actual byte size 0)
> 
>
> Key: KAFKA-10772
> URL: https://issues.apache.org/jira/browse/KAFKA-10772
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Levani Kokhreidze
>Priority: Blocker
> Attachments: KAFKA-10772.log
>
>
> From time to time we encounter the following exception that results in Kafka 
> Streams threads dying.
> Broker version 2.4.1, Client version 2.6.0
> {code:java}
> Nov 27 00:59:53.681 streaming-app service: prod | streaming-app-2 | 
> stream-client [cluster1-profile-stats-pipeline-client-id] State transition 
> from REBALANCING to ERROR Nov 27 00:59:53.681 streaming-app service: prod | 
> streaming-app-2 | stream-client [cluster1-profile-stats-pipeline-client-id] 
> State transition from REBALANCING to ERROR Nov 27 00:59:53.682 streaming-app 
> service: prod | streaming-app-2 | 2020-11-27 00:59:53.681 ERROR 105 --- 
> [-StreamThread-1] .KafkaStreamsBasedStreamProcessingEngine : Stream 
> processing pipeline: [profile-stats] encountered unrecoverable exception. 
> Thread: [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is 
> completely dead. If all worker threads die, Kafka Streams will be moved to 
> permanent ERROR state. Nov 27 00:59:53.682 streaming-app service: prod | 
> streaming-app-2 | Stream processing pipeline: [profile-stats] encountered 
> unrecoverable exception. Thread: 
> [cluster1-profile-stats-pipeline-client-id-StreamThread-1] is completely 
> dead. If all worker threads die, Kafka Streams will be moved to permanent 
> ERROR state. java.lang.IllegalStateException: There are insufficient bytes 
> available to read assignment from the sync-group response (actual byte size 
> 0) , this is not expected; it is possible that the leader's assign function 
> is buggy and did not return any assignment for this member, or because static 
> member is configured and the protocol is buggy hence did not get the 
> assignment for this member at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:367)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:440)
>  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:359)
>  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:513)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1268)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230) 
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) 
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:766)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:624)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
>  at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10799) AlterIsr path does not update ISR shrink/expand meters

2020-12-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10799.
-
Resolution: Fixed

> AlterIsr path does not update ISR shrink/expand meters
> --
>
> Key: KAFKA-10799
> URL: https://issues.apache.org/jira/browse/KAFKA-10799
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: David Arthur
>Priority: Blocker
> Fix For: 2.7.0
>
>
> We forgot to update the ISR change metrics when we added support for 
> AlterIsr. These are currently only updated when ISR changes are made through 
> Zk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10687) Produce request should be bumped for new error code PRODUCE_FENCED

2020-11-18 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10687.
-
Resolution: Fixed

Fixed for 2.7 via https://github.com/apache/kafka/pull/9613

> Produce request should be bumped for new error code PRODUCE_FENCED
> --
>
> Key: KAFKA-10687
> URL: https://issues.apache.org/jira/browse/KAFKA-10687
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Blocker
> Fix For: 2.7.0
>
>
> In https://issues.apache.org/jira/browse/KAFKA-9910, we missed a case where 
> the ProduceRequest needs to be bumped to return the new error code 
> PRODUCE_FENCED. This gap needs to be addressed as a blocker since it is 
> shipping in 2.7.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10679) AK site docs changes need to get ported to Kafka/docs

2020-11-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-10679:
---

Assignee: Bill Bejeck

> AK site docs changes need to get ported to Kafka/docs
> -
>
> Key: KAFKA-10679
> URL: https://issues.apache.org/jira/browse/KAFKA-10679
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.7.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>
> During the update of the Apache Kafka website, changes made to the kafka-site 
> repo were not made to the kafka/docs directory.
> All the changes made need to get migrated to kafka/docs to keep the website 
> in sync. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-10679) AK site docs changes need to get ported to Kafka/docs

2020-11-03 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10679:

Comment: was deleted

(was: Current PR https://github.com/apache/kafka/pull/9551)

> AK site docs changes need to get ported to Kafka/docs
> -
>
> Key: KAFKA-10679
> URL: https://issues.apache.org/jira/browse/KAFKA-10679
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.7.0
>Reporter: Bill Bejeck
>Priority: Major
>
> During the update of the Apache Kafka website, changes made to the kafka-site 
> repo were not made to the kafka/docs directory.
> All the changes made need to get migrated to kafka/docs to keep the website 
> in sync. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10679) AK site docs changes need to get ported to Kafka/docs

2020-11-03 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17225574#comment-17225574
 ] 

Bill Bejeck commented on KAFKA-10679:
-

Current PR https://github.com/apache/kafka/pull/9551

> AK site docs changes need to get ported to Kafka/docs
> -
>
> Key: KAFKA-10679
> URL: https://issues.apache.org/jira/browse/KAFKA-10679
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.7.0
>Reporter: Bill Bejeck
>Priority: Major
>
> During the update of the Apache Kafka website, changes made to the kafka-site 
> repo were not made to the kafka/docs directory.
> All the changes made need to get migrated to kafka/docs to keep the website 
> in sync. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10679) AK site docs changes need to get ported to Kafka/docs

2020-11-03 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10679:

Summary: AK site docs changes need to get ported to Kafka/docs  (was: AK 
documentation changes need to get ported to Kafka/docs)

> AK site docs changes need to get ported to Kafka/docs
> -
>
> Key: KAFKA-10679
> URL: https://issues.apache.org/jira/browse/KAFKA-10679
> Project: Kafka
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 2.7.0
>Reporter: Bill Bejeck
>Priority: Major
>
> During the update of the Apache Kafka website, changes made to the kafka-site 
> repo were not made to the kafka/docs directory.
> All the changes made need to get migrated to kafka/docs to keep the website 
> in sync. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10679) AK documentation changes need to get ported to Kafka/docs

2020-11-03 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-10679:
---

 Summary: AK documentation changes need to get ported to Kafka/docs
 Key: KAFKA-10679
 URL: https://issues.apache.org/jira/browse/KAFKA-10679
 Project: Kafka
  Issue Type: Bug
  Components: docs
Affects Versions: 2.7.0
Reporter: Bill Bejeck


During the update of the Apache Kafka website, changes made to the kafka-site 
repo were not made to the kafka/docs directory.

All the changes made need to get migrated to kafka/docs to keep the website in 
sync. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10370) WorkerSinkTask: IllegalStateException caused by consumer.seek(tp, offsets) when (tp, offsets) are supplied by WorkerSinkTaskContext

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10370:

Fix Version/s: (was: 2.7.0)

> WorkerSinkTask: IllegalStateException caused by consumer.seek(tp, offsets) 
> when (tp, offsets) are supplied by WorkerSinkTaskContext
> --
>
> Key: KAFKA-10370
> URL: https://issues.apache.org/jira/browse/KAFKA-10370
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
>Priority: Major
>
> In 
> [WorkerSinkTask.java|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java],
>  when we want the consumer to consume from certain offsets, rather than from 
> the last committed offset, 
> [WorkerSinkTaskContext|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTaskContext.java#L63-L66]
>  provided a way to supply the offsets from external (e.g. implementation of 
> SinkTask) to rewind the consumer. 
> In the [poll() 
> method|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L312],
>  it first calls 
> [rewind()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L615-L633]
>  to (1) read the offsets from WorkerSinkTaskContext and (2), if the offsets are 
> not empty, call consumer.seek(tp, offset) to rewind the consumer.
> As a part of [WorkerSinkTask 
> initialization|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L290-L307],
>  when the [SinkTask 
> starts|https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/sink/SinkTask.java#L83-L88],
>  we can supply the specific offsets via +"context.offset(supplied_offsets);"+ 
> in the start() method, so that when the consumer does its first poll, it should 
> rewind to the specific offsets in the rewind() method. However, in practice we 
> saw the following IllegalStateException when running consumer.seek(tp, 
> offsets):
> {code:java}
> [2020-08-07 23:53:55,752] INFO WorkerSinkTask{id=MirrorSinkConnector-0} 
> Rewind test-1 to offset 3 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:648)
> [2020-08-07 23:53:55,752] INFO [Consumer 
> clientId=connector-consumer-MirrorSinkConnector-0, 
> groupId=connect-MirrorSinkConnector] Seeking to offset 3 for partition test-1 
> (org.apache.kafka.clients.consumer.KafkaConsumer:1592)
> [2020-08-07 23:53:55,752] ERROR WorkerSinkTask{id=MirrorSinkConnector-0} Task 
> threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:187)
> java.lang.IllegalStateException: No current assignment for partition test-1
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:368)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.seekUnvalidated(SubscriptionState.java:385)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1597)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.rewind(WorkerSinkTask.java:649)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)
> at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
> at 
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [2020-08-07 23:53:55,752] ERROR WorkerSinkTask{id=MirrorSinkConnector-0} Task 
> is being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask:188)
> {code}
> As suggested in 
> https://stackoverflow.com/questions/41008610/kafkaconsumer-0-10-java-api-error-message-no-current-assignment-for-partition/41010594,
>  the resolution (which has been initially verified) proposed in the attached 
> PR is to use *consumer.assign* with *consumer.seek*, instead of 
> 
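
A minimal sketch of that approach on a plain KafkaConsumer (the helper is illustrative; the actual change lives in WorkerSinkTask):
{code:java}
import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

static void rewindTo(KafkaConsumer<byte[], byte[]> consumer, TopicPartition tp, long offset) {
    // seek() requires the partition to be currently assigned; otherwise it throws
    // IllegalStateException("No current assignment for partition ...").
    // Assigning explicitly first makes the subsequent seek safe.
    consumer.assign(Collections.singleton(tp));
    consumer.seek(tp, offset);
}
{code}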

[jira] [Updated] (KAFKA-10378) issue when create producer using java

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10378:

Fix Version/s: (was: 2.6.1)
   (was: 2.7.0)

> issue when create producer using java
> -
>
> Key: KAFKA-10378
> URL: https://issues.apache.org/jira/browse/KAFKA-10378
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.6.0
> Environment: mac os
> java version "1.8.0_231"
> intellij 
>Reporter: Mohammad Abdelqader
>Assignee: Luke Chen
>Priority: Blocker
>
> I created a simple producer in Java using IntelliJ. When I run the project, 
> it returns the following issue:
> [kafka-producer-network-thread | producer-1] ERROR 
> org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread 
> 'kafka-producer-network-thread | producer-1':[kafka-producer-network-thread | 
> producer-1] ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught 
> exception in thread 'kafka-producer-network-thread | 
> producer-1':java.lang.NoClassDefFoundError: 
> com/fasterxml/jackson/databind/JsonNode at 
> org.apache.kafka.common.requests.ApiVersionsRequest$Builder.<init>(ApiVersionsRequest.java:36)
>  at 
> org.apache.kafka.clients.NetworkClient.handleConnections(NetworkClient.java:910)
>  at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:555) at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325) 
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240) at 
> java.lang.Thread.run(Thread.java:748)Caused by: 
> java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.JsonNode at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:418) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 6 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9497) Brokers start up even if SASL provider is not loaded and throw NPE when clients connect

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9497:
---
Fix Version/s: (was: 2.7.0)

> Brokers start up even if SASL provider is not loaded and throw NPE when 
> clients connect
> ---
>
> Key: KAFKA-9497
> URL: https://issues.apache.org/jira/browse/KAFKA-9497
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.1.1, 2.4.0
>Reporter: Rajini Sivaram
>Assignee: Ron Dagostino
>Priority: Major
>
> Note: This is not a regression, this has been the behaviour since SASL was 
> first implemented in Kafka.
>  
> Sasl.createSaslServer and Sasl.createSaslClient may return null if a SASL 
> provider that works for the specified configs cannot be created. We don't 
> currently handle this case. As a result, the broker/client throws a 
> NullPointerException if a provider has not been loaded. On the broker side, 
> we allow brokers to start up successfully even if the SASL providers for their 
> enabled mechanisms are not found. For the SASL mechanisms 
> PLAIN/SCRAM-xx/OAUTHBEARER, the login module in Kafka loads the SASL 
> providers. If the login module is incorrectly configured, brokers start up and 
> then fail client connections with an NPE. Clients see disconnections 
> during authentication as a result. It is difficult to tell from the client or 
> broker logs why the failure occurred. We should fail during startup if SASL 
> providers are not found and provide better diagnostics for this case.
>  
>  
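>
> A minimal sketch of the kind of fail-fast check being proposed, using the standard {{javax.security.sasl}} API (the wrapper method is hypothetical):
> {code:java}
> import java.util.Map;
> import javax.security.auth.callback.CallbackHandler;
> import javax.security.sasl.Sasl;
> import javax.security.sasl.SaslException;
> import javax.security.sasl.SaslServer;
>
> static SaslServer createServerOrFail(String mechanism, String serverName,
>                                      Map<String, ?> props, CallbackHandler handler)
>         throws SaslException {
>     // Sasl.createSaslServer returns null (rather than throwing) when no
>     // registered provider supports the requested mechanism/configuration.
>     SaslServer server = Sasl.createSaslServer(mechanism, "kafka", serverName, props, handler);
>     if (server == null) {
>         // Fail at startup with a diagnostic instead of hitting an NPE
>         // on the first client connection.
>         throw new SaslException("No SASL server provider found for mechanism " + mechanism);
>     }
>     return server;
> }
> {code}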



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10297) Don't use deprecated producer config `retries`

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10297:

Fix Version/s: (was: 2.7.0)

> Don't use deprecated producer config `retries`
> --
>
> Key: KAFKA-10297
> URL: https://issues.apache.org/jira/browse/KAFKA-10297
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.7.0
>Reporter: Matthias J. Sax
>Priority: Blocker
>
> In 2.7.0 release, producer config `retries` gets deprecated via KIP-572.
> Connect is still using this config, which needs to be fixed (cf 
> [https://github.com/apache/kafka/pull/8864/files#r439685920])
> {quote}Btw: @hachikuji raised a concern about this issue, too: 
> https://github.com/apache/kafka/pull/8864#pullrequestreview-443424531
> > I just had one question about the proposal. Using retries=0 in the producer 
> > allows the user to have "at-most-once" delivery. This allows the 
> > application to implement its own retry logic for example. Do we still have 
> > a way to do this once this configuration is gone?
> So maybe we need to do some follow-up work in the `Producer` to make it work 
> for Connect. But I would defer this to the follow-up work.
> My original thought was that setting `delivery.timeout.ms := 
> request.timeout.ms` might prevent internal retries. But we need to verify this. It 
> was also brought to my attention that this might not work if the network 
> disconnects -- only `retries=0` would prevent re-opening the connection, but a 
> low timeout would not prevent retries.
> In KIP-572, we proposed for Kafka Streams itself to treat `task.timeout.ms = 
> 0` as "no retries" -- maybe we can do a similar thing for the producer?
> There is also `max.block.ms` that we should consider. Unfortunately, I am not 
> an expert on the `Producer`. But I would like to move forward with KIP-572 
> for now and am happy to help resolve those questions.
> {quote}
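>
> A sketch of the two at-most-once configurations contrasted in the quote (illustrative values; whether the timeout-based variant truly prevents internal retries is exactly the open question above):
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.ProducerConfig;
>
> static Properties atMostOnceCandidates() {
>     Properties props = new Properties();
>     // Legacy approach, deprecated by KIP-572: no internal retries at all
>     props.put(ProducerConfig.RETRIES_CONFIG, 0);
>
>     // Discussed replacement: bound the total send time instead.
>     // Note delivery.timeout.ms must be >= request.timeout.ms + linger.ms.
>     props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
>     props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
>     props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 30000);
>     return props;
> }
> {code}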



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10638:

Priority: Major  (was: Blocker)

> QueryableStateIntegrationTest fails due to stricter store checking
> --
>
> Key: KAFKA-10638
> URL: https://issues.apache.org/jira/browse/KAFKA-10638
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.8.0, 2.7.1
>
>
> Observed:
> {code:java}
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>   at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
>   at 
> org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
>   at 
> org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
>   at 
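
For context, a minimal sketch of the guard a test needs before querying a store (illustrative; not the actual test fix):
{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

static ReadOnlyKeyValueStore<String, String> waitAndGetStore(KafkaStreams streams)
        throws InterruptedException {
    // Stores are only queryable once the rebalance has completed; polling the
    // client state avoids the InvalidStateStoreException in the stack trace above.
    while (streams.state() != KafkaStreams.State.RUNNING) {
        Thread.sleep(100L);
    }
    return streams.store(StoreQueryParameters.fromNameAndType(
            "source-table", QueryableStoreTypes.<String, String>keyValueStore()));
}
{code}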

[jira] [Comment Edited] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-27 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221610#comment-17221610
 ] 

Bill Bejeck edited comment on KAFKA-10638 at 10/27/20, 5:48 PM:


Hey [~vvcephei] I appreciate you wanting to get a fix in for the test, but I'm 
not sure it meets the criteria for a blocker.

Going back over the previous [2.7 
builds|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/activity]
 [32-41], it's only failed once in [build 
40|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/40/tests/]

For the 2.7.0 release process, I'm going to reduce the severity and move the 
fix version to 2.7.1 and 2.8.0


was (Author: bbejeck):
Hey [~vvcephei] I appreciate you wanting to get a fix in for the test, but I'm 
not sure it meets the criteria for a blocker.

Going back over the previous [2.7 
builds|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/activity],
 it's only failed once in [build 
40|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/40/tests/]

For the 2.7.0 release process, I'm going to reduce the severity and move the 
fix version to 2.7.1 and 2.8.0

> QueryableStateIntegrationTest fails due to stricter store checking
> --
>
> Key: KAFKA-10638
> URL: https://issues.apache.org/jira/browse/KAFKA-10638
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Observed:
> {code:java}
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>   at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
>   at 
> org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
>   at 
> org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> 

[jira] [Updated] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-27 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10638:

Fix Version/s: (was: 2.7.0)
   2.7.1
   2.8.0

> QueryableStateIntegrationTest fails due to stricter store checking
> --
>
> Key: KAFKA-10638
> URL: https://issues.apache.org/jira/browse/KAFKA-10638
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.8.0, 2.7.1
>
>
> Observed:
> {code:java}
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>   at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
>   at 
> org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
>   at 
> org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> 

[jira] [Commented] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-27 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221610#comment-17221610
 ] 

Bill Bejeck commented on KAFKA-10638:
-

Hey [~vvcephei] I appreciate you wanting to get a fix in for the test, but I'm 
not sure it meets the criteria for a blocker.

Going back over the previous [2.7 
builds|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/activity],
 it's only failed once in [build 
40|https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-2.7-jdk8/detail/kafka-2.7-jdk8/40/tests/]

For the 2.7.0 release process, I'm going to reduce the severity and move the 
fix version to 2.7.1 and 2.8.0

> QueryableStateIntegrationTest fails due to stricter store checking
> --
>
> Key: KAFKA-10638
> URL: https://issues.apache.org/jira/browse/KAFKA-10638
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Observed:
> {code:java}
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>   at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
>   at 
> org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
>   at 
> org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> 

[jira] [Resolved] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-26 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9381.

Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/9486].

 

Merged to trunk and cherry-picked to 2.7

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.7.0
>
>
> As per title, empty (aside from MANIFEST, LICENCE and NOTICE) 
> javadocs/scaladocs jars on central for any version (kafka or scala), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10641) ACL Command hangs with SSL as not exiting with proper error code

2020-10-26 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10641:

Fix Version/s: (was: 2.7.0)
   2.8.0

> ACL Command hangs with SSL as not exiting with proper error code
> -
>
> Key: KAFKA-10641
> URL: https://issues.apache.org/jira/browse/KAFKA-10641
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 2.8.0
>
>
> When using the ACL command with SSL, the process does not terminate after a 
> successful ACL operation.
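>
> A minimal sketch of the usual fix pattern for this class of hang, assuming the command is backed by an Admin client (hypothetical snippet, not the actual AclCommand code):
> {code:java}
> import java.util.Collection;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.common.acl.AclBinding;
>
> static void addAcls(Properties adminProps, Collection<AclBinding> bindings) throws Exception {
>     // try-with-resources guarantees close() even on failure; an unclosed Admin
>     // client keeps non-daemon network threads alive, so the JVM never exits.
>     try (Admin admin = Admin.create(adminProps)) {
>         admin.createAcls(bindings).all().get();
>     }
> }
> {code}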



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10641) ACL Command hangs with SSL as not exiting with proper error code

2020-10-26 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221052#comment-17221052
 ] 

Bill Bejeck commented on KAFKA-10641:
-

Hi [~senthilm-ms], since this is not a blocker issue and we've passed code freeze 
for 2.7, I'm going to change the fix version of this ticket to 2.8 as part of 
the 2.7.0 release process.

> ACL Command hangs with SSL as not exiting with proper error code
> -
>
> Key: KAFKA-10641
> URL: https://issues.apache.org/jira/browse/KAFKA-10641
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 2.7.0
>
>
> When using the ACL command with SSL, the process does not terminate after a 
> successful ACL operation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10642) Expose the real stack trace if any exception occurred during SSL Client Trust Verification in extension

2020-10-26 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10642:

Fix Version/s: (was: 2.7.0)
   2.8.0

> Expose the real stack trace if any exception occurred during SSL Client Trust 
> Verification in extension
> ---
>
> Key: KAFKA-10642
> URL: https://issues.apache.org/jira/browse/KAFKA-10642
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 2.8.0
>
>
> If any exception occurs in the custom implementation of client 
> trust verification (i.e. using security.provider), the inner exception is 
> suppressed or hidden and not logged to the log file.
>  
> Below is an example stack trace not showing actual exception from the 
> extension/custom implementation.
>  
> [2020-05-13 14:30:26,892] ERROR [KafkaServer id=423810470] Fatal error during 
> KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer)[2020-05-13 14:30:26,892] ERROR [KafkaServer 
> id=423810470] Fatal error during KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:753) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:394) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
>  at kafka.network.SocketServer.startup(SocketServer.scala:121) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:265) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala)Caused by: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
>  at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:69)
>  ... 18 more
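>
> A minimal sketch of the exception-chaining pattern the fix calls for, shown on a generic delegating {{X509TrustManager}} (illustrative, not Kafka's internal classes):
> {code:java}
> import java.security.cert.CertificateException;
> import java.security.cert.X509Certificate;
> import javax.net.ssl.X509TrustManager;
>
> class DelegatingTrustManager implements X509TrustManager {
>     private final X509TrustManager delegate;
>
>     DelegatingTrustManager(X509TrustManager delegate) { this.delegate = delegate; }
>
>     @Override
>     public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
>         try {
>             delegate.checkClientTrusted(chain, authType);
>         } catch (RuntimeException e) {
>             // Chain the original exception as the cause so the real failure
>             // appears in the broker log instead of being swallowed.
>             throw new CertificateException("Client trust verification failed", e);
>         }
>     }
>
>     @Override
>     public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
>         delegate.checkServerTrusted(chain, authType);
>     }
>
>     @Override
>     public X509Certificate[] getAcceptedIssuers() {
>         return delegate.getAcceptedIssuers();
>     }
> }
> {code}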



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10642) Expose the real stack trace if any exception occurred during SSL Client Trust Verification in extension

2020-10-26 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221051#comment-17221051
 ] 

Bill Bejeck commented on KAFKA-10642:
-

Hi [~senthilm-ms], since this is not a blocker issue and we've passed code freeze 
for 2.7, I'm going to change the fix version of this ticket to 2.8 as part of 
the 2.7.0 release process.

> Expose the real stack trace if any exception occurred during SSL Client Trust 
> Verification in extension
> ---
>
> Key: KAFKA-10642
> URL: https://issues.apache.org/jira/browse/KAFKA-10642
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 2.7.0
>
>
> If any exception occurs in the custom implementation of client 
> trust verification (i.e. using security.provider), the inner exception is 
> suppressed or hidden and not logged to the log file.
>  
> Below is an example stack trace not showing actual exception from the 
> extension/custom implementation.
>  
> [2020-05-13 14:30:26,892] ERROR [KafkaServer id=423810470] Fatal error during 
> KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer)[2020-05-13 14:30:26,892] ERROR [KafkaServer 
> id=423810470] Fatal error during KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:753) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:394) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
>  at kafka.network.SocketServer.startup(SocketServer.scala:121) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:265) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala)Caused by: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
>  at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:69)
>  ... 18 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10647) Only serialize owned partition when consumer protocol version >= 1

2020-10-26 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10647:

Fix Version/s: 2.7.0

> Only serialize owned partition when consumer protocol version >= 1 
> ---
>
> Key: KAFKA-10647
> URL: https://issues.apache.org/jira/browse/KAFKA-10647
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 2.7.0
>
>
> A regression got introduced by https://github.com/apache/kafka/pull/8897. The 
> owned partition field must be ignored for version < 1 otherwise the 
> serialization fails with an unsupported version exception.
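>
> A fully hypothetical sketch of the version gating the fix implies (the real change is in ConsumerProtocol's generated serialization): fields added in a later protocol version must be skipped when writing at an older version.
> {code:java}
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.util.List;
>
> static void writeSubscription(DataOutputStream out, List<String> topics,
>                               List<Integer> ownedPartitions, short version) throws IOException {
>     out.writeShort(version);
>     out.writeInt(topics.size());
>     for (String topic : topics) {
>         out.writeUTF(topic);
>     }
>     if (version >= 1) {
>         // The ownedPartitions field only exists from version 1 onwards;
>         // unconditionally writing it at version 0 is what caused the regression.
>         out.writeInt(ownedPartitions.size());
>         for (int partition : ownedPartitions) {
>             out.writeInt(partition);
>         }
>     }
> }
> {code}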



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-23 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10638:

Fix Version/s: 2.7.0

> QueryableStateIntegrationTest fails due to stricter store checking
> --
>
> Key: KAFKA-10638
> URL: https://issues.apache.org/jira/browse/KAFKA-10638
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Observed:
> {code:java}
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>   at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
>   at 
> org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
>   at 
> org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
>   at 
> org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
>   at 
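
Not part of the original report: a minimal sketch of how a caller can guard
against this failure mode by retrying the store lookup until the instance has
finished rebalancing. It assumes the Kafka Streams 2.5+ StoreQueryParameters
API; the store name and timeout values are illustrative.

{code:java}
import java.time.Duration;
import java.time.Instant;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public final class StoreLookup {
    // Retry the lookup while the stream thread is still PARTITIONS_ASSIGNED.
    public static ReadOnlyKeyValueStore<String, String> waitForStore(
            final KafkaStreams streams, final Duration timeout) throws InterruptedException {
        final Instant deadline = Instant.now().plus(timeout);
        while (true) {
            try {
                return streams.store(StoreQueryParameters.fromNameAndType(
                        "source-table", QueryableStoreTypes.<String, String>keyValueStore()));
            } catch (final InvalidStateStoreException retriable) {
                // Thrown while the instance is still rebalancing; back off and retry.
                if (Instant.now().isAfter(deadline)) {
                    throw retriable;
                }
                Thread.sleep(100L);
            }
        }
    }
}
{code}

A caller would invoke waitForStore(streams, Duration.ofSeconds(30)) after
KafkaStreams#start() instead of querying the store unconditionally.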

[jira] [Updated] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-23 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9381:
---
Priority: Blocker  (was: Critical)

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.8.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-23 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9381:
---
Fix Version/s: (was: 2.8.0)
   2.7.0

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.7.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10631) ProducerFencedException is not Handled on Offset Commit

2020-10-22 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-10631:
---

Assignee: Bruno Cadonna

> ProducerFencedException is not Handled on Offset Commit
> ---
>
> Key: KAFKA-10631
> URL: https://issues.apache.org/jira/browse/KAFKA-10631
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.7.0
>
>
> The transaction manager currently does not handle producer-fenced errors
> returned from an offset commit request.
> We found this bug because we encountered the following exception in our soak 
> cluster:
> {code:java}
> org.apache.kafka.streams.errors.StreamsException: Error encountered trying to 
> commit a transaction [stream-thread [i-037c09b3c48522d8d-StreamThread-3] task 
> [0_0]]
> at 
> org.apache.kafka.streams.processor.internals.StreamsProducer.commitTransaction(StreamsProducer.java:256)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commitOffsetsOrTransaction(TaskManager.java:1050)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commit(TaskManager.java:1013)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:886)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:677)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:512)
> [2020-10-22T04:09:54+02:00] 
> (streams-soak-2-7-eos-alpha_soak_i-037c09b3c48522d8d_streamslog) Caused by: 
> org.apache.kafka.common.KafkaException: Unexpected error in 
> TxnOffsetCommitResponse: There is a newer producer with the same 
> transactionalId which fences the current one.
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnOffsetCommitHandler.handleResponse(TransactionManager.java:1726)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1278)
> at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
> at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:584)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:576)
> at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:415)
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
> at java.lang.Thread.run(Thread.java:748)
> {code}
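
The fix belongs in the client's TransactionManager response handling; for
context, a hedged application-side sketch (an assumption, not the patch) of
the documented contract that a fenced producer must be closed rather than
reused:

{code:java}
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.ProducerFencedException;

public final class TxnCommit {
    // Commit offsets within the ongoing transaction; a fenced producer is fatal.
    public static void commitOffsets(final Producer<byte[], byte[]> producer,
                                     final Map<TopicPartition, OffsetAndMetadata> offsets,
                                     final ConsumerGroupMetadata groupMetadata) {
        try {
            producer.sendOffsetsToTransaction(offsets, groupMetadata);
            producer.commitTransaction();
        } catch (final ProducerFencedException fatal) {
            // A newer producer with the same transactional.id fenced this one;
            // the only safe reaction is to close the producer and rethrow.
            producer.close(Duration.ZERO);
            throw fatal;
        }
    }
}
{code}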



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10631) ProducerFencedException is not Handled on Offset Commit

2020-10-22 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10631:

Fix Version/s: 2.7.0

> ProducerFencedException is not Handled on Offset Commit
> ---
>
> Key: KAFKA-10631
> URL: https://issues.apache.org/jira/browse/KAFKA-10631
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.0
>Reporter: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.7.0
>
>
> The transaction manager currently does not handle producer-fenced errors
> returned from an offset commit request.
> We found this bug because we encountered the following exception in our soak 
> cluster:
> {code:java}
> org.apache.kafka.streams.errors.StreamsException: Error encountered trying to 
> commit a transaction [stream-thread [i-037c09b3c48522d8d-StreamThread-3] task 
> [0_0]]
> at 
> org.apache.kafka.streams.processor.internals.StreamsProducer.commitTransaction(StreamsProducer.java:256)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commitOffsetsOrTransaction(TaskManager.java:1050)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commit(TaskManager.java:1013)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:886)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:677)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:512)
> [2020-10-22T04:09:54+02:00] 
> (streams-soak-2-7-eos-alpha_soak_i-037c09b3c48522d8d_streamslog) Caused by: 
> org.apache.kafka.common.KafkaException: Unexpected error in 
> TxnOffsetCommitResponse: There is a newer producer with the same 
> transactionalId which fences the current one.
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnOffsetCommitHandler.handleResponse(TransactionManager.java:1726)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1278)
> at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
> at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:584)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:576)
> at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:415)
> at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10554) Perform follower truncation based on epoch offsets returned in Fetch response

2020-10-22 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17219111#comment-17219111
 ] 

Bill Bejeck commented on KAFKA-10554:
-

Since this is not a blocker and we've hit code freeze, I'm going to move this 
ticket's fix version to 2.8 as part of the 2.7.0 release process. If that is 
incorrect, please discuss it on the [DISCUSS] Apache Kafka 2.7.0 release 
email thread.

 

> Perform follower truncation based on epoch offsets returned in Fetch response
> -
>
> Key: KAFKA-10554
> URL: https://issues.apache.org/jira/browse/KAFKA-10554
> Project: Kafka
>  Issue Type: Task
>  Components: replication
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.7.0
>
>
> KAFKA-10435 updated the fetch protocol for KIP-595 to return the diverging epoch 
> and offset as part of the fetch response. We can use this to truncate logs in 
> followers while processing fetch responses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10554) Perform follower truncation based on epoch offsets returned in Fetch response

2020-10-22 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10554:

Fix Version/s: (was: 2.7.0)
   2.8.0

> Perform follower truncation based on epoch offsets returned in Fetch response
> -
>
> Key: KAFKA-10554
> URL: https://issues.apache.org/jira/browse/KAFKA-10554
> Project: Kafka
>  Issue Type: Task
>  Components: replication
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.8.0
>
>
> KAFKA-10435 updated the fetch protocol for KIP-595 to return the diverging epoch 
> and offset as part of the fetch response. We can use this to truncate logs in 
> followers while processing fetch responses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10201) Update codebase to use more inclusive terms

2020-10-22 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-10201:

Fix Version/s: (was: 2.7.0)
   2.8.0

> Update codebase to use more inclusive terms
> ---
>
> Key: KAFKA-10201
> URL: https://issues.apache.org/jira/browse/KAFKA-10201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Priority: Major
> Fix For: 2.8.0
>
>
> see the corresponding KIP 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629:+Use+racially+neutral+terms+in+our+codebase



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10201) Update codebase to use more inclusive terms

2020-10-22 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17219093#comment-17219093
 ] 

Bill Bejeck commented on KAFKA-10201:
-

[~xvrl], since we've hit code freeze on 10/21, I'm going to set the fix 
version to 2.8 as part of the 2.7.0 release process.

> Update codebase to use more inclusive terms
> ---
>
> Key: KAFKA-10201
> URL: https://issues.apache.org/jira/browse/KAFKA-10201
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Priority: Major
> Fix For: 2.7.0
>
>
> see the corresponding KIP 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629:+Use+racially+neutral+terms+in+our+codebase



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-21 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218458#comment-17218458
 ] 

Bill Bejeck edited comment on KAFKA-9381 at 10/21/20, 5:59 PM:
---

Since this is a long-standing issue, I'm going to remove the blocker tag. I'm 
taking a look at getting this fixed in this release, so I have picked up the 
ticket.


was (Author: bbejeck):
Since this is a long-standing issue, I'm going to remove the blocker tag. 

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Critical
> Fix For: 2.8.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-21 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218458#comment-17218458
 ] 

Bill Bejeck commented on KAFKA-9381:


Since this is a long-standing issue, I'm going to remove the blocker tag. 

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.7.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-21 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9381:
---
Priority: Critical  (was: Blocker)

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Critical
> Fix For: 2.7.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-21 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9381:
---
Fix Version/s: (was: 2.7.0)
   2.8.0

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Critical
> Fix For: 2.8.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-21 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-9381:
--

Assignee: Bill Bejeck  (was: Randall Hauch)

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.7.0
>
>
> As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside
> from MANIFEST, LICENSE, and NOTICE) for every version (Kafka and Scala alike), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9910) Implement new transaction timed out error

2020-10-21 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218450#comment-17218450
 ] 

Bill Bejeck commented on KAFKA-9910:


This is a sub-task of KIP-588, which is not going into 2.7, so I'm going to move 
the fix version to 2.8.0 as part of the 2.7.0 release process.

> Implement new transaction timed out error
> -
>
> Key: KAFKA-9910
> URL: https://issues.apache.org/jira/browse/KAFKA-9910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core
>Reporter: Boyang Chen
>Assignee: HaiyuanZhao
>Priority: Major
> Fix For: 2.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9910) Implement new transaction timed out error

2020-10-21 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9910:
---
Fix Version/s: (was: 2.7.0)
   2.8.0

> Implement new transaction timed out error
> -
>
> Key: KAFKA-9910
> URL: https://issues.apache.org/jira/browse/KAFKA-9910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core
>Reporter: Boyang Chen
>Assignee: HaiyuanZhao
>Priority: Major
> Fix For: 2.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9803) Allow producers to recover gracefully from transaction timeouts

2020-10-21 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-9803:
---
Fix Version/s: (was: 2.7.0)
   2.8.0

> Allow producers to recover gracefully from transaction timeouts
> ---
>
> Key: KAFKA-9803
> URL: https://issues.apache.org/jira/browse/KAFKA-9803
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer, streams
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.8.0
>
>
> Transaction timeouts are detected by the transaction coordinator. When the 
> coordinator detects a timeout, it bumps the producer epoch and aborts the 
> transaction. The epoch bump is necessary in order to prevent the current 
> producer from being able to begin writing to a new transaction which was not 
> started through the coordinator.  
> Transactions may also be aborted if a new producer with the same 
> `transactional.id` starts up. Similarly, this results in an epoch bump. 
> Currently, the coordinator does not distinguish between these two cases. Both will end 
> up as a `ProducerFencedException`, which means the producer needs to shut 
> itself down. 
> We can improve this with the new APIs from KIP-360. When the coordinator 
> times out a transaction, it can remember that fact and allow the existing 
> producer to claim the bumped epoch and continue. Roughly the logic would work 
> like this:
> 1. When a transaction times out, set lastProducerEpoch to the current epoch 
> and do the normal bump.
> 2. Any transactional requests from the old epoch result in a new 
> TRANSACTION_TIMED_OUT error code, which is propagated to the application.
> 3. The producer recovers by sending InitProducerId with the current epoch. 
> The coordinator returns the bumped epoch.
> One issue that needs to be addressed is how to handle INVALID_PRODUCER_EPOCH 
> from Produce requests. Partition leaders will not generally know if a bumped 
> epoch was the result of a timed out transaction or a fenced producer. 
> Possibly the producer can treat these errors as abortable when they come from 
> Produce responses. In that case, the user would try to abort the transaction 
> and then we can see if it was due to a timeout or otherwise.
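
Not part of the original ticket: a hedged sketch of what the proposed recovery
flow could look like from the application's side. TransactionTimedOutException
is defined locally as a stand-in, since the TRANSACTION_TIMED_OUT surface
proposed by KIP-588 did not exist when this was filed; ProducerFencedException
is real.

{code:java}
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.errors.ProducerFencedException;

public final class TimeoutRecovery {
    // Illustrative stand-in for the proposed error type (step 2 above).
    public static final class TransactionTimedOutException extends RuntimeException { }

    public static void runTransaction(final Producer<byte[], byte[]> producer,
                                      final Runnable work) {
        try {
            producer.beginTransaction();
            work.run();
            producer.commitTransaction();
        } catch (final TransactionTimedOutException timedOut) {
            // Step 3: abort and continue under the bumped epoch
            // instead of shutting the producer down.
            producer.abortTransaction();
        } catch (final ProducerFencedException fenced) {
            // A genuine fence by another instance remains fatal.
            producer.close();
            throw fenced;
        }
    }
}
{code}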



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9803) Allow producers to recover gracefully from transaction timeouts

2020-10-21 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218446#comment-17218446
 ] 

Bill Bejeck commented on KAFKA-9803:


As discussed offline with [~bchen225242], it looks like KIP-588 won't make 2.7. 
As part of the 2.7.0 release process, I'm going to move the fix version to 2.8.0.

> Allow producers to recover gracefully from transaction timeouts
> ---
>
> Key: KAFKA-9803
> URL: https://issues.apache.org/jira/browse/KAFKA-9803
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer, streams
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.7.0
>
>
> Transaction timeouts are detected by the transaction coordinator. When the 
> coordinator detects a timeout, it bumps the producer epoch and aborts the 
> transaction. The epoch bump is necessary in order to prevent the current 
> producer from being able to begin writing to a new transaction which was not 
> started through the coordinator.  
> Transactions may also be aborted if a new producer with the same 
> `transactional.id` starts up. Similarly, this results in an epoch bump. 
> Currently, the coordinator does not distinguish between these two cases. Both will end 
> up as a `ProducerFencedException`, which means the producer needs to shut 
> itself down. 
> We can improve this with the new APIs from KIP-360. When the coordinator 
> times out a transaction, it can remember that fact and allow the existing 
> producer to claim the bumped epoch and continue. Roughly the logic would work 
> like this:
> 1. When a transaction times out, set lastProducerEpoch to the current epoch 
> and do the normal bump.
> 2. Any transactional requests from the old epoch result in a new 
> TRANSACTION_TIMED_OUT error code, which is propagated to the application.
> 3. The producer recovers by sending InitProducerId with the current epoch. 
> The coordinator returns the bumped epoch.
> One issue that needs to be addressed is how to handle INVALID_PRODUCER_EPOCH 
> from Produce requests. Partition leaders will not generally know if a bumped 
> epoch was the result of a timed out transaction or a fenced producer. 
> Possibly the producer can treat these errors as abortable when they come from 
> Produce responses. In that case, the user would try to abort the transaction 
> and then we can see if it was due to a timeout or otherwise.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-10515) NPE: Foreign key join serde may not be initialized with default serde if application is distributed

2020-10-20 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218008#comment-17218008
 ] 

Bill Bejeck edited comment on KAFKA-10515 at 10/20/20, 11:47 PM:
-

Resolved via [https://github.com/apache/kafka/pull/9338]

 

Merged to trunk and cherry-picked to 2.7


was (Author: bbejeck):
Resolved via https://github.com/apache/kafka/pull/9338

> NPE: Foreign key join serde may not be initialized with default serde if 
> application is distributed
> ---
>
> Key: KAFKA-10515
> URL: https://issues.apache.org/jira/browse/KAFKA-10515
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0, 2.5.1
>Reporter: Thorsten Hake
>Priority: Critical
> Fix For: 2.7.0, 2.5.2, 2.6.1
>
>
> The fix for KAFKA-9517 corrected the initialization of the foreign-key join 
> serdes for Kafka Streams applications that do not run distributed across 
> multiple instances.
> However, if an application runs distributed across multiple instances, the 
> foreign-key join serdes may still not be initialized, leading to the following 
> NPE:
> {noformat}
> Encountered the following error during 
> processing:java.lang.NullPointerException: null
>   at 
> org.apache.kafka.streams.kstream.internals.foreignkeyjoin.SubscriptionWrapperSerde$SubscriptionWrapperSerializer.serialize(SubscriptionWrapperSerde.java:85)
>   at 
> org.apache.kafka.streams.kstream.internals.foreignkeyjoin.SubscriptionWrapperSerde$SubscriptionWrapperSerializer.serialize(SubscriptionWrapperSerde.java:52)
>   at 
> org.apache.kafka.streams.state.internals.ValueAndTimestampSerializer.serialize(ValueAndTimestampSerializer.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ValueAndTimestampSerializer.serialize(ValueAndTimestampSerializer.java:50)
>   at 
> org.apache.kafka.streams.state.internals.ValueAndTimestampSerializer.serialize(ValueAndTimestampSerializer.java:27)
>   at 
> org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:192)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$put$3(MeteredKeyValueStore.java:144)
>   at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)
>   at 
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.put(MeteredKeyValueStore.java:144)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl$KeyValueStoreReadWriteDecorator.put(ProcessorContextImpl.java:487)
>   at 
> org.apache.kafka.streams.kstream.internals.foreignkeyjoin.SubscriptionStoreReceiveProcessorSupplier$1.process(SubscriptionStoreReceiveProcessorSupplier.java:102)
>   at 
> org.apache.kafka.streams.kstream.internals.foreignkeyjoin.SubscriptionStoreReceiveProcessorSupplier$1.process(SubscriptionStoreReceiveProcessorSupplier.java:55)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)
>   at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:104)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$3(StreamTask.java:383)
>   at 
> org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:806)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:383)
>   at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:475)
>   at 
> org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:550)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:802)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670){noformat}
> This happens because the processors for foreign key joins will be distributed 
> across multiple tasks. The serde will only be initialized with the default 
> 
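
Not part of the original report: a hedged workaround sketch that passes
explicit serdes so the foreign-key join does not depend on default-serde
propagation across tasks. Topic names, types, and the key-extraction logic
are illustrative assumptions.

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public final class FkJoinWithExplicitSerdes {
    public static void buildTopology(final StreamsBuilder builder) {
        final KTable<String, String> orders =
            builder.table("orders", Consumed.with(Serdes.String(), Serdes.String()));
        final KTable<String, String> customers =
            builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // Foreign-key join; Materialized.with(...) pins the result serdes
        // explicitly instead of relying on default-serde propagation.
        orders.join(
            customers,
            orderValue -> orderValue.split(":")[0],      // foreign-key extractor (illustrative)
            (order, customer) -> order + "/" + customer, // value joiner
            Materialized.with(Serdes.String(), Serdes.String()));
    }
}
{code}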

[jira] [Resolved] (KAFKA-10454) Kafka Streams Stuck in infinite REBALANCING loop when stream <> table join partitions don't match

2020-10-20 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10454.
-
Resolution: Fixed

Resolved via https://github.com/apache/kafka/pull/9237

> Kafka Streams Stuck in infinite REBALANCING loop when stream <> table join 
> partitions don't match
> -
>
> Key: KAFKA-10454
> URL: https://issues.apache.org/jira/browse/KAFKA-10454
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Levani Kokhreidze
>Assignee: Levani Kokhreidze
>Priority: Major
> Fix For: 2.7.0, 2.6.1, 2.8.0
>
>
> Here's the integration test: [https://github.com/apache/kafka/pull/9237]
>  
> At first glance, the issue is that when one joins a stream to a table whose 
> source topic doesn't have the same number of partitions as the stream topic, 
> `StateChangelogReader` tries to recover the table's state from changelog 
> partitions (which in this case are the same as the source topic's) that don't 
> exist. The logs are spammed with: 
>  
> {code:java}
> [2020-09-01 12:33:07,508] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-1 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,508] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-2 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,508] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-3 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,510] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-0 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,510] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-1 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,510] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-2 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,510] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-3 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,513] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-0 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,513] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-1 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 12:33:07,513] INFO stream-thread 
> [app-StreamTableJoinInfiniteLoopIntegrationTestloop-86ae06c3-5758-429f-9d29-94ed516db126-StreamThread-1]
>  End offset for changelog 
> topic-b-StreamTableJoinInfiniteLoopIntegrationTestloop-2 cannot be found; 
> will retry in the next time. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader:716) 
> [2020-09-01 
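
Not part of the original report: a hedged workaround sketch that repartitions
the stream to the table topic's partition count before joining, so the join is
co-partitioned. Topic names and the partition count of 4 are illustrative
assumptions (this uses the KStream#repartition API available since 2.6).

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Repartitioned;

public final class CoPartitionedJoin {
    public static void buildTopology(final StreamsBuilder builder) {
        final KStream<String, String> stream =
            builder.stream("topic-a", Consumed.with(Serdes.String(), Serdes.String()));
        final KTable<String, String> table =
            builder.table("topic-b", Consumed.with(Serdes.String(), Serdes.String()));

        // Repartition the stream to topic-b's partition count (4 is illustrative)
        // so the stream-table join is properly co-partitioned.
        stream.repartition(Repartitioned.<String, String>with(Serdes.String(), Serdes.String())
                  .withNumberOfPartitions(4))
              .join(table, (left, right) -> left + right);
    }
}
{code}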
