[jira] [Assigned] (KAFKA-16473) KafkaDockerWrapper uses wrong cluster ID when formatting log dir

2024-04-12 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-16473:
-

Assignee: Sebastian Marsching

> KafkaDockerWrapper uses wrong cluster ID when formatting log dir
> 
>
> Key: KAFKA-16473
> URL: https://issues.apache.org/jira/browse/KAFKA-16473
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Sebastian Marsching
>Assignee: Sebastian Marsching
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> There is a bug in {{KafkaDockerWrapper}} that causes {{Some(<value of the 
> CLUSTER_ID environment variable>)}} to be used as the cluster ID when 
> formatting the log dir when Kafka is started for the first time inside a 
> Docker container.
> More specifically, the problem is in {{formatStorageCmd}}: The code uses 
> {{env.get("CLUSTER_ID")}}, but this returns an {{Option[String]}}, not a 
> {{String}}.
> The code should instead check whether the environment variable is set, 
> raising an exception if it is not set.
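> A minimal sketch of the intended behaviour (names are illustrative; the
> actual {{formatStorageCmd}} signature may differ):
> {code:scala}
> // Sketch only: env.get returns Option[String], so the value must be
> // unwrapped explicitly instead of being rendered as "Some(...)".
> def clusterIdFrom(env: Map[String, String]): String =
>   env.getOrElse("CLUSTER_ID",
>     throw new RuntimeException("CLUSTER_ID environment variable is not set"))
> {code}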



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16473) KafkaDockerWrapper uses wrong cluster ID when formatting log dir

2024-04-12 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-16473.
---
Fix Version/s: 3.8.0
   3.7.1
   Resolution: Fixed

> KafkaDockerWrapper uses wrong cluster ID when formatting log dir
> 
>
> Key: KAFKA-16473
> URL: https://issues.apache.org/jira/browse/KAFKA-16473
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Sebastian Marsching
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> There is a bug in {{KafkaDockerWrapper}} that causes {{Some(<value of the 
> CLUSTER_ID environment variable>)}} to be used as the cluster ID when 
> formatting the log dir when Kafka is started for the first time inside a 
> Docker container.
> More specifically, the problem is in {{formatStorageCmd}}: The code uses 
> {{env.get("CLUSTER_ID")}}, but this returns an {{Option[String]}}, not a 
> {{String}}.
> The code should instead check whether the environment variable is set, 
> raising an exception if it is not set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831631#comment-17831631
 ] 

Manikumar commented on KAFKA-16310:
---

Thanks, I have reverted the KAFKA-16341 and KAFKA-16342 commits from the 3.6 branch.

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0.
> The impact of this issue is that when a batch contains records whose 
> timestamps are not in order, the offset returned for a timestamp will be 
> wrong (e.g. the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). The time index ends up storing the wrong offset, so the 
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> There are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0, 1, 2.
> Then a ListOffsets request with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is issued.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it still returns 2 as the offset with the 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), and the 
> leader didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}
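> The failing check can be reproduced with the Java AdminClient (a sketch;
> bootstrap address and topic name are placeholders):
> {code:scala}
> import java.util.{Collections, Properties}
> import org.apache.kafka.clients.admin.{Admin, AdminClientConfig, OffsetSpec}
> import org.apache.kafka.common.TopicPartition
> 
> val props = new Properties()
> props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
> val admin = Admin.create(props)
> val tp = new TopicPartition("test-topic", 0)
> // After producing timestamps t0+100, t0+400, t0+250 at offsets 0, 1, 2,
> // MAX_TIMESTAMP should report offset 1 (the middle record), not 2.
> val info = admin
>   .listOffsets(Collections.singletonMap(tp, OffsetSpec.maxTimestamp()))
>   .partitionResult(tp)
>   .get()
> println(s"offset=${info.offset()} timestamp=${info.timestamp()}")
> admin.close()
> {code}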



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16342) Fix compressed records

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16342:
--
Fix Version/s: (was: 3.6.2)

> Fix compressed records
> --
>
> Key: KAFKA-16342
> URL: https://issues.apache.org/jira/browse/KAFKA-16342
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16341) Fix un-compressed records

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16341:
--
Fix Version/s: (was: 3.6.2)

> Fix un-compressed records
> -
>
> Key: KAFKA-16341
> URL: https://issues.apache.org/jira/browse/KAFKA-16341
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Luke Chen
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reopened KAFKA-16310:
---

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0.
> The impact of this issue is that when a batch contains records whose 
> timestamps are not in order, the offset returned for a timestamp will be 
> wrong (e.g. the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). The time index ends up storing the wrong offset, so the 
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> There are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0, 1, 2.
> Then a ListOffsets request with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is issued.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it still returns 2 as the offset with the 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), and the 
> leader didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16310:
--
Fix Version/s: (was: 3.6.2)

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0.
> The impact of this issue is that when a batch contains records whose 
> timestamps are not in order, the offset returned for a timestamp will be 
> wrong (e.g. the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). The time index ends up storing the wrong offset, so the 
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> There are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0, 1, 2.
> Then a ListOffsets request with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is issued.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it still returns 2 as the offset with the 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), and the 
> leader didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831624#comment-17831624
 ] 

Manikumar commented on KAFKA-16310:
---

[~chia7712] [~showuon] Can we revert the relevant changes from the 3.6 branch, 
as this is not a regression in 3.6.x releases and it looks like the fix 
requires a few more changes? This is to unblock the 3.6.2 release. 

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0.
> The impact of this issue is that when a batch contains records whose 
> timestamps are not in order, the offset returned for a timestamp will be 
> wrong (e.g. the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). The time index ends up storing the wrong offset, so the 
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> There are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0, 1, 2.
> Then a ListOffsets request with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is issued.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it still returns 2 as the offset with the 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), and the 
> leader didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-16310) ListOffsets doesn't report the offset with maxTimestamp anymore

2024-03-27 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831624#comment-17831624
 ] 

Manikumar edited comment on KAFKA-16310 at 3/28/24 5:15 AM:


[~chia7712] [~showuon] Can we revert the relevant changes from the 3.6 branch, 
as this is not a regression in 3.6.x releases and it looks like the complete 
fix requires a few more changes? This is to unblock the 3.6.2 release. 


was (Author: omkreddy):
[~chia7712] [~showuon] Can we revert the relevant changes from the 3.6 branch, 
as this is not a regression in 3.6.x releases and it looks like the fix 
requires a few more changes? This is to unblock the 3.6.2 release. 

> ListOffsets doesn't report the offset with maxTimestamp anymore
> ---
>
> Key: KAFKA-16310
> URL: https://issues.apache.org/jira/browse/KAFKA-16310
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> Updated: This is confirmed to be a regression issue in v3.7.0.
> The impact of this issue is that when a batch contains records whose 
> timestamps are not in order, the offset returned for a timestamp will be 
> wrong (e.g. the timestamp for t0 should map to offset 10, but offset 12 is 
> returned instead). The time index ends up storing the wrong offset, so the 
> results will be unexpected.
> ===
> The last offset is reported instead.
> A test in librdkafka (0081/do_test_ListOffsets) is failing; it checks 
> that the offset with the max timestamp is the middle one and not the last 
> one. The test passes with 3.6.0 and previous versions.
> This is the test:
> [https://github.com/confluentinc/librdkafka/blob/a6d85bdbc1023b1a5477b8befe516242c3e182f6/tests/0081-admin.c#L4989]
>  
> There are three messages, with timestamps:
> {noformat}
> t0 + 100
> t0 + 400
> t0 + 250{noformat}
> and indices 0, 1, 2.
> Then a ListOffsets request with RD_KAFKA_OFFSET_SPEC_MAX_TIMESTAMP is issued.
> It should return offset 1, but in 3.7.0 and trunk it returns offset 2.
> Even 5 seconds after producing, it still returns 2 as the offset with the 
> max timestamp.
> ProduceRequest and ListOffsets were sent to the same broker (2), and the 
> leader didn't change.
> {code:java}
> %7|1709134230.019|SEND|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ProduceRequest (v7, 
> 206 bytes @ 0, CorrId 2) %7|1709134230.020|RECV|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received ProduceResponse 
> (v7, 95 bytes, CorrId 2, rtt 1.18ms) 
> %7|1709134230.020|MSGSET|0081_admin#producer-3| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: 
> rdkafkatest_rnd22e8d8ec45b53f98_do_test_ListOffsets [0]: MessageSet with 3 
> message(s) (MsgId 0, BaseSeq -1) delivered {code}
> {code:java}
> %7|1709134235.021|SEND|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Sent ListOffsetsRequest 
> (v7, 103 bytes @ 0, CorrId 7) %7|1709134235.022|RECV|0081_admin#producer-2| 
> [thrd:localhost:39951/bootstrap]: localhost:39951/2: Received 
> ListOffsetsResponse (v7, 88 bytes, CorrId 7, rtt 0.54ms){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16411) Correctly migrate default client quota entities in KRaft migration

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16411:
--
Fix Version/s: 3.8.0
   3.7.1

> Correctly migrate default client quota entities in KRaft migration
> --
>
> Key: KAFKA-16411
> URL: https://issues.apache.org/jira/browse/KAFKA-16411
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16428) Fix bug where config change notification znode may not get created during migration

2024-03-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16428:
--
Fix Version/s: 3.6.2
   3.8.0
   3.7.1

> Fix bug where config change notification znode may not get created during 
> migration
> ---
>
> Key: KAFKA-16428
> URL: https://issues.apache.org/jira/browse/KAFKA-16428
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16341) Fix un-compressed records

2024-03-21 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16341:
--
Fix Version/s: 3.6.2

> Fix un-compressed records
> -
>
> Key: KAFKA-16341
> URL: https://issues.apache.org/jira/browse/KAFKA-16341
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Luke Chen
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16073) Kafka Tiered Storage: Consumer Fetch Error Due to Delayed localLogStartOffset Update During Segment Deletion

2024-03-20 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16073:
--
Fix Version/s: 3.6.3
   (was: 3.6.2)

> Kafka Tiered Storage: Consumer Fetch Error Due to Delayed localLogStartOffset 
> Update During Segment Deletion
> 
>
> Key: KAFKA-16073
> URL: https://issues.apache.org/jira/browse/KAFKA-16073
> Project: Kafka
>  Issue Type: Bug
>  Components: core, Tiered-Storage
>Affects Versions: 3.6.1
>Reporter: hzh0425
>Assignee: hzh0425
>Priority: Major
>  Labels: KIP-405, kip-405, tiered-storage
> Fix For: 3.8.0, 3.7.1, 3.6.3
>
>
> The identified bug in Apache Kafka's tiered storage feature involves a 
> delayed update of {{localLogStartOffset}} in the 
> {{UnifiedLog.deleteSegments}} method, impacting consumer fetch operations. 
> When segments are deleted from the log's memory state, the 
> {{localLogStartOffset}} isn't promptly updated. Concurrently, 
> {{ReplicaManager.handleOffsetOutOfRangeError}} checks whether a consumer's 
> fetch offset is less than the {{localLogStartOffset}}. If it's greater, Kafka 
> erroneously sends an {{OffsetOutOfRangeException}} to the consumer.
> In a specific concurrent scenario, imagine sequential offsets: {{offset1 < 
> offset2 < offset3}}. A client requests data at {{offset2}}. While a 
> background deletion process removes segments from memory, it hasn't yet 
> updated the {{localLogStartOffset}} from {{offset1}} to {{offset3}}. 
> Consequently, when the fetch offset ({{offset2}}) is evaluated against 
> the stale {{offset1}} in {{ReplicaManager.handleOffsetOutOfRangeError}}, 
> it incorrectly triggers an {{OffsetOutOfRangeException}}. This issue 
> arises from the out-of-sync update of {{localLogStartOffset}}, leading to 
> incorrect handling of consumer fetch requests and potential data access 
> errors.
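> A self-contained sketch of the race (offsets and names are illustrative, not
> the actual {{UnifiedLog}} code):
> {code:scala}
> object StaleStartOffsetRace extends App {
>   @volatile private var localLogStartOffset = 10L // offset1
> 
>   // Background deletion: segments are dropped first; the new start offset
>   // is published only after a delay, leaving a stale-read window.
>   private def deleteSegments(): Unit = {
>     // ... segments removed from the in-memory log state here ...
>     Thread.sleep(50)
>     localLogStartOffset = 30L // offset3, published too late
>   }
> 
>   // Mirrors the shape of the handleOffsetOutOfRangeError check.
>   private def checkFetch(fetchOffset: Long): String =
>     if (fetchOffset < localLogStartOffset) "serve from tiered storage"
>     else "OffsetOutOfRangeException" // wrongly hit for offset2 in the window
> 
>   val deleter = new Thread(() => deleteSegments())
>   deleter.start()
>   println(checkFetch(20L)) // offset2 during the race: OffsetOutOfRangeException
>   deleter.join()
>   println(checkFetch(20L)) // after the update: serve from tiered storage
> }
> {code}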



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16222) KRaft Migration: Incorrect default user-principal quota after migration

2024-03-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16222:
--
Fix Version/s: 3.6.2
   3.8.0
   3.7.1

> KRaft Migration: Incorrect default user-principal quota after migration
> ---
>
> Key: KAFKA-16222
> URL: https://issues.apache.org/jira/browse/KAFKA-16222
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, migration
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Dominik
>Assignee: PoAn Yang
>Priority: Blocker
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> We observed that our default user quota seems not to be migrated correctly.
> Before migration:
> bin/kafka-configs.sh --describe --all --entity-type users
> Quota configs for the *default user-principal* are 
> consumer_byte_rate=100.0, producer_byte_rate=100.0
> Quota configs for user-principal 'myuser{*}@{*}prod' 
> are consumer_byte_rate=1.5E8, producer_byte_rate=1.5E8
> After migration:
> bin/kafka-configs.sh --describe --all --entity-type users
> Quota configs for *user-principal ''* are consumer_byte_rate=100.0, 
> producer_byte_rate=100.0
> Quota configs for user-principal 'myuser{*}%40{*}prod' 
> are consumer_byte_rate=1.5E8, producer_byte_rate=1.5E8
>  
> Additional finding: our user names contain a "@", which also leads to an 
> incorrect post-migration state.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16073) Kafka Tiered Storage: Consumer Fetch Error Due to Delayed localLogStartOffset Update During Segment Deletion

2024-03-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16073:
--
Fix Version/s: 3.7.1

> Kafka Tiered Storage: Consumer Fetch Error Due to Delayed localLogStartOffset 
> Update During Segment Deletion
> 
>
> Key: KAFKA-16073
> URL: https://issues.apache.org/jira/browse/KAFKA-16073
> Project: Kafka
>  Issue Type: Bug
>  Components: core, Tiered-Storage
>Affects Versions: 3.6.1
>Reporter: hzh0425
>Assignee: hzh0425
>Priority: Major
>  Labels: KIP-405, kip-405, tiered-storage
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> The identified bug in Apache Kafka's tiered storage feature involves a 
> delayed update of {{localLogStartOffset}} in the 
> {{UnifiedLog.deleteSegments}} method, impacting consumer fetch operations. 
> When segments are deleted from the log's memory state, the 
> {{localLogStartOffset}} isn't promptly updated. Concurrently, 
> {{ReplicaManager.handleOffsetOutOfRangeError}} checks whether a consumer's 
> fetch offset is less than the {{localLogStartOffset}}. If it's greater, Kafka 
> erroneously sends an {{OffsetOutOfRangeException}} to the consumer.
> In a specific concurrent scenario, imagine sequential offsets: {{offset1 < 
> offset2 < offset3}}. A client requests data at {{offset2}}. While a 
> background deletion process removes segments from memory, it hasn't yet 
> updated the {{localLogStartOffset}} from {{offset1}} to {{offset3}}. 
> Consequently, when the fetch offset ({{offset2}}) is evaluated against 
> the stale {{offset1}} in {{ReplicaManager.handleOffsetOutOfRangeError}}, 
> it incorrectly triggers an {{OffsetOutOfRangeException}}. This issue 
> arises from the out-of-sync update of {{localLogStartOffset}}, leading to 
> incorrect handling of consumer fetch requests and potential data access 
> errors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16322) Fix CVE-2023-50572 by updating jline from 3.22.0 to 3.25.1

2024-03-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16322:
--
Fix Version/s: 3.7.1

> Fix CVE-2023-50572 by updating jline from 3.22.0 to 3.25.1
> --
>
> Key: KAFKA-16322
> URL: https://issues.apache.org/jira/browse/KAFKA-16322
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
>
> https://devhub.checkmarx.com/cve-details/CVE-2023-50572/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16322) Fix CVE-2023-50572 by updating jline from 3.22.0 to 3.25.1

2024-03-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16322:
--
Fix Version/s: 3.6.2

> Fix CVE-2023-50572 by updating jline from 3.22.0 to 3.25.1
> --
>
> Key: KAFKA-16322
> URL: https://issues.apache.org/jira/browse/KAFKA-16322
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Johnny Hsu
>Priority: Major
> Fix For: 3.6.2, 3.8.0
>
>
> https://devhub.checkmarx.com/cve-details/CVE-2023-50572/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16210) Upgrade jose4j to 0.9.4

2024-03-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16210:
--
Fix Version/s: 3.6.2

> Upgrade jose4j to 0.9.4
> ---
>
> Key: KAFKA-16210
> URL: https://issues.apache.org/jira/browse/KAFKA-16210
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Priority: Major
> Fix For: 3.7.0, 3.6.2, 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16226) Java client: Performance regression in Trogdor benchmark with high partition counts

2024-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16226:
--
Fix Version/s: 3.6.2
   (was: 3.6.3)

> Java client: Performance regression in Trogdor benchmark with high partition 
> counts
> ---
>
> Key: KAFKA-16226
> URL: https://issues.apache.org/jira/browse/KAFKA-16226
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Mayank Shekhar Narula
>Assignee: Mayank Shekhar Narula
>Priority: Major
>  Labels: kip-951
> Fix For: 3.6.2, 3.8.0, 3.7.1
>
> Attachments: baseline_lock_profile.png, kafka_15415_lock_profile.png
>
>
> h1. Background
> https://issues.apache.org/jira/browse/KAFKA-15415 implemented an optimisation 
> in the Java client to skip the backoff period for a produce batch being 
> retried when the client knows of a newer leader.
> h1. What changed
> The implementation introduced a regression noticed in a Trogdor benchmark 
> running with high partition counts (36000!).
> With the regression, the following metrics changed on the produce side.
>  # record-queue-time-avg: increased from 20ms to 30ms.
>  # request-latency-avg: increased from 50ms to 100ms.
> h1. Why it happened
> As can be seen from the original 
> [PR|https://github.com/apache/kafka/pull/14384], 
> RecordAccumulator.partitionReady() & drainBatchesForOneNode() started using 
> the synchronised method Metadata.currentLeader(). This has led to increased 
> synchronisation between the KafkaProducer's application thread that calls 
> send() and the background thread that actively sends producer batches to 
> leaders.
> Lock profiles clearly show increased synchronisation in the KAFKA-15415 
> PR (highlighted in {color:#de350b}Red{color}) vs the baseline (see below). 
> Note that the synchronisation is much worse for partitionReady() in this 
> benchmark, as it is called for each partition, and the benchmark has 36k 
> partitions!
> h3. Lock Profile: KAFKA-15415
> !kafka_15415_lock_profile.png!
> h3. Lock Profile: Baseline
> !baseline_lock_profile.png!
> h1. Fix
> Synchronisation between the two threads has to be reduced in order to 
> address this. [https://github.com/apache/kafka/pull/15323] is a fix: it 
> avoids Metadata.currentLeader() and instead relies on Cluster.leaderFor().
> With the fix, the lock profile & metrics are similar to the baseline.
>  
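> The contention pattern in miniature (illustrative names, not the actual
> client classes):
> {code:scala}
> // A per-partition lookup behind one shared lock serializes the application
> // thread calling send() and the sender thread; publishing an immutable
> // snapshot (the Cluster.leaderFor() style the fix uses) reads lock-free.
> final class MetadataLike {
>   private var leaders = Map.empty[Int, String]
>   @volatile private var snapshot = Map.empty[Int, String]
> 
>   // Contended path, akin to the synchronized Metadata.currentLeader().
>   def currentLeader(partition: Int): Option[String] =
>     synchronized { leaders.get(partition) }
> 
>   // Lock-free path, akin to reading an immutable Cluster snapshot.
>   def leaderFor(partition: Int): Option[String] = snapshot.get(partition)
> 
>   def update(newLeaders: Map[Int, String]): Unit = synchronized {
>     leaders = newLeaders
>     snapshot = newLeaders
>   }
> }
> {code}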



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16217) Transactional producer stuck in IllegalStateException during close

2024-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16217:
--
Fix Version/s: 3.8.0
   3.6.3
   (was: 3.6.2)

> Transactional producer stuck in IllegalStateException during close
> --
>
> Key: KAFKA-16217
> URL: https://issues.apache.org/jira/browse/KAFKA-16217
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Calvin Liu
>Assignee: Kirk True
>Priority: Major
>  Labels: transactions
> Fix For: 3.8.0, 3.7.1, 3.6.3
>
>
> The producer is stuck during close: it keeps retrying to abort the 
> transaction, but it never succeeds.
> {code:java}
> [ERROR] 2024-02-01 17:21:22,804 [kafka-producer-network-thread | 
> producer-transaction-bench-transaction-id-f60SGdyRQGGFjdgg3vUgKg] 
> org.apache.kafka.clients.producer.internals.Sender run - [Producer 
> clientId=producer-transaction-ben
> ch-transaction-id-f60SGdyRQGGFjdgg3vUgKg, 
> transactionalId=transaction-bench-transaction-id-f60SGdyRQGGFjdgg3vUgKg] 
> Error in kafka producer I/O thread while aborting transaction:
> java.lang.IllegalStateException: Cannot attempt operation `abortTransaction` 
> because the previous call to `commitTransaction` timed out and must be retried
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.handleCachedTransactionRequestResult(TransactionManager.java:1138)
> at 
> org.apache.kafka.clients.producer.internals.TransactionManager.beginAbort(TransactionManager.java:323)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:274)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66) 
> {code}
> With the additional log, I found the root cause. If the producer is in a bad 
> transaction state (in my case, TransactionManager.pendingTransition was 
> set to commitTransaction and did not get cleaned) and the producer then calls 
> close and tries to abort the existing transaction, it will get 
> stuck in the transaction abortion. It is related to the fix 
> [https://github.com/apache/kafka/pull/13591].
>  
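> A sketch of the client-side sequence that reaches this state (broker
> address, topic and transactional id are placeholders); per the error
> message, the timed-out commit has to be retried rather than aborted:
> {code:scala}
> import java.time.Duration
> import java.util.Properties
> import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
> import org.apache.kafka.common.errors.TimeoutException
> import org.apache.kafka.common.serialization.StringSerializer
> 
> val props = new Properties()
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
> props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-id")
> val producer =
>   new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)
> 
> producer.initTransactions()
> producer.beginTransaction()
> producer.send(new ProducerRecord("demo-topic", "k", "v"))
> try producer.commitTransaction() // a timeout here leaves pendingTransition set
> catch {
>   case _: TimeoutException =>
>     producer.commitTransaction() // retry the commit; calling close() instead
>                                  // makes the Sender loop on beginAbort()
> }
> producer.close(Duration.ofSeconds(10))
> {code}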



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16226) Java client: Performance regression in Trogdor benchmark with high partition counts

2024-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16226:
--
Fix Version/s: 3.6.3
   (was: 3.6.2)

> Java client: Performance regression in Trogdor benchmark with high partition 
> counts
> ---
>
> Key: KAFKA-16226
> URL: https://issues.apache.org/jira/browse/KAFKA-16226
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0, 3.6.1
>Reporter: Mayank Shekhar Narula
>Assignee: Mayank Shekhar Narula
>Priority: Major
>  Labels: kip-951
> Fix For: 3.8.0, 3.7.1, 3.6.3
>
> Attachments: baseline_lock_profile.png, kafka_15415_lock_profile.png
>
>
> h1. Background
> https://issues.apache.org/jira/browse/KAFKA-15415 implemented an optimisation 
> in the Java client to skip the backoff period for a produce batch being 
> retried when the client knows of a newer leader.
> h1. What changed
> The implementation introduced a regression noticed in a Trogdor benchmark 
> running with high partition counts (36000!).
> With the regression, the following metrics changed on the produce side.
>  # record-queue-time-avg: increased from 20ms to 30ms.
>  # request-latency-avg: increased from 50ms to 100ms.
> h1. Why it happened
> As can be seen from the original 
> [PR|https://github.com/apache/kafka/pull/14384], 
> RecordAccumulator.partitionReady() & drainBatchesForOneNode() started using 
> the synchronised method Metadata.currentLeader(). This has led to increased 
> synchronisation between the KafkaProducer's application thread that calls 
> send() and the background thread that actively sends producer batches to 
> leaders.
> Lock profiles clearly show increased synchronisation in the KAFKA-15415 
> PR (highlighted in {color:#de350b}Red{color}) vs the baseline (see below). 
> Note that the synchronisation is much worse for partitionReady() in this 
> benchmark, as it is called for each partition, and the benchmark has 36k 
> partitions!
> h3. Lock Profile: KAFKA-15415
> !kafka_15415_lock_profile.png!
> h3. Lock Profile: Baseline
> !baseline_lock_profile.png!
> h1. Fix
> Synchronisation between the two threads has to be reduced in order to 
> address this. [https://github.com/apache/kafka/pull/15323] is a fix: it 
> avoids Metadata.currentLeader() and instead relies on Cluster.leaderFor().
> With the fix, the lock profile & metrics are similar to the baseline.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16139) StreamsUpgradeTest fails consistently in 3.7.0

2024-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-16139:
--
Fix Version/s: 3.6.2
   (was: 3.6.1)

> StreamsUpgradeTest fails consistently in 3.7.0
> --
>
> Key: KAFKA-16139
> URL: https://issues.apache.org/jira/browse/KAFKA-16139
> Project: Kafka
>  Issue Type: Test
>  Components: streams, system tests
>Affects Versions: 3.7.0
>Reporter: Stanislav Kozlovski
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 3.7.0, 3.6.2
>
>
> h1. kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest#test_rolling_upgrade_with_2_bounces
> Arguments: {"from_version": "3.5.1", "to_version": "3.7.0-SNAPSHOT"}
>  
> {{TimeoutError('Could not detect Kafka Streams version 3.7.0-SNAPSHOT on 
> ubuntu@worker2')}}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15817) Avoid reconnecting to the same IP address if multiple addresses are available

2024-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15817:
--
Fix Version/s: 3.6.2

> Avoid reconnecting to the same IP address if multiple addresses are available
> -
>
> Key: KAFKA-15817
> URL: https://issues.apache.org/jira/browse/KAFKA-15817
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.2, 3.4.1, 3.6.0, 3.5.1
>Reporter: Bob Barrett
>Assignee: Bob Barrett
>Priority: Major
> Fix For: 3.7.0, 3.6.2
>
>
> In https://issues.apache.org/jira/browse/KAFKA-12193, we changed the DNS 
> resolution behavior for clients to re-resolve DNS after disconnecting from a 
> broker, rather than wait until we iterated over all addresses from a given 
> resolution. This is useful when the IP addresses have changed between the 
> connection and disconnection.
> However, with the behavior change, this does mean that clients could 
> potentially reconnect immediately to the same IP they just disconnected from, 
> if the IPs have not changed. In cases where the disconnection happened 
> because that IP was unhealthy (such as a case where a load balancer has 
> instances in multiple availability zones and one zone is unhealthy, or a case 
> where an intermediate component in the network path is going through a 
> rolling restart), this will delay the client successfully reconnecting. To 
> address this, clients should remember the IP they just disconnected from and 
> skip that IP when reconnecting, as long as the address resolved to multiple 
> addresses.
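> The proposed selection logic in outline (a sketch with illustrative names,
> not the actual client code):
> {code:scala}
> import java.net.InetAddress
> 
> // Re-resolve on every reconnect, but avoid the address we just failed on
> // whenever the hostname resolves to more than one address.
> def pickAddress(host: String, lastFailed: Option[InetAddress]): InetAddress = {
>   val resolved = InetAddress.getAllByName(host).toSeq // fresh DNS lookup
>   if (resolved.size > 1)
>     resolved.find(a => !lastFailed.contains(a)).getOrElse(resolved.head)
>   else
>     resolved.head // single address: no alternative to fall back to
> }
> {code}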



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15878) KIP-768: Extend support for opaque (i.e. non-JWT) tokens in SASL/OAUTHBEARER

2024-03-01 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-15878:
-

Assignee: Anuj Sharma

> KIP-768: Extend support for opaque (i.e. non-JWT) tokens in SASL/OAUTHBEARER
> 
>
> Key: KAFKA-15878
> URL: https://issues.apache.org/jira/browse/KAFKA-15878
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Anuj Sharma
>Assignee: Anuj Sharma
>Priority: Major
>  Labels: oauth
> Fix For: 3.8.0
>
>
> h1. Overview
>  * This issue pertains to the 
> [SASL/OAUTHBEARER|https://kafka.apache.org/documentation/#security_sasl_oauthbearer]
>  mechanism of Kafka authentication.
>  * Kafka clients can use the 
> [SASL/OAUTHBEARER|https://kafka.apache.org/documentation/#security_sasl_oauthbearer]
>  mechanism by overriding the [custom callback 
> handlers|https://kafka.apache.org/documentation/#security_sasl_oauthbearer_prod].
>  * 
> [KIP-768|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186877575],
>  available from v3.1, further extends the mechanism with a production-grade 
> implementation.
>  * Kafka's 
> [SASL/OAUTHBEARER|https://kafka.apache.org/documentation/#security_sasl_oauthbearer]
>  mechanism currently {*}rejects non-JWT (i.e. opaque) tokens{*}, because it 
> enforces a more restrictive character set than 
> [RFC-6750|https://datatracker.ietf.org/doc/html/rfc6750#section-2.1] 
> recommends.
>  * This JIRA can be considered an extension of 
> [KIP-768|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186877575]
>  to support opaque tokens in addition to JWT tokens.
>  
> In summary, the following character set should be supported as per the RFC - 
> {code:java}
> 1*( ALPHA / DIGIT /
>"-" / "." / "_" / "~" / "+" / "/" ) *"="
> {code}
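> A sketch of a validator for that character set (a regex form of the ABNF
> above; the real check would live in the callback handler code):
> {code:scala}
> import java.util.regex.Pattern
> 
> // 1*( ALPHA / DIGIT / "-" / "." / "_" / "~" / "+" / "/" ) *"="
> val rfc6750Token: Pattern = Pattern.compile("[A-Za-z0-9\\-._~+/]+=*")
> 
> def isValidToken(token: String): Boolean =
>   rfc6750Token.matcher(token).matches()
> 
> println(isValidToken("opaque~token+abc/123==")) // true
> println(isValidToken("not a valid token"))      // false (contains spaces)
> {code}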
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15738) KRaft support in ConsumerWithLegacyMessageFormatIntegrationTest

2024-01-12 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15738.
---
Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in ConsumerWithLegacyMessageFormatIntegrationTest
> ---
>
> Key: KAFKA-15738
> URL: https://issues.apache.org/jira/browse/KAFKA-15738
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Abhinav Dixit
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in ConsumerWithLegacyMessageFormatIntegrationTest in 
> core/src/test/scala/integration/kafka/api/ConsumerWithLegacyMessageFormatIntegrationTest.scala
>  need to be updated to support KRaft
> 0 : def testOffsetsForTimes(): Unit = {
> 102 : def testEarliestOrLatestOffsets(): Unit = {
> Scanned 132 lines. Found 0 KRaft tests out of 2 tests
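> The usual enablement pattern (a sketch; the concrete harness hooks vary by
> test class) parameterizes each test on the quorum:
> {code:scala}
> import org.junit.jupiter.params.ParameterizedTest
> import org.junit.jupiter.params.provider.ValueSource
> 
> class ConsumerWithLegacyMessageFormatIntegrationTest /* extends the usual harness */ {
>   // Runs once against a ZooKeeper cluster and once against a KRaft cluster.
>   @ParameterizedTest
>   @ValueSource(strings = Array("zk", "kraft"))
>   def testOffsetsForTimes(quorum: String): Unit = {
>     // existing test body unchanged; the harness brings up the right quorum
>   }
> }
> {code}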



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15735) KRaft support in SaslMultiMechanismConsumerTest

2024-01-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15735.
---
Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in SaslMultiMechanismConsumerTest
> ---
>
> Key: KAFKA-15735
> URL: https://issues.apache.org/jira/browse/KAFKA-15735
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Sanskar Jhajharia
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in SaslMultiMechanismConsumerTest in 
> core/src/test/scala/integration/kafka/api/SaslMultiMechanismConsumerTest.scala
>  need to be updated to support KRaft
> 45 : def testMultipleBrokerMechanisms(): Unit = {
> Scanned 94 lines. Found 0 KRaft tests out of 1 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15726) KRaft support in ProduceRequestTest

2024-01-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15726.
---
Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in ProduceRequestTest
> ---
>
> Key: KAFKA-15726
> URL: https://issues.apache.org/jira/browse/KAFKA-15726
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in ProduceRequestTest in 
> core/src/test/scala/unit/kafka/server/ProduceRequestTest.scala need to be 
> updated to support KRaft
> 45 : def testSimpleProduceRequest(): Unit = {
> 82 : def testProduceWithInvalidTimestamp(): Unit = {
> 128 : def testProduceToNonReplica(): Unit = {
> 170 : def testCorruptLz4ProduceRequest(): Unit = {
> 204 : def testZSTDProduceRequest(): Unit = {
> Scanned 253 lines. Found 0 KRaft tests out of 5 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15904) Downgrade tests are failing with directory.id 

2023-11-27 Thread Manikumar (Jira)
Manikumar created KAFKA-15904:
-

 Summary: Downgrade tests are failing with directory.id 
 Key: KAFKA-15904
 URL: https://issues.apache.org/jira/browse/KAFKA-15904
 Project: Kafka
  Issue Type: Bug
Reporter: Manikumar
 Fix For: 3.7.0


{{kafkatest.tests.core.downgrade_test.TestDowngrade}} tests are failing after 
[https://github.com/apache/kafka/pull/14628]. 
We have added {{directory.id}} to metadata.properties. This means 
{{metadata.properties}} will be different for different log directories.
Cluster downgrades will fail with the below error if we have multiple log 
directories. This looks like a blocker, or requires additional downgrade steps 
from AK 3.7. 
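An illustration of why the files now differ (all values are made up; only the 
directory.id line varies per log directory):
{noformat}
# /disk1/kafka-logs/meta.properties
cluster.id=I2eXt9rvSnyhct8BYmW6-w
node.id=1
directory.id=cbbGGRvBTXOo8PP5SNXzxA

# /disk2/kafka-logs/meta.properties: same cluster.id and node.id, but
directory.id=x7wdpYvNQ36Gt8W3ALIkuw{noformat}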



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14927) Dynamic configs not validated when using kafka-configs and --add-config-file

2023-10-10 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-14927.
---
Fix Version/s: 3.7.0
 Assignee: Aman Singh  (was: José Armando García Sancio)
   Resolution: Fixed

> Dynamic configs not validated when using kafka-configs and --add-config-file
> 
>
> Key: KAFKA-14927
> URL: https://issues.apache.org/jira/browse/KAFKA-14927
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.3.2
>Reporter: Justin Daines
>Assignee: Aman Singh
>Priority: Minor
>  Labels: 4.0-blocker
> Fix For: 3.7.0
>
>
> Using {{kafka-configs}} should validate dynamic configurations before 
> applying them. It is possible to send a file with invalid configurations. 
> For example, a file containing the following:
> {code:java}
> {
>   "routes": {
>     "crn:///kafka=*": {
>       "management": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events-denied"
>       },
>       "describe": {
>         "allowed": "",
>         "denied": "confluent-audit-log-events-denied"
>       },
>       "authentication": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events-denied-authn"
>       },
>       "authorize": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events-denied-authz"
>       },
>       "interbroker": {
>         "allowed": "",
>         "denied": ""
>       }
>     },
>     "crn:///kafka=*/group=*": {
>       "consume": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events"
>       }
>     },
>     "crn:///kafka=*/topic=*": {
>       "produce": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events"
>       },
>       "consume": {
>         "allowed": "confluent-audit-log-events_audit",
>         "denied": "confluent-audit-log-events"
>       }
>     }
>   },
>   "destinations": {
>     "topics": {
>       "confluent-audit-log-events": {
>         "retention_ms": 777600
>       },
>       "confluent-audit-log-events-denied": {
>         "retention_ms": 777600
>       },
>       "confluent-audit-log-events-denied-authn": {
>         "retention_ms": 777600
>       },
>       "confluent-audit-log-events-denied-authz": {
>         "retention_ms": 777600
>       },
>       "confluent-audit-log-events_audit": {
>         "retention_ms": 777600
>       }
>     }
>   },
>   "default_topics": {
>     "allowed": "confluent-audit-log-events_audit",
>     "denied": "confluent-audit-log-events"
>   },
>   "excluded_principals": [
>     "User:schemaregistryUser",
>     "User:ANONYMOUS",
>     "User:appSA",
>     "User:admin",
>     "User:connectAdmin",
>     "User:connectorSubmitter",
>     "User:connectorSA",
>     "User:schemaregistryUser",
>     "User:ksqlDBAdmin",
>     "User:ksqlDBUser",
>     "User:controlCenterAndKsqlDBServer",
>     "User:controlcenterAdmin",
>     "User:restAdmin",
>     "User:appSA",
>     "User:clientListen",
>     "User:superUser"
>   ]
> } {code}
> {code:java}
> kafka-configs --bootstrap-server $KAFKA_BOOTSTRAP --entity-type brokers 
> --entity-default --alter --add-config-file audit-log.json {code}
> Yields the following dynamic configs:
> {code:java}
> Default configs for brokers in the cluster are:
>   "destinations"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"destinations"=null}
>   "confluent-audit-log-events-denied-authn"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"confluent-audit-log-events-denied-authn"=null}
>   "routes"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"routes"=null}
>   "User=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"User=null}
>   },=null sensitive=true synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:},=null}
>   "excluded_principals"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"excluded_principals"=null}
>   "confluent-audit-log-events_audit"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"confluent-audit-log-events_audit"=null}
>   "authorize"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"authorize"=null}
>   "default_topics"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"default_topics"=null}
>   "topics"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"topics"=null}
>   ]=null sensitive=true synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:]=null}
>   "interbroker"=null sensitive=true 
> synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:"interbroker"=null}
>   "produce"=null sensitive=true 
> 
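> Note that {{--add-config-file}} expects a Java properties file (key=value 
> per line), not JSON, which is why each JSON line above was parsed as a 
> mangled config key; a well-formed input would look like (illustrative keys 
> only):
> {noformat}
> message.max.bytes=1048588
> log.cleaner.threads=2{noformat}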

[jira] [Updated] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-10-08 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15502:
--
Affects Version/s: 3.5.1
   3.4.1

> Handle large keystores in SslEngineValidator
> 
>
> Key: KAFKA-15502
> URL: https://issues.apache.org/jira/browse/KAFKA-15502
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.1, 3.6.0, 3.5.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 3.4.2, 3.5.2, 3.7.0, 3.6.1
>
>
> We have observed an issue where the inter-broker SSL listener is not coming 
> up for large keystores (size >16K).
> 1. Currently the validator code doesn't work well with large stores. Right 
> now, WRAP returns if there is already data in the buffer. But if we need more 
> data to be wrapped for UNWRAP to succeed, we end up looping forever.
> 2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
> causing the validator code to loop forever. This is observed with JDK17+.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-10-08 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15502.
---
Fix Version/s: 3.4.2
   3.5.2
   3.7.0
   3.6.1
   Resolution: Fixed

> Handle large keystores in SslEngineValidator
> 
>
> Key: KAFKA-15502
> URL: https://issues.apache.org/jira/browse/KAFKA-15502
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 3.4.2, 3.5.2, 3.7.0, 3.6.1
>
>
> We have observed an issue where the inter-broker SSL listener is not coming 
> up for large keystores (size >16K).
> 1. Currently the validator code doesn't work well with large stores. Right 
> now, WRAP returns if there is already data in the buffer. But if we need more 
> data to be wrapped for UNWRAP to succeed, we end up looping forever.
> 2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
> causing the validator code to loop forever. This is observed with JDK17+.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-09-25 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15502:
--
Description: 
We have observed an issue where the inter-broker SSL listener is not coming up 
for large keystores (size >16K).

1. Currently the validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer. But if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
causing the validator code to loop forever. This is observed with JDK17+.
 

  was:
We have observed an issue where the inter-broker SSL listener is not coming up 
for large keystores (size >16K).

1. Currently the validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer. But if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
causing UNWRAP to loop forever. This is observed with JDK17+.
 


> Handle large keystores in SslEngineValidator
> 
>
> Key: KAFKA-15502
> URL: https://issues.apache.org/jira/browse/KAFKA-15502
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> We have observed an issue where the inter-broker SSL listener is not coming 
> up for large keystores (size >16K).
> 1. Currently the validator code doesn't work well with large stores. Right 
> now, WRAP returns if there is already data in the buffer. But if we need more 
> data to be wrapped for UNWRAP to succeed, we end up looping forever.
> 2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
> causing the validator code to loop forever. This is observed with JDK17+.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-09-25 Thread Manikumar (Jira)
Manikumar created KAFKA-15502:
-

 Summary: Handle large keystores in SslEngineValidator
 Key: KAFKA-15502
 URL: https://issues.apache.org/jira/browse/KAFKA-15502
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Manikumar
Assignee: Manikumar


We have observed an issue where the inter-broker SSL listener is not coming up 
for large keystores (size >16K).

1. Currently the validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer. But if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
causing UNWRAP to loop forever. This is observed with JDK17+.
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15273) Log common name of expired client certificate

2023-09-15 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15273.
---
Fix Version/s: 3.7.0
   Resolution: Fixed

> Log common name of expired client certificate
> -
>
> Key: KAFKA-15273
> URL: https://issues.apache.org/jira/browse/KAFKA-15273
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, core, security
>Affects Versions: 3.6.0
>Reporter: Eike Thaden
>Assignee: Eike Thaden
>Priority: Minor
>  Labels: PatchAvailable
> Fix For: 3.7.0
>
>
> If a client tries to authenticate via mTLS with an expired certificate, the 
> connection is closed and the IP address of the connection attempt is logged. 
> However, in complex enterprise IT environments it might be very hard or even 
> impossible to identify which client tried to connect if only the IP address 
> is known (e.g. due to complex virtualization/containerization/NAT). This 
> results in significant effort for the Kafka platform teams to identify the 
> developmers responsible for such a misconfigured client.
> As a possible solution I propose to log the common name used in the client 
> certificate in addition to the IP address. Due to security considerations, 
> this should only be done if that certificate is just expired and would be 
> valid otherwise (e.g. signed by a known, non-expired root/intermediate CA). 
> The way Kafka should handle any valid/invalid/expired certificate must be 
> exactly the same as before, except for the creation of a log message in case 
> it is expired.
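
A minimal sketch of the proposed expiry-only check, using the standard JDK 
certificate API (hypothetical helper name; it also omits the proposed 
chain-trust precondition):

{code:java}
import java.security.cert.CertificateExpiredException;
import java.security.cert.CertificateNotYetValidException;
import java.security.cert.X509Certificate;

public final class ExpiredCertLogging {
    // Returns the subject DN only when the certificate's sole problem is
    // expiry; valid and otherwise-invalid certificates are handled exactly
    // as before.
    static String expiredSubjectOrNull(X509Certificate cert) {
        try {
            cert.checkValidity();
            return null; // still valid: nothing extra to log
        } catch (CertificateExpiredException e) {
            return cert.getSubjectX500Principal().getName();
        } catch (CertificateNotYetValidException e) {
            return null; // not an expiry problem: keep current behavior
        }
    }
}
{code}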



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15243) User creation mismatch

2023-07-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15243:
--
Fix Version/s: 3.4.2
   3.5.2

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
> Fix For: 3.6.0, 3.4.2, 3.5.2
>
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15243) User creation mismatch

2023-07-26 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15243.
---
Fix Version/s: 3.6.0
   Resolution: Fixed

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
> Fix For: 3.6.0
>
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15243) User creation mismatch

2023-07-25 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747010#comment-17747010
 ] 

Manikumar commented on KAFKA-15243:
---

[~sergio_troi...@hotmail.com] Yes, please open a PR. 

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15243) User creation mismatch

2023-07-25 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746952#comment-17746952
 ] 

Manikumar commented on KAFKA-15243:
---

[~sergio_troi...@hotmail.com] This doesn't require a KIP. This is a broker-side 
bug; we can just return desanitized names.

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-15243) User creation mismatch

2023-07-25 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746794#comment-17746794
 ] 

Manikumar edited comment on KAFKA-15243 at 7/25/23 6:26 AM:


[~sergio_troi...@hotmail.com]  We sanitize the names because some characters 
are not allowed in ZooKeeper paths. We sanitize the names using 
`Sanitizer.sanitize(user)` before storing in ZK and use `Sanitizer.desanitize` 
after reading from ZK.
In this case, it looks like a bug when calling describe for all user SCRAM 
configs (`--entity-type users`). We are returning sanitized names in the 
response here: 
[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ZkAdminManager.scala#L851].
We should return desanitized names.


was (Author: omkreddy):
[Sergio 
Troiano|https://mail.google.com/jira/secure/ViewProfile.jspa?name=sergio_troiano%40hotmail.com]
 We sanitize the names because some characters are not allowed in ZooKeeper 
paths. We sanitize the names using `Sanitizer.sanitize(user)` before storing in 
ZK and use `Sanitizer.desanitize` after reading from ZK.
In this case, it looks like a bug when calling describe for all user SCRAM 
configs (`--entity-type users`). We are returning sanitized names in the 
response here: 
[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ZkAdminManager.scala#L851].
We should return desanitized names.

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15243) User creation mismatch

2023-07-25 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17746794#comment-17746794
 ] 

Manikumar commented on KAFKA-15243:
---

[Sergio 
Troiano|https://mail.google.com/jira/secure/ViewProfile.jspa?name=sergio_troiano%40hotmail.com]
 We sanitize the names because some characters are not allowed in ZooKeeper 
paths. We sanitize the names using `Sanitizer.sanitize(user)` before storing in 
ZK and use `Sanitizer.desanitize` after reading from ZK.
In this case, it looks like a bug when calling describe for all user SCRAM 
configs (`--entity-type users`). We are returning sanitized names in the 
response here: 
[https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ZkAdminManager.scala#L851].
We should return desanitized names.
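
To make the round trip concrete, here is a small illustration using Kafka's 
{{org.apache.kafka.common.utils.Sanitizer}} (a sketch; the values in the 
comments follow the describe output quoted below):

{code:java}
import org.apache.kafka.common.utils.Sanitizer;

public final class SanitizerRoundTrip {
    public static void main(String[] args) {
        String principal = "myuser@myuser.com";
        // Encoded so the name is legal as a ZooKeeper path node.
        String zkSafe = Sanitizer.sanitize(principal);    // myuser%40myuser.com
        // Must be decoded again before being shown in a describe response.
        String displayed = Sanitizer.desanitize(zkSafe);  // myuser@myuser.com
        System.out.println(zkSafe + " -> " + displayed);
    }
}
{code}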

> User creation mismatch
> --
>
> Key: KAFKA-15243
> URL: https://issues.apache.org/jira/browse/KAFKA-15243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3.2
>Reporter: Sergio Troiano
>Assignee: Sergio Troiano
>Priority: Major
>  Labels: kafka-source
>
> We found the Kafka users were not created properly, so let's suppose we 
> create the user [myu...@myuser.com|mailto:myu...@myuser.com]
>  
> COMMAND:
> {code:java}
> /etc/new_kafka/bin/kafka-configs.sh  --bootstrap-server localhost:9092 
> --alter --add-config 
> 'SCRAM-SHA-256=[iterations=4096,password=blabla],SCRAM-SHA-256=[password=blabla]'
>  --entity-type users --entity-name myu...@myuser.com{code}
> RESPONSE:
> {code:java}
> Completed updating config for user myu...@myuser.com{code}
> When listing the users I see the user was created as an encoded string
> COMMAND
> {code:java}
> kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type 
> users|grep myuser {code}
> RESPONSE
> {code:java}
> SCRAM credential configs for user-principal 'myuser%40myuser.com' are 
> SCRAM-SHA-256=iterations=8192, SCRAM-SHA-512=iterations=4096 {code}
>  
> So basically the user name is being "sanitized", giving a false OK to the 
> requester. The requested user does not exist as it should; the encoded one is 
> created instead.
>  
> I dug deep into the code until I found this is happening in 
> ZkAdminManager.scala on this line:
>  
> {code:java}
> adminZkClient.changeConfigs(ConfigType.User, Sanitizer.sanitize(user), 
> configsByPotentiallyValidUser(user)) {code}
> So removing the Sanitizer fixes the problem, but I have a couple of doubts.
> I checked that we sanitize because of some JMX metrics, but in this case I 
> don't know if this is really needed. Supposing it is needed, I think we 
> should forbid creating users with characters that will be encoded.
> Even worse, after creating a user we generally create ACLs, and they are 
> created properly without encoding the characters; this creates a mismatch 
> between the user and the ACLs.
>  
>  
> So I can work on fixing this, but I think we need to decide:
>  
> A) We forbid creating users with characters that will be encoded, so we fail 
> in the user creation step.
>  
> B) We allow user creation with special characters and remove 
> Sanitizer.sanitize(user) from the two places where it shows up in 
> ZkAdminManager.scala.
>  
>  
> And of course, if we go for B, we need to create the tests.
> Please let me know what you think and I can work on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15077) FileTokenRetriever doesn't trim the token before returning it.

2023-06-11 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-15077.
---
Resolution: Fixed

> FileTokenRetriever doesn't trim the token before returning it.
> --
>
> Key: KAFKA-15077
> URL: https://issues.apache.org/jira/browse/KAFKA-15077
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Minor
> Fix For: 3.6.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{FileTokenRetriever}} class is used to read the access_token from a file 
> on the client's system; the info is then passed along with the JAAS config to 
> the {{OAuthBearerSaslServer}}.
> The server uses the class {{OAuthBearerClientInitialResponse}} to validate 
> the token format.
> In case the token was sent using {{FileTokenRetriever}} on the client side, 
> some EOL character is getting appended to the token, causing authentication 
> to fail with the following message (in this case, on topic create):
>  {{ERROR org.apache.kafka.common.errors.SaslAuthenticationException: 
> Authentication failed during authentication due to invalid credentials with 
> SASL mechanism OAUTHBEARER}}
>  
> On the server side, the following line 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/OAuthBearerClientInitialResponse.java#L68]
>  will throw an exception, failing the request.
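
The shape of the fix is simply to trim the file contents before handing the 
token to the SASL mechanism. A hedged sketch (hypothetical helper, not the 
merged patch):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public final class TokenFileReading {
    // Editors and shells routinely append a trailing newline to the token
    // file; trimming it keeps the server-side format check happy.
    static String readAccessToken(Path tokenFile) throws IOException {
        return Files.readString(tokenFile, StandardCharsets.UTF_8).trim();
    }
}
{code}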



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-15077) FileTokenRetriever doesn't trim the token before returning it.

2023-06-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15077:
--
Fix Version/s: 3.6.0

> FileTokenRetriever doesn't trim the token before returning it.
> ---
>
> Key: KAFKA-15077
> URL: https://issues.apache.org/jira/browse/KAFKA-15077
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Minor
> Fix For: 3.6.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{FileTokenRetriever}} class is used to read the access_token from a file 
> on the client's system; the info is then passed along with the JAAS config to 
> the {{OAuthBearerSaslServer}}.
> The server uses the class {{OAuthBearerClientInitialResponse}} to validate 
> the token format.
> In case the token was sent using {{FileTokenRetriever}} on the client side, 
> some EOL character is getting appended to the token, causing authentication 
> to fail with the following message (in this case, on topic create):
>  {{ERROR org.apache.kafka.common.errors.SaslAuthenticationException: 
> Authentication failed during authentication due to invalid credentials with 
> SASL mechanism OAUTHBEARER}}
>  
> On the server side, the following line 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/OAuthBearerClientInitialResponse.java#L68]
>  will throw an exception, failing the request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15077) FileTokenRetriever doesn't trim the token before returning it.

2023-06-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-15077:
-

Assignee: Sushant Mahajan

> FileTokenRetriever doesn't trim the token before returning it.
> ---
>
> Key: KAFKA-15077
> URL: https://issues.apache.org/jira/browse/KAFKA-15077
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The {{FileTokenRetriever}} class is used to read the access_token from a file 
> on the client's system; the info is then passed along with the JAAS config to 
> the {{OAuthBearerSaslServer}}.
> The server uses the class {{OAuthBearerClientInitialResponse}} to validate 
> the token format.
> In case the token was sent using {{FileTokenRetriever}} on the client side, 
> some EOL character is getting appended to the token, causing authentication 
> to fail with the following message (in this case, on topic create):
>  {{ERROR org.apache.kafka.common.errors.SaslAuthenticationException: 
> Authentication failed during authentication due to invalid credentials with 
> SASL mechanism OAUTHBEARER}}
>  
> On the server side, the following line 
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/OAuthBearerClientInitialResponse.java#L68]
>  will throw an exception, failing the request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14994) jose4j is vulnerable to CVE- Improper Cryptographic Algorithm

2023-05-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14994:
--
Fix Version/s: 3.5.0
   3.4.1
   (was: 3.6.0)

>  jose4j is vulnerable to CVE- Improper Cryptographic Algorithm
> --
>
> Key: KAFKA-14994
> URL: https://issues.apache.org/jira/browse/KAFKA-14994
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Gaurav Jetly
>Assignee: Atul Sharma
>Priority: Major
>  Labels: Security
> Fix For: 3.5.0, 3.4.1
>
>
> Jose4j has the following vulnerability with a high score of 7.1.
> jose4j is vulnerable to Improper Cryptographic Algorithm. The vulnerability 
> exists due to the way `RSA1_5` and `RSA_OAEP` are implemented, allowing an 
> attacker to decrypt `RSA1_5` or `RSA_OAEP` encrypted ciphertexts; in 
> addition, it may be feasible to sign with affected keys.
> Please help upgrade the library to the latest version.
> Current version in use: 0.7.9
> Latest version with the fix: 0.9.3
> CVE-
> - Improper Cryptographic Algorithm
> - Severity: HIGH
> - CVSS: 7.1
> - Disclosure Date: 07 Feb 2023 19:00PM EST
> - Vulnerability Info: 
> https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/40398



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14994) jose4j is vulnerable to CVE- Improper Cryptographic Algorithm

2023-05-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14994:
--
Fix Version/s: 3.6.0

>  jose4j is vulnerable to CVE- Improper Cryptographic Algorithm
> --
>
> Key: KAFKA-14994
> URL: https://issues.apache.org/jira/browse/KAFKA-14994
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Gaurav Jetly
>Assignee: Atul Sharma
>Priority: Major
>  Labels: Security
> Fix For: 3.6.0
>
>
> Jose4j has the following vulnerability with a high score of 7.1.
> jose4j is vulnerable to Improper Cryptographic Algorithm. The vulnerability 
> exists due to the way `RSA1_5` and `RSA_OAEP` are implemented, allowing an 
> attacker to decrypt `RSA1_5` or `RSA_OAEP` encrypted ciphertexts; in 
> addition, it may be feasible to sign with affected keys.
> Please help upgrade the library to the latest version.
> Current version in use: 0.7.9
> Latest version with the fix: 0.9.3
> CVE-
> - Improper Cryptographic Algorithm
> - Severity: HIGH
> - CVSS: 7.1
> - Disclosure Date: 07 Feb 2023 19:00PM EST
> - Vulnerability Info: 
> https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/40398



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14237) Kafka TLS Doesn't Present Intermediary Certificates when using PEM

2023-04-03 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708071#comment-17708071
 ] 

Manikumar commented on KAFKA-14237:
---

[~sophokles73] Thanks for your interest. You can take a look at the KIP and 
implementation:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-651+-+Support+PEM+format+for+SSL+certificates+and+private+key
[https://github.com/apache/kafka/pull/9345/files]

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L273

> Kafka TLS Doesn't Present Intermediary Certificates when using PEM
> --
>
> Key: KAFKA-14237
> URL: https://issues.apache.org/jira/browse/KAFKA-14237
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.1
> Environment: Deployed using the Bitnami Helm 
> Chart(https://github.com/bitnami/charts/tree/master/bitnami/kafka)
> The Bitnami Helm Chart uses Docker Image: 
> https://github.com/bitnami/containers/tree/main/bitnami/kafka
> An issue was already opened with Bitnami and they told us to send this 
> upstream: https://github.com/bitnami/containers/issues/6654
>Reporter: Ryan R
>Priority: Blocker
>
> When using PEM TLS certificates, Kafka does not present the entire 
> certificate chain.
>  
> Our {{/opt/bitnami/kafka/config/server.properties}} file looks like this:
> {code:java}
> ssl.keystore.type=PEM
> ssl.truststore.type=PEM
> ssl.keystore.key=-----BEGIN PRIVATE KEY----- \
> 
> -----END PRIVATE KEY-----
> ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- \
> 
> -----END CERTIFICATE----- \
> -----BEGIN CERTIFICATE----- \
> MIIFFjCCAv6gAwIBAgIRAJErCErPDBinU/bWLiWnX1owDQYJKoZIhvcNAQELBQAw \
> TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh \
> cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjAwOTA0MDAwMDAw \
> WhcNMjUwOTE1MTYwMDAwWjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg \
> RW5jcnlwdDELMAkGA1UEAxMCUjMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK \
> AoIBAQC7AhUozPaglNMPEuyNVZLD+ILxmaZ6QoinXSaqtSu5xUyxr45r+XXIo9cP \
> R5QUVTVXjJ6oojkZ9YI8QqlObvU7wy7bjcCwXPNZOOftz2nwWgsbvsCUJCWH+jdx \
> sxPnHKzhm+/b5DtFUkWWqcFTzjTIUu61ru2P3mBw4qVUq7ZtDpelQDRrK9O8Zutm \
> NHz6a4uPVymZ+DAXXbpyb/uBxa3Shlg9F8fnCbvxK/eG3MHacV3URuPMrSXBiLxg \
> Z3Vms/EY96Jc5lP/Ooi2R6X/ExjqmAl3P51T+c8B5fWmcBcUr2Ok/5mzk53cU6cG \
> /kiFHaFpriV1uxPMUgP17VGhi9sVAgMBAAGjggEIMIIBBDAOBgNVHQ8BAf8EBAMC \
> AYYwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMBIGA1UdEwEB/wQIMAYB \
> Af8CAQAwHQYDVR0OBBYEFBQusxe3WFbLrlAJQOYfr52LFMLGMB8GA1UdIwQYMBaA \
> FHm0WeZ7tuXkAXOACIjIGlj26ZtuMDIGCCsGAQUFBwEBBCYwJDAiBggrBgEFBQcw \
> AoYWaHR0cDovL3gxLmkubGVuY3Iub3JnLzAnBgNVHR8EIDAeMBygGqAYhhZodHRw \
> Oi8veDEuYy5sZW5jci5vcmcvMCIGA1UdIAQbMBkwCAYGZ4EMAQIBMA0GCysGAQQB \
> gt8TAQEBMA0GCSqGSIb3DQEBCwUAA4ICAQCFyk5HPqP3hUSFvNVneLKYY611TR6W \
> PTNlclQtgaDqw+34IL9fzLdwALduO/ZelN7kIJ+m74uyA+eitRY8kc607TkC53wl \
> ikfmZW4/RvTZ8M6UK+5UzhK8jCdLuMGYL6KvzXGRSgi3yLgjewQtCPkIVz6D2QQz \
> CkcheAmCJ8MqyJu5zlzyZMjAvnnAT45tRAxekrsu94sQ4egdRCnbWSDtY7kh+BIm \
> lJNXoB1lBMEKIq4QDUOXoRgffuDghje1WrG9ML+Hbisq/yFOGwXD9RiX8F6sw6W4 \
> avAuvDszue5L3sz85K+EC4Y/wFVDNvZo4TYXao6Z0f+lQKc0t8DQYzk1OXVu8rp2 \
> yJMC6alLbBfODALZvYH7n7do1AZls4I9d1P4jnkDrQoxB3UqQ9hVl3LEKQ73xF1O \
> yK5GhDDX8oVfGKF5u+decIsH4YaTw7mP3GFxJSqv3+0lUFJoi5Lc5da149p90Ids \
> hCExroL1+7mryIkXPeFM5TgO9r0rvZaBFOvV2z0gp35Z0+L4WPlbuEjN/lxPFin+ \
> HlUjr8gRsI3qfJOQFy/9rKIJR0Y/8Omwt/8oTWgy1mdeHmmjk7j1nYsvC9JSQ6Zv \
> MldlTTKB3zhThV1+XWYp6rjd5JW1zbVWEkLNxE7GJThEUG3szgBVGP7pSWTUTsqX \
> nLRbwHOoq7hHwg== \
> -----END CERTIFICATE----- \
> ssl.truststore.certificates=-----BEGIN CERTIFICATE----- \
> MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw \
> TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh \
> cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4 \
> WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu \
> ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY \
> MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc \
> h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+ \
> 0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U \
> A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW \
> T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH \
> B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC \
> B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv \
> KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn \
> OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn \
> jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw \
> qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI \
> 

[jira] [Comment Edited] (KAFKA-14696) CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL JAAS JndiLoginModule configuration using Kafka Connect

2023-02-10 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687087#comment-17687087
 ] 

Manikumar edited comment on KAFKA-14696 at 2/10/23 1:58 PM:


Yes, you can fix it by cherry-picking commit 
ae22ec1a0ea005664439c3f45111aa34390ecaa1 to the 2.8 branch.


was (Author: omkreddy):
Yes, you ca fix cherry picking the commit 
ae22ec1a0ea005664439c3f45111aa34390ecaa1 2.8 branch.

> CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL 
> JAAS JndiLoginModule configuration using Kafka Connect
> ---
>
> Key: KAFKA-14696
> URL: https://issues.apache.org/jira/browse/KAFKA-14696
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.8.1, 2.8.2
>Reporter: MillieZhang
>Priority: Major
> Fix For: 3.4.0
>
>
> CVE Reference: [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-34917]
>  
> Will Kafka 2.8.X provide a patch to fix this vulnerability?
> If yes, when will the patch be provided?
>  
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14696) CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL JAAS JndiLoginModule configuration using Kafka Connect

2023-02-10 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687087#comment-17687087
 ] 

Manikumar commented on KAFKA-14696:
---

Yes, you can fix it by cherry-picking commit 
ae22ec1a0ea005664439c3f45111aa34390ecaa1 to the 2.8 branch.

> CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL 
> JAAS JndiLoginModule configuration using Kafka Connect
> ---
>
> Key: KAFKA-14696
> URL: https://issues.apache.org/jira/browse/KAFKA-14696
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.8.1, 2.8.2
>Reporter: MillieZhang
>Priority: Major
> Fix For: 3.4.0
>
>
> CVE Reference: [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-34917]
>  
> Will Kafka 2.8.X provide a patch to fix this vulnerability?
> If yes, when will the patch be provided?
>  
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14696) CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL JAAS JndiLoginModule configuration using Kafka Connect

2023-02-09 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17686696#comment-17686696
 ] 

Manikumar commented on KAFKA-14696:
---

There are no plans to provide a patch for older versions (< 3.4.0).

We have given some advice as part of the CVE announcement: 
https://lists.apache.org/thread/vy1c7fqcdqvq5grcqp6q5jyyb302khyz

> CVE-2023-25194: Apache Kafka: Possible RCE/Denial of service attack via SASL 
> JAAS JndiLoginModule configuration using Kafka Connect
> ---
>
> Key: KAFKA-14696
> URL: https://issues.apache.org/jira/browse/KAFKA-14696
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.8.1, 2.8.2
>Reporter: MillieZhang
>Priority: Major
> Fix For: 3.4.0
>
>
> CVE Reference: [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-34917]
>  
> Will Kafka 2.8.X provide a patch to fix this vulnerability?
> If yes, when will the patch be provided?
>  
> Thanks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14496) Wrong Base64 encoder used by OIDC OAuthBearerLoginCallbackHandler

2022-12-15 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14496:
--
Fix Version/s: 3.4.0
   3.3.2

> Wrong Base64 encoder used by OIDC OAuthBearerLoginCallbackHandler
> -
>
> Key: KAFKA-14496
> URL: https://issues.apache.org/jira/browse/KAFKA-14496
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.3.1
>Reporter: Endre Vig
>Priority: Major
> Fix For: 3.4.0, 3.3.2
>
> Attachments: base64test.zip
>
>
> Currently our team is setting up a blueprint for our Kafka 
> consumers/producers to provide guidelines on how to connect to our broker 
> using the OIDC security mechanism. The blueprint is written in Java using the 
> latest 3.3.1 Kafka library dependencies managed by Spring Boot 3.0.0.
> While trying to use the new built-in 
> {{org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler}}
>  introduced by [KIP-768: Extend SASL/OAUTHBEARER with Support for 
> OIDC|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186877575],
>  we've noticed that some calls to retrieve the token work out well, while 
> some of them (seemingly randomly) are failing with 401 Unauthorized.
> After some debugging we've come to the conclusion that the faulty behavior is 
> caused by 
> {{org.apache.kafka.common.security.oauthbearer.secured.HttpAccessTokenRetriever#formatAuthorizationHeader:}}
> {code:java}
> static String formatAuthorizationHeader(String clientId, String clientSecret) 
> {
>     clientId = sanitizeString("the token endpoint request client ID 
> parameter", clientId);
>     clientSecret = sanitizeString("the token endpoint request client secret 
> parameter", clientSecret);
> 
> String s = String.format("%s:%s", clientId, clientSecret);
>     String encoded = Base64.getUrlEncoder().encodeToString(Utils.utf8(s));
>     return String.format("Basic %s", encoded);
> } {code}
> The above code is using {{java.util.Base64#getUrlEncoder}} on line 311 to 
> encode the authorization header value, which is using the alphabet described 
> in [section 5 of the RFC|https://www.rfc-editor.org/rfc/rfc4648#section-5] 
> during the encoding algorithm. As stated by the Basic Authentication Scheme 
> [definition|https://www.rfc-editor.org/rfc/rfc7617#section-2] however, 
> [section 4 of the RFC|https://www.rfc-editor.org/rfc/rfc4648#section-4] 
> should be used:
> ??4. and obtains the basic-credentials by encoding this octet sequence using 
> Base64 ([RFC4648], Section 4) into a sequence of US-ASCII characters 
> ([RFC0020]).??
> The difference between the two alphabets is only in two characters (62: '+' 
> vs. '-' and 63: '/' vs. '_'); that's why the 401 Unauthorized response arises 
> only for certain credential values.
> Here's a concrete example use case:
>  
> {code:java}
> String s = String.format("%s:%s", "SOME_RANDOM_LONG_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getUrlEncoder().encodeToString(Utils.utf8(s))); 
> {code}
> would print out:
> {code:java}
> U09NRV9SQU5ET01fTE9OR19VU0VSXzAxMjM0OjlRfDBgOGl-dXRlLW45a3NqTFdiXDUwIkFYQFVVRUQ1RQ==
>  {code}
> while
> {code:java}
> String s = String.format("%s:%s", "SOME_RANDOM_LONG_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getEncoder().encodeToString(Utils.utf8(s))); {code}
> would give:
> {code:java}
> U09NRV9SQU5ET01fTE9OR19VU0VSXzAxMjM0OjlRfDBgOGl+dXRlLW45a3NqTFdiXDUwIkFYQFVVRUQ1RQ==
>  {code}
> Please notice the '-' vs. '+' characters.
>  
> The 2 code snippets above would not behave differently for other credentials, 
> where the encoded result doesn't use the 62nd character of the alphabet:
> {code:java}
> String s = String.format("%s:%s", "SHORT_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getEncoder().encodeToString(Utils.utf8(s))); {code}
> {code:java}
> U0hPUlRfVVNFUl8wMTIzNDo5UXwwYDhpfnV0ZS1uOWtzakxXYlw1MCJBWEBVVUVENUU=
> {code}
>  
> As a *conclusion* I would suggest that line 311 of 
> {{HttpAccessTokenRetriever}} should be modified to use 
> {{Base64.getEncoder().encodeToString(...)}} instead of 
> {{Base64.getUrlEncoder().encodeToString(...).}} 
>  
> I'm attaching a short sample application with tests proving that the above 
> encoding method is rejected by the standard Spring Security HTTP basic 
> authentication as well.
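
Based on the snippet quoted above, the suggested correction would look like 
this (a sketch, not the merged patch):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class BasicAuthHeader {
    // RFC 7617 Basic credentials use the standard Base64 alphabet
    // (RFC 4648 section 4), not the URL-safe one (section 5).
    static String formatAuthorizationHeader(String clientId, String clientSecret) {
        String s = clientId + ":" + clientSecret;
        String encoded = Base64.getEncoder()   // was: Base64.getUrlEncoder()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }
}
{code}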



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14496) Wrong Base64 encoder used by OIDC OAuthBearerLoginCallbackHandler

2022-12-15 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17648154#comment-17648154
 ] 

Manikumar commented on KAFKA-14496:
---

[~vendre] Thanks for reporting the issue. Would you like to submit a fix for 
this?

 

cc [~kirktrue]

> Wrong Base64 encoder used by OIDC OAuthBearerLoginCallbackHandler
> -
>
> Key: KAFKA-14496
> URL: https://issues.apache.org/jira/browse/KAFKA-14496
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.3.1
>Reporter: Endre Vig
>Priority: Major
> Attachments: base64test.zip
>
>
> Currently our team is setting up a blueprint for our Kafka 
> consumers/producers to provide guidelines on how to connect to our broker 
> using the OIDC security mechanism. The blueprint is written in Java using the 
> latest 3.3.1 Kafka library dependencies managed by Spring Boot 3.0.0.
> While trying to use the new built-in 
> {{org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler}}
>  introduced by [KIP-768: Extend SASL/OAUTHBEARER with Support for 
> OIDC|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186877575],
>  we've noticed that some calls to retrieve the token work out well, while 
> some of them (seemingly randomly) are failing with 401 Unauthorized.
> After some debugging we've come to the conclusion that the faulty behavior is 
> caused by 
> {{org.apache.kafka.common.security.oauthbearer.secured.HttpAccessTokenRetriever#formatAuthorizationHeader:}}
> {code:java}
> static String formatAuthorizationHeader(String clientId, String clientSecret) 
> {
>     clientId = sanitizeString("the token endpoint request client ID 
> parameter", clientId);
>     clientSecret = sanitizeString("the token endpoint request client secret 
> parameter", clientSecret);
> 
> String s = String.format("%s:%s", clientId, clientSecret);
>     String encoded = Base64.getUrlEncoder().encodeToString(Utils.utf8(s));
>     return String.format("Basic %s", encoded);
> } {code}
> The above code is using {{java.util.Base64#getUrlEncoder}} on line 311 to 
> encode the authorization header value, which is using the alphabet described 
> in [section 5 of the RFC|https://www.rfc-editor.org/rfc/rfc4648#section-5] 
> during the encoding algorithm. As stated by the Basic Authentication Scheme 
> [definition|https://www.rfc-editor.org/rfc/rfc7617#section-2] however, 
> [section 4 of the RFC|https://www.rfc-editor.org/rfc/rfc4648#section-4] 
> should be used:
> ??4. and obtains the basic-credentials by encoding this octet sequence using 
> Base64 ([RFC4648], Section 4) into a sequence of US-ASCII characters 
> ([RFC0020]).??
> The difference between the two alphabets is only in two characters (62: '+' 
> vs. '-' and 63: '/' vs. '_'); that's why the 401 Unauthorized response arises 
> only for certain credential values.
> Here's a concrete example use case:
>  
> {code:java}
> String s = String.format("%s:%s", "SOME_RANDOM_LONG_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getUrlEncoder().encodeToString(Utils.utf8(s))); 
> {code}
> would print out:
> {code:java}
> U09NRV9SQU5ET01fTE9OR19VU0VSXzAxMjM0OjlRfDBgOGl-dXRlLW45a3NqTFdiXDUwIkFYQFVVRUQ1RQ==
>  {code}
> while
> {code:java}
> String s = String.format("%s:%s", "SOME_RANDOM_LONG_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getEncoder().encodeToString(Utils.utf8(s))); {code}
> would give:
> {code:java}
> U09NRV9SQU5ET01fTE9OR19VU0VSXzAxMjM0OjlRfDBgOGl+dXRlLW45a3NqTFdiXDUwIkFYQFVVRUQ1RQ==
>  {code}
> Please notice the '-' vs. '+' characters.
>  
> The 2 code snippets above would not behave differently for other credentials, 
> where the encoded result doesn't use the 62nd character of the alphabet:
> {code:java}
> String s = String.format("%s:%s", "SHORT_USER_01234", 
> "9Q|0`8i~ute-n9ksjLWb\\50\"AX@UUED5E");
> System.out.println(Base64.getEncoder().encodeToString(Utils.utf8(s))); {code}
> {code:java}
> U0hPUlRfVVNFUl8wMTIzNDo5UXwwYDhpfnV0ZS1uOWtzakxXYlw1MCJBWEBVVUVENUU=
> {code}
>  
> As a *conclusion* I would suggest that line 311 of 
> {{HttpAccessTokenRetriever}} should be modified to use 
> {{Base64.getEncoder().encodeToString(...)}} instead of 
> {{Base64.getUrlEncoder().encodeToString(...).}} 
>  
> I'm attaching a short sample application with tests proving that the above 
> encoding method is rejected by the standard Spring Security HTTP basic 
> authentication as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14212) Fetch error response when hitting public OAuth/OIDC provider

2022-11-22 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14212:
--
Fix Version/s: 3.3.2

> Fetch error response when hitting public OAuth/OIDC provider
> 
>
> Key: KAFKA-14212
> URL: https://issues.apache.org/jira/browse/KAFKA-14212
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Minor
> Fix For: 3.4.0, 3.3.2
>
>
> The class 
> org.apache.kafka.common.security.oauthbearer.secured.HttpAccessTokenRetriever 
> is used to send client creds to a public OAuth/OIDC provider and fetch the 
> response, possibly including the access token.
> However, if there is an error, the exact error message from the provider is 
> not currently being retrieved.
> The error message can help the client easily diagnose whether a failure to 
> fetch a token is due to some misconfiguration on their side.
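
The gist of the improvement, sketched against plain 
{{java.net.HttpURLConnection}} (hypothetical helper; the actual retriever wraps 
this differently):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.nio.charset.StandardCharsets;

public final class TokenEndpointErrors {
    // On a non-2xx response, surface the provider's error body (e.g.
    // {"error":"invalid_client"}) instead of a bare status code.
    static String errorBody(HttpURLConnection conn) throws IOException {
        try (InputStream err = conn.getErrorStream()) {
            return err == null
                    ? ""
                    : new String(err.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
{code}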



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14320) Upgrade Jackson for CVE fix

2022-11-18 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-14320.
---
Resolution: Fixed

> Upgrade Jackson for CVE fix
> ---
>
> Key: KAFKA-14320
> URL: https://issues.apache.org/jira/browse/KAFKA-14320
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.2.0
>Reporter: Javier Li Sam
>Assignee: Thomas Cooper
>Priority: Minor
>  Labels: security
> Fix For: 3.4.0, 3.3.2
>
>
> There is a CVE for Jackson:
> Jackson: [CVE-2020-36518|https://nvd.nist.gov/vuln/detail/CVE-2020-36518] - 
> Fixed by upgrading to 2.14.0+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14398) Update EndToEndAuthorizerTest.scala to test with ZK and KRAFT quorum servers

2022-11-17 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14398:
--
Fix Version/s: 3.4.0

> Update EndToEndAuthorizerTest.scala to test with ZK and KRAFT quorum servers
> 
>
> Key: KAFKA-14398
> URL: https://issues.apache.org/jira/browse/KAFKA-14398
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft, unit tests
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
> Fix For: 3.4.0
>
>
> KRAFT is a replacement for ZK for storing metadata.
> We should validate that ACLs work with KRAFT for the supported authentication 
> mechanisms. 
> I will update EndToEndAuthorizerTest.scala to test with ZK and KRAFT.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14375) Remove use of "authorizer-properties" in EndToEndAuthorizationTest.scala

2022-11-17 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14375:
--
Fix Version/s: 3.4.0

> Remove use of "authorizer-properties" in EndToEndAuthorizationTest.scala
> 
>
> Key: KAFKA-14375
> URL: https://issues.apache.org/jira/browse/KAFKA-14375
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
> Fix For: 3.4.0
>
>
> The use of {{authorizer-properties}} in AclCommand is deprecated, and 
> EndToEndAuthorizationTest.scala should be updated to not use it. 
> I will instead set {{kafkaPrincipal}} as a super user and set up the brokers 
> with AclAuthorizer. This will allow {{kafkaPrincipal}} to set ACLs and 
> clientPrincipal to validate them as per the tests.
> This update is a precursor to updating EndToEndAuthorizationTest.scala to run 
> in KRAFT mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14375) Remove use of "authorizer-properties" in EndToEndAuthorizationTest.scala

2022-11-10 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-14375:
-

Assignee: Proven Provenzano

> Remove use of "authorizer-properties" in EndToEndAuthorizationTest.scala
> 
>
> Key: KAFKA-14375
> URL: https://issues.apache.org/jira/browse/KAFKA-14375
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Proven Provenzano
>Assignee: Proven Provenzano
>Priority: Major
>
> The use of {{authorizer-properties}} in AclCommand is deprecated, and 
> EndToEndAuthorizationTest.scala should be updated to not use it. 
> I will instead set {{kafkaPrincipal}} as a super user and set up the brokers 
> with AclAuthorizer. This will allow {{kafkaPrincipal}} to set ACLs and 
> clientPrincipal to validate them as per the tests.
> This update is a precursor to updating EndToEndAuthorizationTest.scala to run 
> in KRAFT mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13518) Update gson dependency

2022-10-24 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-13518.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Update gson dependency
> --
>
> Key: KAFKA-13518
> URL: https://issues.apache.org/jira/browse/KAFKA-13518
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.0.0
>Reporter: Pavel Kuznetsov
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: security
> Fix For: 3.4.0
>
>
> *Describe the bug*
> I checked the kafka_2.13-3.0.0.tgz distribution with WhiteSource and found 
> that some libraries have vulnerabilities.
> Here they are:
> * gson-2.8.6.jar has WS-2021-0419 vulnerability. The way to fix it is to 
> upgrade to com.google.code.gson:gson:2.8.9
> * netty-codec-4.1.65.Final.jar has CVE-2021-37136 and CVE-2021-37137 
> vulnerabilities. The way to fix it is to upgrade to 
> io.netty:netty-codec:4.1.68.Final
> *To Reproduce*
> Download kafka_2.13-3.0.0.tgz and find jars, listed above.
> Check that these jars with corresponding versions are mentioned in 
> corresponding vulnerability description.
> *Expected behavior*
> * gson upgraded to 2.8.9 or higher
> * netty-codec upgraded to 4.1.68.Final or higher
> *Actual behavior*
> * gson is 2.8.6
> * netty-codec is 4.1.65.Final



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14063) CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-27 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610078#comment-17610078
 ] 

Manikumar commented on KAFKA-14063:
---

There won't be an official PR. Yes, you can pick up the commit from the 
respective branch.

> CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic 
> payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements before checking whether this is a reasonable array size given 
> the available data. 
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the Kafka request parsing code.
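
A hedged sketch of the guard described above (illustrative only, not Kafka's 
generated parser code):

{code:java}
import java.nio.ByteBuffer;

public final class BoundedArrayRead {
    // Rejects a claimed element count that cannot fit in the bytes that
    // actually arrived, instead of pre-allocating a huge ArrayList.
    static int readArraySize(ByteBuffer buf, int minBytesPerElement) {
        int claimed = buf.getInt();
        if (claimed < 0 || (long) claimed * minBytesPerElement > buf.remaining())
            throw new RuntimeException("array claims " + claimed
                    + " elements but only " + buf.remaining() + " bytes remain");
        return claimed;
    }
}
{code}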



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14063) CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-26 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608137#comment-17608137
 ] 

Manikumar edited comment on KAFKA-14063 at 9/26/22 2:21 PM:


commit for 2.8 branch: 
[https://github.com/apache/kafka/commit/14951a83e3fdead212156e5532359500d72f68bc]

commit for 3.0 branch: 
[https://github.com/apache/kafka/commit/aaceb6b79bfcb1d32874ccdbc8f3138d1c1c00fb]

commit for 3.1 branch: 
[https://github.com/apache/kafka/commit/c1295662768e64b4467e27c3d5158f95f2307657]

commit for 3.2 branch: 
[https://github.com/apache/kafka/commit/2bfa24b2bd416e7b8c4a0c566b984c43904fdecb]


was (Author: omkreddy):
commit for 2.8 branch: 
[https://github.com/apache/kafka/commit/14951a83e3fdead212156e5532359500d72f68bc]

commit for 3.0 branch: 
https://github.com/apache/kafka/commit/aaceb6b79bfcb1d32874ccdbc8f3138d1c1c00fb

commit for 3.1 branch:

https://github.com/apache/kafka/commit/c1295662768e64b4467e27c3d5158f95f2307657

commit for 3.2 branch:

https://github.com/apache/kafka/commit/2bfa24b2bd416e7b8c4a0c566b984c43904fdecb

> CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic 
> payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements before checking whether this is a reasonable array size given 
> the available data. 
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the Kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-14063) CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-26 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608137#comment-17608137
 ] 

Manikumar edited comment on KAFKA-14063 at 9/26/22 2:21 PM:


commit for 2.8 branch: 
[https://github.com/apache/kafka/commit/14951a83e3fdead212156e5532359500d72f68bc]

commit for 3.0 branch: 
https://github.com/apache/kafka/commit/aaceb6b79bfcb1d32874ccdbc8f3138d1c1c00fb

commit for 3.1 branch:

https://github.com/apache/kafka/commit/c1295662768e64b4467e27c3d5158f95f2307657

commit for 3.2 branch:

https://github.com/apache/kafka/commit/2bfa24b2bd416e7b8c4a0c566b984c43904fdecb


was (Author: omkreddy):
This is the commit for 2.8 branch: 
https://github.com/apache/kafka/commit/14951a83e3fdead212156e5532359500d72f68bc

> CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic 
> payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-13725) KIP-768 OAuth code mixes public and internal classes in same package

2022-09-23 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-13725:
--
Fix Version/s: 3.4.0
Affects Version/s: 3.3.0

> KIP-768 OAuth code mixes public and internal classes in same package
> 
>
> Key: KAFKA-13725
> URL: https://issues.apache.org/jira/browse/KAFKA-13725
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, security
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.3.0
>Reporter: Kirk True
>Assignee: Kirk True
>Priority: Major
> Fix For: 3.4.0
>
>
> The {{org.apache.kafka.common.security.oauthbearer.secured}} package from 
> KIP-768 incorrectly mixed all of the classes (public and internal) in the 
> package together.
> This bug is to remove all but the public classes from that package and move 
> the rest to a new 
> {{{}org.apache.kafka.common.security.oauthbearer.internal.{}}}{{{}secured{}}} 
> package. This should be back-ported to all versions in which the KIP-768 
> OAuth work occurs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14063) CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-22 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608137#comment-17608137
 ] 

Manikumar commented on KAFKA-14063:
---

This is the commit for 2.8 branch: 
https://github.com/apache/kafka/commit/14951a83e3fdead212156e5532359500d72f68bc

> CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic 
> payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14063) CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-21 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14063:
--
Summary: CVE-2022-34917: Kafka message parsing can cause ooms with small 
antagonistic payloads  (was: Kafka message parsing can cause ooms with small 
antagonistic payloads)

> CVE-2022-34917: Kafka message parsing can cause ooms with small antagonistic 
> payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14063) Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-21 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17607810#comment-17607810
 ] 

Manikumar commented on KAFKA-14063:
---

CVE Reference: https://nvd.nist.gov/vuln/detail/CVE-2022-34917

> Kafka message parsing can cause ooms with small antagonistic payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14063) Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-21 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14063:
--
Fix Version/s: 3.3.0

> Kafka message parsing can cause ooms with small antagonistic payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14212) Fetch error response when hitting public OAuth/OIDC provider

2022-09-20 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-14212.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> Fetch error response when hitting public OAuth/OIDC provider
> 
>
> Key: KAFKA-14212
> URL: https://issues.apache.org/jira/browse/KAFKA-14212
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sushant Mahajan
>Assignee: Sushant Mahajan
>Priority: Minor
> Fix For: 3.4.0
>
>
> The class 
> org.apache.kafka.common.security.oauthbearer.secured.HttpAccessTokenRetriever 
> is used to send client credentials to a public OAuth/OIDC provider and fetch 
> the response, possibly including the access token.
> However, if there is an error, the exact error message from the provider is 
> not currently retrieved.
> That message can help the client diagnose whether the failure to fetch a 
> token is due to a misconfiguration on their side.
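> A hedged sketch of the improvement (hypothetical helper, not the actual 
> {{HttpAccessTokenRetriever}} code): on an error response, read the provider's 
> error body from {{getErrorStream()}} so the client sees the OAuth error 
> payload instead of a bare status code.
> {code:java}
> import java.io.IOException;
> import java.io.InputStream;
> import java.net.HttpURLConnection;
> import java.nio.charset.StandardCharsets;
> 
> public class TokenEndpointError {
>     // Builds a diagnostic such as:
>     // token endpoint returned HTTP 401: {"error":"invalid_client"}
>     static String describeFailure(HttpURLConnection conn) throws IOException {
>         int status = conn.getResponseCode();
>         try (InputStream err = conn.getErrorStream()) {
>             String body = (err == null)
>                     ? ""
>                     : new String(err.readAllBytes(), StandardCharsets.UTF_8);
>             return "token endpoint returned HTTP " + status
>                     + (body.isEmpty() ? "" : ": " + body);
>         }
>     }
> }
> {code}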



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14063) Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-19 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-14063.
---
Resolution: Fixed

> Kafka message parsing can cause ooms with small antagonistic payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.2.3, 3.1.2, 3.0.2
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14063) Kafka message parsing can cause ooms with small antagonistic payloads

2022-09-19 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-14063:
--
Fix Version/s: 2.8.2
   3.2.3
   3.1.2
   3.0.2

> Kafka message parsing can cause ooms with small antagonistic payloads
> -
>
> Key: KAFKA-14063
> URL: https://issues.apache.org/jira/browse/KAFKA-14063
> Project: Kafka
>  Issue Type: Bug
>  Components: generator
>Affects Versions: 3.2.0
>Reporter: Daniel Collins
>Priority: Major
> Fix For: 2.8.2, 3.0.2, 3.1.2, 3.2.3
>
>
> When the parsing code receives a payload for a variable-length field where 
> the length is specified as some arbitrarily large number (assume INT32_MAX, 
> for example), it will immediately try to allocate an ArrayList to hold that 
> many elements, before checking whether this is a reasonable array size given 
> the available data.
> The fix for this is to instead throw a runtime exception if the length of a 
> variably sized container exceeds the amount of remaining data. Then, the 
> worst a user can do is force the server to allocate 8x the size of the actual 
> delivered data (if they claim there are N elements for a container of Objects 
> (i.e. not a byte string) and each Object bottoms out in an 8 byte pointer in 
> the ArrayList's backing array).
> This was identified by fuzzing the kafka request parsing code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-13805) Upgrade vulnerable dependencies march 2022

2022-09-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-13805:
--
Fix Version/s: (was: 2.8.2)
   (was: 3.0.2)

> Upgrade vulnerable dependencies march 2022
> --
>
> Key: KAFKA-13805
> URL: https://issues.apache.org/jira/browse/KAFKA-13805
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.1, 3.0.1
>Reporter: Shivakumar
>Priority: Blocker
>  Labels: secutiry
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-36518
> |Package|Package Version|CVSS|Fix Status|
> |com.fasterxml.jackson.core_jackson-databind|2.10.5.1|7.5|fixed in 2.13.2.1|
> |com.fasterxml.jackson.core_jackson-databind|2.13.1|7.5|fixed in 2.13.2.1|
> Our security scan detected the above vulnerabilities.
> Please upgrade to the fixed versions to resolve them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13730) OAuth access token validation fails if it does not contain the "sub" claim

2022-07-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-13730.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

> OAuth access token validation fails if it does not contain the "sub" claim
> --
>
> Key: KAFKA-13730
> URL: https://issues.apache.org/jira/browse/KAFKA-13730
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.1.0
>Reporter: Daniel Fonai
>Assignee: Kirk True
>Priority: Minor
> Fix For: 3.4.0
>
>
> Client authentication fails when configured to use OAuth and the JWT access 
> token does {*}not contain the sub claim{*}. This issue was discovered while 
> testing Kafka integration with Ping Identity OAuth server. According to 
> Ping's 
> [documentation|https://apidocs.pingidentity.com/pingone/devguide/v1/api/#access-tokens-and-id-tokens]:
> {quote}sub – A string that specifies the identifier for the authenticated 
> user. This claim is not present for client_credentials tokens.
> {quote}
> In this case Kafka broker rejects the token regardless of the 
> [sasl.oauthbearer.sub.claim.name|https://kafka.apache.org/documentation/#brokerconfigs_sasl.oauthbearer.sub.claim.name]
>  property value.
>  
> 
>  
> Steps to reproduce:
> 1. Client configuration:
> {noformat}
> security.protocol=SASL_PLAINTEXT
> sasl.mechanism=OAUTHBEARER
> sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
> sasl.oauthbearer.token.endpoint.url=https://oauth.server.fqdn/token/endpoint
> sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
>  required\
>  clientId="kafka-client"\
>  clientSecret="kafka-client-secret";
> sasl.oauthbearer.sub.claim.name=client_id # claim name for the principal to 
> be extracted from, needed for client side validation too
> {noformat}
> 2. Broker configuration:
> {noformat}
> sasl.enabled.mechanisms=...,OAUTHBEARER
> listener.name.sasl_plaintext.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule
>  required;
> listener.name.sasl_plaintext.oauthbearer.sasl.server.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerValidatorCallbackHandler
> sasl.oauthbearer.jwks.endpoint.url=https://oauth.server.fqdn/jwks/endpoint
> sasl.oauthbearer.expected.audience=oauth-audience # based on OAuth server 
> setup
> sasl.oauthbearer.sub.claim.name=client_id # claim name for the principal to 
> be extracted from
> {noformat}
> 3. Try to perform some client operation:
> {noformat}
> kafka-topics --bootstrap-server `hostname`:9092 --list --command-config 
> oauth-client.properties
> {noformat}
> Result:
> Client authentication fails due to invalid access token.
>  - client log:
> {noformat}
> [2022-03-11 16:21:20,461] ERROR [AdminClient clientId=adminclient-1] 
> Connection to node -1 (localhost/127.0.0.1:9092) failed authentication due 
> to: {"status":"invalid_token"} (org.apache.kafka.clients.NetworkClient)
> [2022-03-11 16:21:20,463] WARN [AdminClient clientId=adminclient-1] Metadata 
> update failed due to authentication error 
> (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
> org.apache.kafka.common.errors.SaslAuthenticationException: 
> {"status":"invalid_token"}
> Error while executing topic command : {"status":"invalid_token"}
> [2022-03-11 16:21:20,468] ERROR 
> org.apache.kafka.common.errors.SaslAuthenticationException: 
> {"status":"invalid_token"}
>  (kafka.admin.TopicCommand$)
> {noformat}
>  - broker log:
> {noformat}
> [2022-03-11 16:21:20,150] WARN Could not validate the access token: JWT 
> (claims->{"client_id":"...","iss":"...","iat":1647012079,"exp":1647015679,"aud":[...],"env":"...","org":"..."})
>  rejected due to invalid claims or other invalid content. Additional details: 
> [[14] No Subject (sub) claim is present.] 
> (org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerValidatorCallbackHandler)
> org.apache.kafka.common.security.oauthbearer.secured.ValidateException: Could 
> not validate the access token: JWT 
> (claims->{"client_id":"...","iss":"...","iat":1647012079,"exp":1647015679,"aud":[...],"env":"...","org":"..."})
>  rejected due to invalid claims or other invalid content. Additional details: 
> [[14] No Subject (sub) claim is present.]
>   at 
> org.apache.kafka.common.security.oauthbearer.secured.ValidatorAccessTokenValidator.validate(ValidatorAccessTokenValidator.java:159)
>   at 
> org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerValidatorCallbackHandler.handleValidatorCallback(OAuthBearerValidatorCallbackHandler.java:184)
>   at 
> 
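> A sketch of the fix direction (hypothetical helper, not the actual callback 
> handler): resolve the principal from the configured claim name, for example 
> {{client_id}}, instead of hard-requiring the {{sub}} claim.
> {code:java}
> import java.util.Map;
> 
> public class PrincipalClaimResolver {
>     // claims: the parsed JWT payload; subClaimName: the value of
>     // sasl.oauthbearer.sub.claim.name (defaults to "sub").
>     static String resolvePrincipal(Map<String, Object> claims, String subClaimName) {
>         Object value = claims.get(subClaimName);
>         if (value == null) {
>             throw new IllegalArgumentException(
>                     "Token is missing the configured principal claim: " + subClaimName);
>         }
>         return value.toString();
>     }
> }
> {code}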

[jira] [Assigned] (KAFKA-13983) Support special character in Resource name in ACLs operation by sanitizing

2022-07-08 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-13983:
-

Assignee: Aman Singh

> Support special character in Resource name in ACLs operation by sanitizing 
> ---
>
> Key: KAFKA-13983
> URL: https://issues.apache.org/jira/browse/KAFKA-13983
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Aman Singh
>Assignee: Aman Singh
>Priority: Minor
>
> Currently, resource names in ACLs can contain any special characters, but 
> resource names with some special characters are not valid zookeeper path 
> entries.
> For example, the resource name {color:#de350b}{{test/true}}{color} is not a 
> valid zookeeper path entry: zookeeper will create a child node named 
> {color:#de350b}{{true}}{color} inside the {color:#de350b}{{test}}{color} node.
> This creates two problems:
>  # If there is *one* ACL with the resource name {color:#de350b}{{test}}{color}, 
> it can't be deleted: when deleting the last ACL, Kafka also tries to delete 
> the node itself, assuming it is empty, which is not true because it still has 
> the child node {{{color:#de350b}true{color}}}.
>  # When a broker restarts, the {color:#de350b}{{ACL cache}}{color} (used for 
> ACL operations like describe, authorization, etc.) is rebuilt from zookeeper, 
> and Kafka only looks for ACLs that are direct child nodes of the resource 
> type in the ACL tree.
>  
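> One possible sanitizing approach, sketched below (hypothetical helper, not 
> necessarily the merged implementation): percent-encode the resource name so 
> a character like {{/}} cannot introduce an extra znode level.
> {code:java}
> import java.net.URLDecoder;
> import java.net.URLEncoder;
> import java.nio.charset.StandardCharsets;
> 
> public class ResourceNameSanitizer {
>     // "test/true" -> "test%2Ftrue": a single, valid znode name
>     static String toZNodeName(String resourceName) {
>         return URLEncoder.encode(resourceName, StandardCharsets.UTF_8);
>     }
> 
>     static String fromZNodeName(String znodeName) {
>         return URLDecoder.decode(znodeName, StandardCharsets.UTF_8);
>     }
> }
> {code}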



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13983) Support special character in Resource name in ACLs operation by sanitizing

2022-07-08 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-13983.
---
Fix Version/s: 3.3.0
 Reviewer: Manikumar
   Resolution: Fixed

> Support special character in Resource name in ACLs operation by sanitizing 
> ---
>
> Key: KAFKA-13983
> URL: https://issues.apache.org/jira/browse/KAFKA-13983
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Aman Singh
>Assignee: Aman Singh
>Priority: Minor
> Fix For: 3.3.0
>
>
> Currently, resource names in ACLs can contain any special characters, but 
> resource names with some special characters are not valid zookeeper path 
> entries.
> For example, the resource name {color:#de350b}{{test/true}}{color} is not a 
> valid zookeeper path entry: zookeeper will create a child node named 
> {color:#de350b}{{true}}{color} inside the {color:#de350b}{{test}}{color} node.
> This creates two problems:
>  # If there is *one* ACL with the resource name {color:#de350b}{{test}}{color}, 
> it can't be deleted: when deleting the last ACL, Kafka also tries to delete 
> the node itself, assuming it is empty, which is not true because it still has 
> the child node {{{color:#de350b}true{color}}}.
>  # When a broker restarts, the {color:#de350b}{{ACL cache}}{color} (used for 
> ACL operations like describe, authorization, etc.) is rebuilt from zookeeper, 
> and Kafka only looks for ACLs that are direct child nodes of the resource 
> type in the ACL tree.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-13300) Kafka ACL Restriction Group Is not being applied

2021-09-15 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415352#comment-17415352
 ] 

Manikumar edited comment on KAFKA-13300 at 9/15/21, 7:43 AM:
-

kafka-acls.sh command {{"--add"}} option is for adding an acl and 
{{"--remove"}} is to remove an existing acl. Consuming from a group without 
read permission should fail unless we configure 
{{"allow.everyone.if.no.acl.found=true"}}
 [https://kafka.apache.org/documentation/#security_authz]

I am not able to reproduce the issue. Can you attach the 
{{server.properties}} file, authorizer debug logs, and steps to reproduce the 
issue?


was (Author: omkreddy):
kafka-acls.sh command {{"--add"}} option is for adding an acl and 
{{"--remove"}} is to remove an existing acl. Consuming from a group without 
read permission should fail unless we configure 
{{"allow.everyone.if.no.acl.found=true"}}
 [https://kafka.apache.org/documentation/#security_authz]

I am not able to reproduce the issue. Can you attach the 
{{server.properties}} file and steps to reproduce the issue?

> Kafka ACL Restriction Group Is not being applied
> 
>
> Key: KAFKA-13300
> URL: https://issues.apache.org/jira/browse/KAFKA-13300
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.2
>Reporter: Adriano Jesus
>Priority: Minor
>
> Hi,
> I am creating a Kafka ACL with a fake group restriction as below:
>  
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --remove --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
>  
> When I try to consume a message with the same user, 'Kafka-tools', and with 
> another group I am still able to consume the messages:
> {code:java}
> ./kafka-console-consumer.sh --bootstrap-server=$KAFKA --topic delete-me-2 
> --consumer.config user-auth.properties --from-beginning --group teste
> {code}
> According to the documentation, this property can be used as a consumer group 
> ([https://docs.confluent.io/platform/current/kafka/authorization.html]):
> "*Group*
> Groups in the brokers. All protocol calls that work with groups, such as 
> joining a group, must have corresponding privileges with the group in the 
> subject. Group ({{group.id}}) can mean Consumer Group, Stream Group 
> ({{application.id}}), Connect Worker Group, or any other group that uses the 
> Consumer Group protocol, like Schema Registry cluster."
> I did another test, adding a consumer ACL permission with this command:
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --add --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
> After that, I removed the ACL authorization for the READ operation on the 
> Group resource. I tried again to consume from this topic and was still able 
> to consume messages from it, even without READ group permission.
> Maybe my interpretation is wrong, but it seems that Kafka ACL is not 
> validating the group permissions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-13300) Kafka ACL Restriction Group Is not being applied

2021-09-15 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415352#comment-17415352
 ] 

Manikumar edited comment on KAFKA-13300 at 9/15/21, 7:28 AM:
-

kafka-acls.sh command {{"--add"}} option is for adding an acl and 
{{"--remove"}} is to remove an existing acl. Consuming from a group without 
read permission should fail unless we configure 
{{"allow.everyone.if.no.acl.found=true"}}
 [https://kafka.apache.org/documentation/#security_authz]

I am not able to reproduce the issue. Can you attach the 
{{server.properties}} file and steps to reproduce the issue?


was (Author: omkreddy):
kafka-acls.sh command {{"--add"}} option is for adding an acl and 
{{"--remove"}} is to remove an existing acl. 
 Consuming from a group without read permission should fail unless we configure 
{{"allow.everyone.if.no.acl.found=true"}}
 https://kafka.apache.org/documentation/#security_authz
 
I am not able to reproduce the issue. Can you attach the 
{{server.properties}} file and steps to reproduce the issue.

> Kafka ACL Restriction Group Is not being applied
> 
>
> Key: KAFKA-13300
> URL: https://issues.apache.org/jira/browse/KAFKA-13300
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.2
>Reporter: Adriano Jesus
>Priority: Minor
>
> Hi,
> I am creating a Kafka ACL with a fake group restriction as below:
>  
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --remove --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
>  
> When I try to consume a message with the same user, 'Kafka-tools', and with 
> another group I am still able to consume the messages:
> {code:java}
> ./kafka-console-consumer.sh --bootstrap-server=$KAFKA --topic delete-me-2 
> --consumer.config user-auth.properties --from-beginning --group teste
> {code}
> According to the documentation, this property can be used as a consumer group 
> ([https://docs.confluent.io/platform/current/kafka/authorization.html]):
> "*Group*
> Groups in the brokers. All protocol calls that work with groups, such as 
> joining a group, must have corresponding privileges with the group in the 
> subject. Group ({{group.id}}) can mean Consumer Group, Stream Group 
> ({{application.id}}), Connect Worker Group, or any other group that uses the 
> Consumer Group protocol, like Schema Registry cluster."
> I did another test, adding a consumer ACL permission with this command:
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --add --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
> After that, I removed the ACL authorization for the READ operation on the 
> Group resource. I tried again to consume from this topic and was still able 
> to consume messages from it, even without READ group permission.
> Maybe my interpretation is wrong, but it seems that Kafka ACL is not 
> validating the group permissions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13300) Kafka ACL Restriction Group Is not being applied

2021-09-15 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415352#comment-17415352
 ] 

Manikumar commented on KAFKA-13300:
---

kafka-acls.sh command {{"--add"}} option is for adding an acl and 
{{"--remove"}} is to remove an existing acl. 
 Consuming from a group without read permission should fail unless we configure 
{{"allow.everyone.if.no.acl.found=true"}}
 https://kafka.apache.org/documentation/#security_authz
 
I am not able to reproduce the issue. Can you attach the 
{{server.properties}} file and steps to reproduce the issue.

> Kafka ACL Restriction Group Is not being applied
> 
>
> Key: KAFKA-13300
> URL: https://issues.apache.org/jira/browse/KAFKA-13300
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.2
>Reporter: Adriano Jesus
>Priority: Minor
>
> Hi,
> I am creating a Kafka ACL with a fake group restriction as below:
>  
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --remove --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
>  
> When I try to consume a message with the same user, 'Kafka-tools', and with 
> another group I am still able to consume the messages:
> {code:java}
> ./kafka-console-consumer.sh --bootstrap-server=$KAFKA --topic delete-me-2 
> --consumer.config user-auth.properties --from-beginning --group teste
> {code}
> According to the documentation, this property can be used as a consumer group 
> ([https://docs.confluent.io/platform/current/kafka/authorization.html]):
> "*Group*
> Groups in the brokers. All protocol calls that work with groups, such as 
> joining a group, must have corresponding privileges with the group in the 
> subject. Group ({{group.id}}) can mean Consumer Group, Stream Group 
> ({{application.id}}), Connect Worker Group, or any other group that uses the 
> Consumer Group protocol, like Schema Registry cluster."
> I did another test, adding a consumer ACL permission with this command:
> {code:java}
> ./kafka-acls.sh \
>     --authorizer-properties zookeeper.connect=$ZOOKEEPER \
>     --add --allow-principal User:'Kafka-tools' \
>     --consumer --group fake-group \
>     --topic delete-me-2
> {code}
> After that, I removed the ACL authorization for the READ operation on the 
> Group resource. I tried again to consume from this topic and was still able 
> to consume messages from it, even without READ group permission.
> Maybe my interpretation is wrong, but it seems that Kafka ACL is not 
> validating the group permissions.
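> For reference, the same grant can be expressed through the Admin API; a 
> short sketch (assuming a broker at {{localhost:9092}} and a client version 
> that provides {{Admin.create}}) equivalent to the {{--add}} command above:
> {code:java}
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.common.acl.AccessControlEntry;
> import org.apache.kafka.common.acl.AclBinding;
> import org.apache.kafka.common.acl.AclOperation;
> import org.apache.kafka.common.acl.AclPermissionType;
> import org.apache.kafka.common.resource.PatternType;
> import org.apache.kafka.common.resource.ResourcePattern;
> import org.apache.kafka.common.resource.ResourceType;
> 
> public class GrantGroupRead {
>     public static void main(String[] args) throws Exception {
>         Map<String, Object> conf = new HashMap<>();
>         conf.put("bootstrap.servers", "localhost:9092");
>         try (Admin admin = Admin.create(conf)) {
>             // Allow User:Kafka-tools to READ (join/commit) the group "fake-group".
>             AclBinding binding = new AclBinding(
>                     new ResourcePattern(ResourceType.GROUP, "fake-group", PatternType.LITERAL),
>                     new AccessControlEntry("User:Kafka-tools", "*",
>                             AclOperation.READ, AclPermissionType.ALLOW));
>             admin.createAcls(Collections.singleton(binding)).all().get();
>         }
>     }
> }
> {code}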



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12985) CVE-2021-28169 - Upgrade jetty to 9.4.42

2021-07-22 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-12985:
--
Summary: CVE-2021-28169 - Upgrade jetty to 9.4.42  (was: CVE-2021-28169 - 
Upgrade jetty to 9.4.41)

> CVE-2021-28169 - Upgrade jetty to 9.4.42
> 
>
> Key: KAFKA-12985
> URL: https://issues.apache.org/jira/browse/KAFKA-12985
> Project: Kafka
>  Issue Type: Task
>  Components: security
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.0.0, 2.7.2, 2.8.1
>
>
> CVE-2021-28169 vulnerability affects Jetty versions up to 9.4.40. For more 
> information see https://nvd.nist.gov/vuln/detail/CVE-2021-28169
> Upgrading to Jetty version 9.4.41 should address this issue 
> (https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12985) CVE-2021-28169 - Upgrade jetty to 9.4.41

2021-07-22 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12985.
---
Fix Version/s: 2.8.1
   2.7.2
   3.0.0
   Resolution: Fixed

> CVE-2021-28169 - Upgrade jetty to 9.4.41
> 
>
> Key: KAFKA-12985
> URL: https://issues.apache.org/jira/browse/KAFKA-12985
> Project: Kafka
>  Issue Type: Task
>  Components: security
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.0.0, 2.7.2, 2.8.1
>
>
> CVE-2021-28169 vulnerability affects Jetty versions up to 9.4.40. For more 
> information see https://nvd.nist.gov/vuln/detail/CVE-2021-28169
> Upgrading to Jetty version 9.4.41 should address this issue 
> (https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (KAFKA-5905) Remove PrincipalBuilder and DefaultPrincipalBuilder

2021-07-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar closed KAFKA-5905.


> Remove PrincipalBuilder and DefaultPrincipalBuilder
> ---
>
> Key: KAFKA-5905
> URL: https://issues.apache.org/jira/browse/KAFKA-5905
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>Priority: Blocker
> Fix For: 3.0.0
>
>
> These classes were deprecated after KIP-189: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL,
>  which is part of 1.0.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13041) Support debugging system tests with ducker-ak

2021-07-08 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-13041.
---
Fix Version/s: 3.0.0
   Resolution: Fixed

> Support debugging system tests with ducker-ak
> -
>
> Key: KAFKA-13041
> URL: https://issues.apache.org/jira/browse/KAFKA-13041
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Stanislav Vodetskyi
>Priority: Major
> Fix For: 3.0.0
>
>
> Currently when you're using ducker-ak to run system tests locally, your only 
> debug option is to add print/log messages.
> It should be possible to connect to a ducker-ak test with a remote debugger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12866) Kafka requires ZK root access even when using a chroot

2021-06-01 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12866.
---
Resolution: Fixed

> Kafka requires ZK root access even when using a chroot
> --
>
> Key: KAFKA-12866
> URL: https://issues.apache.org/jira/browse/KAFKA-12866
> Project: Kafka
>  Issue Type: Bug
>  Components: core, zkclient
>Affects Versions: 2.6.1, 2.8.0, 2.7.1, 2.6.2
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.0.0
>
>
> When a Zookeeper chroot is configured, users do not expect Kafka to need 
> Zookeeper access outside of that chroot.
> h1. Why is this important?
> A zookeeper cluster may be shared with other Kafka clusters or even other 
> applications. It is an expected security practice to restrict each 
> cluster/application's access to its own ZooKeeper chroot.
> h1. Steps to reproduce
> h2. Zookeeper setup
> Using the zkCli, create a chroot for Kafka, make it available to Kafka but 
> lock the root znode.
>  
> {code:java}
> [zk: localhost:2181(CONNECTED) 1] create /somechroot
> Created /somechroot
> [zk: localhost:2181(CONNECTED) 2] setAcl /somechroot world:anyone:cdrwa
> [zk: localhost:2181(CONNECTED) 3] addauth digest test:12345
> [zk: localhost:2181(CONNECTED) 4] setAcl / 
> digest:test:Mx1uO9GLtm1qaVAQ20Vh9ODgACg=:cdrwa{code}
>  
> h2. Kafka setup
> Configure the chroot in broker.properties:
>  
> {code:java}
> zookeeper.connect=localhost:2181/somechroot{code}
>  
>  
> h2. Expected behavior
> The expected behavior here is that Kafka will use the chroot without issues.
> h2. Actual result
> Kafka fails to start with a fatal exception:
> {code:java}
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /chroot
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
> at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:583)
> at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1729)
> at 
> kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1627)
> at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1957)
> at 
> kafka.zk.ZkClientAclTest.testChrootExistsAndRootIsLocked(ZkClientAclTest.scala:60)
> {code}
>  
>  
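> A sketch of the expected behavior (hypothetical, using the raw ZooKeeper 
> client): when a chroot is configured, verify that it exists instead of 
> recursively creating every ancestor from {{/}}, which needs root ACL access.
> {code:java}
> import org.apache.zookeeper.ZooKeeper;
> 
> public class ChrootCheck {
>     // hostPort deliberately excludes the chroot suffix so the check itself
>     // does not depend on the chroot existing.
>     static void ensureChrootExists(String hostPort, String chroot) throws Exception {
>         ZooKeeper zk = new ZooKeeper(hostPort, 30_000, event -> { });
>         try {
>             if (zk.exists(chroot, false) == null) {
>                 throw new IllegalStateException("Configured chroot " + chroot
>                         + " does not exist; create it before starting Kafka");
>             }
>         } finally {
>             zk.close();
>         }
>     }
> }
> {code}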



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12866) Kafka requires ZK root access even when using a chroot

2021-06-01 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-12866:
--
Fix Version/s: 3.0.0

> Kafka requires ZK root access even when using a chroot
> --
>
> Key: KAFKA-12866
> URL: https://issues.apache.org/jira/browse/KAFKA-12866
> Project: Kafka
>  Issue Type: Bug
>  Components: core, zkclient
>Affects Versions: 2.6.1, 2.8.0, 2.7.1, 2.6.2
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
> Fix For: 3.0.0
>
>
> When a Zookeeper chroot is configured, users do not expect Kafka to need 
> Zookeeper access outside of that chroot.
> h1. Why is this important?
> A zookeeper cluster may be shared with other Kafka clusters or even other 
> applications. It is an expected security practice to restrict each 
> cluster/application's access to its own ZooKeeper chroot.
> h1. Steps to reproduce
> h2. Zookeeper setup
> Using the zkCli, create a chroot for Kafka, make it available to Kafka but 
> lock the root znode.
>  
> {code:java}
> [zk: localhost:2181(CONNECTED) 1] create /somechroot
> Created /somechroot
> [zk: localhost:2181(CONNECTED) 2] setAcl /somechroot world:anyone:cdrwa
> [zk: localhost:2181(CONNECTED) 3] addauth digest test:12345
> [zk: localhost:2181(CONNECTED) 4] setAcl / 
> digest:test:Mx1uO9GLtm1qaVAQ20Vh9ODgACg=:cdrwa{code}
>  
> h2. Kafka setup
> Configure the chroot in broker.properties:
>  
> {code:java}
> zookeeper.connect=localhost:2181/somechroot{code}
>  
>  
> h2. Expected behavior
> The expected behavior here is that Kafka will use the chroot without issues.
> h2. Actual result
> Kafka fails to start with a fatal exception:
> {code:java}
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /chroot
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
> at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:583)
> at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1729)
> at 
> kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1627)
> at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1957)
> at 
> kafka.zk.ZkClientAclTest.testChrootExistsAndRootIsLocked(ZkClientAclTest.scala:60)
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12865) Documentation error for Admin Client API in describe ACLs

2021-05-29 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-12865:
-

Assignee: Rohit Sachan

> Documentation error for Admin Client API in describe ACLs
> -
>
> Key: KAFKA-12865
> URL: https://issues.apache.org/jira/browse/KAFKA-12865
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: Rohit Sachan
>Assignee: Rohit Sachan
>Priority: Major
> Fix For: 3.0.0
>
>
> There is a documentation bug in *Admin.java's* `describeAcls` and its 
> overloaded variation, function's return type shows `*DeleteAclsResult*` 
> instead of `*DescribeAclResult*`. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12865) Documentation error for Admin Client API in describe ACLs

2021-05-29 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12865.
---
Fix Version/s: 3.0.0
   Resolution: Fixed

> Documentation error for Admin Client API in describe ACLs
> -
>
> Key: KAFKA-12865
> URL: https://issues.apache.org/jira/browse/KAFKA-12865
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.8.0
>Reporter: Rohit Sachan
>Priority: Major
> Fix For: 3.0.0
>
>
> There is a documentation bug in *Admin.java's* `describeAcls` and its 
> overloaded variation, function's return type shows `*DeleteAclsResult*` 
> instead of `*DescribeAclResult*`. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12820) Upgrade maven-artifact dependency to resolve CVE-2021-26291

2021-05-21 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12820.
---
Fix Version/s: 2.8.1
   2.7.2
   2.6.3
   3.0.0
   Resolution: Fixed

> Upgrade maven-artifact dependency to resolve CVE-2021-26291
> ---
>
> Key: KAFKA-12820
> URL: https://issues.apache.org/jira/browse/KAFKA-12820
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Boojapho
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 3.0.0, 2.6.3, 2.7.2, 2.8.1
>
>
> Current Gradle builds of Kafka contain a dependency of `maven-artifact` 
> version 3.6.3, which contains CVE-2021-26291 
> ([https://nvd.nist.gov/vuln/detail/CVE-2021-26291]). This vulnerability has 
> been fixed in Maven 3.8.1 
> ([https://maven.apache.org/docs/3.8.1/release-notes.html]).  Apache Kafka 
> should update `dependencies.gradle` to use the latest `maven-artifact` 
> library to eliminate this vulnerability.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8562) SASL_SSL still performs reverse DNS lookup despite KAFKA-5051

2021-05-11 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8562:
-
Fix Version/s: 2.8.1
   2.7.2

> SASL_SSL still performs reverse DNS lookup despite KAFKA-5051
> -
>
> Key: KAFKA-8562
> URL: https://issues.apache.org/jira/browse/KAFKA-8562
> Project: Kafka
>  Issue Type: Bug
>Reporter: Badai Aqrandista
>Assignee: Davor Poldrugo
>Priority: Minor
> Fix For: 3.0.0, 2.7.2, 2.8.1
>
>
> When using SASL_SSL, the Kafka client performs a reverse DNS lookup to 
> resolve the IP to a DNS name. This circumvents the security fix made in KAFKA-5051. 
> This is the line of code from AK 2.2 where it performs the lookup:
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L205
> The following log messages show that the consumer initially tried to connect 
> with the IP address 10.0.2.15, but then created a SaslClient with a hostname:
> {code:java}
> [2019-06-18 06:23:36,486] INFO Kafka commitId: 00d486623990ed9d 
> (org.apache.kafka.common.utils.AppInfoParser)
> [2019-06-18 06:23:36,487] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Kafka consumer initialized (org.apache.kafka.clients.consumer.KafkaConsumer)
> [2019-06-18 06:23:36,505] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Initiating connection to node 10.0.2.15:19094 (id: -1 rack: null) using 
> address /10.0.2.15 (org.apache.kafka.clients.NetworkClient)
> [2019-06-18 06:23:36,512] DEBUG Set SASL client state to 
> SEND_APIVERSIONS_REQUEST 
> (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> [2019-06-18 06:23:36,515] DEBUG Creating SaslClient: 
> client=null;service=kafka;serviceHostname=quickstart.confluent.io;mechs=[PLAIN]
>  (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> {code}
> Thanks
> Badai



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12752) CVE-2021-28168 upgrade jersey to 2.34 or 3.02

2021-05-06 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12752.
---
Fix Version/s: 2.8.1
   2.7.2
   3.0.0
 Reviewer: Manikumar
   Resolution: Fixed

> CVE-2021-28168 upgrade jersey to 2.34 or 3.02
> -
>
> Key: KAFKA-12752
> URL: https://issues.apache.org/jira/browse/KAFKA-12752
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.0.0, 2.7.2, 2.8.1
>
>
> [https://nvd.nist.gov/vuln/detail/CVE-2021-28168]
> CVE-2021-28168 affects jersey versions <=2.33, <=3.0.1. Upgrading to 2.34 or 
> 3.0.2 should resolve the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12400) Upgrade jetty to fix CVE-2020-27223

2021-03-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12400.
---
Resolution: Fixed

Issue resolved by pull request 10245
[https://github.com/apache/kafka/pull/10245]

> Upgrade jetty to fix CVE-2020-27223
> ---
>
> Key: KAFKA-12400
> URL: https://issues.apache.org/jira/browse/KAFKA-12400
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 2.7.1, 2.6.2, 2.8.0
>
>
> h3. CVE-2020-27223 Detail
> In Eclipse Jetty 9.4.6.v20170531 to 9.4.36.v20210114 (inclusive), 10.0.0, and 
> 11.0.0 when Jetty handles a request containing multiple Accept headers with a 
> large number of quality (i.e. q) parameters, the server may enter a denial of 
> service (DoS) state due to high CPU usage processing those quality values, 
> resulting in minutes of CPU time exhausted processing those quality values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12389) Upgrade of netty-codec due to CVE-2021-21290

2021-03-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12389.
---
Fix Version/s: 2.8.0
   2.6.2
   2.7.1
   Resolution: Fixed

Issue resolved by pull request 10235
[https://github.com/apache/kafka/pull/10235]

> Upgrade of netty-codec due to CVE-2021-21290
> 
>
> Key: KAFKA-12389
> URL: https://issues.apache.org/jira/browse/KAFKA-12389
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Dominique Mongelli
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 2.7.1, 2.6.2, 2.8.0
>
>
> Our security tool raised the following security flaw on kafka 2.7: 
> [https://nvd.nist.gov/vuln/detail/CVE-2021-21290]
> It is a vulnerability related to jar *netty-codec-4.1.51.Final.jar*.
> Looking at source code, the netty-codec in trunk and 2.7.0 branches are still 
> vulnerable.
> Based on netty issue tracker, the vulnerability is fixed in 4.1.59.Final: 
> https://github.com/netty/netty/security/advisories/GHSA-5mcr-gq6c-3hq2



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-8562) SASL_SSL still performs reverse DNS lookup despite KAFKA-5051

2021-02-25 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-8562:


Assignee: Davor Poldrugo

> SASL_SSL still performs reverse DNS lookup despite KAFKA-5051
> -
>
> Key: KAFKA-8562
> URL: https://issues.apache.org/jira/browse/KAFKA-8562
> Project: Kafka
>  Issue Type: Bug
>Reporter: Badai Aqrandista
>Assignee: Davor Poldrugo
>Priority: Minor
> Fix For: 3.0.0
>
>
> When using SASL_SSL, the Kafka client performs a reverse DNS lookup to 
> resolve the IP to a DNS name. This circumvents the security fix made in KAFKA-5051. 
> This is the line of code from AK 2.2 where it performs the lookup:
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L205
> The following log messages show that the consumer initially tried to connect 
> with the IP address 10.0.2.15, but then created a SaslClient with a hostname:
> {code:java}
> [2019-06-18 06:23:36,486] INFO Kafka commitId: 00d486623990ed9d 
> (org.apache.kafka.common.utils.AppInfoParser)
> [2019-06-18 06:23:36,487] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Kafka consumer initialized (org.apache.kafka.clients.consumer.KafkaConsumer)
> [2019-06-18 06:23:36,505] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Initiating connection to node 10.0.2.15:19094 (id: -1 rack: null) using 
> address /10.0.2.15 (org.apache.kafka.clients.NetworkClient)
> [2019-06-18 06:23:36,512] DEBUG Set SASL client state to 
> SEND_APIVERSIONS_REQUEST 
> (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> [2019-06-18 06:23:36,515] DEBUG Creating SaslClient: 
> client=null;service=kafka;serviceHostname=quickstart.confluent.io;mechs=[PLAIN]
>  (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> {code}
> Thanks
> Badai



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7188) Avoid reverse DNS lookup in SASL channel builder

2021-02-25 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7188.
--
  Assignee: (was: Rajini Sivaram)
Resolution: Duplicate

> Avoid reverse DNS lookup in SASL channel builder
> 
>
> Key: KAFKA-7188
> URL: https://issues.apache.org/jira/browse/KAFKA-7188
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Reporter: Rajini Sivaram
>Priority: Major
>
> SaslChannelBuilder uses InetAddress.getHostName which may perform reverse DNS 
> lookup, causing delays in some environments. We should replace these with 
> SocketAddress.getHostString if possible.
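> A small illustration of the suggested replacement (standalone example, not 
> the SaslChannelBuilder code itself): {{getHostString()}} returns the literal 
> host or IP the address was created with, while {{getHostName()}} may trigger 
> a reverse DNS lookup.
> {code:java}
> import java.net.InetSocketAddress;
> 
> public class HostStringVsHostName {
>     public static void main(String[] args) {
>         InetSocketAddress addr = new InetSocketAddress("10.0.2.15", 9094);
>         // No DNS traffic: returns the string the address was built from.
>         System.out.println(addr.getHostString()); // 10.0.2.15
>         // May block on a reverse DNS lookup to turn the IP into a name.
>         System.out.println(addr.getHostName());
>     }
> }
> {code}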



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12324) Upgrade jetty to fix CVE-2020-27218

2021-02-22 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12324.
---
Fix Version/s: 2.8.0
   2.6.2
   2.7.1
   Resolution: Fixed

Issue resolved by pull request 10177
[https://github.com/apache/kafka/pull/10177]

> Upgrade jetty to fix CVE-2020-27218
> ---
>
> Key: KAFKA-12324
> URL: https://issues.apache.org/jira/browse/KAFKA-12324
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: John Stacy
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 2.7.1, 2.6.2, 2.8.0
>
>
> h3. CVE-2020-27218 Detail
> In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 
> 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body 
> inflation is enabled and requests from different clients are multiplexed onto 
> a single connection, and if an attacker can send a request with a body that 
> is received entirely but not consumed by the application, then a subsequent 
> request on the same connection will see that body prepended to its body. The 
> attacker will not see any data but may inject data into the body of the 
> subsequent request.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12297) Implementation of MockProducer contradicts documentation of Callback for async send

2021-02-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12297.
---
Fix Version/s: 3.0.0
 Reviewer: Manikumar
   Resolution: Fixed

> Implementation of MockProducer contradicts documentation of Callback for 
> async send
> ---
>
> Key: KAFKA-12297
> URL: https://issues.apache.org/jira/browse/KAFKA-12297
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , unit tests
>Affects Versions: 2.7.0
>Reporter: Olaf Gottschalk
>Priority: Major
> Fix For: 3.0.0
>
>
> In Unit tests, a MockProducer is used to imitate a real producer.
> Using the errorNext(RuntimeException e) method, it is possible to indicate 
> failures.
> BUT: the asynchronous send method with a callback has clear documentation 
> of that callback interface, stating that the metadata will always be set and 
> never null.
> {{The metadata for the record that was sent (i.e. the partition and offset). 
> An empty metadata with -1 value for all fields except for topicPartition will 
> be returned if an error occurred.}}
>  
> The bug is that in MockProducer's Completion implementation the following 
> happens:
> {{if (e == null)}}
>  {{    callback.onCompletion(metadata, null);}}
>  {{else}}
>  {{    callback.onCompletion(null, e);}}
>  
> Behaving against its own documentation leads to very subtle bugs: tests that 
> implement error-condition checks with metadata != null will pass, but fail 
> horribly in real life.
>  
> A MockProducer should at all times behave exactly like the real thing and 
> adhere to the documentation of the Callback!
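> A self-contained sketch of the documented contract, using simplified stand-in 
> types rather than Kafka's real classes: on error, the callback still receives 
> non-null metadata with -1 fields and the topic-partition set.
> {code:java}
> public class CompletionSketch {
>     record Metadata(String topicPartition, long offset) {}
> 
>     interface Callback {
>         void onCompletion(Metadata metadata, RuntimeException exception);
>     }
> 
>     static void complete(Callback cb, String tp, Metadata md, RuntimeException e) {
>         if (e == null) {
>             cb.onCompletion(md, null);
>         } else {
>             // Per the documented contract: empty (-1) metadata, never null.
>             cb.onCompletion(new Metadata(tp, -1L), e);
>         }
>     }
> }
> {code}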



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-10669) ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set MaxNumOffsets field to 1

2020-11-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-10669:
-

Assignee: Manikumar

> ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set 
> MaxNumOffsets field to 1
> ---
>
> Key: KAFKA-10669
> URL: https://issues.apache.org/jira/browse/KAFKA-10669
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Blocker
> Fix For: 2.7.0
>
>
> A couple of failures were observed after KAFKA-9627: Replace ListOffset 
> request/response with automated protocol 
> ([https://github.com/apache/kafka/pull/8295])
> 1. The latest consumer fails to consume from 0.10.0.1 brokers. The below 
> system tests are failing:
>  
> kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
>  
> kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest
> 2. In some scenarios, latest consumer fails with below error when connecting 
> to a Kafka cluster which consists of newer and older (<=2.0) Kafka brokers 
>  org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to 
> write a non-default currentLeaderEpoch at version 3
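> 
> For illustration, the ignorable-field rule in Kafka's generated protocol 
> serializers works roughly as sketched below; the class, method, and version 
> cut-off here are assumptions, not the generated code itself:
> {code}
> import org.apache.kafka.common.errors.UnsupportedVersionException;
> 
> final class IgnorableFieldSketch {
>     private static final int DEFAULT_LEADER_EPOCH = -1;
> 
>     // Assumes CurrentLeaderEpoch is only serializable from version 4 onwards.
>     static void writeCurrentLeaderEpoch(int epoch, short version, boolean ignorable) {
>         if (version < 4 && epoch != DEFAULT_LEADER_EPOCH) {
>             if (!ignorable) {
>                 // The failure mode from item 2: a non-default value of a
>                 // non-ignorable field cannot be written at an old version.
>                 throw new UnsupportedVersionException(
>                     "Attempted to write a non-default currentLeaderEpoch at version " + version);
>             }
>             return; // ignorable: silently drop the field at old versions
>         }
>         // ... serialize the field for versions that support it ...
>     }
> }
> {code}
> Marking the field ignorable lets the client keep setting the epoch while 
> still talking to brokers that only understand older request versions.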



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10669) ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set MaxNumOffsets field to 1

2020-11-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-10669.
---
Resolution: Fixed

Issue resolved by pull request 9540
[https://github.com/apache/kafka/pull/9540]

> ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set 
> MaxNumOffsets field to 1
> ---
>
> Key: KAFKA-10669
> URL: https://issues.apache.org/jira/browse/KAFKA-10669
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0
>Reporter: Manikumar
>Priority: Blocker
> Fix For: 2.7.0
>
>
> A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
> request/response with automated protocol, 
> [https://github.com/apache/kafka/pull/8295]):
> 1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
> system tests are failing:
>  
> kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
>  
> kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest
> 2. In some scenarios, the latest consumer fails with the error below when 
> connecting to a Kafka cluster that contains both newer and older (<=2.0) 
> Kafka brokers:
>  org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to 
> write a non-default currentLeaderEpoch at version 3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-10669) ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set MaxNumOffsets field to 1

2020-10-31 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-10669:
--
Description: 
A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
request/response with automated protocol, 
[https://github.com/apache/kafka/pull/8295]):

1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
system tests are failing:
 
kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
 
kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest

2. In some scenarios, the latest consumer fails with the error below when 
connecting to a Kafka cluster that contains both newer and older (<=2.0) 
Kafka brokers:
 org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to write 
a non-default currentLeaderEpoch at version 3

  was:
A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
request/response with automated protocol, 
[https://github.com/apache/kafka/pull/8295]):

1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
system tests are failing:
 
kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
 system test failing for "0.10.0.1" broker
 
kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest

2. In some scenarios, the latest consumer fails with the error below when 
connecting to a Kafka cluster that contains both newer and older (<=2.0) 
Kafka brokers:
 org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to write 
a non-default currentLeaderEpoch at version 3


> ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set 
> MaxNumOffsets field to 1
> ---
>
> Key: KAFKA-10669
> URL: https://issues.apache.org/jira/browse/KAFKA-10669
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0
>Reporter: Manikumar
>Priority: Blocker
> Fix For: 2.7.0
>
>
> A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
> request/response with automated protocol, 
> [https://github.com/apache/kafka/pull/8295]):
> 1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
> system tests are failing:
>  
> kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
>  
> kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest
> 2. In some scenarios, the latest consumer fails with the error below when 
> connecting to a Kafka cluster that contains both newer and older (<=2.0) 
> Kafka brokers:
>  org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to 
> write a non-default currentLeaderEpoch at version 3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10669) ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set MaxNumOffsets field to 1

2020-10-31 Thread Manikumar (Jira)
Manikumar created KAFKA-10669:
-

 Summary: ListOffsetRequest: make CurrentLeaderEpoch field 
ignorable and set MaxNumOffsets field to 1
 Key: KAFKA-10669
 URL: https://issues.apache.org/jira/browse/KAFKA-10669
 Project: Kafka
  Issue Type: Task
Affects Versions: 2.7.0
Reporter: Manikumar
 Fix For: 2.7.0


A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
request/response with automated protocol, 
[https://github.com/apache/kafka/pull/8295]):

1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
system tests are failing:
 
kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
 system test failing for "0.10.0.1" broker
 
kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest

2. In some scenarios, the latest consumer fails with the error below when 
connecting to a Kafka cluster that contains both newer and older (<=2.0) 
Kafka brokers:
 org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to write 
a non-default currentLeaderEpoch at version 3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10592) system tests not running after python3 merge

2020-10-21 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218291#comment-17218291
 ] 

Manikumar commented on KAFKA-10592:
---

[~nizhikov] Do we also need to update the [vagrant 
image|https://github.com/apache/kafka/blob/trunk/vagrant/base.sh#L44] with the 
required python3 dependencies, and update the 
[instructions|https://github.com/apache/kafka/blob/trunk/tests/README.md#local-quickstart]
 if needed?

> system tests not running after python3 merge
> 
>
> Key: KAFKA-10592
> URL: https://issues.apache.org/jira/browse/KAFKA-10592
> Project: Kafka
>  Issue Type: Task
>  Components: system tests
>Reporter: Ron Dagostino
>Assignee: Manikumar
>Priority: Major
>
> We are seeing these errors on system tests due to the python3 merge: 
> {noformat}
> [ERROR:2020-10-08 21:03:51,341]: Failed to import 
> kafkatest.sanity_checks.test_performance_services, which may indicate a 
> broken test that cannot be loaded: ImportError: No module named server
>  [ERROR:2020-10-08 21:03:51,351]: Failed to import 
> kafkatest.benchmarks.core.benchmark_test, which may indicate a broken test 
> that cannot be loaded: ImportError: No module named server
>  [ERROR:2020-10-08 21:03:51,501]: Failed to import 
> kafkatest.tests.core.throttling_test, which may indicate a broken test that 
> cannot be loaded: ImportError: No module named server
>  [ERROR:2020-10-08 21:03:51,598]: Failed to import 
> kafkatest.tests.client.quota_test, which may indicate a broken test that 
> cannot be loaded: ImportError: No module named server
>  {noformat}
> I ran one of the system tests at the commit prior to the python3 merge 
> ([https://github.com/apache/kafka/commit/40a23cc0c2e1efa8632f59b093672221a3c03c36])
>  and it ran fine:
> [http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/2020-10-09--001.1602255415--rondagostino--rtd_just_before_python3_merge--40a23cc0c/report.html]
> I ran the exact same test file at the next commit – the python3 commit at 
> [https://github.com/apache/kafka/commit/4e65030e055104a7526e85b563a11890c61d6ddf]
>  – and it failed with the import error. The test results show no report.html 
> file because nothing ran: 
> [http://testing.confluent.io/confluent-kafka-system-test-results/?prefix=2020-10-09--001.1602251990--apache--trunk--7947c18b5/]
> Not sure when this began because I do see these tests running successfully 
> during the development process as documented in 
> https://issues.apache.org/jira/browse/KAFKA-10402 (`tests run: 684` as 
> recently as 9/20 in that ticket). But the PR build (rebased onto latest 
> trunk) showed the above import errors and only 606 tests run. I assume those 
> 4 files mentioned include 78 tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10109) kafka-acls.sh/AclCommand opens multiple AdminClients

2020-07-09 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-10109.
---
Fix Version/s: 2.7.0
   Resolution: Fixed

Issue resolved by pull request 8808
[https://github.com/apache/kafka/pull/8808]

> kafka-acls.sh/AclCommand opens multiple AdminClients
> 
>
> Key: KAFKA-10109
> URL: https://issues.apache.org/jira/browse/KAFKA-10109
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 2.7.0
>
>
> {{AclCommand.AclCommandService}} uses {{withAdminClient(opts: 
> AclCommandOptions)(f: Admin => Unit)}} to abstract the execution of an action 
> using an {{AdminClient}} instance. Unfortunately, the implementations of 
> {{addAcls()}} and {{removeAcls()}} use this method and then call 
> {{listAcls()}}, which creates a second {{AdminClient}} instance. When the 
> {{--command-config}} option has been used to specify a {{client.id}} for the 
> Admin client, the second instance fails to register an MBean, resulting in a 
> warning being logged.
> {code}
> ./bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
> config/broker_connection.conf.reproducing --add --allow-principal User:alice 
> --operation Describe --topic 'test' --resource-pattern-type prefixed
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
> patternType=PREFIXED)`: 
>   (principal=User:alice, host=*, operation=DESCRIBE, 
> permissionType=ALLOW) 
> [2020-06-03 18:43:12,190] WARN Error registering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser)
> javax.management.InstanceAlreadyExistsException: 
> kafka.admin.client:type=app-info,id=administrator_data
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.<init>(KafkaAdminClient.java:500)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:444)
>   at org.apache.kafka.clients.admin.Admin.create(Admin.java:59)
>   at 
> org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:39)
>   at 
> kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:105)
>   at 
> kafka.admin.AclCommand$AdminClientService.listAcls(AclCommand.scala:146)
>   at 
> kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1(AclCommand.scala:123)
>   at 
> kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1$adapted(AclCommand.scala:116)
>   at 
> kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:108)
>   at 
> kafka.admin.AclCommand$AdminClientService.addAcls(AclCommand.scala:116)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:78)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
> patternType=PREFIXED)`: 
>   (principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
> {code}
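> 
> As a minimal sketch (assuming the public Java {{Admin}} API; the real tool 
> code is Scala), the add and the follow-up list can share one client, so only 
> a single AppInfo MBean is registered for the configured {{client.id}}:
> {code}
> import java.util.List;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.common.acl.AclBinding;
> import org.apache.kafka.common.acl.AclBindingFilter;
> 
> final class SingleAdminAclOps {
>     static void addThenListAcls(Properties commandConfig, List<AclBinding> bindings)
>             throws Exception {
>         // One Admin instance for both steps, closed automatically afterwards.
>         try (Admin admin = Admin.create(commandConfig)) {
>             admin.createAcls(bindings).all().get();                  // add
>             admin.describeAcls(AclBindingFilter.ANY).values().get()  // list
>                  .forEach(System.out::println);
>         }
>     }
> }
> {code}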



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10214) fix flaky zookeeper_tls_test.py

2020-07-01 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-10214.
---
Fix Version/s: 2.6.0
   Resolution: Fixed

Issue resolved by pull request 8949
[https://github.com/apache/kafka/pull/8949]

> fix flaky zookeeper_tls_test.py
> ---
>
> Key: KAFKA-10214
> URL: https://issues.apache.org/jira/browse/KAFKA-10214
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.6.0
>
>
> After 
> https://github.com/apache/kafka/commit/3661f981fff2653aaf1d5ee0b6dde3410b5498db,
>  security_config is cached. Hence, later changes to the security flag cannot 
> affect the security_config used by subsequent tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-10220) NPE when describing resources

2020-07-01 Thread Manikumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149170#comment-17149170
 ] 

Manikumar commented on KAFKA-10220:
---

Looks like this was introduced in [https://github.com/apache/kafka/pull/8312]

cc [~tombentley]

> NPE when describing resources
> -
>
> Key: KAFKA-10220
> URL: https://issues.apache.org/jira/browse/KAFKA-10220
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Edoardo Comar
>Assignee: Luke Chen
>Priority: Major
>
> In the current trunk code, describing a topic from the CLI can fail with an 
> NPE in the broker on the line 
> {{resource.configurationKeys.asScala.forall(_.contains(configName))}} 
> because {{configurationKeys}} is null:
> {code}
> [2020-06-30 11:10:39,464] ERROR [Admin Manager on Broker 0]: Error processing 
> describe configs request for resource DescribeConfigsResource(resourceType=2, 
> resourceName='topic1', configurationKeys=null) (kafka.server.AdminManager)
> java.lang.NullPointerException
>   at kafka.server.AdminManager.$anonfun$describeConfigs$3(AdminManager.scala:395)
>   at kafka.server.AdminManager.$anonfun$describeConfigs$3$adapted(AdminManager.scala:393)
>   at scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:248)
>   at scala.collection.Iterator.foreach(Iterator.scala:929)
>   at scala.collection.Iterator.foreach$(Iterator.scala:929)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:247)
>   at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:104)
>   at scala.collection.TraversableLike.filter(TraversableLike.scala:259)
>   at scala.collection.TraversableLike.filter$(TraversableLike.scala:259)
>   at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
>   at kafka.server.AdminManager.createResponseConfig$1(AdminManager.scala:393)
>   at kafka.server.AdminManager.$anonfun$describeConfigs$1(AdminManager.scala:412)
>   at scala.collection.immutable.List.map(List.scala:283)
>   at kafka.server.AdminManager.describeConfigs(AdminManager.scala:386)
>   at kafka.server.KafkaApis.handleDescribeConfigsRequest(KafkaApis.scala:2595)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:165)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
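> 
> For illustration only (the real fix lives in the Scala {{AdminManager}}), a 
> null-safe version of that check treats a missing key list as "return all 
> configs"; the class and method names below are hypothetical:
> {code}
> import java.util.Collection;
> 
> final class DescribeConfigsFilter {
>     // configurationKeys == null means the request wants every config
>     // entry, so no name-based filtering should be applied.
>     static boolean includeConfig(Collection<String> configurationKeys, String configName) {
>         return configurationKeys == null || configurationKeys.contains(configName);
>     }
> }
> {code}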



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

