[GitHub] [kafka] JoeCqupt opened a new pull request #11088: MINOR: remove unnecessary judgment in method: assignReplicasToBrokersRackAware

2021-07-19 Thread GitBox


JoeCqupt opened a new pull request #11088:
URL: https://github.com/apache/kafka/pull/11088


   remove unnecessary judgment in method 
AdminUtils::assignReplicasToBrokersRackAware
   
   because replicationFactor <= numBrokers is always true at this point, the judgment is unnecessary
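
   A minimal sketch of the invariant behind this PR (hypothetical, simplified names; not the real AdminUtils code): the rack-aware assignment entry point already validates replicationFactor against the broker count, so any repeated judgment further down is dead code.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class RackAwareSketch {
       // Hypothetical reduction of AdminUtils::assignReplicasToBrokersRackAware.
       // The entry point rejects replicationFactor > numBrokers, so any later
       // check repeating that comparison can never fire.
       public static List<Integer> assignReplicas(int numBrokers, int replicationFactor) {
           if (replicationFactor > numBrokers) {
               throw new IllegalArgumentException(
                   "Replication factor: " + replicationFactor +
                   " larger than available brokers: " + numBrokers);
           }
           List<Integer> replicas = new ArrayList<>();
           for (int i = 0; i < replicationFactor; i++) {
               // Here replicationFactor <= numBrokers is guaranteed, so a second
               // "is the broker list exhausted?" judgment would be dead code.
               replicas.add(i % numBrokers);
           }
           return replicas;
       }
   }
   ```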


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ZuevKirill95 opened a new pull request #11087: MINOR: Add port in error message

2021-07-19 Thread GitBox


ZuevKirill95 opened a new pull request #11087:
URL: https://github.com/apache/kafka/pull/11087


   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] satishd commented on pull request #11033: KAFKA-12988 Asynchronous API support for RemoteLogMetadataManager add/update methods.

2021-07-19 Thread GitBox


satishd commented on pull request #11033:
URL: https://github.com/apache/kafka/pull/11033#issuecomment-883064875


   @junrao This PR is rebased with trunk, please review and let me know your 
comments.  






[GitHub] [kafka] ijuma commented on a change in pull request #11080: fix NPE when record==null in append

2021-07-19 Thread GitBox


ijuma commented on a change in pull request #11080:
URL: https://github.com/apache/kafka/pull/11080#discussion_r672784612



##
File path: 
clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
##
@@ -293,7 +293,9 @@ public static DefaultRecord readFrom(ByteBuffer buffer,
  Long logAppendTime) {
 int sizeOfBodyInBytes = ByteUtils.readVarint(buffer);
 if (buffer.remaining() < sizeOfBodyInBytes)
-return null;
+throw new InvalidRecordException("Invalid record size: expected " + sizeOfBodyInBytes +
+" bytes in record payload, but instead the buffer has only " + buffer.remaining() +
+" remaining bytes.");

Review comment:
   I think the intent here was to cover the case where an incomplete record 
is returned by the broker. However, we have broker logic to try and avoid this 
case since KIP-74:
   
   ```scala
   } else if (!hardMaxBytesLimit && readInfo.fetchedData.firstEntryIncomplete) {
     // For FetchRequest version 3, we replace incomplete message sets with an empty one as consumers can make
     // progress in such cases and don't need to report a `RecordTooLargeException`
     FetchDataInfo(readInfo.fetchedData.fetchOffsetMetadata, MemoryRecords.EMPTY)
   
   @hachikuji Do you remember if there is still a reason to return `null` here 
instead of the exception @ccding is proposing?








[GitHub] [kafka] showuon commented on pull request #11086: KAFKA-13103: add REBALANCE_IN_PROGRESS error as retriable error

2021-07-19 Thread GitBox


showuon commented on pull request #11086:
URL: https://github.com/apache/kafka/pull/11086#issuecomment-883022752


   @dajac @rajinisivaram , please take a look. Thank you.






[GitHub] [kafka] showuon commented on a change in pull request #11086: KAFKA-13103: add REBALANCE_IN_PROGRESS error as retriable error

2021-07-19 Thread GitBox


showuon commented on a change in pull request #11086:
URL: https://github.com/apache/kafka/pull/11086#discussion_r672778548



##
File path: 
clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java
##
@@ -3184,8 +3197,111 @@ public void testDeleteConsumerGroupsRetryBackoff() 
throws Exception {
 }
 }
 
+// this test is testing retriable errors and non-retriable errors in the 
new broker
 @Test
-public void testDeleteConsumerGroupsWithOlderBroker() throws Exception {
+public void testDeleteConsumerGroupsWithRetriableAndNonretriableErrors() 
throws Exception {

Review comment:
   We used to only have tests for the older broker in the deleteConsumerGroup 
test. This adds a test for the new broker.








[GitHub] [kafka] showuon commented on a change in pull request #11086: KAFKA-13103: add REBALANCE_IN_PROGRESS error as retriable error

2021-07-19 Thread GitBox


showuon commented on a change in pull request #11086:
URL: https://github.com/apache/kafka/pull/11086#discussion_r672778548



##
File path: 
clients/src/test/java/org/apache/kafka/clients/admin/KafkaAdminClientTest.java
##
@@ -3184,8 +3197,111 @@ public void testDeleteConsumerGroupsRetryBackoff() 
throws Exception {
 }
 }
 
+// this test is testing retriable errors and non-retriable errors in the 
new broker
 @Test
-public void testDeleteConsumerGroupsWithOlderBroker() throws Exception {
+public void testDeleteConsumerGroupsWithRetriableAndNonretriableErrors() 
throws Exception {

Review comment:
   We used to only have tests for older broker. Add a test for new broker.








[GitHub] [kafka] showuon opened a new pull request #11086: KAFKA-13103: add REBALANCE_IN_PROGRESS error as retriable error

2021-07-19 Thread GitBox


showuon opened a new pull request #11086:
URL: https://github.com/apache/kafka/pull/11086


   Add the `REBALANCE_IN_PROGRESS` error as a retriable error for all group API 
handlers introduced in KIP-699, and add tests for it.
   
   1. AlterConsumerGroupOffsetsHandler
   2. DeleteConsumerGroupOffsetsHandler
   3. DeleteConsumerGroupsHandler
   4. DescribeConsumerGroupsHandler
   5. ListConsumerGroupOffsetsHandler
   6. RemoveMembersFromConsumerGroupHandler
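   
   A sketch of the general pattern (hypothetical enum and names; the real KIP-699 handlers check `org.apache.kafka.common.protocol.Errors` values): classifying `REBALANCE_IN_PROGRESS` as retriable means the handler re-attempts the request instead of failing the caller's future.

   ```java
   import java.util.EnumSet;
   import java.util.Set;

   public class RetriableErrorSketch {
       // Hypothetical stand-in for Kafka's Errors enum.
       public enum Error { NONE, REBALANCE_IN_PROGRESS, COORDINATOR_LOAD_IN_PROGRESS, GROUP_ID_NOT_FOUND }

       // Errors in this set cause the admin handler to retry the request
       // rather than complete the caller's future exceptionally.
       private static final Set<Error> RETRIABLE =
           EnumSet.of(Error.REBALANCE_IN_PROGRESS, Error.COORDINATOR_LOAD_IN_PROGRESS);

       public static boolean shouldRetry(Error error) {
           return RETRIABLE.contains(error);
       }
   }
   ```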
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] ableegoldman commented on a change in pull request #10921: KAFKA-13096: Ensure queryable store providers is up to date after adding stream thread

2021-07-19 Thread GitBox


ableegoldman commented on a change in pull request #10921:
URL: https://github.com/apache/kafka/pull/10921#discussion_r672765942



##
File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
##
@@ -1081,6 +1079,7 @@ private int getNumStreamThreads(final boolean 
hasGlobalTopology) {
 } else {
 log.info("Successfully removed {} in {}ms", 
streamThread.getName(), time.milliseconds() - startMs);
 threads.remove(streamThread);
+queryableStoreProvider.removeStoreProvider(new StreamThreadStateStoreProvider(streamThread));

Review comment:
   It seems a bit weird to have to create a `new 
StreamThreadStateStoreProvider(streamThread)` just to remove an existing 
`StreamThreadStateStoreProvider` from this -- can we maybe change it so that 
`#removeStoreProvider` accepts a `StreamThread` reference, or even just a 
`String streamThreadName`? And then store a map from name to 
`StreamThreadStateStoreProvider` or something -- WDYT?
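
   A sketch of the suggested shape (hypothetical class and method names, not the actual Streams code): keying providers by thread name lets removal take just a `String`, so no throwaway `StreamThreadStateStoreProvider` has to be constructed for the lookup.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   public class StoreProviderRegistrySketch {
       // Hypothetical stand-in for StreamThreadStateStoreProvider.
       public static class Provider {
           final String threadName;
           Provider(String threadName) { this.threadName = threadName; }
       }

       // Providers keyed by thread name: removal needs only the name.
       private final Map<String, Provider> providersByThread = new ConcurrentHashMap<>();

       public void addStoreProvider(String threadName) {
           providersByThread.put(threadName, new Provider(threadName));
       }

       public void removeStoreProvider(String threadName) {
           providersByThread.remove(threadName);
       }

       public int size() { return providersByThread.size(); }
   }
   ```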








[GitHub] [kafka] ableegoldman commented on a change in pull request #10921: KAFKA-13096: Ensure queryable store providers is up to date after adding stream thread

2021-07-19 Thread GitBox


ableegoldman commented on a change in pull request #10921:
URL: https://github.com/apache/kafka/pull/10921#discussion_r672764087



##
File path: streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java
##
@@ -1081,6 +1079,7 @@ private int getNumStreamThreads(final boolean 
hasGlobalTopology) {
 } else {
 log.info("Successfully removed {} in {}ms", 
streamThread.getName(), time.milliseconds() - startMs);
 threads.remove(streamThread);
+queryableStoreProvider.removeStoreProvider(new StreamThreadStateStoreProvider(streamThread));

Review comment:
   Ah, yeah I guess it would have always had to handle `DEAD` threads since 
in the before-time (ie before we added the `add/removeStreamThread` APIs) it 
was always possible for a thread to just die when hit with an unexpected 
exception.
   
   That said, I feel a lot better about trimming a removed thread from the list 
explicitly. Don't want to build up a mass grave of mostly dead threads (well, 
dead thread store providers) that can never be garbage collected over the 
application's lifetime








[GitHub] [kafka] ableegoldman commented on a change in pull request #11083: KAFKA-13010: Retry getting tasks incase of rebalance for TaskMetadata tests

2021-07-19 Thread GitBox


ableegoldman commented on a change in pull request #11083:
URL: https://github.com/apache/kafka/pull/11083#discussion_r672762836



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/TaskMetadataIntegrationTest.java
##
@@ -158,9 +158,13 @@ public void shouldReportCorrectEndOffsetInformation() {
 }
 }
 
-private TaskMetadata getTaskMetadata(final KafkaStreams kafkaStreams) {
+private TaskMetadata getTaskMetadata(final KafkaStreams kafkaStreams) 
throws InterruptedException {
+TestUtils.waitForCondition( () -> kafkaStreams

Review comment:
   I'm sure the odds of this are super low, but to really make this 
airtight you'd need to have the `List` that you use for the test 
be the same one that you test in this condition, otherwise it's _possible_ for 
the one task to have been revoked in the split second between 
`waitForCondition()` and the new call to `metadataForLocalThreads()`. 
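
   A hypothetical test helper illustrating the point (names invented, not the real `TestUtils` API): wait until the supplier yields exactly one element, then return the very list that satisfied the condition, so a task revocation between the wait and a second accessor call cannot hand the test a different snapshot.

   ```java
   import java.util.List;
   import java.util.function.Supplier;

   public class CaptureOnceSketch {
       // Waits for the supplier to yield a single-element list and returns the
       // exact list that passed the check, instead of re-invoking the supplier
       // afterwards (which could observe a just-revoked task set).
       public static <T> List<T> waitForSingleton(Supplier<List<T>> supplier, long timeoutMs) {
           long deadline = System.currentTimeMillis() + timeoutMs;
           while (System.currentTimeMillis() <= deadline) {
               List<T> current = supplier.get();  // captured once per iteration
               if (current.size() == 1) {
                   return current;                // reuse the checked snapshot
               }
               try {
                   Thread.sleep(10);
               } catch (InterruptedException e) {
                   Thread.currentThread().interrupt();
                   break;
               }
           }
           throw new AssertionError("Condition not met within " + timeoutMs + " ms");
       }
   }
   ```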








[GitHub] [kafka] ableegoldman commented on pull request #11085: MINOR: reduce debug log spam and busy loop during shutdown

2021-07-19 Thread GitBox


ableegoldman commented on pull request #11085:
URL: https://github.com/apache/kafka/pull/11085#issuecomment-883003472


   call for review @wcarlson5 @lct45 @guozhangwang @vvcephei @cadonna 






[GitHub] [kafka] ableegoldman commented on a change in pull request #11085: MINOR: reduce debug log spam and busy loop during shutdown

2021-07-19 Thread GitBox


ableegoldman commented on a change in pull request #11085:
URL: https://github.com/apache/kafka/pull/11085#discussion_r672761316



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
##
@@ -885,8 +885,8 @@ private long pollPhase() {
 records = pollRequests(pollTime);
 } else if (state == State.PENDING_SHUTDOWN) {
 // we are only here because there's rebalance in progress,
-// just poll with zero to complete it
-records = pollRequests(Duration.ZERO);
+// just long poll to give it enough time to complete it
+records = pollRequests(pollTime);

Review comment:
   This is the main fix, see the PR description for full context. I was 
actually wondering if we shouldn't go even further and call `poll(MAX_VALUE)` 
instead, since there's really no reason to return from poll when the thread is 
shutting down but a rebalance is still in progress. Thoughts?








[GitHub] [kafka] ableegoldman commented on a change in pull request #11085: MINOR: reduce debug log spam and busy loop during shutdown

2021-07-19 Thread GitBox


ableegoldman commented on a change in pull request #11085:
URL: https://github.com/apache/kafka/pull/11085#discussion_r672760705



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
##
@@ -719,10 +719,10 @@ void runOnce() {
 
 final long pollLatency = pollPhase();
 
-// Shutdown hook could potentially be triggered and transit the thread 
state to PENDING_SHUTDOWN during #pollRequests().
-// The task manager internal states could be uninitialized if the 
state transition happens during #onPartitionsAssigned().

Review comment:
   This comment and the short-circuit `return` was a fix for an NPE from a 
year or two ago, but it turns out we actually broke this fix when we 
encapsulated everything into the `pollPhase` -- [the 
fix](https://issues.apache.org/jira/browse/KAFKA-8620) was to return in between 
returning from `poll()` and calling `addRecordsToTasks()`, since we could end 
up with uninitialized tasks/TaskManager state if the shutdown hook was 
triggered during the rebalance callback. 
   Luckily, at some point we happened to shore up the task management logic so 
that the rebalance callbacks will always proceed even if the thread has already 
been told to shut down, so we're not in any trouble here. This also means that 
technically, we don't even need to `return` here anymore -- but there's no real 
reason to continue through the loop, so I just updated the comment and left it 
as is.








[GitHub] [kafka] ccding commented on a change in pull request #11080: fix NPE when record==null in append

2021-07-19 Thread GitBox


ccding commented on a change in pull request #11080:
URL: https://github.com/apache/kafka/pull/11080#discussion_r672758944



##
File path: 
clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
##
@@ -293,7 +293,9 @@ public static DefaultRecord readFrom(ByteBuffer buffer,
  Long logAppendTime) {
 int sizeOfBodyInBytes = ByteUtils.readVarint(buffer);
 if (buffer.remaining() < sizeOfBodyInBytes)
-return null;
+throw new InvalidRecordException("Invalid record size: expected " + sizeOfBodyInBytes +
+" bytes in record payload, but instead the buffer has only " + buffer.remaining() +
+" remaining bytes.");

Review comment:
   Are you referring to the case where we haven't yet finished reading the request? 
I didn't see a retry path, but it will cause a NullPointerException at 
https://github.com/apache/kafka/blob/bfc57aa4ddcd719fc4a646c2ac09d4979c076455/core/src/main/scala/kafka/log/LogValidator.scala#L191
   
   What do you suggest I do here?









[jira] [Commented] (KAFKA-13103) Should group admin handlers consider REBALANCE_IN_PROGRESS and GROUP_AUTHORIZATION_FAILED as retryable errors?

2021-07-19 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383700#comment-17383700
 ] 

Luke Chen commented on KAFKA-13103:
---

I'll get the PR ready today (my time) in case we want this improvement to get 
into v3.0. Thanks.

> Should group admin handlers consider REBALANCE_IN_PROGRESS and 
> GROUP_AUTHORIZATION_FAILED as retryable errors?
> --
>
> Key: KAFKA-13103
> URL: https://issues.apache.org/jira/browse/KAFKA-13103
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: Luke Chen
>Priority: Major
>
> [~rajinisiva...@gmail.com] and I were discussing if we should consider 
> REBALANCE_IN_PROGRESS and GROUP_AUTHORIZATION_FAILED as retryable errors in 
> the group handlers. I think that this could make sense, especially for 
> `REBALANCE_IN_PROGRESS`. `GROUP_AUTHORIZATION_FAILED` is more debatable as it 
> means that the handler would retry until the request timeout is reached. It 
> might be harmful if the authorisation is really denied.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ableegoldman opened a new pull request #11085: MINOR: reduce debug log spam and busy loop during shutdown

2021-07-19 Thread GitBox


ableegoldman opened a new pull request #11085:
URL: https://github.com/apache/kafka/pull/11085


   While investigating some system test failures I found the Streams logs often 
got up to several GB in size, with much of it being due to these three lines 
being logged on repeat many times per ms:
   ```
   [2021-07-19 11:15:26,880] DEBUG [] Invoking poll on main Consumer 
   [2021-07-19 11:15:26,880] DEBUG [] Main Consumer poll completed in 0 ms and 
fetched 0 records 
   [2021-07-19 11:15:26,880] DEBUG [] Thread state is already PENDING_SHUTDOWN, 
skipping the run once call 
   ```
   Even though they're debug logs this is just way too much printed way too 
often, and also reveals how much cpu we're wasting just spinning in this busy 
loop doing nothing else. It seems the situation was that the thread had been 
told to shut down but was waiting for an in-progress rebalance to complete 
before it could exit the poll loop and complete its shutdown. Turns out we were 
polling with `Duration.ZERO` in this case and therefore just running through 
the loop again and again and again within a single millisecond.
   
   So, we should be passing in a larger timeout to `poll()` when the thread is 
in `PENDING_SHUTDOWN` or `PENDING_ERROR` to let it actually wait for the 
ongoing rebalance to finish. Also threw in a few miscellaneous fixes/cleanups 
that I came across on the side
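
   A hypothetical reduction of the change described above (not the actual `StreamThread` code): the only behavioral difference is the poll timeout chosen while a shutdown-time rebalance is in progress.

   ```java
   import java.time.Duration;

   public class PollPhaseSketch {
       public enum State { RUNNING, PENDING_SHUTDOWN, PENDING_ERROR }

       // Before the fix: a thread in PENDING_SHUTDOWN polled with Duration.ZERO,
       // spinning through the loop many times per millisecond while waiting for
       // the rebalance to complete.
       public static Duration pollTimeoutBefore(State state, Duration pollTime) {
           return state == State.PENDING_SHUTDOWN ? Duration.ZERO : pollTime;
       }

       // After the fix: the shutdown path reuses the configured poll time, so the
       // consumer can actually block while the in-progress rebalance completes.
       public static Duration pollTimeoutAfter(State state, Duration pollTime) {
           return pollTime;
       }
   }
   ```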






[GitHub] [kafka] showuon commented on pull request #10811: KAFKA-12598: ConfigCommand should only support communication via ZooKeeper for a reduced set of cases

2021-07-19 Thread GitBox


showuon commented on pull request #10811:
URL: https://github.com/apache/kafka/pull/10811#issuecomment-882996715


   @ijuma , thanks for the update. It looks better now! Also, thank you and 
@rondagostino for your patient reviews!
   
   For your question:
   > it looks to me that we don't have test coverage in ConfigCommandTest for 
the case where we try to update broker configs via zk when the brokers are up, 
is that right?
   
   I actually added a test for it:
   `shouldNotAllowAddBrokerQuotaConfigWhileBrokerUpUsingZookeeper` 
[here](https://github.com/apache/kafka/blob/da252c633d88ecec6f68200081ad94a3081e5f35/core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala#L905)
   
   And this test covers updating multiple broker configs using ZooKeeper 
(when no brokers are up):
   `testDynamicBrokerConfigUpdateUsingZooKeeper` 
[here](https://github.com/apache/kafka/blob/da252c633d88ecec6f68200081ad94a3081e5f35/core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala#L1245)
   
   Do you think we should add more tests for it?






[jira] [Resolved] (KAFKA-12588) Remove deprecated --zookeeper in shell commands

2021-07-19 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-12588.
-
Resolution: Fixed

> Remove deprecated --zookeeper in shell commands
> ---
>
> Key: KAFKA-12588
> URL: https://issues.apache.org/jira/browse/KAFKA-12588
> Project: Kafka
>  Issue Type: Task
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Blocker
> Fix For: 3.0.0
>
>
> At first check, there are still 4 commands with the *--zookeeper* option. 
> It should be removed in v3.0.0.
>  
> _preferredReplicaLeaderElectionCommand_
> _ConfigCommand_
> _ReassignPartitionsCommand_
> _TopicCommand_





[GitHub] [kafka] ijuma commented on pull request #10811: KAFKA-12598: ConfigCommand should only support communication via ZooKeeper for a reduced set of cases

2021-07-19 Thread GitBox


ijuma commented on pull request #10811:
URL: https://github.com/apache/kafka/pull/10811#issuecomment-882989575


   Merged to master and cherry-picking to 3.0.






[GitHub] [kafka] ijuma merged pull request #10811: KAFKA-12598: ConfigCommand should only support communication via ZooKeeper for a reduced set of cases

2021-07-19 Thread GitBox


ijuma merged pull request #10811:
URL: https://github.com/apache/kafka/pull/10811


   






[GitHub] [kafka] ijuma commented on pull request #10811: KAFKA-12598: remove zookeeper support on configCommand except security config

2021-07-19 Thread GitBox


ijuma commented on pull request #10811:
URL: https://github.com/apache/kafka/pull/10811#issuecomment-882984058


   Failures are unrelated.






[GitHub] [kafka] cmccabe commented on a change in pull request #11084: KAFKA-13100: Create a snapshot during leadership promotion

2021-07-19 Thread GitBox


cmccabe commented on a change in pull request #11084:
URL: https://github.com/apache/kafka/pull/11084#discussion_r672716954



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -651,20 +661,21 @@ public void 
handleCommit(BatchReader reader) {
 // If we are writing a new snapshot, then we need 
to keep that around;
 // otherwise, we should delete up to the current 
committed offset.
 snapshotRegistry.deleteSnapshotsUpTo(
-Math.min(offset, snapshotGeneratorManager.snapshotLastOffsetFromLog()));
+snapshotGeneratorManager.snapshotLastOffsetFromLog().orElse(offset)

Review comment:
   newline not needed here









[GitHub] [kafka] cmccabe commented on a change in pull request #11084: KAFKA-13100: Create a snapshot during leadership promotion

2021-07-19 Thread GitBox


cmccabe commented on a change in pull request #11084:
URL: https://github.com/apache/kafka/pull/11084#discussion_r672716671



##
File path: 
metadata/src/main/java/org/apache/kafka/timeline/SnapshotRegistry.java
##
@@ -178,14 +178,18 @@ public Snapshot getSnapshot(long epoch) {
 /**
  * Creates a new snapshot at the given epoch.
  *
+ * If {@code epoch} already exists and it is the last snapshot then just 
return that snapshot.

Review comment:
   With this modification, the function should be renamed something like 
`getOrCreateSnapshot`, right?
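
   A sketch of the idempotent get-or-create behavior the rename would advertise (hypothetical reduction; the real SnapshotRegistry tracks timeline snapshots, not strings):

   ```java
   import java.util.TreeMap;

   public class SnapshotRegistrySketch {
       // Snapshots keyed by epoch, in ascending order.
       private final TreeMap<Long, String> snapshots = new TreeMap<>();

       // Asking again for the epoch of the latest snapshot is idempotent:
       // the existing snapshot is returned instead of throwing.
       public String getOrCreateSnapshot(long epoch) {
           if (!snapshots.isEmpty() && snapshots.lastKey() == epoch) {
               return snapshots.get(epoch);   // reuse the last snapshot
           }
           if (!snapshots.isEmpty() && snapshots.lastKey() > epoch) {
               throw new IllegalArgumentException("Cannot create snapshot at older epoch " + epoch);
           }
           String snapshot = "snapshot@" + epoch;
           snapshots.put(epoch, snapshot);
           return snapshot;
       }
   }
   ```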








[GitHub] [kafka] jsancio commented on a change in pull request #11084: KAFKA-13100: Create a snapshot during leadership promotion

2021-07-19 Thread GitBox


jsancio commented on a change in pull request #11084:
URL: https://github.com/apache/kafka/pull/11084#discussion_r672716678



##
File path: raft/src/main/java/org/apache/kafka/raft/KafkaRaftClient.java
##
@@ -302,17 +302,13 @@ private void onUpdateLeaderHighWatermark(
 }
 
 private void updateListenersProgress(long highWatermark) {
-updateListenersProgress(listenerContexts, highWatermark);
-}
-
-private void updateListenersProgress(List 
listenerContexts, long highWatermark) {
 for (ListenerContext listenerContext : listenerContexts) {
 listenerContext.nextExpectedOffset().ifPresent(nextExpectedOffset 
-> {
 if (nextExpectedOffset < log.startOffset() && 
nextExpectedOffset < highWatermark) {
 SnapshotReader snapshot = 
latestSnapshot().orElseThrow(() -> new IllegalStateException(
 String.format(
 "Snapshot expected since next offset of %s is %s, 
log start offset is %s and high-watermark is %s",
-listenerContext.listener.getClass().getTypeName(),
+listenerContext.listenerName(),

Review comment:
   Note that all of the changes to the `raft` module are cosmetic, mainly 
improving logging.








[GitHub] [kafka] cmccabe commented on a change in pull request #11084: KAFKA-13100: Create a snapshot during leadership promotion

2021-07-19 Thread GitBox


cmccabe commented on a change in pull request #11084:
URL: https://github.com/apache/kafka/pull/11084#discussion_r672715848



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -755,11 +766,22 @@ public void handleLeaderChange(LeaderAndEpoch newLeader) {
 newEpoch + ", but we never renounced controller 
epoch " +
 curEpoch);
 }
-log.warn("Becoming active at controller epoch {}.", 
newEpoch);
+log.info(

Review comment:
   This can fit in two lines... let's try to avoid "exploded" function 
calls that look like function bodies.









[GitHub] [kafka] jsancio opened a new pull request #11084: KAFKA-13100: Create a snapshot during leadership promotion

2021-07-19 Thread GitBox


jsancio opened a new pull request #11084:
URL: https://github.com/apache/kafka/pull/11084


   This commit includes a few changes:
   
   1. The leader assumes that there is always an in-memory snapshot at the
   last committed offset. This means that the controller needs to generate
   an in-memory snapshot when getting promoted from inactive to active.
   
   2. Delete all in-memory snapshots less than the last committed offset
   when the on-disk snapshot is canceled or completes.
   
   3. The controller always starts inactive. When loading an on-disk
   snapshot the controller is always inactive. This means that we don't
   need to generate an in-memory snapshot at the offset -1 because there
   is no requirement that there exists an in-memory snapshot at the last
   committed offset when the controller is inactive.
   
   4. SnapshotRegistry's createSnapshot should allow creating a snapshot
   if the last snapshot's offset is the given offset. This allows
   for simpler client code.
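   Point 4 can be illustrated with a toy model. This is not the real
   `org.apache.kafka.timeline.SnapshotRegistry` API, only a sketch of the
   proposed idempotent behavior: creating a snapshot at the latest existing
   offset reuses it instead of raising.

```python
# Illustrative sketch only -- NOT the real SnapshotRegistry API.
# Models the proposal: create_snapshot(offset) succeeds when the latest
# snapshot already sits at the requested offset, instead of raising.
class ToySnapshotRegistry:
    def __init__(self):
        self._offsets = []  # sorted list of in-memory snapshot offsets

    def create_snapshot(self, offset):
        if self._offsets and offset < self._offsets[-1]:
            raise RuntimeError(f"cannot create snapshot at {offset}: "
                               f"latest is {self._offsets[-1]}")
        if self._offsets and offset == self._offsets[-1]:
            return offset  # idempotent: reuse the existing snapshot
        self._offsets.append(offset)
        return offset

    def delete_snapshots_up_to(self, offset):
        # Mirrors point 2: drop in-memory snapshots below the last
        # committed offset once the on-disk snapshot finishes.
        self._offsets = [o for o in self._offsets if o >= offset]
```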
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] dielhennr commented on a change in pull request #10463: KAFKA-12670: support unclean.leader.election.enable in KRaft mode

2021-07-19 Thread GitBox


dielhennr commented on a change in pull request #10463:
URL: https://github.com/apache/kafka/pull/10463#discussion_r672693205



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/ConfigurationControlManager.java
##
@@ -42,23 +42,38 @@
 import java.util.Map.Entry;
 import java.util.NoSuchElementException;
 import java.util.Objects;
+import java.util.function.Function;
 
 import static org.apache.kafka.clients.admin.AlterConfigOp.OpType.APPEND;
+import static 
org.apache.kafka.common.config.TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG;
 
 
 public class ConfigurationControlManager {
+static final ConfigResource DEFAULT_NODE_RESOURCE = new 
ConfigResource(Type.BROKER, "");
+
 private final Logger log;
+private final int nodeId;
+private final ConfigResource currentNodeResource;
 private final SnapshotRegistry snapshotRegistry;
 private final Map configDefs;
+private final TimelineHashMap emptyMap;

Review comment:
   I don't think it can be. It needs to be a TimelineHashMap to work and 
needs to receive the snapshot registry in the constructor.








[GitHub] [kafka] wcarlson5 commented on pull request #11083: KAFKA-13010: retry for tasks

2021-07-19 Thread GitBox


wcarlson5 commented on pull request #11083:
URL: https://github.com/apache/kafka/pull/11083#issuecomment-882919484


   @ableegoldman @jlprat @cadonna Can I get a review?






[GitHub] [kafka] wcarlson5 opened a new pull request #11083: KAFKA-13010: retry for tasks

2021-07-19 Thread GitBox


wcarlson5 opened a new pull request #11083:
URL: https://github.com/apache/kafka/pull/11083


   If there is a cooperative rebalance, the tasks might not be assigned to a
thread, so retrying should work within a very short timeframe
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383653#comment-17383653
 ] 

Walker Carlson commented on KAFKA-13010:


It looks like the issue only shows up between cooperative rebalances and just 
retrying will fix it. I will make a PR with a fix shortly

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12994) Migrate all Tests to New API and Remove Suppression for Deprecation Warnings related to KIP-633

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383652#comment-17383652
 ] 

Konstantine Karantasis commented on KAFKA-12994:


Downgraded the priority to Major since this is not a blocker. If we get a PR 
soon we could consider inclusion in 3.0 if the changes don't carry any risk.

> Migrate all Tests to New API and Remove Suppression for Deprecation Warnings 
> related to KIP-633
> ---
>
> Key: KAFKA-12994
> URL: https://issues.apache.org/jira/browse/KAFKA-12994
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 3.0.0
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>  Labels: kip, kip-633
> Fix For: 3.0.0
>
>
> Due to the API changes for KIP-633 a lot of deprecation warnings have been 
> generated in tests that are using the old deprecated APIs. There are a lot of 
> tests using the deprecated methods. We should absolutely migrate them all to 
> the new APIs and then get rid of all the applicable annotations for 
> suppressing the deprecation warnings.
> This applies to all Java and Scala examples and tests using the deprecated 
> APIs in the JoinWindows, SessionWindows, TimeWindows and SlidingWindows 
> classes.
>  
> This is based on the feedback from reviewers in this PR
>  
> https://github.com/apache/kafka/pull/10926





[jira] [Updated] (KAFKA-13021) Improve Javadocs for API Changes from KIP-633

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-13021:
---
Priority: Major  (was: Blocker)

> Improve Javadocs for API Changes from KIP-633
> -
>
> Key: KAFKA-13021
> URL: https://issues.apache.org/jira/browse/KAFKA-13021
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 3.0.0
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>
> There are Javadoc changes from the following PR that needs to be completed 
> prior to the 3.0 release. This Jira item is to track that work
> [https://github.com/apache/kafka/pull/10926]
>  





[jira] [Updated] (KAFKA-12994) Migrate all Tests to New API and Remove Suppression for Deprecation Warnings related to KIP-633

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12994:
---
Priority: Major  (was: Blocker)

> Migrate all Tests to New API and Remove Suppression for Deprecation Warnings 
> related to KIP-633
> ---
>
> Key: KAFKA-12994
> URL: https://issues.apache.org/jira/browse/KAFKA-12994
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 3.0.0
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>  Labels: kip, kip-633
> Fix For: 3.0.0
>
>
> Due to the API changes for KIP-633 a lot of deprecation warnings have been 
> generated in tests that are using the old deprecated APIs. There are a lot of 
> tests using the deprecated methods. We should absolutely migrate them all to 
> the new APIs and then get rid of all the applicable annotations for 
> suppressing the deprecation warnings.
> This applies to all Java and Scala examples and tests using the deprecated 
> APIs in the JoinWindows, SessionWindows, TimeWindows and SlidingWindows 
> classes.
>  
> This is based on the feedback from reviewers in this PR
>  
> https://github.com/apache/kafka/pull/10926





[jira] [Commented] (KAFKA-12644) Add Missing Class-Level Javadoc to Descendants of org.apache.kafka.common.errors.ApiException

2021-07-19 Thread Jason Gustafson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383650#comment-17383650
 ] 

Jason Gustafson commented on KAFKA-12644:
-

Downgrading priority since this is not a blocker. We can nevertheless aim for 
3.0.

> Add Missing Class-Level Javadoc to Descendants of 
> org.apache.kafka.common.errors.ApiException
> -
>
> Key: KAFKA-12644
> URL: https://issues.apache.org/jira/browse/KAFKA-12644
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, documentation
>Affects Versions: 3.0.0, 2.8.1
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Blocker
>  Labels: documentation
> Fix For: 3.0.0, 2.8.1
>
>
> I noticed that class-level Javadocs are missing from some classes in the 
> org.apache.kafka.common.errors package. This issue is for tracking the work 
> of adding the missing class-level javadocs for those Exception classes.
> https://kafka.apache.org/27/javadoc/org/apache/kafka/common/errors/package-summary.html
> https://github.com/apache/kafka/tree/trunk/clients/src/main/java/org/apache/kafka/common/errors
> Basic class-level documentation could be derived by mapping the error 
> conditions documented in the protocol
> https://kafka.apache.org/protocol#protocol_constants





[jira] [Updated] (KAFKA-12644) Add Missing Class-Level Javadoc to Descendants of org.apache.kafka.common.errors.ApiException

2021-07-19 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-12644:

Priority: Major  (was: Blocker)

> Add Missing Class-Level Javadoc to Descendants of 
> org.apache.kafka.common.errors.ApiException
> -
>
> Key: KAFKA-12644
> URL: https://issues.apache.org/jira/browse/KAFKA-12644
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, documentation
>Affects Versions: 3.0.0, 2.8.1
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>  Labels: documentation
> Fix For: 3.0.0, 2.8.1
>
>
> I noticed that class-level Javadocs are missing from some classes in the 
> org.apache.kafka.common.errors package. This issue is for tracking the work 
> of adding the missing class-level javadocs for those Exception classes.
> https://kafka.apache.org/27/javadoc/org/apache/kafka/common/errors/package-summary.html
> https://github.com/apache/kafka/tree/trunk/clients/src/main/java/org/apache/kafka/common/errors
> Basic class-level documentation could be derived by mapping the error 
> conditions documented in the protocol
> https://kafka.apache.org/protocol#protocol_constants





[GitHub] [kafka] dielhennr commented on a change in pull request #11082: KAFKA-13104: Controller should notify raft client when it resigns

2021-07-19 Thread GitBox


dielhennr commented on a change in pull request #11082:
URL: https://github.com/apache/kafka/pull/11082#discussion_r672657134



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -284,6 +284,7 @@ private Throwable handleEventException(String name,
 "Reverting to last committed offset {}.",
 this, exception.getClass().getSimpleName(), curClaimEpoch, deltaUs,
 lastCommittedOffset, exception);
+raftClient.resign(curClaimEpoch);

Review comment:
   @jsancio For 272 to execute, startProcessingTime must not be present. 
The only place I see an exception get thrown before startProcessingTime is 
defined is in the ControllerWriteEvent and it is a NotControllerException. This 
would mean that resign/renounce should not be on line 272 since it is already 
not the controller.








[jira] [Updated] (KAFKA-12622) Automate LICENSE file validation

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12622:
---
Fix Version/s: (was: 3.0.0)
   3.0.1

> Automate LICENSE file validation
> 
>
> Key: KAFKA-12622
> URL: https://issues.apache.org/jira/browse/KAFKA-12622
> Project: Kafka
>  Issue Type: Task
>Reporter: John Roesler
>Priority: Major
> Fix For: 2.8.1, 3.0.1
>
>
> In https://issues.apache.org/jira/browse/KAFKA-12602, we manually constructed 
> a correct license file for 2.8.0. This file will certainly become wrong again 
> in later releases, so we need to write some kind of script to automate a 
> check.
> It crossed my mind to automate the generation of the file, but it seems to be 
> an intractable problem, considering that each dependency may change licenses, 
> may package license files, link to them from their poms, link to them from 
> their repos, etc. I've also found multiple URLs listed with various 
> delimiters, broken links that I have to chase down, etc.
> Therefore, it seems like the solution to aim for is simply: list all the jars 
> that we package, and print out a report of each jar that's extra or missing 
> vs. the ones in our `LICENSE-binary` file.
> The check should be part of the release script at least, if not part of the 
> regular build (so we keep it up to date as dependencies change).
>  
> Here's how I do this manually right now:
> {code:java}
> // build the binary artifacts
> $ ./gradlewAll releaseTarGz
> // unpack the binary artifact 
> $ tar xf core/build/distributions/kafka_2.13-X.Y.Z.tgz
> $ cd kafka_2.13-X.Y.Z
> // list the packaged jars 
> // (you can ignore the jars for our own modules, like kafka, kafka-clients, 
> etc.)
> $ ls libs/
> // cross check the jars with the packaged LICENSE
> // make sure all dependencies are listed with the right versions
> $ cat LICENSE
> // also double check all the mentioned license files are present
> $ ls licenses {code}
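
The report the ticket asks for (each packaged jar that is extra or missing
versus `LICENSE-binary`) could be sketched roughly as follows. This is a
hypothetical helper, not part of the Kafka build; it assumes the LICENSE file
lists dependencies as plain `name-version` tokens.

{code:python}
# Hypothetical sketch of the jar-vs-LICENSE cross-check described above.
# Assumes LICENSE lists dependencies as "name-version" tokens, one per line.
import os
import re

def cross_check(libs_dir, license_path, own_prefixes=("kafka",)):
    # Jar names without the .jar suffix, ignoring our own modules
    # (kafka, kafka-clients, etc.).
    jars = {f[:-4] for f in os.listdir(libs_dir) if f.endswith(".jar")}
    jars = {j for j in jars if not j.startswith(own_prefixes)}
    with open(license_path) as fh:
        text = fh.read()
    # Tokens that look like "name-1.2.3" anywhere in the LICENSE text.
    listed = set(re.findall(r"[A-Za-z0-9_.\-]+-\d[\w.\-]*", text))
    missing = sorted(jars - listed)   # packaged but not in LICENSE
    stale = sorted(listed - jars)     # in LICENSE but no matching jar
    return missing, stale
{code}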





[jira] [Comment Edited] (KAFKA-12291) Fix Ignored Upgrade Tests in streams_upgrade_test.py: test_upgrade_downgrade_brokers

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383642#comment-17383642
 ] 

Konstantine Karantasis edited comment on KAFKA-12291 at 7/19/21, 10:35 PM:
---

[~cadonna] we are now past code freeze for 3.0. Is this an issue that should 
block the 3.0 release, or could we postpone the fix to the next one?


was (Author: kkonstantine):
[~cadonna] we are now past code freeze for 3.0. Is this an issue that should 
block the 3.0, or could we postpone the fix to the next one?

> Fix Ignored Upgrade Tests in streams_upgrade_test.py: 
> test_upgrade_downgrade_brokers
> 
>
> Key: KAFKA-12291
> URL: https://issues.apache.org/jira/browse/KAFKA-12291
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Bruno Cadonna
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Fix in the oldest branch that ignores the test and cherry-pick forward.





[jira] [Commented] (KAFKA-12291) Fix Ignored Upgrade Tests in streams_upgrade_test.py: test_upgrade_downgrade_brokers

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383642#comment-17383642
 ] 

Konstantine Karantasis commented on KAFKA-12291:


[~cadonna] we are now past code freeze for 3.0. Is this an issue that should 
block the 3.0, or could we postpone the fix to the next one?

> Fix Ignored Upgrade Tests in streams_upgrade_test.py: 
> test_upgrade_downgrade_brokers
> 
>
> Key: KAFKA-12291
> URL: https://issues.apache.org/jira/browse/KAFKA-12291
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, system tests
>Reporter: Bruno Cadonna
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Fix in the oldest branch that ignores the test and cherry-pick forward.





[jira] [Commented] (KAFKA-13021) Improve Javadocs for API Changes from KIP-633

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383636#comment-17383636
 ] 

Konstantine Karantasis commented on KAFKA-13021:


I don't see a PR linked for this issue. Unless we have a straightforward
low-risk PR which we could merge very soon, I'd recommend postponing the fix to
the next release and unblocking 3.0.

> Improve Javadocs for API Changes from KIP-633
> -
>
> Key: KAFKA-13021
> URL: https://issues.apache.org/jira/browse/KAFKA-13021
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 3.0.0
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Blocker
>
> There are Javadoc changes from the following PR that needs to be completed 
> prior to the 3.0 release. This Jira item is to track that work
> [https://github.com/apache/kafka/pull/10926]
>  





[jira] [Commented] (KAFKA-12994) Migrate all Tests to New API and Remove Suppression for Deprecation Warnings related to KIP-633

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383632#comment-17383632
 ] 

Konstantine Karantasis commented on KAFKA-12994:


Hi [~iekpo]. You mentioned on the dev mailing list that a PR was in the works 
for this issue.

https://lists.apache.org/thread.html/r25f41514ae9751f260b5773abc039dfc828b00154297f20b4a14a151%40%3Cdev.kafka.apache.org%3E

However, I don't see a link on this issue here yet. 

We are now past code freeze for 3.0. We'll need to either get a PR for this
issue, review it soon and possibly accept it as a blocker, or postpone it to the
next release. Do you have an update you could share?

> Migrate all Tests to New API and Remove Suppression for Deprecation Warnings 
> related to KIP-633
> ---
>
> Key: KAFKA-12994
> URL: https://issues.apache.org/jira/browse/KAFKA-12994
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Affects Versions: 3.0.0
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Blocker
>  Labels: kip, kip-633
> Fix For: 3.0.0
>
>
> Due to the API changes for KIP-633 a lot of deprecation warnings have been 
> generated in tests that are using the old deprecated APIs. There are a lot of 
> tests using the deprecated methods. We should absolutely migrate them all to 
> the new APIs and then get rid of all the applicable annotations for 
> suppressing the deprecation warnings.
> This applies to all Java and Scala examples and tests using the deprecated 
> APIs in the JoinWindows, SessionWindows, TimeWindows and SlidingWindows 
> classes.
>  
> This is based on the feedback from reviewers in this PR
>  
> https://github.com/apache/kafka/pull/10926





[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383620#comment-17383620
 ] 

Walker Carlson edited comment on KAFKA-13010 at 7/19/21, 10:15 PM:
---

I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually.

 

I am seeing if this is an error that will persist beyond one try.


was (Author: wcarlson5):
I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383620#comment-17383620
 ] 

Walker Carlson edited comment on KAFKA-13010 at 7/19/21, 10:08 PM:
---

I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually.


was (Author: wcarlson5):
I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually. I did find this in the
logs

 
{code:java}
java.nio.file.FileSystemException: 
/var/folders/55/w9205z6d7csbfby6mj1jpnx0gp/T/kafka-6916100724973642606/TaskMetadataTest_TaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformation/0_0/.checkpoint:
 File name too longjava.nio.file.FileSystemException: 
/var/folders/55/w9205z6d7csbfby6mj1jpnx0gp/T/kafka-6916100724973642606/TaskMetadataTest_TaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformation/0_0/.checkpoint:
 File name too long at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
 at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
 at java.nio.file.Files.readAttributes(Files.java:1737) at 
java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219) at 
java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276) at 
java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322) at 
java.nio.file.Files.walkFileTree(Files.java:2662) at 
java.nio.file.Files.walkFileTree(Files.java:2742) at 
org.apache.kafka.common.utils.Utils.delete(Utils.java:815) at 

[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383620#comment-17383620
 ] 

Walker Carlson edited comment on KAFKA-13010 at 7/19/21, 10:07 PM:
---

I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually. I did find this in the
logs

 
{code:java}
java.nio.file.FileSystemException: 
/var/folders/55/w9205z6d7csbfby6mj1jpnx0gp/T/kafka-6916100724973642606/TaskMetadataTest_TaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformation/0_0/.checkpoint:
 File name too longjava.nio.file.FileSystemException: 
/var/folders/55/w9205z6d7csbfby6mj1jpnx0gp/T/kafka-6916100724973642606/TaskMetadataTest_TaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformationTaskMetadataIntegrationTestshouldReportCorrectCommittedOffsetInformation/0_0/.checkpoint:
 File name too long
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
	at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
	at java.nio.file.Files.readAttributes(Files.java:1737)
	at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
	at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
	at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
	at java.nio.file.Files.walkFileTree(Files.java:2662)
	at java.nio.file.Files.walkFileTree(Files.java:2742)
	at org.apache.kafka.common.utils.Utils.delete(Utils.java:815)
	at org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:84)
	at org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:604)
	at 
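The `File name too long` failure above comes from a single path component exceeding the filesystem's per-component limit (NAME_MAX, commonly 255 bytes on Linux and macOS) once the test names are concatenated into the state directory name. A small self-contained sketch of that constraint (a hypothetical helper, not part of Kafka):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical guard: checks every component of a path against a 255-byte
// NAME_MAX limit -- the constraint the checkpoint write above trips over.
public class NameMaxCheck {
    static final int NAME_MAX = 255; // common Unix per-component limit, in bytes

    static boolean fitsNameMax(Path path) {
        for (Path component : path) {
            if (component.toString().getBytes(StandardCharsets.UTF_8).length > NAME_MAX) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // 27 chars repeated 20 times = 540 bytes in one directory name.
        String repeated = "TaskMetadataIntegrationTest".repeat(20);
        System.out.println(fitsNameMax(Paths.get("/tmp/short/.checkpoint"))); // true
        System.out.println(fitsNameMax(Paths.get("/tmp/" + repeated + "/x"))); // false
    }
}
```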

[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383620#comment-17383620
 ] 

Walker Carlson edited comment on KAFKA-13010 at 7/19/21, 10:05 PM:
---

I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

 

EDIT: It looks like it always would fail eventually


was (Author: wcarlson5):
I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] dielhennr commented on a change in pull request #11082: KAFKA-13104: Controller should notify raft client when it resigns

2021-07-19 Thread GitBox


dielhennr commented on a change in pull request #11082:
URL: https://github.com/apache/kafka/pull/11082#discussion_r672662886



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -284,6 +284,7 @@ private Throwable handleEventException(String name,
 "Reverting to last committed offset {}.",
 this, exception.getClass().getSimpleName(), curClaimEpoch, deltaUs,
 lastCommittedOffset, exception);
+raftClient.resign(curClaimEpoch);

Review comment:
   Forwarding write requests to the controller is a requirement for KRaft




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383620#comment-17383620
 ] 

Walker Carlson commented on KAFKA-13010:


I ran it for the commit before KIP-744 and it did fail. I am going to do a 
binary search from when the test was introduced to see where it started failing.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[GitHub] [kafka] dielhennr commented on a change in pull request #11082: KAFKA-13104: Controller should notify raft client when it resigns

2021-07-19 Thread GitBox


dielhennr commented on a change in pull request #11082:
URL: https://github.com/apache/kafka/pull/11082#discussion_r672657134



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -284,6 +284,7 @@ private Throwable handleEventException(String name,
 "Reverting to last committed offset {}.",
 this, exception.getClass().getSimpleName(), curClaimEpoch, deltaUs,
 lastCommittedOffset, exception);
+raftClient.resign(curClaimEpoch);

Review comment:
   @jsancio For 272 to execute, startProcessingTime must not be present. 
The only place I see an exception get thrown before startProcessingTime is 
defined is in the ControllerWriteEvent and it is a NotControllerException. This 
would mean that we would not have to resign/renounce since it is already not 
the controller.








[jira] [Updated] (KAFKA-7330) Kakfa 0.10.2.1 producer close method issue

2021-07-19 Thread Arikathota, Durga (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arikathota, Durga updated KAFKA-7330:
-
Attachment: disclaim.txt
image001.png

Any update on this issue, please?

Kind Regards,
Durga Prasad Arikathota
UBS WMA | Associate Director - Digital Advice Portal
1000, Harbor Blvd, Weehawken, NJ 07086
* (Skype) +1-201-352-2063




> Kakfa 0.10.2.1 producer close method issue
> --
>
> Key: KAFKA-7330
> URL: https://issues.apache.org/jira/browse/KAFKA-7330
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.2.1
> Environment: linux
>Reporter: vinay
>Priority: Major
> Attachments: disclaim.txt, image001.png
>
>
> We are creating new producer connections each time, but in separate 
> threads (a pool of 40, and all threads will be used most of the time), and 
> closing each one when the work is done. It's a Java process.
> Not always, but sometimes closing the connection takes more than 10 seconds, 
> and we observed that the particular message was never published.
> Could you please help?
>  
> Properties props = new Properties();
> props.put("bootstrap.servers", -);
> props.put("acks", "all");
> // some ssl properties
> // ---
> // ends
> KafkaProducer connection = null;
> try {
>   connection = new KafkaProducer(props);
>   msg.setTopic(topic);
>   msg.setDate(new Date());
>   connection.send(msg);
> } catch (Exception e) {
> } finally {
>   connection.close(); // sometimes it takes more than 10 secs and the above
>   // message was not sent
> }
> Producer config :
> max.in.flight.requests.per.connection = 5
> acks= all
> batch.size = 16384
> linger.ms = 0
> max.block.ms = 6
> request.timeout.ms = 5000
> retries = 0
> ---
>  
> Even so, I see the warning below most of the time:
> Failed to send SSL Close message:
> java.io.IOException: Unexpected status returned by SSLEngine.wrap, expected 
> CLOSED, received OK. Will not send close message to peer.
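The finally-block in the report above closes the producer without first draining it and without bounding the wait. A self-contained sketch of the safer shutdown pattern, using a toy buffered sender instead of the Kafka client (with the real producer the analogous calls are `producer.flush()` and `producer.close(Duration)`):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Toy stand-in for a buffering async producer (NOT the Kafka client).
// Pattern shown: flush() before close() forces buffered sends out, and the
// close wait is bounded rather than indefinite.
public class BufferedSender implements AutoCloseable {
    private final ExecutorService io = Executors.newSingleThreadExecutor();
    private final List<String> delivered = new CopyOnWriteArrayList<>();

    public Future<?> send(String message) {
        return io.submit(() -> delivered.add(message)); // async "delivery"
    }

    public void flush() {
        try {
            io.submit(() -> { }).get(); // barrier: all earlier sends are done
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void close() {
        io.shutdown();
        try {
            io.awaitTermination(10, TimeUnit.SECONDS); // bounded, not indefinite
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public List<String> delivered() {
        return delivered;
    }

    public static void main(String[] args) {
        BufferedSender sender = new BufferedSender();
        sender.send("payload");
        sender.flush(); // drain before closing so the last record is not at risk
        sender.close();
        System.out.println(sender.delivered()); // [payload]
    }
}
```

With the real `KafkaProducer`, `retries = 0` in the reported config also means any transient send failure loses the record outright; whether that applies here is not confirmed by the report.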





[GitHub] [kafka] jsancio commented on a change in pull request #11082: KAFKA-13104: Controller should notify raft client when it resigns

2021-07-19 Thread GitBox


jsancio commented on a change in pull request #11082:
URL: https://github.com/apache/kafka/pull/11082#discussion_r672645909



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/QuorumController.java
##
@@ -284,6 +284,7 @@ private Throwable handleEventException(String name,
 "Reverting to last committed offset {}.",
 this, exception.getClass().getSimpleName(), curClaimEpoch, deltaUs,
 lastCommittedOffset, exception);
+raftClient.resign(curClaimEpoch);

Review comment:
   Do we also need to resign and renounce in line 272 of this file?








[jira] [Updated] (KAFKA-12487) Sink connectors do not work with the cooperative consumer rebalance protocol

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12487:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Sink connectors do not work with the cooperative consumer rebalance protocol
> 
>
> Key: KAFKA-12487
> URL: https://issues.apache.org/jira/browse/KAFKA-12487
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.0.0, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 3.1.0
>
>
> The {{ConsumerRebalanceListener}} used by the framework to respond to 
> rebalance events in consumer groups for sink tasks is hard-coded with the 
> assumption that the consumer performs rebalances eagerly. In other words, it 
> assumes that whenever {{onPartitionsRevoked}} is called, all partitions have 
> been revoked from that consumer, and whenever {{onPartitionsAssigned}} is 
> called, the partitions passed in to that method comprise the complete set of 
> topic partitions assigned to that consumer.
> See the [WorkerSinkTask.HandleRebalance 
> class|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L669-L730]
>  for the specifics.
>  
> One issue this can cause is silently ignoring to-be-committed offsets 
> provided by sink tasks, since the framework ignores offsets provided by tasks 
> in their {{preCommit}} method if it does not believe that the consumer for 
> that task is currently assigned the topic partition for that offset. See 
> these lines in the [WorkerSinkTask::commitOffsets 
> method|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L429-L430]
>  for reference.
>  
> This may not be the only issue caused by configuring a sink connector's 
> consumer to use cooperative rebalancing. Rigorous unit and integration 
> testing should be added before claiming that the Connect framework supports 
> the use of cooperative consumers with sink connectors.
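The eager-vs-cooperative assumption described above can be illustrated with a minimal, self-contained sketch (plain strings stand in for `TopicPartition`; this is not Connect's actual code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Under the EAGER protocol, onPartitionsAssigned carries the full assignment,
// so replacing the tracked set is safe. Under the COOPERATIVE protocol it
// carries only the newly added partitions, so the same "replace" logic
// silently drops still-owned partitions -- the hard-coded assumption the
// issue describes.
public class AssignmentTracking {
    static Set<String> onAssignedEager(List<String> assigned) {
        return new HashSet<>(assigned); // full assignment: replace
    }

    static Set<String> onAssignedCooperative(Set<String> current, List<String> added) {
        Set<String> updated = new HashSet<>(current);
        updated.addAll(added); // incremental assignment: extend
        return updated;
    }

    public static void main(String[] args) {
        Set<String> owned = new HashSet<>(List.of("t-0", "t-1"));
        // A cooperative rebalance adds t-2 without revoking t-0 or t-1.
        Set<String> broken = onAssignedEager(List.of("t-2"));
        Set<String> correct = onAssignedCooperative(owned, List.of("t-2"));
        System.out.println("eager logic under cooperative protocol: " + broken);
        System.out.println("cooperative-aware logic: " + correct);
        // With the broken set, offsets pre-committed for t-0/t-1 would be
        // ignored, since the framework no longer believes they are assigned.
    }
}
```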





[jira] [Commented] (KAFKA-12487) Sink connectors do not work with the cooperative consumer rebalance protocol

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383609#comment-17383609
 ] 

Konstantine Karantasis commented on KAFKA-12487:


Changing the default consumer protocol to the cooperative protocol has been 
postponed to 3.1.0. Given that we are past the code freeze for 3.0, I'm 
postponing this issue to 3.1.0 while keeping its blocker status for that 
release. The change is not trivial, and it would be good to have enough time 
to test it before we release.

> Sink connectors do not work with the cooperative consumer rebalance protocol
> 
>
> Key: KAFKA-12487
> URL: https://issues.apache.org/jira/browse/KAFKA-12487
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.0.0, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
> Fix For: 3.0.0
>
>
> The {{ConsumerRebalanceListener}} used by the framework to respond to 
> rebalance events in consumer groups for sink tasks is hard-coded with the 
> assumption that the consumer performs rebalances eagerly. In other words, it 
> assumes that whenever {{onPartitionsRevoked}} is called, all partitions have 
> been revoked from that consumer, and whenever {{onPartitionsAssigned}} is 
> called, the partitions passed in to that method comprise the complete set of 
> topic partitions assigned to that consumer.
> See the [WorkerSinkTask.HandleRebalance 
> class|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L669-L730]
>  for the specifics.
>  
> One issue this can cause is silently ignoring to-be-committed offsets 
> provided by sink tasks, since the framework ignores offsets provided by tasks 
> in their {{preCommit}} method if it does not believe that the consumer for 
> that task is currently assigned the topic partition for that offset. See 
> these lines in the [WorkerSinkTask::commitOffsets 
> method|https://github.com/apache/kafka/blob/b96fc7892f1e885239d3290cf509e1d1bb41e7db/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L429-L430]
>  for reference.
>  
> This may not be the only issue caused by configuring a sink connector's 
> consumer to use cooperative rebalancing. Rigorous unit and integration 
> testing should be added before claiming that the Connect framework supports 
> the use of cooperative consumers with sink connectors.





[jira] [Commented] (KAFKA-13069) Add magic number to DefaultKafkaPrincipalBuilder.KafkaPrincipalSerde

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383604#comment-17383604
 ] 

Konstantine Karantasis commented on KAFKA-13069:


Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze.

> Add magic number to DefaultKafkaPrincipalBuilder.KafkaPrincipalSerde
> 
>
> Key: KAFKA-13069
> URL: https://issues.apache.org/jira/browse/KAFKA-13069
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Ron Dagostino
>Assignee: Ron Dagostino
>Priority: Critical
> Fix For: 3.0.0
>
>






[jira] [Updated] (KAFKA-13069) Add magic number to DefaultKafkaPrincipalBuilder.KafkaPrincipalSerde

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-13069:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Add magic number to DefaultKafkaPrincipalBuilder.KafkaPrincipalSerde
> 
>
> Key: KAFKA-13069
> URL: https://issues.apache.org/jira/browse/KAFKA-13069
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Ron Dagostino
>Assignee: Ron Dagostino
>Priority: Critical
> Fix For: 3.1.0
>
>






[jira] [Updated] (KAFKA-12712) KRaft: Missing controller.quorom.voters config not properly handled

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12712:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> KRaft: Missing controller.quorom.voters config not properly handled
> ---
>
> Key: KAFKA-12712
> URL: https://issues.apache.org/jira/browse/KAFKA-12712
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Magnus Edenhill
>Priority: Major
>  Labels: kip-500
> Fix For: 3.1.0
>
>
> When trying out KRaft in 2.8 I misspelled controller.quorum.voters as 
> controller.quorom.voters, but the broker did not fail to start, nor did it 
> print any warning.
>  
> Instead it raised this error:
>  
> {code:java}
> [2021-04-23 18:25:13,484] INFO Starting controller 
> (kafka.server.ControllerServer)[2021-04-23 18:25:13,484] INFO Starting 
> controller (kafka.server.ControllerServer)[2021-04-23 18:25:13,485] ERROR 
> [kafka-raft-io-thread]: Error due to 
> (kafka.raft.KafkaRaftManager$RaftIoThread)java.lang.IllegalArgumentException: 
> bound must be positive at java.util.Random.nextInt(Random.java:388) at 
> org.apache.kafka.raft.RequestManager.findReadyVoter(RequestManager.java:57) 
> at 
> org.apache.kafka.raft.KafkaRaftClient.maybeSendAnyVoterFetch(KafkaRaftClient.java:1778)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollUnattachedAsObserver(KafkaRaftClient.java:2080)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollUnattached(KafkaRaftClient.java:2061)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollCurrentState(KafkaRaftClient.java:2096)
>  at org.apache.kafka.raft.KafkaRaftClient.poll(KafkaRaftClient.java:2181) at 
> kafka.raft.KafkaRaftManager$RaftIoThread.doWork(RaftManager.scala:53) at 
> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> {code}
> which I guess eventually (1 minute later) led to this error, which terminated 
> the broker:
> {code:java}
> [2021-04-23 18:26:14,435] ERROR [BrokerLifecycleManager id=2] Shutting down 
> because we were unable to register with the controller quorum. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,435] ERROR 
> [BrokerLifecycleManager id=2] Shutting down because we were unable to 
> register with the controller quorum. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,436] INFO 
> [BrokerLifecycleManager id=2] registrationTimeout: shutting down event queue. 
> (org.apache.kafka.queue.KafkaEventQueue)[2021-04-23 18:26:14,437] INFO 
> [BrokerLifecycleManager id=2] Transitioning from STARTING to SHUTTING_DOWN. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,437] INFO 
> [broker-2-to-controller-send-thread]: Shutting down 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,438] INFO 
> [broker-2-to-controller-send-thread]: Stopped 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,438] INFO 
> [broker-2-to-controller-send-thread]: Shutdown completed 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,441] ERROR 
> [BrokerServer id=2] Fatal error during broker startup. Prepare to shutdown 
> (kafka.server.BrokerServer)java.util.concurrent.CancellationException at 
> java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2276) at 
> kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:474)
>  at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:174)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> But since the client listeners were made available prior to shutting down, 
> the broker was deemed up and operational by the (naive) monitoring tool.
> So:
>  - The broker should fail on startup on invalid/unknown config properties. I 
> understand this is technically tricky, so at least a warning log should be 
> printed.
>  - Perhaps don't create client listeners before the control plane is somewhat happy.
>  
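The "at least log a warning" suggestion above amounts to diffing the supplied config keys against the known ones. A minimal sketch (a hypothetical helper, not the broker's actual config machinery):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the warning suggested above: report supplied config
// keys that the server does not recognize, so a typo like
// "controller.quorom.voters" surfaces at startup instead of failing silently.
public class UnknownConfigCheck {
    static List<String> unknownKeys(Map<String, ?> supplied, Set<String> known) {
        return supplied.keySet().stream()
                .filter(key -> !known.contains(key))
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<String> known = Set.of("process.roles", "node.id", "controller.quorum.voters");
        Map<String, String> supplied = Map.of(
                "process.roles", "broker",
                "controller.quorom.voters", "1@localhost:9093"); // note the typo
        for (String key : unknownKeys(supplied, known)) {
            System.out.println("WARN: unknown config key: " + key);
        }
    }
}
```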





[GitHub] [kafka] dielhennr opened a new pull request #11082: KAFKA-13104: Controller should notify raft client when it resigns

2021-07-19 Thread GitBox


dielhennr opened a new pull request #11082:
URL: https://github.com/apache/kafka/pull/11082


   When the active controller encounters an event exception, it attempts to 
renounce leadership. Unfortunately, this doesn't tell the RaftClient that it 
should give up leadership, which results in an inconsistent state: the 
RaftClient remains leader while the controller is inactive.
   
   We should change this implementation so that the active controller asks the 
RaftClient to resign. 
   
   https://issues.apache.org/jira/browse/KAFKA-13104
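The fix can be sketched roughly as follows (plain Java; `RaftClient` and `Controller` here are stand-ins that only loosely mirror `org.apache.kafka.raft.RaftClient` and `QuorumController`):

```java
// Stand-in types illustrating the PR's idea: when the controller renounces
// leadership after an event exception, it must ALSO tell the raft client to
// resign, otherwise raft still believes this node is leader.
interface RaftClient {
    void resign(int epoch);
}

public class Controller {
    private final RaftClient raftClient;
    private int curClaimEpoch; // -1 means "not the active controller"

    public Controller(RaftClient raftClient, int claimEpoch) {
        this.raftClient = raftClient;
        this.curClaimEpoch = claimEpoch;
    }

    public boolean isActive() {
        return curClaimEpoch != -1;
    }

    public void handleEventException(Throwable error) {
        if (!isActive()) {
            return; // nothing to renounce
        }
        int epoch = curClaimEpoch;
        renounce();               // controller-side bookkeeping only
        raftClient.resign(epoch); // the missing call: step down in raft too
    }

    private void renounce() {
        curClaimEpoch = -1;
    }

    public static void main(String[] args) {
        Controller controller = new Controller(
                epoch -> System.out.println("raft client resigning at epoch " + epoch), 5);
        controller.handleEventException(new RuntimeException("event failed"));
        System.out.println("active after exception: " + controller.isActive()); // false
    }
}
```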






[jira] [Commented] (KAFKA-12712) KRaft: Missing controller.quorom.voters config not properly handled

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383602#comment-17383602
 ] 

Konstantine Karantasis commented on KAFKA-12712:


Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze.

> KRaft: Missing controller.quorom.voters config not properly handled
> ---
>
> Key: KAFKA-12712
> URL: https://issues.apache.org/jira/browse/KAFKA-12712
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.8.0
>Reporter: Magnus Edenhill
>Priority: Major
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> When trying out KRaft in 2.8 I misspelled controller.quorum.voters as 
> controller.quorom.voters, but the broker did not fail to start, nor did it 
> print any warning.
>  
> Instead it raised this error:
>  
> {code:java}
> [2021-04-23 18:25:13,484] INFO Starting controller 
> (kafka.server.ControllerServer)[2021-04-23 18:25:13,484] INFO Starting 
> controller (kafka.server.ControllerServer)[2021-04-23 18:25:13,485] ERROR 
> [kafka-raft-io-thread]: Error due to 
> (kafka.raft.KafkaRaftManager$RaftIoThread)java.lang.IllegalArgumentException: 
> bound must be positive at java.util.Random.nextInt(Random.java:388) at 
> org.apache.kafka.raft.RequestManager.findReadyVoter(RequestManager.java:57) 
> at 
> org.apache.kafka.raft.KafkaRaftClient.maybeSendAnyVoterFetch(KafkaRaftClient.java:1778)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollUnattachedAsObserver(KafkaRaftClient.java:2080)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollUnattached(KafkaRaftClient.java:2061)
>  at 
> org.apache.kafka.raft.KafkaRaftClient.pollCurrentState(KafkaRaftClient.java:2096)
>  at org.apache.kafka.raft.KafkaRaftClient.poll(KafkaRaftClient.java:2181) at 
> kafka.raft.KafkaRaftManager$RaftIoThread.doWork(RaftManager.scala:53) at 
> kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
> {code}
> which I guess eventually (1 minute later) led to this error, which terminated 
> the broker:
> {code:java}
> [2021-04-23 18:26:14,435] ERROR [BrokerLifecycleManager id=2] Shutting down 
> because we were unable to register with the controller quorum. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,435] ERROR 
> [BrokerLifecycleManager id=2] Shutting down because we were unable to 
> register with the controller quorum. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,436] INFO 
> [BrokerLifecycleManager id=2] registrationTimeout: shutting down event queue. 
> (org.apache.kafka.queue.KafkaEventQueue)[2021-04-23 18:26:14,437] INFO 
> [BrokerLifecycleManager id=2] Transitioning from STARTING to SHUTTING_DOWN. 
> (kafka.server.BrokerLifecycleManager)[2021-04-23 18:26:14,437] INFO 
> [broker-2-to-controller-send-thread]: Shutting down 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,438] INFO 
> [broker-2-to-controller-send-thread]: Stopped 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,438] INFO 
> [broker-2-to-controller-send-thread]: Shutdown completed 
> (kafka.server.BrokerToControllerRequestThread)[2021-04-23 18:26:14,441] ERROR 
> [BrokerServer id=2] Fatal error during broker startup. Prepare to shutdown 
> (kafka.server.BrokerServer)java.util.concurrent.CancellationException at 
> java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2276) at 
> kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:474)
>  at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:174)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> But since the client listeners were made available prior to shutting down, 
> the broker was deemed up and operational by the (naive) monitoring tool.
> So:
>  - The broker should fail on startup on invalid/unknown config properties. I 
> understand this is technically tricky, so at least a warning log should be 
> printed.
>  - Perhaps don't create client listeners before the control plane is somewhat happy.
>  





[jira] [Commented] (KAFKA-12882) Add RegisteredBrokerCount and UnfencedBrokerCount metrics to the QuorumController

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383600#comment-17383600
 ] 

Konstantine Karantasis commented on KAFKA-12882:


The KIP has been published but it's still under discussion. 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-748%3A+Add+Broker+Count+Metrics#KIP748:AddBrokerCountMetrics]

I'm resetting the Fix version since we are past the relevant deadlines. Please 
make sure to set the appropriate version once the KIP gets approved. 

> Add RegisteredBrokerCount and UnfencedBrokerCount metrics to the 
> QuorumController
> -
>
> Key: KAFKA-12882
> URL: https://issues.apache.org/jira/browse/KAFKA-12882
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan Dielhenn
>Assignee: Ryan Dielhenn
>Priority: Minor
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Adding RegisteredBrokerCount and UnfencedBrokerCount metrics to the 
> QuorumController.





[jira] [Updated] (KAFKA-12882) Add RegisteredBrokerCount and UnfencedBrokerCount metrics to the QuorumController

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12882:
---
Fix Version/s: (was: 3.0.0)

> Add RegisteredBrokerCount and UnfencedBrokerCount metrics to the 
> QuorumController
> -
>
> Key: KAFKA-12882
> URL: https://issues.apache.org/jira/browse/KAFKA-12882
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ryan Dielhenn
>Assignee: Ryan Dielhenn
>Priority: Minor
>  Labels: needs-kip
>
> Adding RegisteredBrokerCount and UnfencedBrokerCount metrics to the 
> QuorumController.





[GitHub] [kafka] dielhennr closed pull request #11081: MINOR: Typo in RaftClient Javadoc

2021-07-19 Thread GitBox


dielhennr closed pull request #11081:
URL: https://github.com/apache/kafka/pull/11081


   






[jira] [Updated] (KAFKA-12699) Streams no longer overrides the java default uncaught exception handler

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12699:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Streams no longer overrides the java default uncaught exception handler  
> -
>
> Key: KAFKA-12699
> URL: https://issues.apache.org/jira/browse/KAFKA-12699
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Walker Carlson
>Priority: Minor
> Fix For: 3.1.0
>
>
> If a user used `Thread.setUncaughtExceptionHandler()` to set the handler for 
> all threads in the runtime, Streams would override that with its own handler. 
> However, since Streams does not use the `Thread` handler anymore, it will no 
> longer do so. This can cause problems if the user does something like 
> `System.exit(1)` in the handler. 
>  
> If the old handler is used in Streams, it will still work as it used to.
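The behavior change can be demonstrated without Streams at all (plain-Java sketch; Streams' own handler mechanism is referenced only by analogy): a runtime that catches exceptions internally never triggers the `Thread`-level handler, so side effects such as `System.exit(1)` placed there stop firing.

```java
// A thread that lets an exception escape triggers its uncaught-exception
// handler; a runtime that catches errors itself (as Streams now does with
// its own handler mechanism) never does.
public class HandlerDemo {
    static volatile boolean threadHandlerFired;

    static void runEscaping() throws InterruptedException {
        threadHandlerFired = false;
        Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
        t.setUncaughtExceptionHandler((thread, error) -> threadHandlerFired = true);
        t.start();
        t.join();
        // Old behavior: the exception escapes, so the handler fires.
    }

    static void runCaught() throws InterruptedException {
        threadHandlerFired = false;
        Thread t = new Thread(() -> {
            try {
                throw new RuntimeException("boom");
            } catch (RuntimeException error) {
                // The runtime handles the error itself; the Thread-level
                // handler is never invoked.
            }
        });
        t.setUncaughtExceptionHandler((thread, error) -> threadHandlerFired = true);
        t.start();
        t.join();
        // New behavior: the handler does NOT fire.
    }

    public static void main(String[] args) throws InterruptedException {
        runEscaping();
        System.out.println("escaping -> handler fired: " + threadHandlerFired); // true
        runCaught();
        System.out.println("caught -> handler fired: " + threadHandlerFired); // false
    }
}
```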





[jira] [Comment Edited] (KAFKA-12699) Streams no longer overrides the java default uncaught exception handler

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383586#comment-17383586
 ] 

Konstantine Karantasis edited comment on KAFKA-12699 at 7/19/21, 9:08 PM:
--

Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze. 


was (Author: kkonstantine):
Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for 3.0 code freeze. 

> Streams no longer overrides the java default uncaught exception handler  
> -
>
> Key: KAFKA-12699
> URL: https://issues.apache.org/jira/browse/KAFKA-12699
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Walker Carlson
>Priority: Minor
> Fix For: 3.0.0
>
>
> If a user used `Thread.setUncaughtExceptionHandler()` to set the handler for 
> all threads in the runtime, Streams would override that with its own handler. 
> However, since Streams no longer uses the `Thread` handler, it will no longer 
> do so. This can cause problems if the user does something like 
> `System.exit(1)` in the handler. 
>  
> If using the old handler in Streams, it will still work as it used to





[jira] [Commented] (KAFKA-12699) Streams no longer overrides the java default uncaught exception handler

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383586#comment-17383586
 ] 

Konstantine Karantasis commented on KAFKA-12699:


Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze. 

> Streams no longer overrides the java default uncaught exception handler  
> -
>
> Key: KAFKA-12699
> URL: https://issues.apache.org/jira/browse/KAFKA-12699
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Walker Carlson
>Priority: Minor
> Fix For: 3.0.0
>
>
> If a user used `Thread.setUncaughtExceptionHandler()` to set the handler for 
> all threads in the runtime, Streams would override that with its own handler. 
> However, since Streams no longer uses the `Thread` handler, it will no longer 
> do so. This can cause problems if the user does something like 
> `System.exit(1)` in the handler. 
>  
> If using the old handler in Streams, it will still work as it used to





[jira] [Updated] (KAFKA-12774) kafka-streams 2.8: logging in uncaught-exceptionhandler doesn't go through log4j

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12774:
---
Fix Version/s: 3.0.1

> kafka-streams 2.8: logging in uncaught-exceptionhandler doesn't go through 
> log4j
> 
>
> Key: KAFKA-12774
> URL: https://issues.apache.org/jira/browse/KAFKA-12774
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Jørgen
>Priority: Minor
> Fix For: 3.1.0, 2.8.1, 3.0.1
>
>
> When exceptions are handled in the uncaught-exception handler introduced in 
> KS2.8, the logging of the stack trace doesn't seem to go through the logging 
> framework configured by the application (log4j2 in our case), but gets 
> printed to the console "line-by-line".
> All other exceptions logged by kafka-streams go through log4j2 and get 
> formatted properly according to the log4j2 appender (json in our case). 
> Haven't tested this on other frameworks like logback.
> Application setup:
>  * Spring-boot 2.4.5
>  * Log4j 2.13.3
>  * Slf4j 1.7.30
> Log4j2 appender config:
> {code:java}
> 
> 
>  stacktraceAsString="true" properties="true">
>  value="$${date:-MM-dd'T'HH:mm:ss.SSSZ}"/>
> 
> 
>  {code}
> Uncaught exception handler config:
> {code:java}
> kafkaStreams.setUncaughtExceptionHandler { exception ->
> logger.warn("Uncaught exception handled - replacing thread", exception) 
> // logged properly
> 
> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD
> } {code}
> Stacktrace that gets printed line-by-line:
> {code:java}
> Exception in thread "xxx-f5860dff-9a41-490e-8ab0-540b1a7f9ce4-StreamThread-2" 
> org.apache.kafka.streams.errors.StreamsException: Error encountered sending 
> record to topic xxx-repartition for task 3_2 due 
> to:org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.Exception handler choose to FAIL the processing, no more 
> records would be sent.  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:226)
>at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:196)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1365)
> at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>  at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortUndrainedBatches(RecordAccumulator.java:783)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:430)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:315)  
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
>   at java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id. {code}
>  
> It's a little bit hard to reproduce as I haven't found any way to trigger 
> uncaught-exception-handler through junit-tests.
> Link to discussion on slack: 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1620389197436700
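As a hedged illustration of the direction a fix could take (hypothetical demo code, using `java.util.logging` as a stand-in since the ticket's log4j2/slf4j setup is not reproduced here): routing the throwable through the logging framework, rather than letting the JVM print it, means the configured appender gets to format it.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class HandlerLoggingDemo {
    // Logs the uncaught exception through the logging framework so it is
    // formatted by the configured appender instead of printed line-by-line.
    static boolean logThroughFramework(Throwable exception) {
        Logger logger = Logger.getLogger("streams-demo");
        final boolean[] sawThrowable = {false};
        // Capture handler stands in for the application's configured appender.
        Handler capture = new Handler() {
            @Override public void publish(LogRecord record) {
                sawThrowable[0] = record.getThrown() != null;
            }
            @Override public void flush() { }
            @Override public void close() { }
        };
        logger.addHandler(capture);
        logger.log(Level.WARNING, "Uncaught exception handled - replacing thread", exception);
        logger.removeHandler(capture);
        return sawThrowable[0]; // the appender received the full throwable
    }

    public static void main(String[] args) {
        System.out.println(logThroughFramework(new RuntimeException("boom")));
    }
}
```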





[jira] [Updated] (KAFKA-12774) kafka-streams 2.8: logging in uncaught-exceptionhandler doesn't go through log4j

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12774:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> kafka-streams 2.8: logging in uncaught-exceptionhandler doesn't go through 
> log4j
> 
>
> Key: KAFKA-12774
> URL: https://issues.apache.org/jira/browse/KAFKA-12774
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Jørgen
>Priority: Minor
> Fix For: 3.1.0, 2.8.1
>
>
> When exceptions are handled in the uncaught-exception handler introduced in 
> KS2.8, the logging of the stack trace doesn't seem to go through the logging 
> framework configured by the application (log4j2 in our case), but gets 
> printed to the console "line-by-line".
> All other exceptions logged by kafka-streams go through log4j2 and get 
> formatted properly according to the log4j2 appender (json in our case). 
> Haven't tested this on other frameworks like logback.
> Application setup:
>  * Spring-boot 2.4.5
>  * Log4j 2.13.3
>  * Slf4j 1.7.30
> Log4j2 appender config:
> {code:java}
> 
> 
>  stacktraceAsString="true" properties="true">
>  value="$${date:-MM-dd'T'HH:mm:ss.SSSZ}"/>
> 
> 
>  {code}
> Uncaught exception handler config:
> {code:java}
> kafkaStreams.setUncaughtExceptionHandler { exception ->
> logger.warn("Uncaught exception handled - replacing thread", exception) 
> // logged properly
> 
> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD
> } {code}
> Stacktrace that gets printed line-by-line:
> {code:java}
> Exception in thread "xxx-f5860dff-9a41-490e-8ab0-540b1a7f9ce4-StreamThread-2" 
> org.apache.kafka.streams.errors.StreamsException: Error encountered sending 
> record to topic xxx-repartition for task 3_2 due 
> to:org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id.Exception handler choose to FAIL the processing, no more 
> records would be sent.  at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:226)
>at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.lambda$send$0(RecordCollectorImpl.java:196)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1365)
> at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
>  at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.abort(ProducerBatch.java:159)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.abortUndrainedBatches(RecordAccumulator.java:783)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:430)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:315)  
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
>   at java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> org.apache.kafka.common.errors.InvalidPidMappingException: The producer 
> attempted to use a producer id which is not currently assigned to its 
> transactional id. {code}
>  
> It's a little bit hard to reproduce as I haven't found any way to trigger 
> uncaught-exception-handler through junit-tests.
> Link to discussion on slack: 
> https://confluentcommunity.slack.com/archives/C48AHTCUQ/p1620389197436700





[jira] [Updated] (KAFKA-10641) ACL Command hangs with SSL as not existing with proper error code

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-10641:
---
Fix Version/s: (was: 3.0.0)

> ACL Command hangs with SSL as not existing with proper error code
> -
>
> Key: KAFKA-10641
> URL: https://issues.apache.org/jira/browse/KAFKA-10641
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
>
> When using ACL Command with SSL mode, the process is not terminating after 
> successful ACL operation.
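A pattern that is likely relevant here, sketched with a hypothetical stand-in client (not the actual AclCommand or AdminClient code): a CLI tool only terminates promptly if the client owning non-daemon I/O threads is closed, e.g. via try-with-resources.

```java
public class CloseClientDemo {
    // Stand-in for a client whose non-daemon network thread keeps the JVM alive.
    static final class FakeAdminClient implements AutoCloseable {
        final Thread ioThread = new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulate a blocked SSL/network thread
            } catch (InterruptedException ignored) {
                // close() interrupts us so the process can exit
            }
        });

        FakeAdminClient() {
            ioThread.setDaemon(false);
            ioThread.start();
        }

        @Override public void close() {
            ioThread.interrupt();
            try {
                ioThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // try-with-resources guarantees close() runs, letting the JVM terminate.
    static boolean runAclOperation() {
        try (FakeAdminClient client = new FakeAdminClient()) {
            return true; // stand-in for the actual ACL change
        }
    }

    public static void main(String[] args) {
        System.out.println(runAclOperation()); // prints "true" and exits promptly
    }
}
```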





[jira] [Commented] (KAFKA-10641) ACL Command hangs with SSL as not existing with proper error code

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383584#comment-17383584
 ] 

Konstantine Karantasis commented on KAFKA-10641:


Resetting the fix version of this ticket since there hasn't been any follow-up 
on the comments on the PR. Regarding 3.0, this issue is not a blocker and did 
not make it on time for the 3.0 code freeze. 

> ACL Command hangs with SSL as not existing with proper error code
> -
>
> Key: KAFKA-10641
> URL: https://issues.apache.org/jira/browse/KAFKA-10641
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 3.0.0
>
>
> When using ACL Command with SSL mode, the process is not terminating after 
> successful ACL operation.





[jira] [Commented] (KAFKA-10642) Expose the real stack trace if any exception occurred during SSL Client Trust Verification in extension

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383582#comment-17383582
 ] 

Konstantine Karantasis commented on KAFKA-10642:


Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze. Having said that, it's been a 
while since there was any update on the associated PR. cc [~senthilm-ms] 
[~rajinisiva...@gmail.com] on whether this is still an issue. 

> Expose the real stack trace if any exception occurred during SSL Client Trust 
> Verification in extension
> ---
>
> Key: KAFKA-10642
> URL: https://issues.apache.org/jira/browse/KAFKA-10642
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 3.0.0
>
>
> If there is any exception occurred in the custom implementation of client 
> trust verification (i.e. using security.provider), the inner exception is 
> suppressed or hidden and not logged to the log file...
>  
> Below is an example stack trace not showing actual exception from the 
> extension/custom implementation.
>  
> [2020-05-13 14:30:26,892] ERROR [KafkaServer id=423810470] Fatal error during 
> KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer)[2020-05-13 14:30:26,892] ERROR [KafkaServer 
> id=423810470] Fatal error during KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.(SocketServer.scala:753) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:394) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
>  at kafka.network.SocketServer.startup(SocketServer.scala:121) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:265) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala)Caused by: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
>  at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:69)
>  ... 18 more
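A minimal sketch of the requested behavior (hypothetical names, not the actual SslFactory code): when rethrowing a failure from the custom trust-verification extension, attach the original exception as the cause so its stack trace is neither suppressed nor hidden.

```java
public class CausePreservingDemo {
    // Wraps a failure from a custom trust-verification extension while keeping
    // the original exception as the cause, so the real stack trace survives.
    static RuntimeException wrapWithCause(Exception inner) {
        return new RuntimeException(
                "Invalid value for SSL configuration: " + inner.getMessage(), inner);
    }

    public static void main(String[] args) {
        Exception inner = new IllegalStateException("custom provider rejected certificate");
        RuntimeException wrapped = wrapWithCause(inner);
        // The real failure is reachable via the cause chain and will be
        // printed as a "Caused by:" section in the log.
        System.out.println(wrapped.getCause() == inner); // prints "true"
    }
}
```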





[jira] [Updated] (KAFKA-10642) Expose the real stack trace if any exception occurred during SSL Client Trust Verification in extension

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-10642:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Expose the real stack trace if any exception occurred during SSL Client Trust 
> Verification in extension
> ---
>
> Key: KAFKA-10642
> URL: https://issues.apache.org/jira/browse/KAFKA-10642
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 3.1.0
>
>
> If there is any exception occurred in the custom implementation of client 
> trust verification (i.e. using security.provider), the inner exception is 
> suppressed or hidden and not logged to the log file...
>  
> Below is an example stack trace not showing actual exception from the 
> extension/custom implementation.
>  
> [2020-05-13 14:30:26,892] ERROR [KafkaServer id=423810470] Fatal error during 
> KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer)[2020-05-13 14:30:26,892] ERROR [KafkaServer 
> id=423810470] Fatal error during KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.(SocketServer.scala:753) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:394) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
>  at kafka.network.SocketServer.startup(SocketServer.scala:121) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:265) at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44) at 
> kafka.Kafka$.main(Kafka.scala:84) at kafka.Kafka.main(Kafka.scala)Caused by: 
> org.apache.kafka.common.config.ConfigException: Invalid value 
> java.lang.RuntimeException: Delegated task threw Exception/Error for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings. at 
> org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
>  at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:69)
>  ... 18 more





[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383564#comment-17383564
 ] 

Josep Prat edited comment on KAFKA-13010 at 7/19/21, 9:02 PM:
--

I only mentioned because it seems to be the only other change in that PR that 
is not just moving implementations around.

If you run the test before the KIP-744 changes were introduced, does it also 
fail after, let's say 200 iterations?


was (Author: josep.prat):
I only mentioned because it seems to be the only other change in that PR that 
is not just moving implementations around.

If youbrun the test before the KIP-744 changes were introduced, does it also 
fail after, let's say 200 iterations?

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[jira] [Updated] (KAFKA-12899) Support --bootstrap-server in ReplicaVerificationTool

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12899:
---
Fix Version/s: (was: 3.0.0)

> Support --bootstrap-server in ReplicaVerificationTool
> -
>
> Key: KAFKA-12899
> URL: https://issues.apache.org/jira/browse/KAFKA-12899
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>  Labels: needs-kip
>
> kafka.tools.ReplicaVerificationTool still uses --broker-list, breaking 
> consistency with other (already migrated) tools.





[jira] [Commented] (KAFKA-12899) Support --bootstrap-server in ReplicaVerificationTool

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383577#comment-17383577
 ] 

Konstantine Karantasis commented on KAFKA-12899:


The KIP freeze passed a while ago and the associated KIP has not yet received 
the required votes: 
[https://lists.apache.org/thread.html/rebd427d5fd34acf5b378d7a904af2c804e7460b32d34ddbb3368776c%40%3Cdev.kafka.apache.org%3E]

Will reset the target version for this feature. I believe it makes sense to set 
it once the voting process concludes. 

> Support --bootstrap-server in ReplicaVerificationTool
> -
>
> Key: KAFKA-12899
> URL: https://issues.apache.org/jira/browse/KAFKA-12899
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> kafka.tools.ReplicaVerificationTool still uses --broker-list, breaking 
> consistency with other (already migrated) tools.





[jira] [Updated] (KAFKA-9803) Allow producers to recover gracefully from transaction timeouts

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-9803:
--
Fix Version/s: (was: 3.0.0)
   3.1.0

> Allow producers to recover gracefully from transaction timeouts
> ---
>
> Key: KAFKA-9803
> URL: https://issues.apache.org/jira/browse/KAFKA-9803
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer , streams
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> Transaction timeouts are detected by the transaction coordinator. When the 
> coordinator detects a timeout, it bumps the producer epoch and aborts the 
> transaction. The epoch bump is necessary in order to prevent the current 
> producer from being able to begin writing to a new transaction which was not 
> started through the coordinator.  
> Transactions may also be aborted if a new producer with the same 
> `transactional.id` starts up. Similarly this results in an epoch bump. 
> Currently the coordinator does not distinguish these two cases. Both will end 
> up as a `ProducerFencedException`, which means the producer needs to shut 
> itself down. 
> We can improve this with the new APIs from KIP-360. When the coordinator 
> times out a transaction, it can remember that fact and allow the existing 
> producer to claim the bumped epoch and continue. Roughly the logic would work 
> like this:
> 1. When a transaction times out, set lastProducerEpoch to the current epoch 
> and do the normal bump.
> 2. Any transactional requests from the old epoch result in a new 
> TRANSACTION_TIMED_OUT error code, which is propagated to the application.
> 3. The producer recovers by sending InitProducerId with the current epoch. 
> The coordinator returns the bumped epoch.
> One issue that needs to be addressed is how to handle INVALID_PRODUCER_EPOCH 
> from Produce requests. Partition leaders will not generally know if a bumped 
> epoch was the result of a timed out transaction or a fenced producer. 
> Possibly the producer can treat these errors as abortable when they come from 
> Produce responses. In that case, the user would try to abort the transaction 
> and then we can see if it was due to a timeout or otherwise.
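The three steps above can be sketched as a toy coordinator state machine (hypothetical names and simplified error handling; this is not the KIP-360 implementation):

```java
public class TimeoutRecoveryDemo {
    // Simplified coordinator state for the timeout-recovery steps above.
    static final class Coordinator {
        int currentEpoch = 5;
        int lastProducerEpoch = -1; // -1 means "no remembered timeout"

        // Step 1: on transaction timeout, remember the old epoch and bump.
        void onTransactionTimeout() {
            lastProducerEpoch = currentEpoch;
            currentEpoch++;
        }

        // Step 3: a producer still at the remembered epoch may claim the
        // bumped epoch; anything else is treated as genuinely fenced.
        int initProducerId(int producerEpoch) {
            if (producerEpoch == lastProducerEpoch) {
                return currentEpoch; // recover gracefully with the bumped epoch
            }
            throw new IllegalStateException("fenced"); // zombie producer
        }
    }

    public static void main(String[] args) {
        Coordinator coordinator = new Coordinator();
        int producerEpoch = coordinator.currentEpoch; // producer's view before the timeout
        coordinator.onTransactionTimeout();           // coordinator bumps 5 -> 6
        // Producer retries InitProducerId with its old epoch and is handed the new one.
        System.out.println(coordinator.initProducerId(producerEpoch)); // prints "6"
    }
}
```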





[jira] [Commented] (KAFKA-9803) Allow producers to recover gracefully from transaction timeouts

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383576#comment-17383576
 ] 

Konstantine Karantasis commented on KAFKA-9803:
---

Postponing to the subsequent release given that this issue is not a blocker and 
did not make it on time for the 3.0 code freeze. 

> Allow producers to recover gracefully from transaction timeouts
> ---
>
> Key: KAFKA-9803
> URL: https://issues.apache.org/jira/browse/KAFKA-9803
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer , streams
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Transaction timeouts are detected by the transaction coordinator. When the 
> coordinator detects a timeout, it bumps the producer epoch and aborts the 
> transaction. The epoch bump is necessary in order to prevent the current 
> producer from being able to begin writing to a new transaction which was not 
> started through the coordinator.  
> Transactions may also be aborted if a new producer with the same 
> `transactional.id` starts up. Similarly this results in an epoch bump. 
> Currently the coordinator does not distinguish these two cases. Both will end 
> up as a `ProducerFencedException`, which means the producer needs to shut 
> itself down. 
> We can improve this with the new APIs from KIP-360. When the coordinator 
> times out a transaction, it can remember that fact and allow the existing 
> producer to claim the bumped epoch and continue. Roughly the logic would work 
> like this:
> 1. When a transaction times out, set lastProducerEpoch to the current epoch 
> and do the normal bump.
> 2. Any transactional requests from the old epoch result in a new 
> TRANSACTION_TIMED_OUT error code, which is propagated to the application.
> 3. The producer recovers by sending InitProducerId with the current epoch. 
> The coordinator returns the bumped epoch.
> One issue that needs to be addressed is how to handle INVALID_PRODUCER_EPOCH 
> from Produce requests. Partition leaders will not generally know if a bumped 
> epoch was the result of a timed out transaction or a fenced producer. 
> Possibly the producer can treat these errors as abortable when they come from 
> Produce responses. In that case, the user would try to abort the transaction 
> and then we can see if it was due to a timeout or otherwise.





[jira] [Commented] (KAFKA-12842) Failing test: org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383575#comment-17383575
 ] 

Konstantine Karantasis commented on KAFKA-12842:


This is an infrequent failure on a third-party dependency. Not a blocker, so 
I'm postponing it to the next release. 

> Failing test: 
> org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic
> --
>
> Key: KAFKA-12842
> URL: https://issues.apache.org/jira/browse/KAFKA-12842
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: John Roesler
>Priority: Major
> Fix For: 3.1.0
>
>
> This test failed during a PR build, which means that it failed twice in a 
> row, due to the test-retry logic in PR builds.
>  
> [https://github.com/apache/kafka/pull/10744/checks?check_run_id=2643417209]
>  
> {noformat}
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at org.reflections.Store.getAllIncluding(Store.java:82)
>   at org.reflections.Store.getAll(Store.java:93)

[jira] [Assigned] (KAFKA-12842) Failing test: org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis reassigned KAFKA-12842:
--

Assignee: (was: Konstantine Karantasis)

> Failing test: 
> org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic
> --
>
> Key: KAFKA-12842
> URL: https://issues.apache.org/jira/browse/KAFKA-12842
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: John Roesler
>Priority: Major
> Fix For: 3.1.0
>
>
> This test failed during a PR build, which means that it failed twice in a 
> row, due to the test-retry logic in PR builds.
>  
> [https://github.com/apache/kafka/pull/10744/checks?check_run_id=2643417209]
>  
> {noformat}
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at org.reflections.Store.getAllIncluding(Store.java:82)
>   at org.reflections.Store.getAll(Store.java:93)
>   at org.reflections.Reflections.getSubTypesOf(Reflections.java:404)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:352)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:337)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:268)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:216)
>   at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:209)
>   at 
> org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:61)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
>   at 
> org.apache.kafka.connect.util.clusters.WorkerHandle.start(WorkerHandle.java:50)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.addWorker(EmbeddedConnectCluster.java:174)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.startConnect(EmbeddedConnectCluster.java:260)
>   at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.start(EmbeddedConnectCluster.java:141)
>   at 
> org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic(ConnectWorkerIntegrationTest.java:303)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> 

[jira] [Updated] (KAFKA-12842) Failing test: org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-12842:
---
Fix Version/s: (was: 3.0.0)
   3.1.0

> Failing test: 
> org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic
> --
>
> Key: KAFKA-12842
> URL: https://issues.apache.org/jira/browse/KAFKA-12842
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: John Roesler
>Priority: Major
> Fix For: 3.1.0
>
>
> This test failed during a PR build, which means that it failed twice in a 
> row, due to the test-retry logic in PR builds.
>  
> [https://github.com/apache/kafka/pull/10744/checks?check_run_id=2643417209]
>  

[jira] [Assigned] (KAFKA-12842) Failing test: org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis reassigned KAFKA-12842:
--

Assignee: Konstantine Karantasis

> Failing test: 
> org.apache.kafka.connect.integration.ConnectWorkerIntegrationTest.testSourceTaskNotBlockedOnShutdownWithNonExistentTopic
> --
>
> Key: KAFKA-12842
> URL: https://issues.apache.org/jira/browse/KAFKA-12842
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: John Roesler
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 3.1.0
>
>
> This test failed during a PR build, which means that it failed twice in a 
> row, due to the test-retry logic in PR builds.
>  
> [https://github.com/apache/kafka/pull/10744/checks?check_run_id=2643417209]
>  

[jira] [Commented] (KAFKA-9910) Implement new transaction timed out error

2021-07-19 Thread Konstantine Karantasis (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383573#comment-17383573
 ] 

Konstantine Karantasis commented on KAFKA-9910:
---

Postponing to the subsequent release given that this issue is not a blocker and 
did not make it in time for the 3.0 code freeze.

> Implement new transaction timed out error
> -
>
> Key: KAFKA-9910
> URL: https://issues.apache.org/jira/browse/KAFKA-9910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core
>Reporter: Boyang Chen
>Assignee: HaiyuanZhao
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9910) Implement new transaction timed out error

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-9910:
--
Fix Version/s: (was: 3.0.0)
   3.1.0

> Implement new transaction timed out error
> -
>
> Key: KAFKA-9910
> URL: https://issues.apache.org/jira/browse/KAFKA-9910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core
>Reporter: Boyang Chen
>Assignee: HaiyuanZhao
>Priority: Major
> Fix For: 3.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12803) Support reassigning partitions when in KRaft mode

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-12803.

Resolution: Fixed

Resolving as Fixed given that the PR got merged and cherry-picked to 3.0:
https://github.com/apache/kafka/pull/10753

> Support reassigning partitions when in KRaft mode
> -
>
> Key: KAFKA-12803
> URL: https://issues.apache.org/jira/browse/KAFKA-12803
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 2.8.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> Support reassigning partitions when in KRaft mode



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13090) Improve cluster snapshot integration test

2021-07-19 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-13090.

Resolution: Fixed

Resolving given that the PR got merged and cherry-picked to 3.0: 
https://github.com/apache/kafka/pull/11054

> Improve cluster snapshot integration test
> -
>
> Key: KAFKA-13090
> URL: https://issues.apache.org/jira/browse/KAFKA-13090
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>  Labels: kip-500
> Fix For: 3.0.0
>
>
> Extends the test in RaftClusterSnapshotTest to verify that both the 
> controllers and brokers are generating snapshots.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] dielhennr opened a new pull request #11081: MINOR: Typo in RaftClient Javadoc

2021-07-19 Thread GitBox


dielhennr opened a new pull request #11081:
URL: https://github.com/apache/kafka/pull/11081


   Small typo in RaftClient javadoc.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383564#comment-17383564
 ] 

Josep Prat commented on KAFKA-13010:


I only mentioned it because it seems to be the only other change in that PR that 
is not just moving implementations around.

If you run the test before the KIP-744 changes were introduced, does it also 
fail after, let's say, 200 iterations?

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ijuma commented on a change in pull request #11080: fix NPE when record==null in append

2021-07-19 Thread GitBox


ijuma commented on a change in pull request #11080:
URL: https://github.com/apache/kafka/pull/11080#discussion_r672610426



##
File path: 
clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
##
@@ -293,7 +293,9 @@ public static DefaultRecord readFrom(ByteBuffer buffer,
  Long logAppendTime) {
 int sizeOfBodyInBytes = ByteUtils.readVarint(buffer);
 if (buffer.remaining() < sizeOfBodyInBytes)
-return null;
+throw new InvalidRecordException("Invalid record size: expected " + sizeOfBodyInBytes +
+    " bytes in record payload, but instead the buffer has only " + buffer.remaining() +
+    " remaining bytes.");

Review comment:
   Is this really an exceptional case? Don't we do reads where we don't 
know exactly where the read ends and hence will trigger this path?
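The trade-off being debated above can be shown with a minimal, self-contained sketch of reading a length-prefixed payload from a `ByteBuffer`. The class and method names here are illustrative, not Kafka's actual `DefaultRecord` API, and the size prefix is a plain int rather than Kafka's varint:

```java
import java.nio.ByteBuffer;

// Two ways to handle a size prefix that claims more bytes than the buffer holds:
// signal "incomplete read" with null, or treat it as corruption and throw.
public class SizePrefixedRead {

    // Lenient: returns the payload if fully present, or null if the buffer is cut short.
    public static byte[] readLenient(ByteBuffer buffer) {
        int size = buffer.getInt();          // size prefix (Kafka uses a varint here)
        if (buffer.remaining() < size)
            return null;                     // caller treats this as a partial read
        byte[] payload = new byte[size];
        buffer.get(payload);
        return payload;
    }

    // Strict: the same check, but a short buffer is reported as an error.
    public static byte[] readStrict(ByteBuffer buffer) {
        int size = buffer.getInt();
        if (buffer.remaining() < size)
            throw new IllegalStateException("expected " + size
                    + " bytes but only " + buffer.remaining() + " remain");
        byte[] payload = new byte[size];
        buffer.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        // 4-byte size prefix of 8, but only 3 payload bytes follow.
        ByteBuffer truncated = ByteBuffer.allocate(7).putInt(8).put(new byte[3]);
        truncated.flip();
        System.out.println(readLenient(truncated) == null); // true
    }
}
```

The reviewer's question is exactly which of these two behaviors fits: if callers routinely issue reads that may stop mid-record, the lenient null return is the expected path rather than an exceptional one.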




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383560#comment-17383560
 ] 

Walker Carlson commented on KAFKA-13010:


This doesn't use the StreamsMetadata, so I don't think that would be related.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383557#comment-17383557
 ] 

Josep Prat commented on KAFKA-13010:


The only other difference left now is in StreamsMetadataImpl, where the 
immutable collections were created within the getters instead of the 
constructor. A diff between StreamsMetadataImpl and the deprecated 
StreamsMetadata class will show the places where it changed.

I guess it's another long shot

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383553#comment-17383553
 ] 

Walker Carlson commented on KAFKA-13010:


It took about 126 runs, but it still failed.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13105) Expose a method in KafkaConfig to write the configs to a logger

2021-07-19 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383550#comment-17383550
 ] 

Ismael Juma commented on KAFKA-13105:
-

I think we should make this Jira about including dynamic configs when we log 
broker configs in KRaft mode.

> Expose a method in KafkaConfig to write the configs to a logger
> ---
>
> Key: KAFKA-13105
> URL: https://issues.apache.org/jira/browse/KAFKA-13105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Priority: Minor
>
> We should expose a method in KafkaConfig to write the configs to a logger. 
> Currently there is no good way to write them out except creating a new 
> KafkaConfig object with doLog = true, which is unintuitive.
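The workaround described above can be sketched as a toy config class: the only way to get the values logged is to construct a fresh object with the logging flag set. This is a hypothetical miniature, not Kafka's real (Scala) `KafkaConfig`:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy stand-in for a config class whose only logging hook is a constructor flag.
public class ToyConfig {
    private final Map<String, String> values;

    public ToyConfig(Map<String, String> originals, boolean doLog) {
        this.values = new TreeMap<>(originals); // sorted for stable log output
        if (doLog) {
            // The real class would write to a logger; stdout keeps the toy runnable.
            values.forEach((k, v) -> System.out.println(k + " = " + v));
        }
    }

    public String get(String key) {
        return values.get(key);
    }
}
```

The Jira's complaint is that re-running the constructor (`new ToyConfig(existing, true)` in this sketch) purely for its logging side effect hides the intent; a dedicated method such as a hypothetical `logAll()` would express it directly.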



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383537#comment-17383537
 ] 

Josep Prat edited comment on KAFKA-13010 at 7/19/21, 7:40 PM:
--

If this is the reason why we are seeing this bug, replacing line 60 in 
ThreadMetadataImpl with the following might cause the test to not fail:
{code:java}
this.producerClientIds = producerClientIds;
{code}
As I can't reproduce the test locally would you be able to try this 
[~wcarlson5] ?


was (Author: josep.prat):
If this is the reason why we are seeing this bug, by replacing line 60 in 
ThreadMetadataImpl the following might cause the test to not fail:
{code:java}
this.producerClientIds = producerClientIds;
{code}
As I can't reproduce the test locally would you be able to try this?

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383537#comment-17383537
 ] 

Josep Prat commented on KAFKA-13010:


If this is the reason why we are seeing this bug, replacing line 60 in 
ThreadMetadataImpl with the following might cause the test to not fail:
{code:java}
this.producerClientIds = producerClientIds;
{code}
As I can't reproduce the test locally would you be able to try this?
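The distinction behind the suggested one-line change can be sketched as two constructor styles: wrapping the caller's collection (later mutations by the caller show through the metadata object) versus copying it (the metadata is frozen at construction). The class and field names below mirror the thread but are illustrative, not the actual ThreadMetadataImpl code:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

// Sketch of the two constructor styles under discussion.
public class ThreadMetadataSketch {
    private final Collection<String> wrapped; // unmodifiable view over the caller's collection
    private final Collection<String> copied;  // snapshot taken at construction time

    public ThreadMetadataSketch(Collection<String> producerClientIds) {
        // Style A: wrap the caller's collection -- the view tracks later mutations.
        this.wrapped = Collections.unmodifiableCollection(producerClientIds);
        // Style B: copy it -- the metadata object no longer sees caller mutations.
        this.copied = new ArrayList<>(producerClientIds);
    }

    public Collection<String> wrapped() { return wrapped; }
    public Collection<String> copied() { return copied; }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>(List.of("producer-1"));
        ThreadMetadataSketch m = new ThreadMetadataSketch(ids);
        ids.add("producer-2");                  // caller mutates after construction
        System.out.println(m.wrapped().size()); // 2 -- the view sees the new element
        System.out.println(m.copied().size());  // 1 -- the snapshot does not
    }
}
```

If the flaky assertion depends on metadata observed at a fixed point in time, which of these two styles the constructor uses determines whether a concurrently mutated source collection can change what the test sees.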

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[jira] [Updated] (KAFKA-13105) Expose a method in KafkaConfig to write the configs to a logger

2021-07-19 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe updated KAFKA-13105:
-
Issue Type: Improvement  (was: Bug)

> Expose a method in KafkaConfig to write the configs to a logger
> ---
>
> Key: KAFKA-13105
> URL: https://issues.apache.org/jira/browse/KAFKA-13105
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Priority: Minor
>
> We should expose a method in KafkaConfig to write the configs to a logger. 
> Currently there is no good way to write them out except by creating a new 
> KafkaConfig object with doLog = true, which is unintuitive.





[GitHub] [kafka] cmccabe merged pull request #11067: MINOR: log broker configs in KRaft mode

2021-07-19 Thread GitBox


cmccabe merged pull request #11067:
URL: https://github.com/apache/kafka/pull/11067


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] cmccabe commented on a change in pull request #11067: MINOR: log broker configs in KRaft mode

2021-07-19 Thread GitBox


cmccabe commented on a change in pull request #11067:
URL: https://github.com/apache/kafka/pull/11067#discussion_r672574835



##
File path: core/src/main/scala/kafka/server/BrokerServer.scala
##
@@ -389,6 +389,9 @@ class BrokerServer(
   // a potentially lengthy recovery-from-unclean-shutdown operation here, 
if required.
   metadataListener.startPublishing(metadataPublisher).get()
 
+  // Log static broker configurations.
+  new KafkaConfig(config.originals(), true)

Review comment:
   Filed https://issues.apache.org/jira/browse/KAFKA-13105








[jira] [Created] (KAFKA-13105) Expose a method in KafkaConfig to write the configs to a logger

2021-07-19 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-13105:


 Summary: Expose a method in KafkaConfig to write the configs to a 
logger
 Key: KAFKA-13105
 URL: https://issues.apache.org/jira/browse/KAFKA-13105
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe


We should expose a method in KafkaConfig to write the configs to a logger. 
Currently there is no good way to write them out except by creating a new 
KafkaConfig object with doLog = true, which is unintuitive.
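As a generic sketch of the proposed improvement (hypothetical class and method names, not Kafka's actual KafkaConfig API): an explicit logging method lets callers emit the configs at any time, instead of constructing a throwaway config object purely for its constructor side effect.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Consumer;

public class ConfigSketch {
    private final Map<String, Object> values;

    public ConfigSketch(Map<String, Object> values, boolean doLog) {
        this.values = new TreeMap<>(values);
        // Current pattern: logging happens only as a constructor side
        // effect, gated by a flag.
        if (doLog) {
            logAll(System.out::println);
        }
    }

    // Proposed improvement: an explicit method, callable at any time,
    // so callers need not build a new config object just to log.
    public void logAll(Consumer<String> logger) {
        values.forEach((k, v) -> logger.accept(k + " = " + v));
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("broker.id", 1);
        ConfigSketch cfg = new ConfigSketch(props, false);
        cfg.logAll(System.out::println); // prints "broker.id = 1"
    }
}
```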





[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383533#comment-17383533
 ] 

Walker Carlson commented on KAFKA-13010:


That sounds likely, as the ThreadMetadata is retrieved using 
`metadataForLocalThreads()`.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383531#comment-17383531
 ] 

Josep Prat commented on KAFKA-13010:


One thing modified under that PR that might cause race conditions is that 
StreamsMetadataImpl now saves all collections as immutable ones during 
creation instead of doing so inside the getters. Similarly, ThreadMetadataImpl 
now saves producerClientIds as an immutable collection within the constructor.

However, TaskMetadataImpl behaves the same in regard to immutable collections. 
I doubt it is related, but it's worth a shot.
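A minimal illustration of the behavioral difference being described (invented names, not the real StreamsMetadataImpl): a getter-time copy reflects the collection's current contents on every call, while a constructor-time copy freezes whatever was present at creation.

```java
import java.util.ArrayList;
import java.util.List;

public class CopyTimingDemo {
    private final List<String> live;      // shared, possibly mutated later
    private final List<String> snapshot;  // frozen at construction

    CopyTimingDemo(List<String> tasks) {
        this.live = tasks;
        this.snapshot = List.copyOf(tasks); // constructor-time copy
    }

    List<String> tasksViaGetterCopy() {
        return List.copyOf(live);           // getter-time copy
    }

    List<String> tasksViaSnapshot() {
        return snapshot;
    }

    public static void main(String[] args) {
        List<String> tasks = new ArrayList<>(List.of("0_0"));
        CopyTimingDemo meta = new CopyTimingDemo(tasks);
        tasks.add("0_1"); // e.g. another task is assigned after creation

        System.out.println(meta.tasksViaGetterCopy().size()); // prints 2
        System.out.println(meta.tasksViaSnapshot().size());   // prints 1
    }
}
```

If the test reads metadata created before the second task was assigned, the snapshot variant would report only one task, which is consistent with the observed failure mode.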

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}





[GitHub] [kafka] ccding commented on pull request #11080: fix NPE when record==null in append

2021-07-19 Thread GitBox


ccding commented on pull request #11080:
URL: https://github.com/apache/kafka/pull/11080#issuecomment-882800476


   This PR is ready for review






[jira] [Commented] (KAFKA-13010) Flaky test org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()

2021-07-19 Thread Walker Carlson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17383516#comment-17383516
 ] 

Walker Carlson commented on KAFKA-13010:


That does seem to be the case. I think somehow those "existing active tasks" 
are getting excluded from the `activeTasks()` list in the `ThreadMetadata`.

> Flaky test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()
> ---
>
> Key: KAFKA-13010
> URL: https://issues.apache.org/jira/browse/KAFKA-13010
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Walker Carlson
>Priority: Major
>  Labels: flaky-test
> Attachments: 
> TaskMetadataIntegrationTest#shouldReportCorrectEndOffsetInformation.rtf
>
>
> Integration test {{test 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation()}}
>  sometimes fails with
> {code:java}
> java.lang.AssertionError: only one task
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.getTaskMetadata(TaskMetadataIntegrationTest.java:162)
>   at 
> org.apache.kafka.streams.integration.TaskMetadataIntegrationTest.shouldReportCorrectCommittedOffsetInformation(TaskMetadataIntegrationTest.java:117)
> {code}




