Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #38

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10021: Changed Kafka backing stores to use shared admin 
client to get end offsets and create topics (#9780)


--
[...truncated 3.11 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #522

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unused parameters in functions. (#10035)


--
[...truncated 3.56 MB...]

ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren() STARTED

ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren() PASSED

ZooKeeperClientTest > testSetDataExistingZNode() STARTED

ZooKeeperClientTest > testSetDataExistingZNode() PASSED

ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChangeNotTriggered() 
STARTED

ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChangeNotTriggered() 
PASSED

ZooKeeperClientTest > testMixedPipeline() STARTED

ZooKeeperClientTest > testMixedPipeline() PASSED

ZooKeeperClientTest > testGetDataExistingZNode() STARTED

ZooKeeperClientTest > testGetDataExistingZNode() PASSED

ZooKeeperClientTest > testDeleteExistingZNode() STARTED

ZooKeeperClientTest > testDeleteExistingZNode() PASSED

ZooKeeperClientTest > testSessionExpiry() STARTED

ZooKeeperClientTest > testSessionExpiry() PASSED

ZooKeeperClientTest > testSetDataNonExistentZNode() STARTED

ZooKeeperClientTest > testSetDataNonExistentZNode() PASSED

ZooKeeperClientTest > testConnectionViaNettyClient() STARTED

ZooKeeperClientTest > testConnectionViaNettyClient() PASSED

ZooKeeperClientTest > testDeleteNonExistentZNode() STARTED

ZooKeeperClientTest > testDeleteNonExistentZNode() PASSED

ZooKeeperClientTest > testExistsExistingZNode() STARTED

ZooKeeperClientTest > testExistsExistingZNode() PASSED

ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() STARTED

ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() PASSED

ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() STARTED

ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() PASSED

ZooKeeperClientTest > testGetAclNonExistentZNode() STARTED

ZooKeeperClientTest > testGetAclNonExistentZNode() PASSED

ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() STARTED

ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() PASSED

ResourceTypeTest > testJavaConversions() STARTED

ResourceTypeTest > testJavaConversions() PASSED

ResourceTypeTest > testFromString() STARTED

ResourceTypeTest > testFromString() PASSED

OperationTest > testJavaConversions() STARTED

OperationTest > testJavaConversions() PASSED

DelegationTokenManagerTest > testPeriodicTokenExpiry() STARTED

DelegationTokenManagerTest > testPeriodicTokenExpiry() PASSED

DelegationTokenManagerTest > testTokenRequestsWithDelegationTokenDisabled() 
STARTED

DelegationTokenManagerTest > testTokenRequestsWithDelegationTokenDisabled() 
PASSED

DelegationTokenManagerTest > testDescribeToken() STARTED

DelegationTokenManagerTest > testDescribeToken() PASSED

DelegationTokenManagerTest > testCreateToken() STARTED

DelegationTokenManagerTest > testCreateToken() PASSED

DelegationTokenManagerTest > testExpireToken() STARTED

DelegationTokenManagerTest > testExpireToken() PASSED

DelegationTokenManagerTest > testRenewToken() STARTED

DelegationTokenManagerTest > testRenewToken() PASSED

DelegationTokenManagerTest > testRemoveTokenHmac() STARTED

DelegationTokenManagerTest > testRemoveTokenHmac() PASSED

AclAuthorizerWithZkSaslTest > testAclUpdateWithSessionExpiration() STARTED

AclAuthorizerWithZkSaslTest > testAclUpdateWithSessionExpiration() PASSED

AclAuthorizerWithZkSaslTest > testAclUpdateWithAuthFailure() STARTED

AclAuthorizerWithZkSaslTest > testAclUpdateWithAuthFailure() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@5e2d7efc, value = [B@7349e468), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@5e2d7efc, value = [B@7349e468), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@3894889a, value = [B@4c24144), properties=Map(print.key -> true, print.value 
-> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #468

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove unused parameters in functions. (#10035)


--
[...truncated 3.57 MB...]

TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData() PASSED

TransactionsTest > testSendOffsetsToTransactionTimeout() STARTED

TransactionsTest > testSendOffsetsToTransactionTimeout() PASSED

TransactionsTest > testFailureToFenceEpoch() STARTED

TransactionsTest > testFailureToFenceEpoch() PASSED

TransactionsTest > testFencingOnSend() STARTED

TransactionsTest > testFencingOnSend() PASSED

TransactionsTest > testFencingOnCommit() STARTED

TransactionsTest > testFencingOnCommit() PASSED

TransactionsTest > testAbortTransactionTimeout() STARTED

TransactionsTest > testAbortTransactionTimeout() PASSED

TransactionsTest > testMultipleMarkersOneLeader() STARTED

TransactionsTest > testMultipleMarkersOneLeader() PASSED

TransactionsTest > testCommitTransactionTimeout() STARTED

TransactionsTest > testCommitTransactionTimeout() PASSED

SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess() STARTED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess() PASSED

SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure() 
PASSED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure() 
PASSED

SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure() 
PASSED

UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota() STARTED

UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota() PASSED

UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled() STARTED

UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled() PASSED

UserClientIdQuotaTest > testThrottledProducerConsumer() STARTED

UserClientIdQuotaTest > testThrottledProducerConsumer() PASSED

UserClientIdQuotaTest > testQuotaOverrideDelete() STARTED

UserClientIdQuotaTest > testQuotaOverrideDelete() PASSED

UserClientIdQuotaTest > testThrottledRequest() STARTED

UserClientIdQuotaTest > testThrottledRequest() PASSED

ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED

ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED

ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED

ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED

ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED

ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED

ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED

ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED

ZooKeeperClientTest > testConnection() STARTED

ZooKeeperClientTest > testConnection() PASSED

ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED

ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED

ZooKeeperClientTest > testGetAclExistingZNode() STARTED

ZooKeeperClientTest > testGetAclExistingZNode() PASSED

ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED

ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED

ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED

ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED

ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED

ZooKeeperClientTest > testSetAclNonExistentZNode() PASSED

ZooKeeperClientTest > testConnectionLossRequestTermination() STARTED

ZooKeeperClientTest > testConnectionLossRequestTermination() PASSED

ZooKeeperClientTest > testExistsNonExistentZNode() STARTED

ZooKeeperClientTest > testExistsNonExistentZNode() PASSED

ZooKeeperClientTest > testGetDataNonExistentZNode() STARTED

ZooKeeperClientTest > 

Re: [DISCUSS] KIP-706: Add method "Producer#produce" to return CompletionStage instead of Future

2021-02-09 Thread Chia-Ping Tsai
Ping for more discussion :)

On 2021/01/31 05:39:17, Chia-Ping Tsai  wrote: 
> It seems to me changing the input type might complicate the migration 
> from the deprecated send method to the new API.
> 
> Personally, I prefer to introduce an interface called “SendRecord” to replace 
> ProducerRecord. Hence, the new API/classes are shown below.
> 
> 1. CompletionStage send(SendRecord)
> 2. class ProducerRecord implement SendRecord
> 3. Introduce builder pattern for SendRecord
> 
> That brings the following benefits.
> 
> 1. Kafka users who use neither the return type nor the callback do not need 
> to change code even after we remove the deprecated send methods. (Of course, 
> they still need to recompile against the new Kafka.)
> 
> 2. Kafka users who need a Future can easily migrate to the new API by regex 
> replacement. (Cast ProducerRecord to SendRecord and add toCompletableFuture.)
> 
> 3. It is easy to support topic ids in the future. We can add new methods to 
> the SendRecord builder. For example:
> 
> Builder topicName(String)
> Builder topicId(UUID)
> 
> 4. The builder pattern can make code more readable, especially since 
> ProducerRecord has a lot of fields which can be defined by users.
> —
> Chia-Ping
> 
> On 2021/01/30 22:50:36 Ismael Juma wrote:
> > Another thing to think about: the consumer api currently has
> > `subscribe(String|Pattern)` and a number of methods that accept
> > `TopicPartition`. A similar approach could be used for the Consumer to work
> > with topic ids or topic names. The consumer side also has to support
> > regexes so it probably makes sense to have a separate interface.
> > 
> > Ismael
> > 
> > On Sat, Jan 30, 2021 at 2:40 PM Ismael Juma  wrote:
> > 
> > > I think this is a promising idea. I'd personally avoid the overload and
> > > simply have a `Topic` type that implements `SendTarget`. It's a mix of 
> > > both
> > > proposals: strongly typed, no overloads and general class names that
> > > implement `SendTarget`.
> > >
> > > Ismael
> > >
> > > On Sat, Jan 30, 2021 at 2:22 PM Jason Gustafson 
> > > wrote:
> > >
> > >> Giving this a little more thought, I imagine sending to a topic is the
> > >> most
> > >> common case, so maybe it's an overload worth having. Also, if 
> > >> `SendTarget`
> > >> is just a marker interface, we could let `TopicPartition` implement it
> > >> directly. Then we have:
> > >>
> > >> interface SendTarget;
> > >> class TopicPartition implements SendTarget;
> > >>
> > >> CompletionStage send(String topic, Record record);
> > >> CompletionStage send(SendTarget target, Record record);
> > >>
> > >> The `SendTarget` would give us a lot of flexibility in the future. It
> > >> would
> > >> give us a couple options for topic ids. We could either have an overload
> > >> of
> > >> `send` which accepts `Uuid`, or we could add a `TopicId` type which
> > >> implements `SendTarget`.
> > >>
> > >> -Jason
> > >>
> > >>
> > >> On Sat, Jan 30, 2021 at 1:11 PM Jason Gustafson 
> > >> wrote:
> > >>
> > >> > Yeah, good question. I guess we always tend to regret using lower-level
> > >> > types in these APIs. Perhaps there should be some kind of interface:
> > >> >
> > >> > interface SendTarget
> > >> > class TopicIdTarget implements SendTarget
> > >> > class TopicTarget implements SendTarget
> > >> > class TopicPartitionTarget implements SendTarget
> > >> >
> > >> > Then we just have:
> > >> >
> > >> > CompletionStage send(SendTarget target, Record record);
> > >> >
> > >> > Not sure if we could reuse `Record` in the consumer though. We do have
> > >> > some state in `ConsumerRecord` which is not present in `ProducerRecord`
> > >> > (e.g. offset). Perhaps we could provide a `Record` view from
> > >> > `ConsumerRecord` for convenience. That would be useful for use cases
> > >> which
> > >> > involve reading from one topic and writing to another.
> > >> >
> > >> > -Jason
> > >> >
> > >> > On Sat, Jan 30, 2021 at 12:29 PM Ismael Juma  wrote:
> > >> >
> > >> >> Interesting idea. A couple of things to consider:
> > >> >>
> > >> >> 1. Would we introduce the Message concept to the Consumer too? I think
> > >> >> that's what .NET does.
> > >> >> 2. If we eventually allow a send to a topic id instead of topic name,
> > >> >> would
> > >> >> that result in two additional overloads?
> > >> >>
> > >> >> Ismael
> > >> >>
> > >> >> On Sat, Jan 30, 2021 at 11:38 AM Jason Gustafson 
> > >> >> wrote:
> > >> >>
> > >> >> > For the sake of having another option to shoot down, we could take a
> > >> >> page
> > >> >> > from the .net client and separate the message data from the
> > >> destination
> > >> >> > (i.e. topic or partition). This would get around the need to use a
> > >> new
> > >> >> > verb. For example:
> > >> >> >
> > >> >> > CompletionStage send(String topic, Message message);
> > >> >> > CompletionStage send(TopicPartition topicPartition,
> > >> >> Message
> > >> >> > message);
> > >> >> >
> > >> >> > -Jason
> > >> >> >
> > >> >> >
> > >> >> >
> > >> >> > On Sat, Jan 30, 2021 at 11:30 AM Jason Gustafson  > >> 
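
A minimal, hypothetical sketch of the API shapes floated in this thread: a `SendTarget` marker interface for the destination (Jason/Ismael's direction) plus a builder for the record payload (Chia-Ping's direction), with `send` returning `CompletionStage` per KIP-706. All names and signatures here are illustrative assumptions, not the final API; the archive also stripped generic type parameters from the quoted signatures, so the `Long` result type below is invented purely for the sketch.

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Destination side: a marker interface with concrete target types.
interface SendTarget {}
record Topic(String name) implements SendTarget {}
record TopicId(UUID id) implements SendTarget {}

// Record side: a builder for the payload (a real SendRecord would also
// carry headers, timestamp, partition hints, etc.).
record SendRecord(String key, String value) {
    static Builder builder() { return new Builder(); }
    static final class Builder {
        private String key;
        private String value;
        Builder key(String key) { this.key = key; return this; }
        Builder value(String value) { this.value = value; return this; }
        SendRecord build() { return new SendRecord(key, value); }
    }
}

class SketchProducer {
    // KIP-706 direction: return CompletionStage instead of Future.
    CompletionStage<Long> send(SendTarget target, SendRecord record) {
        // A real producer would enqueue the record and complete the
        // stage from its I/O thread; here we complete immediately with
        // a fake offset just to show the calling pattern.
        return CompletableFuture.completedFuture(42L);
    }
}

public class Kip706Sketch {
    public static void main(String[] args) {
        SendRecord record = SendRecord.builder().key("k").value("v").build();
        long offset = new SketchProducer()
                .send(new Topic("events"), record)
                .toCompletableFuture()
                .join();
        System.out.println("offset=" + offset); // prints offset=42
    }
}
```

A caller needing the old blocking behavior would chain `.toCompletableFuture().get()`, which is the mechanical migration the thread describes.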

Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Luke Chen
I just saw the defect KAFKA-12312
, so I brought it to
your attention.
Do you think it is a compatibility issue? If not, I think we don't need
to cherry-pick the fix.

Thanks.
Luke

On Wed, Feb 10, 2021 at 11:16 AM Ismael Juma  wrote:

> It's a perf improvement, there was no regression. I think Luke needs to be
> clearer how this impacts users. Luke, are you referring to cases where
> someone runs the broker in an embedded scenario (eg tests)?
>
> Ismael
>
> On Tue, Feb 9, 2021, 6:50 PM Sophie Blee-Goldman 
> wrote:
>
> > What do you think Ismael? I agreed initially because I saw the commit
> > message says it fixes a performance regression. But admittedly I don't
> have
> > much context on this particular issue
> >
> > If it's low risk then I don't have a strong argument against including
> it.
> > However
> > I aim to cut the rc tomorrow or Thursday, and if it hasn't been
> > cherrypicked by then
> > I won't block the release on it.
> >
> > On Tue, Feb 9, 2021 at 4:53 PM Luke Chen  wrote:
> >
> > > Hi Ismael,
> > > Yes, I agree it's like an improvement, not a bug. I don't insist on
> > putting
> > > it into 2.6, just want to bring it to your attention.
> > > In my opinion, this issue will block users who adopt the scala 2.13.4
> or
> > > later to use Kafka 2.6.
> > > So if we still have time, we can consider to cherry-pick the fix into
> 2.6
> > > and 2.7.
> > >
> > > What do you think?
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Wed, Feb 10, 2021 at 3:24 AM Ismael Juma  wrote:
> > >
> > > > Can you elaborate why this needs to be in 2.6? Seems like an
> > improvement
> > > > versus a critical bug fix.
> > > >
> > > > Ismael
> > > >
> > > > On Mon, Feb 8, 2021 at 6:39 PM Luke Chen  wrote:
> > > >
> > > > > Hi Sophie,
> > > > > I found there is 1 issue that should be cherry-picked into 2.6 and
> > 2.7
> > > > > branches: KAFKA-12312 <
> > > https://issues.apache.org/jira/browse/KAFKA-12312
> > > > >.
> > > > > Simply put, *Scala* *2.13.4* is released at the end of 2020, and we
> > > > > upgraded to it and fixed some compatible issues on this PR
> > > > > , more specifically,
> it's
> > > > here
> > > > > <
> > > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/pull/9643/files#diff-fda3fb44e69a19600913bd951431fb0035996c76325b1c1d84d6f34bec281205R292
> > > > > >
> > > > > .
> > > > > We only merged this fix on *trunk*(which will be on 2.8), but we
> > didn't
> > > > > tell users (or we didn't know there'll be compatible issues) not to
> > > adopt
> > > > > the latest *Scala* *2.13.4*.
> > > > >
> > > > > Therefore, I think we should cherry-pick this fix into 2.6 and 2.7
> > > > > branches. What do you think?
> > > > >
> > > > > Thank you.
> > > > > Luke
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Feb 9, 2021 at 3:10 AM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hey all,
> > > > > >
> > > > > > Since all outstanding bugfixes seem to have made their way over
> to
> > > the
> > > > > 2.6
> > > > > > branch by now, I plan to move ahead with cutting an RC. As
> always,
> > > > please
> > > > > > let me know if you uncover and critical or blocker bugs that
> affect
> > > 2.6
> > > > > >
> > > > > > Thanks!
> > > > > > Sophie
> > > > > >
> > > > > > On Thu, Jan 28, 2021 at 9:25 AM John Roesler <
> vvcep...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks so much for stepping up, Sophie!
> > > > > > >
> > > > > > > I'm +1
> > > > > > >
> > > > > > > -John
> > > > > > >
> > > > > > > On Wed, 2021-01-27 at 17:59 -0500, Bill Bejeck wrote:
> > > > > > > > Thanks for taking this on Sophie. +1
> > > > > > > >
> > > > > > > > Bill
> > > > > > > >
> > > > > > > > On Wed, Jan 27, 2021 at 5:59 PM Ismael Juma <
> ism...@juma.me.uk
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks Sophie! +1
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman <
> > > > > > > sop...@confluent.io>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > I'd like to volunteer as release manager for a 2.6.2
> > release.
> > > > > This
> > > > > > is
> > > > > > > > > being
> > > > > > > > > > accelerated
> > > > > > > > > > to address a critical regression in Kafka Streams for
> > Windows
> > > > > > users.
> > > > > > > > > >
> > > > > > > > > > You can find the release plan on the wiki:
> > > > > > > > > >
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Sophie
> > > > > > > > > >
> > > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Ismael Juma
It's a perf improvement; there was no regression. I think Luke needs to be
clearer about how this impacts users. Luke, are you referring to cases where
someone runs the broker in an embedded scenario (e.g. tests)?

Ismael



Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Sophie Blee-Goldman
What do you think, Ismael? I agreed initially because the commit message
says it fixes a performance regression, but admittedly I don't have much
context on this particular issue.

If it's low risk then I don't have a strong argument against including it.
However, I aim to cut the RC tomorrow or Thursday, and if it hasn't been
cherry-picked by then I won't block the release on it.



Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #117

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10021: Changed Kafka backing stores to use shared admin 
client to get end offsets and create topics (#9780)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #89

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10021: Changed Kafka backing stores to use shared admin 
client to get end offsets and create topics (#9780)


--
[...truncated 3.17 MB...]

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Luke Chen
Hi Ismael,
Yes, I agree it's more of an improvement than a bug fix. I don't insist on
putting it into 2.6; I just want to bring it to your attention.
In my opinion, this issue will block users who adopt Scala 2.13.4 or later
from using Kafka 2.6.
So if we still have time, we could consider cherry-picking the fix into 2.6
and 2.7.

What do you think?

Thank you.
Luke

On Wed, Feb 10, 2021 at 3:24 AM Ismael Juma  wrote:

> Can you elaborate why this needs to be in 2.6? Seems like an improvement
> versus a critical bug fix.
>
> Ismael
>
> On Mon, Feb 8, 2021 at 6:39 PM Luke Chen  wrote:
>
> > Hi Sophie,
> > I found there is 1 issue that should be cherry-picked into 2.6 and 2.7
> > branches: KAFKA-12312  >.
> > Simply put, *Scala* *2.13.4* was released at the end of 2020, and we
> > upgraded to it and fixed some compatibility issues in this PR
> > , more specifically, it's
> here
> > <
> >
> https://github.com/apache/kafka/pull/9643/files#diff-fda3fb44e69a19600913bd951431fb0035996c76325b1c1d84d6f34bec281205R292
> > >
> > .
> > We only merged this fix to *trunk* (which will be in 2.8), but we didn't
> > tell users (or we didn't know there'd be compatibility issues) not to adopt
> > the latest *Scala* *2.13.4*.
> >
> > Therefore, I think we should cherry-pick this fix into 2.6 and 2.7
> > branches. What do you think?
> >
> > Thank you.
> > Luke
> >
> >
> >
> >
> >
> > On Tue, Feb 9, 2021 at 3:10 AM Sophie Blee-Goldman 
> > wrote:
> >
> > > Hey all,
> > >
> > > Since all outstanding bugfixes seem to have made their way over to the
> > 2.6
> > > branch by now, I plan to move ahead with cutting an RC. As always,
> please
> > > let me know if you uncover any critical or blocker bugs that affect 2.6.
> > >
> > > Thanks!
> > > Sophie
> > >
> > > On Thu, Jan 28, 2021 at 9:25 AM John Roesler 
> > wrote:
> > >
> > > > Thanks so much for stepping up, Sophie!
> > > >
> > > > I'm +1
> > > >
> > > > -John
> > > >
> > > > On Wed, 2021-01-27 at 17:59 -0500, Bill Bejeck wrote:
> > > > > Thanks for taking this on Sophie. +1
> > > > >
> > > > > Bill
> > > > >
> > > > > On Wed, Jan 27, 2021 at 5:59 PM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Thanks Sophie! +1
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman <
> > > > sop...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to volunteer as release manager for a 2.6.2 release.
> > This
> > > is
> > > > > > being
> > > > > > > accelerated
> > > > > > > to address a critical regression in Kafka Streams for Windows
> > > users.
> > > > > > >
> > > > > > > You can find the release plan on the wiki:
> > > > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Sophie
> > > > > > >
> > > > > >
> > > >
> > > >
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-12319) Flaky test ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit()

2021-02-09 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-12319:
--

 Summary: Flaky test 
ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit()
 Key: KAFKA-12319
 URL: https://issues.apache.org/jira/browse/KAFKA-12319
 Project: Kafka
  Issue Type: Test
Reporter: Justine Olshan


I've seen this test fail a few times locally, and recently I saw it fail on a
PR build on Jenkins.
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10041/7/testReport/junit/kafka.network/ConnectionQuotasTest/Build___JDK_11___testListenerConnectionRateLimitWhenActualRateAboveLimit__/
h3. Error Message

java.util.concurrent.ExecutionException: org.opentest4j.AssertionFailedError: 
Expected rate (30 +- 7), but got 37.436825357209706 (600 connections / 16.027 
sec) ==> expected: <30.0> but was: <37.436825357209706>
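The failure is straightforward arithmetic: the test observed 600 connections over 16.027 seconds and asserted the resulting rate against a target of 30 ± 7 connections/sec. A minimal sketch reproducing that check (values taken from the error message above):

```python
# Reproduce the rate arithmetic behind the failed assertion.
# 600 connections observed over 16.027 seconds, asserted against 30 +- 7 conn/sec.
connections = 600
elapsed_sec = 16.027

observed_rate = connections / elapsed_sec  # ~37.44 conn/sec

target = 30.0
tolerance = 7.0
within_bounds = abs(observed_rate - target) <= tolerance

# The observed rate exceeds the upper bound (37), so the assertion fails.
print(round(observed_rate, 2), within_bounds)  # 37.44 False
```

Since the observed rate sits only ~0.44 conn/sec outside the tolerance, this is the classic shape of a timing-sensitive test that flakes under load on shared CI workers.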
h3. Stacktrace

{{java.util.concurrent.ExecutionException: org.opentest4j.AssertionFailedError: 
Expected rate (30 +- 7), but got 37.436825357209706 (600 connections / 16.027 
sec) ==> expected: <30.0> but was: <37.436825357209706> at 
java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205) at 
kafka.network.ConnectionQuotasTest.$anonfun$testListenerConnectionRateLimitWhenActualRateAboveLimit$3(ConnectionQuotasTest.scala:411)
 at scala.collection.immutable.List.foreach(List.scala:333) at 
kafka.network.ConnectionQuotasTest.testListenerConnectionRateLimitWhenActualRateAboveLimit(ConnectionQuotasTest.scala:411)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
 at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 

[jira] [Created] (KAFKA-12318) system tests need to fetch Topic IDs via Admin Client instead of via ZooKeeper

2021-02-09 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-12318:
-

 Summary: system tests need to fetch Topic IDs via Admin Client 
instead of via ZooKeeper
 Key: KAFKA-12318
 URL: https://issues.apache.org/jira/browse/KAFKA-12318
 Project: Kafka
  Issue Type: Task
  Components: system tests
Affects Versions: 3.0.0, 2.8.0
Reporter: Ron Dagostino


https://github.com/apache/kafka/commit/86b9fdef2b9e6ef3429313afbaa18487d6e2906e#diff-2b222ad67f56a2876410aba3eeecd78e8b26217192dde72a035c399dc4d3988bR1033-R1052
 introduced a topic_id() function in the system tests.  This function 
currently talks directly to ZooKeeper, which will be a problem when running a 
Raft-based metadata quorum -- ZooKeeper won't be available.  The function 
needs to be rewritten to use the Admin Client instead.
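One ZooKeeper-free approach the rewrite could take is parsing the topic ID out of `kafka-topics.sh --describe` output, which reports a `TopicId` field on brokers that support topic IDs. A hedged sketch of the parsing step only; the sample line and topic ID below are illustrative, not captured from a real cluster, and the exact field layout should be verified against the target Kafka version:

```python
import re

# Illustrative sample of one line of `kafka-topics.sh --describe` output
# (hypothetical topic name and ID, tab-separated fields assumed).
sample_describe_output = (
    "Topic: test_topic\tTopicId: lZ6vvmv0TWORLy_hDPs9MQ\t"
    "PartitionCount: 1\tReplicationFactor: 3"
)

def parse_topic_id(describe_output: str) -> str:
    """Return the TopicId field from a line of --describe output."""
    match = re.search(r"TopicId:\s*(\S+)", describe_output)
    if match is None:
        # Older brokers predate topic IDs and omit the field entirely.
        raise ValueError("no TopicId field found in describe output")
    return match.group(1)

print(parse_topic_id(sample_describe_output))  # lZ6vvmv0TWORLy_hDPs9MQ
```

In the actual system tests the describe output would come from running the CLI on a cluster node; only the extraction logic is sketched here.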

This does not have to be fixed in 2.8 -- the method is only used in 
upgrade/downgrade-related system tests, and those system tests aren't being 
performed for Raft-based metadata quorums in the release (Raft-based metadata 
quorums will only be alpha/preview functionality at that point with 
upgrades/downgrades unsupported).  But it probably will have to be fixed for 
the next release after that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #37

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] Backport Jenkinsfile fix and remove Travis build for 2.5 (#10087)


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #467

2021-02-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #498

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] JUnit extensions for integration tests (#9986)

[github] KAFKA-10021: Changed Kafka backing stores to use shared admin client 
to get end offsets and create topics (#9780)


--
[...truncated 7.21 MB...]

ControllerEventManagerTest > testSuccessfulEvent() PASSED

ControllerEventManagerTest > testMetricsCleanedOnClose() STARTED

ControllerEventManagerTest > testMetricsCleanedOnClose() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() PASSED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() PASSED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() STARTED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #521

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10021: Changed Kafka backing stores to use shared admin client 
to get end offsets and create topics (#9780)


--
[...truncated 3.62 MB...]
AuthorizerIntegrationTest > testCommitWithNoGroupAccess() STARTED

AuthorizerIntegrationTest > testCommitWithNoGroupAccess() PASSED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() STARTED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() PASSED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
STARTED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() PASSED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
STARTED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() STARTED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
PASSED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() STARTED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() PASSED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
STARTED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
PASSED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
STARTED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() STARTED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() PASSED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
STARTED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
PASSED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testClose() STARTED

SslProducerSendTest > testClose() PASSED

SslProducerSendTest > testFlush() STARTED

SslProducerSendTest > testFlush() PASSED

SslProducerSendTest > testSendToPartition() STARTED

SslProducerSendTest > testSendToPartition() PASSED

SslProducerSendTest > testSendOffset() STARTED

SslProducerSendTest > testSendOffset() PASSED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() PASSED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() STARTED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() PASSED

ProducerCompressionTest > [1] compression=none STARTED

ProducerCompressionTest > [1] compression=none PASSED

ProducerCompressionTest > [2] compression=gzip STARTED

ProducerCompressionTest > [2] compression=gzip PASSED

ProducerCompressionTest > [3] compression=snappy STARTED

ProducerCompressionTest > [3] compression=snappy PASSED

ProducerCompressionTest > [4] compression=lz4 STARTED

ProducerCompressionTest > [4] compression=lz4 PASSED

ProducerCompressionTest > [5] compression=zstd STARTED

ProducerCompressionTest > [5] compression=zstd PASSED

MetricsTest > testMetrics() STARTED

MetricsTest > testMetrics() PASSED

ProducerFailureHandlingTest > testCannotSendToInternalTopic() STARTED


Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #116

2021-02-09 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #88

2021-02-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Ismael Juma
Can you elaborate on why this needs to be in 2.6? It seems like an improvement
rather than a critical bug fix.

Ismael

On Mon, Feb 8, 2021 at 6:39 PM Luke Chen  wrote:

> Hi Sophie,
> I found there is 1 issue that should be cherry-picked into 2.6 and 2.7
> branches: KAFKA-12312 .
> Simply put, *Scala* *2.13.4* was released at the end of 2020, and we
> upgraded to it and fixed some compatibility issues on this PR
> , more specifically, it's here
> <
> https://github.com/apache/kafka/pull/9643/files#diff-fda3fb44e69a19600913bd951431fb0035996c76325b1c1d84d6f34bec281205R292
> >
> .
> We only merged this fix on *trunk* (which will be on 2.8), but we didn't
> tell users (or we didn't know there would be compatibility issues) not to
> adopt the latest *Scala* *2.13.4*.
>
> Therefore, I think we should cherry-pick this fix into 2.6 and 2.7
> branches. What do you think?
>
> Thank you.
> Luke
>
>
>
>
>
> On Tue, Feb 9, 2021 at 3:10 AM Sophie Blee-Goldman 
> wrote:
>
> > Hey all,
> >
> > Since all outstanding bugfixes seem to have made their way over to the
> 2.6
> > branch by now, I plan to move ahead with cutting an RC. As always, please
> > let me know if you uncover any critical or blocker bugs that affect 2.6.
> >
> > Thanks!
> > Sophie
> >
> > On Thu, Jan 28, 2021 at 9:25 AM John Roesler 
> wrote:
> >
> > > Thanks so much for stepping up, Sophie!
> > >
> > > I'm +1
> > >
> > > -John
> > >
> > > On Wed, 2021-01-27 at 17:59 -0500, Bill Bejeck wrote:
> > > > Thanks for taking this on Sophie. +1
> > > >
> > > > Bill
> > > >
> > > > On Wed, Jan 27, 2021 at 5:59 PM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks Sophie! +1
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to volunteer as release manager for a 2.6.2 release.
> This
> > is
> > > > > being
> > > > > > accelerated
> > > > > > to address a critical regression in Kafka Streams for Windows
> > users.
> > > > > >
> > > > > > You can find the release plan on the wiki:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
> > > > > >
> > > > > > Thanks,
> > > > > > Sophie
> > > > > >
> > > > >
> > >
> > >
> > >
> >
>


[jira] [Created] (KAFKA-12317) Relax non-null key requirement for left KStream-KTable join

2021-02-09 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-12317:
---

 Summary: Relax non-null key requirement for left KStream-KTable 
join
 Key: KAFKA-12317
 URL: https://issues.apache.org/jira/browse/KAFKA-12317
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Currently, for a stream-table join, KafkaStreams drops all stream records with a 
null key, because for a null key the join is undefined: i.e., we don't have an 
attribute to do the table lookup (we consider the stream record malformed). 
Note that we define the semantics of _left_ join as: keep the stream record if 
no KTable record was found.

We could relax the definition of _left_ join though, and not drop null-key 
stream records, but instead call the ValueJoiner with a `null` table record: if 
the stream record key is `null`, we could treat it as a "failed table lookup" 
instead of treating the stream record as corrupted.

If we make this change, users that want to keep the current behavior can add a 
`filter()` before the join to drop `null`-key records from the stream 
explicitly.
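To illustrate the two semantics side by side, here is a minimal, self-contained 
Java sketch (plain collections standing in for the Streams DSL; the record and 
table shapes are illustrative, not the actual KafkaStreams API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Models current vs. proposed left-join handling of null-key stream records.
class LeftJoinSketch {

    // Current behavior: a null-key stream record is dropped as malformed,
    // because there is no attribute to do the table lookup with.
    static List<String> currentLeftJoin(List<Map.Entry<String, String>> stream,
                                        Map<String, String> table) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> rec : stream) {
            if (rec.getKey() == null) {
                continue; // dropped
            }
            out.add(rec.getValue() + "|" + table.get(rec.getKey()));
        }
        return out;
    }

    // Proposed behavior: a null key is treated as a "failed table lookup",
    // so the record is kept and joined against a null table value.
    static List<String> relaxedLeftJoin(List<Map.Entry<String, String>> stream,
                                        Map<String, String> table) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> rec : stream) {
            String tableValue = rec.getKey() == null ? null : table.get(rec.getKey());
            out.add(rec.getValue() + "|" + tableValue);
        }
        return out;
    }
}
```

Under the relaxed semantics, a user who prefers the old behavior could filter 
null-key records up front, e.g. `stream.filter((k, v) -> k != null)` in the DSL.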

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.6.2 release

2021-02-09 Thread Sophie Blee-Goldman
Sure, you can go ahead and merge that back to the 2.6 branch. I'll hold off
on cutting the RC until then

On Mon, Feb 8, 2021 at 6:39 PM Luke Chen  wrote:

> Hi Sophie,
> I found there is 1 issue that should be cherry-picked into 2.6 and 2.7
> branches: KAFKA-12312 .
> Simply put, *Scala* *2.13.4* was released at the end of 2020, and we
> upgraded to it and fixed some compatibility issues on this PR
> , more specifically, it's here
> <
> https://github.com/apache/kafka/pull/9643/files#diff-fda3fb44e69a19600913bd951431fb0035996c76325b1c1d84d6f34bec281205R292
> >
> .
> We only merged this fix on *trunk* (which will be on 2.8), but we didn't
> tell users (or we didn't know there would be compatibility issues) not to
> adopt the latest *Scala* *2.13.4*.
>
> Therefore, I think we should cherry-pick this fix into 2.6 and 2.7
> branches. What do you think?
>
> Thank you.
> Luke
>
>
>
>
>
> On Tue, Feb 9, 2021 at 3:10 AM Sophie Blee-Goldman 
> wrote:
>
> > Hey all,
> >
> > Since all outstanding bugfixes seem to have made their way over to the
> 2.6
> > branch by now, I plan to move ahead with cutting an RC. As always, please
> > let me know if you uncover any critical or blocker bugs that affect 2.6.
> >
> > Thanks!
> > Sophie
> >
> > On Thu, Jan 28, 2021 at 9:25 AM John Roesler 
> wrote:
> >
> > > Thanks so much for stepping up, Sophie!
> > >
> > > I'm +1
> > >
> > > -John
> > >
> > > On Wed, 2021-01-27 at 17:59 -0500, Bill Bejeck wrote:
> > > > Thanks for taking this on Sophie. +1
> > > >
> > > > Bill
> > > >
> > > > On Wed, Jan 27, 2021 at 5:59 PM Ismael Juma 
> wrote:
> > > >
> > > > > Thanks Sophie! +1
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Wed, Jan 27, 2021 at 2:45 PM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to volunteer as release manager for a 2.6.2 release.
> This
> > is
> > > > > being
> > > > > > accelerated
> > > > > > to address a critical regression in Kafka Streams for Windows
> > users.
> > > > > >
> > > > > > You can find the release plan on the wiki:
> > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.2
> > > > > >
> > > > > > Thanks,
> > > > > > Sophie
> > > > > >
> > > > >
> > >
> > >
> > >
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #520

2021-02-09 Thread Apache Jenkins Server
See 


Changes:

[github] JUnit extensions for integration tests (#9986)


--
[...truncated 3.44 MB...]

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction()
 STARTED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnEndTransaction()
 PASSED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn()
 STARTED

AuthorizerIntegrationTest > 
shouldThrowTransactionalIdAuthorizationExceptionWhenNoTransactionAccessOnSendOffsetsToTxn()
 PASSED

AuthorizerIntegrationTest > testCommitWithNoGroupAccess() STARTED

AuthorizerIntegrationTest > testCommitWithNoGroupAccess() PASSED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() STARTED

AuthorizerIntegrationTest > 
testTransactionalProducerInitTransactionsNoDescribeTransactionalIdAcl() PASSED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
STARTED

AuthorizerIntegrationTest > testAuthorizeByResourceTypeDenyTakesPrecedence() 
PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithDescribe() PASSED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
STARTED

AuthorizerIntegrationTest > testCreateTopicAuthorizationWithClusterCreate() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testCommitWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() STARTED

AuthorizerIntegrationTest > testAuthorizationWithTopicExisting() PASSED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
STARTED

AuthorizerIntegrationTest > testUnauthorizedDeleteRecordsWithoutDescribe() 
PASSED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testMetadataWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() STARTED

AuthorizerIntegrationTest > testProduceWithTopicDescribe() PASSED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() STARTED

AuthorizerIntegrationTest > testDescribeGroupApiWithNoGroupAcl() PASSED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
STARTED

AuthorizerIntegrationTest > testPatternSubscriptionMatchingInternalTopic() 
PASSED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
STARTED

AuthorizerIntegrationTest > testSendOffsetsWithNoConsumerGroupDescribeAccess() 
PASSED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() STARTED

AuthorizerIntegrationTest > testOffsetFetchTopicDescribe() PASSED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() STARTED

AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead() PASSED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() STARTED

AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId() PASSED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
STARTED

AuthorizerIntegrationTest > testSimpleConsumeWithExplicitSeekAndNoGroupAccess() 
PASSED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendNonCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testClose() STARTED

SslProducerSendTest > testClose() PASSED

SslProducerSendTest > testFlush() STARTED

SslProducerSendTest > testFlush() PASSED

SslProducerSendTest > testSendToPartition() STARTED

SslProducerSendTest > testSendToPartition() PASSED

SslProducerSendTest > testSendOffset() STARTED

SslProducerSendTest > testSendOffset() PASSED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() STARTED

SslProducerSendTest > testSendCompressedMessageWithCreateTime() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread() PASSED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() STARTED

SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread() PASSED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() STARTED

SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion() PASSED

ProducerCompressionTest > [1] compression=none STARTED

ProducerCompressionTest > [1] compression=none PASSED

ProducerCompressionTest > [2] compression=gzip STARTED

ProducerCompressionTest > [2] compression=gzip PASSED

ProducerCompressionTest > [3] 

Re: [DISCUSS] KIP-708: Rack aware Kafka Streams with pluggable StandbyTask assignor

2021-02-09 Thread Levani Kokhreidze
Hello all,

I’ve updated KIP-708 [1] to reflect the latest discussion outcomes. 
I’m looking forward to your feedback.

Regards,
Levani

[1] - 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awarness+for+Kafka+Streams

> On 2. Feb 2021, at 22:03, Levani Kokhreidze  wrote:
> 
> Hi John.
> 
> Thanks a lot for this detailed analysis! 
> Yes, that is what I had in mind as well. 
> I also like that idea of having “task.assignment.awareness” configuration
> to tell which instance tags can be used for rack awareness.
> I may borrow it for this KIP if you don’t mind :) 
> 
> Thanks again John for this discussion, it’s really valuable.
> 
> I’ll update the proposal and share it once again in this discussion thread.
> 
> Regards,
> Levani 
> 
>> On 2. Feb 2021, at 18:47, John Roesler wrote:
>> 
>> Hi Levani,
>> 
>> 1. Thanks for the details.
>> 
>> I figured it must be something like this two-dimensional definition of 
>> "rack".
>> 
>> It does seem like, if we make the config take a list of tags, we can define
>> the semantics to be that the system will make a best effort to distribute
>> the standbys over each rack dimension.
>> 
>> In your example, there are two clusters and three AZs. The example
>> configs would be:
>> 
>> Node 1:
>> instance.tag.cluster: K8s_Cluster1
>> instance.tag.zone: eu-central-1a
>> task.assignment.awareness: cluster,zone
>> 
>> Node 2:
>> instance.tag.cluster: K8s_Cluster1
>> instance.tag.zone: eu-central-1b
>> task.assignment.awareness: cluster,zone
>> 
>> Node 3:
>> instance.tag.cluster: K8s_Cluster1
>> instance.tag.zone: eu-central-1c
>> task.assignment.awareness: cluster,zone
>> 
>> Node 4:
>> instance.tag.cluster: K8s_Cluster2
>> instance.tag.zone: eu-central-1a
>> task.assignment.awareness: cluster,zone
>> 
>> Node 5:
>> instance.tag.cluster: K8s_Cluster2
>> instance.tag.zone: eu-central-1b
>> task.assignment.awareness: cluster,zone
>> 
>> Node 6:
>> instance.tag.cluster: K8s_Cluster2
>> instance.tag.zone: eu-central-1c
>> task.assignment.awareness: cluster,zone
>> 
>> 
>> Now, if we have a task 0_0 with an active and two replicas,
>> there are three total copies of the task to distribute over:
>> * 6 instances
>> * 2 clusters
>> * 3 zones
>> 
>> There is a constraint that we _cannot_ assign two copies of a task
>> to a single instance, but it seems like the default rack awareness
>> would permit us to assign two copies of a task to a rack, if (and only
>> if) the number of copies is greater than the number of racks.
>> 
>> So, the assignment we would get is like this:
>> * assigned to three different instances
>> * one copy in each of zone a, b, and c
>> * two copies in one cluster and one in the other cluster
>> 
>> For example, we might have 0_0 assigned to:
>> * Node 1 (cluster 1, zone a)
>> * Node 5 (cluster 2, zone b)
>> * Node 3 (cluster 1, zone c)
>> 
>> Is that what you were also thinking?
>> 
>> Thanks,
>> -John
>> 
>> On Tue, Feb 2, 2021, at 02:24, Levani Kokhreidze wrote:
>>> Hi John,
>>> 
>>> 1. The main reason was that it seemed an easier change compared to having 
>>> multiple tags assigned to each host.
>>> 
>>> ---
>>> 
>>> Answering your question what use-case I have in mind:
>>> Lets say we have two Kubernetes clusters running the same Kafka Streams 
>>> application. 
>>> And each Kubernetes cluster is spanned across multiple AZ. 
>>> So the setup overall looks something like this:
>>> 
>>> K8s_Cluster1 [eu-central-1a, eu-central-1b, eu-central-1c]
>>> K8s_Cluster2 [eu-central-1a, eu-central-1b, eu-central-1c]
>>> 
>>> Now, if the Kafka Streams application is launched in K8s_Cluster1: 
>>> eu-central-1a,
>>> ideally I would want the standby task to be created in a different K8s 
>>> cluster and region.
>>> So in this example it can be K8s_Cluster2: [eu-central-1b, 
>>> eu-central-1c]
>>> 
>>> But giving it a bit more thought, this can be implemented if we change the 
>>> semantics of “tags” a bit.
>>> So instead of doing a full match on tags, we can do iterative matching 
>>> and it should work.
>>> (If this is what you had in mind, apologies for the misunderstanding).
>>> 
>>> If we consider the same example as mentioned above, for the active task 
>>> we would
>>> have the following tags: [K8s_Cluster1, eu-central-1a]. In order to 
>>> distribute the standby task
>>> to a different K8s cluster, plus a different AWS region, the standby 
>>> task assignment 
>>> algorithm can compare each tag by index. So the steps would be something 
>>> like:
>>> 
>>> // this will result in selecting client in the different K8s cluster
>>> 1. clientsInDifferentCluster = (tagsOfActiveTask[0] != allClientTags[0])
>>> // this will result in selecting the client in different AWS region
>>> 2. selectedClientForStandbyTask = (tagsOfActiveTask[1] != 
>>> clientsInDifferentCluster[1] )
>>> 
>>> WDYT?
>>> 
>>> If you agree with the use-case I’ve mentioned, the pluggable assignor 
>>> can be deferred to another KIP, yes.
>>> As it won’t be required for this KIP 
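The iterative tag comparison discussed in this thread can be modeled with a 
small, self-contained Java routine (illustrative only, not the actual Streams 
assignor; `differingDimensions` and `preferredStandbyCandidates` are made-up 
names for the sketch):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Ranks standby candidates so that clients whose tags (e.g. [cluster, zone])
// differ from the active task's tags in the most dimensions come first.
class StandbySelectionSketch {

    // Counts tag dimensions (by index) in which a client differs from the
    // client hosting the active task.
    static int differingDimensions(List<String> activeTags, List<String> clientTags) {
        int diff = 0;
        for (int i = 0; i < activeTags.size(); i++) {
            if (!activeTags.get(i).equals(clientTags.get(i))) {
                diff++;
            }
        }
        return diff;
    }

    // Orders candidates by descending number of differing dimensions, so a
    // client in a different cluster AND a different zone is preferred over
    // one that differs only in cluster or only in zone.
    static List<List<String>> preferredStandbyCandidates(List<String> activeTags,
                                                         List<List<String>> clients) {
        return clients.stream()
                .sorted(Comparator.comparingInt(
                        (List<String> c) -> differingDimensions(activeTags, c)).reversed())
                .collect(Collectors.toList());
    }
}
```

For an active task on [K8s_Cluster1, eu-central-1a], this prefers a candidate 
on [K8s_Cluster2, eu-central-1b] over one sharing the cluster or the zone.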

[jira] [Created] (KAFKA-12316) Configuration is not defined: topic.creation.default.partitions

2021-02-09 Thread Goltseva Taisiia (Jira)
Goltseva Taisiia created KAFKA-12316:


 Summary: Configuration is not defined: 
topic.creation.default.partitions
 Key: KAFKA-12316
 URL: https://issues.apache.org/jira/browse/KAFKA-12316
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.1
Reporter: Goltseva Taisiia


Hi, guys!

The KIP was implemented:

[https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics]

 

But it seems you forgot to add changes to the class:

[https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java]

 

I suppose we need to add something like 'validateClientOverrides()' for configs 
starting with 'topic.creation' prefix. Like this:

[https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L420]

 

For now, when I create, for example, a Postgres Source connector and do not 
specify login and password (they are mandatory parameters), I get this:
{code:java}
{"error_code": 400,"message": "Connector configuration is invalid and 
contains the following 1 error(s):\nConfiguration is not defined: 
topic.creation.default.partitions\nConfiguration is not defined: 
topic.creation.test1.retention.ms\nConfiguration is not defined: 
topic.creation.test1.include\nConfiguration is not defined: 
topic.creation.test1.partitions\nConfiguration is not defined: 
topic.creation.default.replication.factor\nA value is required\nYou can also 
find the above list of errors at the endpoint 
`/connector-plugins/{connectorType}/config/validate`"}{code}

But it should be just:
 
{code:java}
{ "error_code": 400, "message": "Connector configuration is invalid and 
contains the following 1 error(s):\nA value is required\nYou can also find the 
above list of errors at the endpoint 
`/connector-plugins/{connectorType}/config/validate`" }{code}

So, I think, a small change to the AbstractHerder class is required.
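The idea can be sketched as follows (plain, self-contained Java; the real fix 
would live in AbstractHerder's validation path, and the method and set names 
here are illustrative, not the actual Connect API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: configs under the "topic.creation." prefix are skipped when checking
// the connector config against its declared config keys (similar to how client
// override prefixes are handled), so they no longer surface as "not defined".
class TopicCreationValidationSketch {

    static final String TOPIC_CREATION_PREFIX = "topic.creation.";

    static List<String> validate(Map<String, String> connectorConfig,
                                 Set<String> definedConfigs) {
        List<String> errors = new ArrayList<>();
        for (String key : connectorConfig.keySet()) {
            if (key.startsWith(TOPIC_CREATION_PREFIX)) {
                continue; // validated separately, like client overrides
            }
            if (!definedConfigs.contains(key)) {
                errors.add("Configuration is not defined: " + key);
            }
        }
        return errors;
    }
}
```

With this filtering in place, the 400 response would carry only the genuine 
errors (the missing mandatory values), not the topic.creation.* entries.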
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)