Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #123

2020-10-07 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #153

2020-10-07 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #120

2020-10-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9274: fix incorrect default value for `task.timeout.ms` config 
(#9385)

[github] KAFKA-10564: only process non-empty task directories when internally 
cleaning obsolete state stores (#9373)


--
[...truncated 6.72 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED


Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #30

2020-10-07 Thread Apache Jenkins Server
See 




Re: KIP-675: Convert KTable to a KStream using the previous value

2020-10-07 Thread Javier Freire Riobo
I have put together a small demo example. I hope it serves as a clarification.

https://github.com/javierfreire/KTableToKStreamTest

Thank you very much

On Wed, 7 Oct 2020 at 3:01, Matthias J. Sax () wrote:

> Thanks for the KIP.
>
> I am not sure if I understand the motivation. In particular the KIP says:
>
> > The main problem, apart from needing more code, is that if the same
> event is received twice at the same time and the commit time is not 0, the
> difference is deleted and nothing is emitted.
>
> Can you elaborate? Maybe you can provide a concrete example? I don't
> understand the relationship between "the same event is received twice"
> and a "non-zero commit time".
>
>
> -Matthias
>
> On 10/6/20 6:25 AM, Javier Freire Riobo wrote:
> > Hi all,
> >
> > I'd like to propose these changes to the Kafka Streams API.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-675%3A+Convert+KTable+to+a+KStream+using+the+previous+value
> >
> > This is a proposal to convert a KTable to a KStream knowing the previous
> > value of the record.
> >
> > I also opened a proof-of-concept PR:
> >
> > PR #9381: https://github.com/apache/kafka/pull/9381
> >
> > What do you think?
> >
> > Cheers,
> > Javier Freire
> >
>


[jira] [Created] (KAFKA-10584) IndexSearchType should use sealed trait instead of Enumeration

2020-10-07 Thread Jun Rao (Jira)
Jun Rao created KAFKA-10584:
---

 Summary: IndexSearchType should use sealed trait instead of 
Enumeration
 Key: KAFKA-10584
 URL: https://issues.apache.org/jira/browse/KAFKA-10584
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Jun Rao


In Scala, we prefer sealed traits over Enumeration since the former gives you 
exhaustiveness checking. With Scala Enumeration, you don't get a warning if you 
add a new value that is not handled in a given pattern match.
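
A minimal Scala sketch of the difference (the subtype and value names here are illustrative, not the actual IndexSearchType members):

```
// Sealed trait: the compiler knows every subtype, so a non-exhaustive
// match produces a compile-time warning.
sealed trait IndexSearchType
case object KeySearch extends IndexSearchType
case object ValueSearch extends IndexSearchType

def describe(t: IndexSearchType): String = t match {
  case KeySearch   => "search by key"
  case ValueSearch => "search by value"
  // Adding a third case object without handling it here triggers a
  // "match may not be exhaustive" warning.
}

// Enumeration: a newly added value slips through the same match silently
// and only fails at runtime with a MatchError.
object SearchTypeEnum extends Enumeration {
  val KEY, VALUE = Value
}

def describeEnum(t: SearchTypeEnum.Value): String = t match {
  case SearchTypeEnum.KEY   => "search by key"
  case SearchTypeEnum.VALUE => "search by value"
}
```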



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10583) Thread-safety of AdminClient is not documented

2020-10-07 Thread Adem Efe Gencer (Jira)
Adem Efe Gencer created KAFKA-10583:
---

 Summary: Thread-safety of AdminClient is not documented
 Key: KAFKA-10583
 URL: https://issues.apache.org/jira/browse/KAFKA-10583
 Project: Kafka
  Issue Type: Task
Affects Versions: 2.6.0, 2.5.0, 2.4.0, 2.3.0
Reporter: Adem Efe Gencer
Assignee: Colin McCabe


Other than a Stack Overflow comment (see 
[https://stackoverflow.com/a/61738065]) by Colin Patrick McCabe and a proposed 
design note on 
[KIP-117|https://cwiki.apache.org/confluence/display/KAFKA/KIP-117%3A+Add+a+public+AdminClient+API+for+Kafka+admin+operations]
 wiki, there is no source that verifies the thread-safety of KafkaAdminClient.

Please update the JavaDocs of the KafkaAdminClient class and/or the Admin interface to 
clarify their thread-safety.
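
For context, a minimal Scala sketch of the usage pattern in question, assuming the client is thread-safe as the linked answer indicates (broker address and thread count are placeholders):

```
import java.util.Properties
import org.apache.kafka.clients.admin.{Admin, AdminClientConfig}

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

// One Admin instance shared by the whole application...
val admin = Admin.create(props)

// ...used concurrently from several threads; each call returns a result
// object whose KafkaFutures can be awaited independently.
val threads = (1 to 4).map { i =>
  new Thread(() => {
    val names = admin.listTopics().names().get()
    println(s"thread $i sees ${names.size} topics")
  })
}
threads.foreach(_.start())
threads.foreach(_.join())
admin.close()
```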



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9930) Prevent ReplicaFetcherThread from throwing UnknownTopicOrPartitionException upon topic creation and deletion.

2020-10-07 Thread Adem Efe Gencer (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adem Efe Gencer resolved KAFKA-9930.

Resolution: Fixed

> Prevent ReplicaFetcherThread from throwing UnknownTopicOrPartitionException 
> upon topic creation and deletion.
> -
>
> Key: KAFKA-9930
> URL: https://issues.apache.org/jira/browse/KAFKA-9930
> Project: Kafka
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0, 1.1.0, 2.0.0, 2.1.0, 2.2.0, 
> 2.3.0, 2.4.0, 2.5.0
>Reporter: Adem Efe Gencer
>Assignee: Adem Efe Gencer
>Priority: Minor
>
> When does UnknownTopicOrPartitionException typically occur?
>  * Upon a topic creation, a follower broker of a new partition starts replica 
> fetcher before the prospective leader broker of the new partition receives 
> the leadership information from the controller. Apache Kafka has an open 
> issue about this (see KAFKA-6221).
>  * Upon a topic deletion, a follower broker of a to-be-deleted partition 
> starts replica fetcher after the leader broker of the to-be-deleted partition 
> processes the deletion information from the controller.
>  * As expected, clusters with frequent topic creation and deletion report 
> UnknownTopicOrPartitionException with relatively higher frequency.
> What is the impact?
>  * Exception tracking systems identify the error logs with 
> UnknownTopicOrPartitionException as an exception. This results in a lot of 
> noise for a transient issue that is expected to recover by itself and is a 
> natural process in Kafka due to its asynchronous state propagation.
> Why not move it to a log level lower than warn?
>  * Despite typically being a transient issue, 
> UnknownTopicOrPartitionException may also indicate real issues if it doesn't 
> fix itself after a short period of time. To ensure detection of such 
> scenarios, we set the log level to warn.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Is AdminClient of Kafka thread-safe?

2020-10-07 Thread Efe Gencer
Hi All,

Other than a Stack Overflow comment (see https://stackoverflow.com/a/61738065) 
by Colin Patrick McCabe (CC'd),
there is no source that verifies the thread-safety of KafkaAdminClient.

  *   In particular, the JavaDocs of the KafkaAdminClient class and the Admin interface 
have no discussion of thread-safety.

I would appreciate information on the thread-safety of the AdminClient.

Best,
Efe


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #121

2020-10-07 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove `TargetVoters` from `DescribeQuorum` (#9376)

[github] KAFKA-10186; Abort transaction with pending data with 
TransactionAbortedException (#9280)

[github] KAFKA-10402: Upgrade system tests to python3 (#9196)


--
[...truncated 3.38 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Resolved] (KAFKA-10362) When resuming Streams active task with EOS, the checkpoint file should be deleted

2020-10-07 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-10362.
---
Fix Version/s: 2.7.0
   Resolution: Fixed

> When resuming Streams active task with EOS, the checkpoint file should be 
> deleted
> -
>
> Key: KAFKA-10362
> URL: https://issues.apache.org/jira/browse/KAFKA-10362
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Guozhang Wang
>Assignee: Sharath Bhat
>Priority: Major
>  Labels: newbie++
> Fix For: 2.7.0
>
>
> Today when we suspend a task we commit, and along with the commit we always 
> write the checkpoint file even if we are eosEnabled (since the state is already 
> SUSPENDED). But the suspended task may later be resumed, and in that case the 
> checkpoint file should be deleted, since it should only be written when the task is 
> cleanly closed.
> With our latest rebalance protocol in KIP-429, resume would not be called 
> since all suspended tasks would be closed, but with the old eager protocol it 
> may still be called — I think that may be the reason we did not get it often.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-10-07 Thread Matthias J. Sax
I synced with John in person and he emphasized his concerns about
breaking code if we change the state machine. From an implementation point of
view, I am concerned that maintaining two state machines at the same
time might be very complex. John had the idea, though, that we could
actually do an internal translation: internally, we switch the state
machine to the new one, but translate new states to old states before
doing the callback? (We only need two separate "state enums", and we add
a new method to register callbacks for the new state enums and deprecate
the existing method.)

However, also with regard to the work Guozhang pointed out, I am
wondering if we should split out an independent KIP just for the state
machine changes? It seems complex enough by itself. We would hold off
this KIP until the state machine change is done and resume it afterwards?

Thoughts?

-Matthias

On 10/6/20 8:55 PM, Guozhang Wang wrote:
> Sorry I'm late to the party.
> 
> Matthias raised a point to me regarding the recent development of moving
> restoration from stream threads to separate restore threads and allowing
> the stream threads to process any processible tasks even when some other
> tasks are still being restored by the restore threads:
> 
> https://issues.apache.org/jira/browse/KAFKA-10526
> https://issues.apache.org/jira/browse/KAFKA-10577
> 
> That would cause the restoration of non-global states to be more similar to
> global states such that some tasks would be processed even though the state
> of the stream thread is not yet in RUNNING (because today we only transit
> to it when ALL assigned tasks have completed restoration and are
> processible).
> 
> Also, as Sophie already mentioned, today during REBALANCING (in stream
> thread level, it is PARTITION_REVOKED -> PARTITION_ASSIGNED) some tasks may
> still be processed, and because of KIP-429 the RUNNING -> PARTITION_REVOKED
> -> PARTITION_ASSIGNED can be within a single call and hence be very
> "transient", whereas PARTITION_ASSIGNED -> RUNNING could still take time as
> it only does the transition when all tasks are processible.
> 
> So I think it makes sense to add a RESTORING state at the stream client
> level, defined as "at least one of the state stores assigned to this
> client, either global or non-global, is still restoring", and emphasize
> that during this state the client may still be able to process records,
> just probably not in full-speed.
> 
> As for REBALANCING, I think it is a bit less relevant to this KIP but
> here's a dump of my thoughts: if we can capture the period when "some tasks
> do not belong to any clients and hence processing is not full-speed" it
> would still be valuable, but unfortunately right now since
> onPartitionRevoked is not triggered each time on all clients, today's
> transition would just make a lot of very short REBALANCING state periods
> which are not really very useful. So if we still want to keep that state
> maybe we can consider the following tweak: at the thread level, we replace
> PARTITION_REVOKED / PARTITION_ASSIGNED with just a single REBALANCING
> state, and we will transit to this state upon onPartitionRevoked, but we
> will only transit out of this state upon onAssignment when the assignor
> decides there's no follow-up rebalance immediately (note we also schedule
> future rebalances for workload balancing, but that would still trigger
> transiting out of it). On the client level, we would enter REBALANCING when
> any threads enter REBALANCING and we would transit out of it when all
> transits out of it. In this case, it is possible that during a rebalance,
> only those clients that have to revoke some partition would enter the
> REBALANCING state while others that only get additional tasks would not
> enter this state at all.
> 
> With all that being said, I think the discussion around REBALANCING is less
> relevant to this KIP, and even for RESTORING I honestly think maybe we can
> make it in another KIP out of 406. It will, admittedly, leave us in a
> temporary phase where the FSM of Kafka Streams is not perfect, but that
> helps make incremental development progress for 406 itself.
> 
> 
> Guozhang
> 
> 
> On Mon, Oct 5, 2020 at 2:37 PM Sophie Blee-Goldman 
> wrote:
> 
>> It seems a little misleading, but I actually have no real qualms about
>> transitioning to the
>> REBALANCING state *after* RESTORING. One of the side effects of KIP-429 was
>> that in
>> most cases we actually don't transition to REBALANCING at all until the
>> very end of the
>> rebalance, so REBALANCING doesn't really mean all that much any more. These
>> days
>> the majority of the time an instance spends in the REBALANCING state is
>> actually spent
>> on restoration anyways.
>>
>> If users are listening in on the REBALANCING -> RUNNING transition, then
>> they might
>> also be listening for the RUNNING -> REBALANCING transition, so we may need
>> to actually
>> go RUNNING -> REBALANCING -> RESTORING -> REBALANCING -> RUNNING. This
>> 
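
For readers following along, a minimal Scala sketch of the user-facing callback this thread is about (topic names and serdes are placeholders); the listener registered via setStateListener is where a new RESTORING state, or the translated old states Matthias describes, would surface:

```
import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fsm-demo")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)

val builder = new StreamsBuilder()
builder.stream[String, String]("input-topic").to("output-topic")
val streams = new KafkaStreams(builder.build(), props)

// User code observing FSM transitions such as REBALANCING -> RUNNING;
// changing the state machine would change what arrives here, which is
// the compatibility concern discussed above.
streams.setStateListener((newState: KafkaStreams.State, oldState: KafkaStreams.State) =>
  println(s"Streams state: $oldState -> $newState"))

streams.start()
```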

[jira] [Resolved] (KAFKA-9497) Brokers start up even if SASL provider is not loaded and throw NPE when clients connect

2020-10-07 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-9497.
---
Resolution: Duplicate

> Brokers start up even if SASL provider is not loaded and throw NPE when 
> clients connect
> ---
>
> Key: KAFKA-9497
> URL: https://issues.apache.org/jira/browse/KAFKA-9497
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.1.1, 2.4.0
>Reporter: Rajini Sivaram
>Assignee: Ron Dagostino
>Priority: Major
> Fix For: 2.7.0
>
>
> Note: This is not a regression, this has been the behaviour since SASL was 
> first implemented in Kafka.
>  
> Sasl.createSaslServer and Sasl.createSaslClient may return null if a SASL 
> provider that works for the specified configs cannot be created. We don't 
> currently handle this case. As a result broker/client throws 
> NullPointerException if a provider has not been loaded. On the broker-side, 
> we allow brokers to start up successfully even if SASL provider for its 
> enabled mechanisms are not found. For SASL mechanisms 
> PLAIN/SCRAM-xx/OAUTHBEARER, the login module in Kafka loads the SASL 
> providers. If the login module is incorrectly configured, brokers startup and 
> then fail client connections when hitting NPE. Clients see disconnections 
> during authentication as a result. It is difficult to tell from the client or 
> broker logs why the failure occurred. We should fail during startup if SASL 
> providers are not found and provide better diagnostics for this case.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10582) Mirror Maker 2 not replicating new topics until restart

2020-10-07 Thread Robert Martin (Jira)
Robert Martin created KAFKA-10582:
-

 Summary: Mirror Maker 2 not replicating new topics until restart
 Key: KAFKA-10582
 URL: https://issues.apache.org/jira/browse/KAFKA-10582
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.5.1
 Environment: RHEL 7 Linux.
Reporter: Robert Martin


We are using Mirror Maker 2 from the 2.5.1 release for replication on some 
clusters.  Replication is working as expected for existing topics.  When we 
create a new topic, however, Mirror Maker 2 creates the replicated topic as 
expected but never starts replicating it.  If we restart Mirror Maker 2 within 
2-3 minutes the topic starts replicating as expected. From the documentation we 
have seen, it appears this should start replicating without a restart based on 
the settings we have.

*Example:*
Create topic "mytesttopic" on source cluster
MirrorMaker 2 creates "source.mytesttopic" on the target cluster with no issue
MirrorMaker 2 does not replicate "mytesttopic" -> "source.mytesttopic"
Restart MirrorMaker 2 and now replication works for "mytesttopic" -> 
"source.mytesttopic"

*Example config:*
name = source->target
group.id = source-to-target

# specify any number of cluster aliases
clusters = source, target

# connection information for each cluster
# This is a comma separated host:port pairs for each cluster
# for e.g. "A_host1:9092, A_host2:9092, A_host3:9092"
source.bootstrap.servers = sourcehosts:9092
target.bootstrap.servers = targethosts:9092

# enable and configure individual replication flows
source->target.enabled = true

# regex which defines which topics gets replicated. For eg "foo-.*"
source->target.topics = .*

target->source = false
target->source.topics = .*

# Setting replication factor of newly created remote topics
replication.factor=3

# Internal Topic Settings  
#
# The replication factor for mm2 internal topics "heartbeats", 
"B.checkpoints.internal" and
# "mm2-offset-syncs.B.internal"
# For anything other than development testing, a value greater than 1 is 
recommended to ensure availability such as 3.
checkpoints.topic.replication.factor=3
heartbeats.topic.replication.factor=3
offset-syncs.topic.replication.factor=3

# The replication factor for connect internal topics "mm2-configs.B.internal", 
"mm2-offsets.B.internal" and
# "mm2-status.B.internal"
# For anything other than development testing, a value greater than 1 is 
recommended to ensure availability such as 3.
offset.storage.replication.factor=3
status.storage.replication.factor=3
config.storage.replication.factor=3

# customize as needed
# replication.policy.separator = _
# sync.topic.acls.enabled = false
# emit.heartbeats.interval.seconds = 5

# tasks.max results in parallelism
tasks.max = 16
refresh.topics.enabled = true
sync.topic.configs.enabled = true
# Setting the below too low can result in performance issues
refresh.topics.interval.seconds = 300
refresh.groups.interval.seconds = 300
readahead.queue.capacity = 100

emit.checkpoints.enabled = true
emit.checkpoints.interval.seconds = 5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10028) Implement write path for feature versioning scheme

2020-10-07 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-10028.
-
Fix Version/s: 2.7.0
   Resolution: Fixed

Merged the PR to trunk.

> Implement write path for feature versioning scheme
> --
>
> Key: KAFKA-10028
> URL: https://issues.apache.org/jira/browse/KAFKA-10028
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kowshik Prakasam
>Assignee: Kowshik Prakasam
>Priority: Major
> Fix For: 2.7.0
>
>
> Goal is to implement various classes and integration for the write path of 
> the feature versioning system 
> ([KIP-584|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features]).
>  This is preceded by the read path implementation (KAFKA-10027). The write 
> path implementation involves developing the new controller API: 
> UpdateFeatures that enables transactional application of a set of 
> cluster-wide feature updates to the ZK {{'/features'}} node, along with 
> required ACL permissions.
>  
> Details about the write path are explained [in this 
> part|https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-ChangestoKafkaController]
>  of the KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10122) Consumer should allow heartbeat during rebalance as well

2020-10-07 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-10122.
---
Fix Version/s: 2.6.1
   2.7.0
   Resolution: Fixed

> Consumer should allow heartbeat during rebalance as well
> 
>
> Key: KAFKA-10122
> URL: https://issues.apache.org/jira/browse/KAFKA-10122
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.7.0, 2.6.1
>
>
> Today we disable heartbeats if the {{state != MemberState.STABLE}}. And if a 
> rebalance failed we set the state to UNJOINED. In the old API {{poll(long)}} 
> it is okay since we always try to complete the rebalance successfully within 
> the same call, so we would not be in UNJOINED or REBALANCING for a very long 
> time.
> But with the new {{poll(Duration)}} we may actually return while we are still 
> in UNJOINED or REBALANCING and it may take some time (smaller than 
> max.poll.interval but larger than session.timeout) before the next poll call, 
> and since heartbeat is disabled during this period of time we could be kicked 
> by the coordinator.
> The proposal I have is
> 1) allow heartbeat to be sent during REBALANCING as well.
> 2) when a join/sync response has a retriable error, do not set the state to 
> UNJOINED but stay in REBALANCING.
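
As a rough Scala illustration of the timing window described above (all values are hypothetical, not recommendations):

```
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

val props = new Properties()
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group")
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000")    // 10 s
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000") // 5 min

val consumer = new KafkaConsumer[String, String](props, new StringDeserializer, new StringDeserializer)
consumer.subscribe(java.util.Collections.singletonList("demo-topic"))

// poll(Duration) can return while the group is still rebalancing. If the
// application then stays away from poll() for more than session.timeout.ms
// (but less than max.poll.interval.ms) while heartbeats are disabled in a
// non-STABLE state, the coordinator kicks the member out -- the window this
// ticket closes by allowing heartbeats during REBALANCING.
consumer.poll(Duration.ofMillis(100))
```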



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #120

2020-10-07 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] Revert "KAFKA-10469: Resolve logger levels hierarchically (#9266)"


--
[...truncated 3.41 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #151

2020-10-07 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] Revert "KAFKA-10469: Resolve logger levels hierarchically (#9266)"


--
[...truncated 3.38 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED


Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Guozhang Wang
Hello Nikolay,

I've merged the PR to trunk. Thanks for your huge effort and patience going
through the review!

Guozhang

On Wed, Oct 7, 2020 at 6:52 AM Nikolay Izhikov  wrote:

> Great news!
> Thanks Magnus!
>
> I’ve updated the PR.
>
> Looks like we are ready to merge it.
>
> > On 7 Oct 2020, at 15:29, Magnus Edenhill wrote:
> >
> > Hi,
> >
> > ducktape v0.8.0 is now released.
> >
> > Regards,
> > Magnus
> >
> >
> > On Wed, 7 Oct 2020 at 10:50, Nikolay Izhikov wrote:
> >
> >> Hello.
> >>
> >> Got 4 approvals for PR [1]
> >> The only thing we need to be able to merge it is a ducktape 0.8 release.
> >> If ducktape team need any help with the release, please, let me know.
> >>
> >> [1] https://github.com/apache/kafka/pull/9196
> >>
> >>
> >>> On 21 Sep 2020, at 12:58, Nikolay Izhikov wrote:
> >>>
> >>> Hello.
> >>>
> >>> I also fixed two system tests that fail in trunk.
> >>>
> >>>
> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
> >>> streams_static_membership_test.py
> >>>
> >>> Please, take a look at my PR [1]
> >>>
> >>> [1] https://github.com/apache/kafka/pull/9312
> >>>
>  On 20 Sep 2020, at 06:11, Guozhang Wang wrote:
> 
>  I've triggered a system test on top of your branch.
> 
>  Maybe you could also re-run the jenkins unit tests since currently all
> >> of
>  them fail but you've only touched system tests, so I'd like to
> >> confirm
>  at least one successful run.
> 
>  On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov 
> >> wrote:
> 
> > Hello, Guozhang.
> >
> >> I can help run the test suite once your PR is cleanly rebased to
> >> verify
> > the whole suite works
> >
> > Thank you for joining the review.
> >
> > 1. PR rebased on the current trunk.
> >
> > 2. I triggered all tests in my private environment to verify them
> after
> > rebase.
> >  Will inform you once tests pass in my environment.
> >
> > 3. We need a new ducktape release [1] to be able to merge PR [2].
> >  For now, PR based on the ducktape trunk branch [3], not some
> > specific release.
> >  If ducktape team need any help with the release, please, let me
> > know.
> >
> > [1] https://github.com/confluentinc/ducktape/issues/245
> > [2] https://github.com/apache/kafka/pull/9196
> > [3]
> >
> >>
> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
> >
> >> On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
> >>
> >> Hello Nikolay,
> >>
> >> I can help run the test suite once your PR is cleanly rebased to
> >> verify
> > the
> >> whole suite works and then I can merge (I'm trusting Ivan and Magnus
> >> here
> >> for their reviews :)
> >>
> >> Guozhang
> >>
> >> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov <
> nizhi...@apache.org>
> > wrote:
> >>
> >>> Hello!
> >>>
> >>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
> >>> Committers, please, join the review.
> >>>
>  On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
> 
>  Hello!
> 
>  Just a friendly reminder.
> 
>  A patch to resolve some technical debt - python2 in system tests - is ready!
>  Can someone, please, take a look?
> 
>  https://github.com/apache/kafka/pull/9196
> 
> > On 28 Aug 2020, at 11:19, Nikolay Izhikov <nizhikov@gmail.com> wrote:
> >
> > Hello!
> >
> > Any feedback on this?
> > What else should I do to prepare the system tests migration?
> >
> >> On 24 Aug 2020, at 11:17, Nikolay Izhikov <nizhikov@gmail.com> wrote:
> >>
> >> Hello.
> >>
> >> PR [1] is ready.
> >> Please, review.
> >>
> >> But I need help with the following two questions:
> >>
> >> 1. We need a new release of ducktape which includes fixes [2],
> [3]
> > for
> >>> python3.
> >> I created the issue in ducktape repo [4].
> >> Can someone help me with the release?
> >>
> >> 2. I know that some companies run system tests for the trunk on
> a
> >>> regular basis.
> >> Can someone show me some results of these runs?
> >> So, I can compare failures in my PR and in the trunk.
> >>
> >> Results [5] of running all tests for my PR are available in the ticket [6]
> >>
> >> ```
> >> SESSION REPORT (ALL TESTS)
> >> ducktape version: 0.8.0
> >> session_id:   2020-08-23--002
> >> run time: 1010 minutes 46.483 seconds
> >> tests run:684
> >> passed:   505
> >> failed:   9
> >> 

[jira] [Resolved] (KAFKA-10186) Aborting transaction with pending data should throw non-fatal exception

2020-10-07 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10186.
-
Fix Version/s: 2.7.0
   Resolution: Fixed

> Aborting transaction with pending data should throw non-fatal exception
> ---
>
> Key: KAFKA-10186
> URL: https://issues.apache.org/jira/browse/KAFKA-10186
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Reporter: Sophie Blee-Goldman
>Assignee: Gokul Srinivas
>Priority: Major
>  Labels: needs-kip, newbie, newbie++
> Fix For: 2.7.0
>
>
> Currently if you try to abort a transaction with any pending (non-flushed) 
> data, the send exception is set to
> {code:java}
>  KafkaException("Failing batch since transaction was aborted"){code}
> This exception type is generally considered fatal, but this is a valid state 
> to be in -- the point of throwing the exception is to alert that the records 
> will not be sent, not that you are in an unrecoverable error state.
> We should throw a different (possibly new) type of exception here to 
> distinguish from fatal and recoverable errors.
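
A minimal Scala sketch of the scenario (topic and transactional id are placeholders):

```
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn")

val producer = new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)
producer.initTransactions()
producer.beginTransaction()

// The record is buffered client-side and may not be flushed yet...
val pending = producer.send(new ProducerRecord("demo-topic", "key", "value"))

// ...so aborting here fails the pending batch. Previously, awaiting the send
// then surfaced KafkaException("Failing batch since transaction was aborted"),
// which reads as fatal even though this is a valid state; per the trunk change
// log above, the pending send now fails with TransactionAbortedException.
producer.abortTransaction()
producer.close()
```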



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2020-10-07 Thread John Roesler
Ah, thanks Tom,

My only concern was that we might silently start logging a
lot more or less after the upgrade, but if the logging
behavior won't change at all, then the concern is moot.

Since the KIP is only to make the APIs return an accurate
representation of the actual log level, I have no concerns
at all.

Thanks,
-John

On Wed, 2020-10-07 at 17:00 +0100, Tom Bentley wrote:
> Hi John,
> 
> You're right, but note that this affects the level the broker/connect
> worker was _reporting_ for that logger, not the level at which the logger
> was actually logging, which would be TRACE both before and after upgrading.
> 
> I've added more of an explanation to the KIP, since it wasn't very clear.
> 
> Thanks for taking a look.
> 
> Tom
> 
> On Wed, Oct 7, 2020 at 4:29 PM John Roesler  wrote:
> 
> > Thanks for this KIP Tom,
> > 
> > Just to clarify the impact: In your KIP you described a
> > situation in which the root logger is configured at INFO, an
> > "kafka.foo" is configured at TRACE, and then "kafka.foo.bar"
> > is resolved to INFO.
> > 
> > Assuming this goes into 3.0, would it be the case that if I
> > had the above configuration, after upgrade, "kafka.foo.bar"
> > would just switch from INFO to TRACE on its own?
> > 
> > It seems like it must, since it's not configured explicitly,
> > and we are changing the inheritance rule from "inherit
> > directly from root" to "inherit from the closest configured
> > ancestor in the hierarchy".
> > 
> > Am I thinking about this right?
> > 
> > Thanks,
> > -John
> > 
> > On Wed, 2020-10-07 at 15:42 +0100, Tom Bentley wrote:
> > > Hi all,
> > > 
> > > I would like to start discussion on a small KIP which seeks to rectify an
> > > inconsistency between how Kafka reports logger levels and how logger
> > > configuration is inherited hierarchically in log4j.
> > > 
> > > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > > <
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> > > 
> > > If you have a few minutes to have a look I'd be grateful for any
> > feedback.
> > > Many thanks,
> > > 
> > > Tom



Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-07 Thread Bill Bejeck
Hi Anna,

I've updated the table to only show KAFKA-10023 as going into 2.7.

Thanks,
Bill

On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner  wrote:

> Hi Bill,
>
> Regarding KIP-612, only the first half of the KIP will get into 2.7
> release: Broker-wide and per-listener connection rate limits, including
> corresponding configs and metric (KAFKA-10023). I see that the table in the
> release plan tags KAFKA-10023 as "old", not sure what it refers to. Note
> that while KIP-612 was approved prior to 2.6 release, none of the
> implementation went into 2.6 release.
>
> The second half of the KIP that adds per-IP connection rate limiting will
> need to be postponed (KAFKA-10024) till the following release.
>
> Thanks,
> Anna
>
> On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:
>
> > Hi Kowshik,
> >
> > Given that the new feature is contained in the PR and the tooling is
> > follow-on work (minor work, but that's part of the submitted PR), I think
> > this is fine.
> >
> > Thanks,
> > BIll
> >
> > On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam 
> > wrote:
> >
> > > Hey Bill,
> > >
> > > For KIP-584, we are in
> > the
> > > process of reviewing/merging the write path PR into AK trunk:
> > > https://github.com/apache/kafka/pull/9001. As far as the KIP goes,
> this
> > > PR
> > > is a major milestone. The PR merge will hopefully be done before EOD
> > > tomorrow in time for the feature freeze. Beyond this PR, a couple of things
> > are
> > > left to be completed for this KIP: (1) tooling support and (2)
> > implementing
> > > support for feature version deprecation in the broker. In particular,
> > (1)
> > > is important for this KIP and the code changes are external to the
> broker
> > > (since it is a separate tool we intend to build). As of now, we won't
> be
> > > able to merge the tooling changes before feature freeze date. Would it
> be
> > > ok to merge the tooling changes before code freeze on 10/22? The
> tooling
> > > requirements are explained here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport
> > >
> > > I would love to hear thoughts from Boyang and Jun as well.
> > >
> > >
> > > Thanks,
> > > Kowshik
> > >
> > >
> > >
> > > On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:
> > >
> > > > Hi John,
> > > >
> > > > I've updated the list of expected KIPs for 2.7.0 with KIP-478.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Mon, Oct 5, 2020 at 11:26 AM John Roesler 
> > > wrote:
> > > >
> > > > > Hi Bill,
> > > > >
> > > > > Sorry about this, but I've just noticed that KIP-478 is
> > > > > missing from the list. The url is:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> > > > >
> > > > > The KIP was accepted a long time ago, and the implementation
> > > > > has been trickling in since 2.6 branch cut. However, most of
> > > > > the public API implementation is done now, so I think at
> > > > > this point, we can call it "released in 2.7.0". I'll make
> > > > > sure it's done by feature freeze.
> > > > >
> > > > > Thanks,
> > > > > -John
> > > > >
> > > > > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > > > > > All,
> > > > > >
> > > > > > With the KIP acceptance deadline passing yesterday, I've updated
> > the
> > > > > > planned KIP content section of the 2.7.0 release plan
> > > > > > <
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > > >
> > > > > > .
> > > > > >
> > > > > > Removed proposed KIPs for 2.7.0 not getting approval
> > > > > >
> > > > > >1. KIP-653
> > > > > ><
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > > > > >
> > > > > >2. KIP-608
> > > > > ><
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> > > > > >
> > > > > >3. KIP-508
> > > > > ><
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> > > > > >
> > > > > >
> > > > > > KIPs added
> > > > > >
> > > > > >1. KIP-671
> > > > > ><
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> > > > > >
> > > > > >
> > > > > >
> > > > > > Please let me know if I've missed anything.
> > > > > >
> > > > > > Thanks,
> > > > > > Bill
> > > > > >
> > > > > > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck 
> > > wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > Just a reminder that the KIP freeze is next Wednesday,
> September
> > > > 30th.
> > > > > > > Any KIP aiming to go in the 2.7.0 release needs to be 

Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2020-10-07 Thread Tom Bentley
Hi John,

You're right, but note that this affects the level the broker/connect
worker was _reporting_ for that logger, not the level at which the logger
was actually logging, which would be TRACE both before and after upgrading.

I've added more of an explanation to the KIP, since it wasn't very clear.

Thanks for taking a look.

Tom

On Wed, Oct 7, 2020 at 4:29 PM John Roesler  wrote:

> Thanks for this KIP Tom,
>
> Just to clarify the impact: In your KIP you described a
> situation in which the root logger is configured at INFO, an
> "kafka.foo" is configured at TRACE, and then "kafka.foo.bar"
> is resolved to INFO.
>
> Assuming this goes into 3.0, would it be the case that if I
> had the above configuration, after upgrade, "kafka.foo.bar"
> would just switch from INFO to TRACE on its own?
>
> It seems like it must, since it's not configured explicitly,
> and we are changing the inheritance rule from "inherit
> directly from root" to "inherit from the closest configured
> ancestor in the hierarchy".
>
> Am I thinking about this right?
>
> Thanks,
> -John
>
> On Wed, 2020-10-07 at 15:42 +0100, Tom Bentley wrote:
> > Hi all,
> >
> > I would like to start discussion on a small KIP which seeks to rectify an
> > inconsistency between how Kafka reports logger levels and how logger
> > configuration is inherited hierarchically in log4j.
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> >
> >
> > If you have a few minutes to have a look I'd be grateful for any
> feedback.
> >
> > Many thanks,
> >
> > Tom
>
>


Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2020-10-07 Thread John Roesler
Thanks for this KIP Tom,

Just to clarify the impact: In your KIP you described a
situation in which the root logger is configured at INFO, an
"kafka.foo" is configured at TRACE, and then "kafka.foo.bar"
is resolved to INFO.

Assuming this goes into 3.0, would it be the case that if I
had the above configuration, after upgrade, "kafka.foo.bar"
would just switch from INFO to TRACE on its own? 

It seems like it must, since it's not configured explicitly,
and we are changing the inheritance rule from "inherit
directly from root" to "inherit from the closest configured
ancestor in the hierarchy".

Am I thinking about this right?

Thanks,
-John

On Wed, 2020-10-07 at 15:42 +0100, Tom Bentley wrote:
> Hi all,
> 
> I would like to start discussion on a small KIP which seeks to rectify an
> inconsistency between how Kafka reports logger levels and how logger
> configuration is inherited hierarchically in log4j.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> 
> 
> If you have a few minutes to have a look I'd be grateful for any feedback.
> 
> Many thanks,
> 
> Tom
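
To make the inheritance rule concrete, a small Scala sketch against log4j 1.x, using the logger names from the KIP's example:

```
import org.apache.log4j.{Level, Logger}

Logger.getRootLogger.setLevel(Level.INFO)
Logger.getLogger("kafka.foo").setLevel(Level.TRACE)

val bar = Logger.getLogger("kafka.foo.bar") // not configured explicitly
bar.getLevel          // null: no level is set directly on this logger
bar.getEffectiveLevel // TRACE: log4j inherits from the closest configured
                      // ancestor ("kafka.foo"); the KIP makes Kafka's
                      // describe-loggers APIs report this effective level
                      // instead of the root logger's INFO
```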



[DISCUSS] KIP-676: Respect the logging hierarchy

2020-10-07 Thread Tom Bentley
Hi all,

I would like to start discussion on a small KIP which seeks to rectify an
inconsistency between how Kafka reports logger levels and how logger
configuration is inherited hierarchically in log4j.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy


If you have a few minutes to have a look I'd be grateful for any feedback.

Many thanks,

Tom


Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Nikolay Izhikov
Great news! 
Thanks Magnus!

I’ve updated the PR.

Looks like we are ready to merge it.

> On 7 Oct 2020, at 15:29, Magnus Edenhill wrote:
> 
> Hi,
> 
> ducktape v0.8.0 is now released.
> 
> Regards,
> Magnus
> 
> 
> On Wed, 7 Oct 2020 at 10:50, Nikolay Izhikov wrote:
> 
>> Hello.
>> 
>> Got 4 approvals for PR [1]
>> The only thing we need to be able to merge it is a ducktape 0.8 release.
>> If the ducktape team needs any help with the release, please let me know.
>> 
>> [1] https://github.com/apache/kafka/pull/9196
>> 
>> 
>>> On 21 Sep 2020, at 12:58, Nikolay Izhikov wrote:
>>> 
>>> Hello.
>>> 
>>> I also fixed two system tests that fail in trunk.
>>> 
>>> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
>>> streams_static_membership_test.py
>>> 
>>> Please, take a look at my PR [1]
>>> 
>>> [1] https://github.com/apache/kafka/pull/9312
>>> 
 On 20 Sep 2020, at 06:11, Guozhang Wang wrote:
 
 I've triggered a system test on top of your branch.
 
 Maybe you could also re-run the Jenkins unit tests: currently all of
 them fail, but you've only touched system tests, so I'd like to
 confirm at least one successful run.
 
 On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov wrote:
 
> Hello, Guozhang.
> 
>> I can help run the test suite once your PR is cleanly rebased to
>> verify the whole suite works
> 
> Thank you for joining the review.
> 
> 1. PR rebased on the current trunk.
> 
> 2. I triggered all tests in my private environment to verify them after
> rebase.
>  Will inform you once the tests pass in my environment.
> 
> 3. We need a new ducktape release [1] to be able to merge PR [2].
>  For now, the PR is based on the ducktape trunk branch [3], not a
> specific release.
>  If the ducktape team needs any help with the release, please let
> me know.
> 
> [1] https://github.com/confluentinc/ducktape/issues/245
> [2] https://github.com/apache/kafka/pull/9196
> [3]
> 
>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
> 
>> On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
>> 
>> Hello Nikolay,
>> 
>> I can help run the test suite once your PR is cleanly rebased to verify
>> the whole suite works and then I can merge (I'm trusting Ivan and Magnus
>> here for their reviews :)
>> 
>> Guozhang
>> 
>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov wrote:
>> 
>>> Hello!
>>> 
>>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
>>> Committers, please, join the review.
>>> 
 On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
 
 Hello!
 
 Just a friendly reminder.
 
 The patch to resolve some technical debt - python2 in system tests -
 is ready!
 Can someone please take a look?
 
 https://github.com/apache/kafka/pull/9196
 
> On 28 Aug 2020, at 11:19, Nikolay Izhikov wrote:
> 
> Hello!
> 
> Any feedback on this?
> What should I additionally do to prepare the system tests migration?
> 
>> On 24 Aug 2020, at 11:17, Nikolay Izhikov wrote:
>> 
>> Hello.
>> 
>> PR [1] is ready.
>> Please review.
>> 
>> But I need help with the following two questions:
>> 
>> 1. We need a new release of ducktape which includes fixes [2], [3] for python3.
>> I created the issue in ducktape repo [4].
>> Can someone help me with the release?
>> 
>> 2. I know that some companies run system tests for the trunk on a regular basis.
>> Can someone show me some results of these runs?
>> So I can compare failures in my PR and in the trunk.
>> 
>> Results [5] of a full run for my PR are available in the ticket [6]
>> 
>> ```
>> SESSION REPORT (ALL TESTS)
>> ducktape version: 0.8.0
>> session_id:   2020-08-23--002
>> run time: 1010 minutes 46.483 seconds
>> tests run: 684
>> passed:   505
>> failed:   9
>> ignored:  170
>> ```
>> 
>> [1] https://github.com/apache/kafka/pull/9196
>> [2] https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>> [3] https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>> [4] https://github.com/confluentinc/ducktape/issues/245
>> [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>> [6] 

[jira] [Created] (KAFKA-10581) Ability to filter events at Kafka broker based on Kafka header value

2020-10-07 Thread Bhukailas Reddy (Jira)
Bhukailas Reddy created KAFKA-10581:
---

 Summary: Ability to filter events at Kafka broker based on Kafka 
header value
 Key: KAFKA-10581
 URL: https://issues.apache.org/jira/browse/KAFKA-10581
 Project: Kafka
  Issue Type: New Feature
  Components: clients, consumer
Reporter: Bhukailas Reddy


Provide the ability to filter Kafka message events at the Kafka broker based on 
the consumer's interest.
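For context, such filtering currently has to happen on the client side
after fetch; a minimal sketch using the third-party kafka-python
client (an assumption here; topic and header names are hypothetical):

```python
# Broker-side filtering does not exist today, so this happens
# client-side after the records are fetched. Topic and header names
# are hypothetical.
from kafka import KafkaConsumer

consumer = KafkaConsumer("events", bootstrap_servers="localhost:9092")
for record in consumer:
    # ConsumerRecord.headers is a list of (key, value-bytes) tuples
    headers = dict(record.headers or [])
    if headers.get("event-type") == b"order-created":
        print("matched:", record.topic, record.partition, record.offset)
```

Every consumer still pays the network cost of fetching the unmatched
records, which is the gap this issue asks the broker to close.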



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Magnus Edenhill
Hi,

ducktape v0.8.0 is now released.

Regards,
Magnus


On Wed, 7 Oct 2020 at 10:50, Nikolay Izhikov wrote:

> Hello.
>
> Got 4 approvals for PR [1]
> The only thing we need to be able to merge it is a ducktape 0.8 release.
>  If the ducktape team needs any help with the release, please let me know.
>
> [1] https://github.com/apache/kafka/pull/9196
>
>
> > On 21 Sep 2020, at 12:58, Nikolay Izhikov wrote:
> >
> > Hello.
> >
> > I also fixed two system tests that fail in trunk.
> >
> > streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
> > streams_static_membership_test.py
> >
> > Please, take a look at my PR [1]
> >
> > [1] https://github.com/apache/kafka/pull/9312
> >
> >> On 20 Sep 2020, at 06:11, Guozhang Wang wrote:
> >>
> >> I've triggered a system test on top of your branch.
> >>
> >> Maybe you could also re-run the Jenkins unit tests: currently all of
> >> them fail, but you've only touched system tests, so I'd like to
> >> confirm at least one successful run.
> >>
> >> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov wrote:
> >>
> >>> Hello, Guozhang.
> >>>
>  I can help run the test suite once your PR is cleanly rebased to
>  verify the whole suite works
> >>>
> >>> Thank you for joining the review.
> >>>
> >>> 1. PR rebased on the current trunk.
> >>>
> >>> 2. I triggered all tests in my private environment to verify them after
> >>> rebase.
> >>>   Will inform you once the tests pass in my environment.
> >>>
> >>> 3. We need a new ducktape release [1] to be able to merge PR [2].
> >>>   For now, the PR is based on the ducktape trunk branch [3], not a
> >>> specific release.
> >>>   If the ducktape team needs any help with the release, please let
> >>> me know.
> >>>
> >>> [1] https://github.com/confluentinc/ducktape/issues/245
> >>> [2] https://github.com/apache/kafka/pull/9196
> >>> [3]
> >>>
> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
> >>>
>  On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
> 
>  Hello Nikolay,
> 
>  I can help run the test suite once your PR is cleanly rebased to verify
>  the whole suite works and then I can merge (I'm trusting Ivan and Magnus
>  here for their reviews :)
> 
>  Guozhang
> 
>  On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov wrote:
> 
> > Hello!
> >
> > I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
> > Committers, please, join the review.
> >
> >> On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
> >>
> >> Hello!
> >>
> >> Just a friendly reminder.
> >>
> >> The patch to resolve some technical debt - python2 in system tests -
> >> is ready!
> >> Can someone please take a look?
> >>
> >> https://github.com/apache/kafka/pull/9196
> >>
> >>> On 28 Aug 2020, at 11:19, Nikolay Izhikov wrote:
> >>>
> >>> Hello!
> >>>
> >>> Any feedback on this?
> >>> What should I additionally do to prepare the system tests migration?
> >>>
>  On 24 Aug 2020, at 11:17, Nikolay Izhikov wrote:
> 
>  Hello.
> 
>  PR [1] is ready.
>  Please review.
> 
>  But I need help with the following two questions:
> 
>  1. We need a new release of ducktape which includes fixes [2], [3] for python3.
>  I created the issue in ducktape repo [4].
>  Can someone help me with the release?
> 
>  2. I know that some companies run system tests for the trunk on a regular basis.
>  Can someone show me some results of these runs?
>  So I can compare failures in my PR and in the trunk.
> 
>  Results [5] of a full run for my PR are available in the ticket [6]
> 
>  ```
>  SESSION REPORT (ALL TESTS)
>  ducktape version: 0.8.0
>  session_id:   2020-08-23--002
>  run time: 1010 minutes 46.483 seconds
>  tests run: 684
>  passed:   505
>  failed:   9
>  ignored:  170
>  ```
> 
>  [1] https://github.com/apache/kafka/pull/9196
>  [2] https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>  [3] https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>  [4] https://github.com/confluentinc/ducktape/issues/245
>  [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>  [6] https://issues.apache.org/jira/browse/KAFKA-10402
> 
> > On 14 Aug 2020, at 21:26, Ismael Juma wrote:
> >
> > +1
> >
> > On Fri, Aug 14, 2020 at 7:42 AM John Roesler <
> 

Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Nikolay Izhikov
Hello.

Got 4 approvals for PR [1]
The only thing we need to be able to merge it is a ducktape 0.8 release.
 If the ducktape team needs any help with the release, please let me know.

[1] https://github.com/apache/kafka/pull/9196


> On 21 Sep 2020, at 12:58, Nikolay Izhikov wrote:
> 
> Hello.
> 
> I also fixed two system tests that fail in trunk.
> 
> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
> streams_static_membership_test.py
> 
> Please, take a look at my PR [1]
> 
> [1] https://github.com/apache/kafka/pull/9312
> 
>> On 20 Sep 2020, at 06:11, Guozhang Wang wrote:
>> 
>> I've triggered a system test on top of your branch.
>> 
>> Maybe you could also re-run the Jenkins unit tests: currently all of
>> them fail, but you've only touched system tests, so I'd like to confirm
>> at least one successful run.
>> 
>> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov  wrote:
>> 
>>> Hello, Guozhang.
>>> 
 I can help run the test suite once your PR is cleanly rebased to verify
 the whole suite works
>>> 
>>> Thank you for joining the review.
>>> 
>>> 1. PR rebased on the current trunk.
>>> 
>>> 2. I triggered all tests in my private environment to verify them after
>>> rebase.
>>>   Will inform you once the tests pass in my environment.
>>> 
>>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>>   For now, the PR is based on the ducktape trunk branch [3], not a
>>> specific release.
>>>   If the ducktape team needs any help with the release, please let
>>> me know.
>>> 
>>> [1] https://github.com/confluentinc/ducktape/issues/245
>>> [2] https://github.com/apache/kafka/pull/9196
>>> [3]
>>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>>> 
 On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
 
 Hello Nikolay,
 
 I can help run the test suite once your PR is cleanly rebased to verify
 the whole suite works and then I can merge (I'm trusting Ivan and Magnus here
 for their reviews :)
 
 Guozhang
 
 On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov wrote:
 
> Hello!
> 
> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
> Committers, please, join the review.
> 
>> On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
>> 
>> Hello!
>> 
>> Just a friendly reminder.
>> 
>> The patch to resolve some technical debt - python2 in system tests -
>> is ready!
>> Can someone please take a look?
>> 
>> https://github.com/apache/kafka/pull/9196
>> 
>>> On 28 Aug 2020, at 11:19, Nikolay Izhikov wrote:
>>> 
>>> Hello!
>>> 
>>> Any feedback on this?
>>> What should I additionally do to prepare the system tests migration?
>>> 
 On 24 Aug 2020, at 11:17, Nikolay Izhikov wrote:
 
 Hello.
 
 PR [1] is ready.
 Please review.
 
 But I need help with the following two questions:
 
 1. We need a new release of ducktape which includes fixes [2], [3] for python3.
 I created the issue in ducktape repo [4].
 Can someone help me with the release?
 
 2. I know that some companies run system tests for the trunk on a regular basis.
 Can someone show me some results of these runs?
 So I can compare failures in my PR and in the trunk.
 
 Results [5] of a full run for my PR are available in the ticket [6]
 
 ```
 SESSION REPORT (ALL TESTS)
 ducktape version: 0.8.0
 session_id:   2020-08-23--002
 run time: 1010 minutes 46.483 seconds
 tests run: 684
 passed:   505
 failed:   9
 ignored:  170
 ```
 
 [1] https://github.com/apache/kafka/pull/9196
 [2] https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
 [3] https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
 [4] https://github.com/confluentinc/ducktape/issues/245
 [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
 [6] https://issues.apache.org/jira/browse/KAFKA-10402
 
> On 14 Aug 2020, at 21:26, Ismael Juma wrote:
> 
> +1
> 
> On Fri, Aug 14, 2020 at 7:42 AM John Roesler wrote:
> 
>> Thanks Nikolay,
>> 
>> No objection. This would be very nice to have.
>> 
>> Thanks,
>> John
>> 
>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>> Hello.
>>> 
 If anyone's interested in porting it to Python 3 it would be a good change.
>>>