[jira] [Created] (KAFKA-10596) Adding some state to TransactionMetadata which explicitly indicates that the transaction was timed out

2020-10-09 Thread HaiyuanZhao (Jira)
HaiyuanZhao created KAFKA-10596:
---

 Summary: Adding some state to TransactionMetadata which explicitly 
indicates that the transaction was timed out
 Key: KAFKA-10596
 URL: https://issues.apache.org/jira/browse/KAFKA-10596
 Project: Kafka
  Issue Type: Improvement
Reporter: HaiyuanZhao
Assignee: HaiyuanZhao








[jira] [Resolved] (KAFKA-10584) IndexSearchType should use sealed trait instead of Enumeration

2020-10-09 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10584.

Fix Version/s: 2.7.0
   Resolution: Fixed

> IndexSearchType should use sealed trait instead of Enumeration
> --
>
> Key: KAFKA-10584
> URL: https://issues.apache.org/jira/browse/KAFKA-10584
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Jun Rao
>Assignee: huxihx
>Priority: Major
>  Labels: newbie
> Fix For: 2.7.0
>
>
> In Scala, we prefer sealed traits over Enumeration since the former gives you 
> exhaustiveness checking. With Scala Enumeration, you don't get a warning if 
> you add a new value that is not handled in a given pattern match.
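
For illustration, here is a minimal sketch of the difference (the names below are made up for the example, not the actual definitions in core):

```
object ExhaustivenessSketch {
  // With Enumeration, a non-exhaustive match compiles without any warning and
  // only fails at runtime with a MatchError.
  object IndexSearchEnum extends Enumeration {
    val KEY, VALUE = Value
  }

  def describeEnum(t: IndexSearchEnum.Value): String = t match {
    case IndexSearchEnum.KEY => "search by key"
    // VALUE is silently unhandled: no compile-time warning.
  }

  // With a sealed trait, scalac warns when a pattern match misses a case.
  sealed trait IndexSearchType
  case object KeySearch   extends IndexSearchType
  case object ValueSearch extends IndexSearchType

  def describe(t: IndexSearchType): String = t match {
    case KeySearch   => "search by key"
    case ValueSearch => "search by value" // drop this line and the compiler flags the match as non-exhaustive
  }
}
```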





Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #132

2020-10-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #161

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9393: DeleteRecords may cause extreme lock contention for large 
partition directories (#7929)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > repartition should 
repartition a KStream STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > repartition should 
repartition a KStream PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED


[jira] [Created] (KAFKA-10595) Explain idempotent producer in max.in.flight.requests.per.connection

2020-10-09 Thread Yeva Byzek (Jira)
Yeva Byzek created KAFKA-10595:
--

 Summary: Explain idempotent producer in 
max.in.flight.requests.per.connection
 Key: KAFKA-10595
 URL: https://issues.apache.org/jira/browse/KAFKA-10595
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Yeva Byzek


A user asked:

 
{quote}Is the idempotent producer also a total order producer? meaning, despite 
having max.inflight > 1, it will keep message production ordering? My 
understanding of this has always been no, but I'd like to confirm...
{quote}
 

I believe a contributing factor to this question is that 
[https://kafka.apache.org/documentation/#max.in.flight.requests.per.connection] 
reads

 
{quote}Note that if this setting is set to be greater than 1 and there are 
failed sends, there is a risk of message re-ordering due to retries (i.e., if 
retries are enabled).
{quote}
 

Suggestion: it may be clearer if we augmented this description to say that 
message re-ordering would not happen if {{enable.idempotence=true}}.
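
For illustration, a minimal producer setup along those lines, using the standard 
Java producer client from Scala (broker address and topic name are placeholders):

```
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object IdempotentProducerSketch extends App {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

  // With idempotence enabled, retries carry producer id + sequence numbers, so the
  // broker can de-duplicate and keep per-partition order even with several
  // in-flight requests (max.in.flight must stay <= 5 for this to be allowed).
  props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5")
  props.put(ProducerConfig.ACKS_CONFIG, "all")

  val producer = new KafkaProducer[String, String](props)
  producer.send(new ProducerRecord[String, String]("my-topic", "key", "value")) // topic is a placeholder
  producer.close()
}
```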

 





Re: Can I get a review for a documentation update (KAFKA-10473)?

2020-10-09 Thread James Cheng
Hi,

Would someone be able to review this pull request for me?

This is a small documentation change.

Thanks,
-James

> On Sep 24, 2020, at 11:53 PM, James Cheng  wrote:
> 
> Hi,
> 
> Can I get a review from one of the committers for this documentation update?
> 
> I am adding docs for the following JMX metrics:
>   kafka.log,type=Log,name=Size
>   kafka.log,type=Log,name=NumLogSegments
>   kafka.log,type=Log,name=LogStartOffset
>   kafka.log,type=Log,name=LogEndOffset
> 
> 
> https://issues.apache.org/jira/browse/KAFKA-10473 
> 
> https://github.com/apache/kafka/pull/9276 
> 
> 
> The pull request page lists lots of failed checks. However, this pull request 
> only modifies an HTML file, and the test failures don't seem related to my 
> changes.
> 
> Thanks,
> -James
> 



Re: [VOTE] KIP-630: Kafka Raft Snapshot

2020-10-09 Thread Jose Garcia Sancio
Thanks for the votes Jun, Jason, Ron, Lucas and Guozhang.

Thanks for the feedback Ron and Jun.

Agree with your comments, Ron. I have updated those configurations to
metadata.snapshot.min.changed_records.ratio and
metadata.snapshot.min.new_records.size. I had thought of using "cleanable"
to keep the name consistent with the compaction policy configuration, but
snapshots are different enough that that consistency is not needed.

Jun, I missed the incorrect mention of OFFSET_OUT_OF_RANGE. I have
replaced it with POSITION_OUT_OF_RANGE.

Changes to the KIP are here:
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158864763=39=37

I believe that we have enough votes to accept this KIP. I'll close the
voting on Monday.
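
For readers of the archive, here is a rough sketch of how I read the interaction
of the two settings, matching Ron's example further down in the quoted thread
(names and types are illustrative, not taken from the KIP text):

```
object SnapshotTriggerSketch {
  final case class SnapshotConfig(
    minNewRecordsSize: Long,        // metadata.snapshot.min.new_records.size, in bytes
    minChangedRecordsRatio: Double  // metadata.snapshot.min.changed_records.ratio
  )

  def shouldSnapshot(
      config: SnapshotConfig,
      newBytesSinceLastSnapshot: Long,
      changeRecordsSinceLastSnapshot: Long,  // updates/deletes, not plain accretions
      recordsInLastSnapshot: Option[Long]    // None if no snapshot exists yet
  ): Boolean = recordsInLastSnapshot match {
    // No snapshot yet: only the new-records size threshold applies.
    case None => newBytesSinceLastSnapshot >= config.minNewRecordsSize
    // Snapshot with N records: both the size and the change ratio must be met.
    case Some(n) =>
      newBytesSinceLastSnapshot >= config.minNewRecordsSize &&
        changeRecordsSinceLastSnapshot.toDouble / n >= config.minChangedRecordsRatio
  }
}
```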

On Mon, Oct 5, 2020 at 2:04 PM Jun Rao  wrote:
>
> Hi, Jose,
>
> Thanks for the KIP. +1. A couple of minor comments below.
>
> 1. The new configuration names suggested by Ron sound reasonable.
> 2. It seems that OFFSET_OUT_OF_RANGE in the wiki needs to be changed to
> POSITION_OUT_OF_RANGE.
>
> Jun
>
> On Mon, Oct 5, 2020 at 9:46 AM Jason Gustafson  wrote:
>
> > +1 Thanks for the KIP!
> >
> > -Jason
> >
> > On Mon, Oct 5, 2020 at 9:03 AM Ron Dagostino  wrote:
> >
> > > Thanks for the KIP, Jose.  +1 (non-binding) from me.
> > >
> > > I do have one comment/confusion.
> > >
> > > Upon re-reading the latest version, I am confused about the name of
> > > the proposed "metadata.snapshot.min.records" config.  Is this a size,
> > > or is it a count?  I think it is about a size but want to be sure.  I
> > > also wonder if it is about changes (updates/deletes) rather than just
> > > additions/accretions, or is it independent of that?
> > >
> > > I'm also unclear about the definition of the
> > > "metadata.snapshot.min.cleanable.ratio" config -- is that a ratio of a
> > > *number* of new records to the number of snapshot records?  Or is it a
> > > *size* ratio?  I think it is a ratio of numbers of records rather than
> > > a ratio of sizes.  I think this one is also about changes
> > > (updates/deletes) rather than just additions/accretions.
> > >
> > > I'm wondering if we can be clearer with the names of these two configs
> > > to make their definitions more apparent.  For example, assuming
> > > certain definitions as mentioned above:
> > >
> > > metadata.snapshot.min.new_records.size -- the minimum size of new
> > > records required before a snapshot can occur
> > > metadata.snapshot.min.change_records.ratio -- the minimum ratio of the
> > > number of change (i.e. not simply accretion) records to the number of
> > > records in the last snapshot (if any) that must be achieved before a
> > > snapshot can occur.
> > >
> > > For example, if there is no snapshot yet, then ".new_records.size"
> > > must be written before a snapshot is allowed.  If there is a snapshot
> > > with N records, then before a snapshot is allowed both
> > > ".new_records.size" must be written and ".change_records.ratio" must
> > > be satisfied such that the number of changes (not accretions) divided
> > > by N meets the ratio.
> > >
> > > Ron
> > >
> > >
> > >
> > >
> > >
> > > On Fri, Oct 2, 2020 at 8:14 PM Lucas Bradstreet 
> > > wrote:
> > > >
> > > > Thanks for the KIP! Non-binding +1
> > > >
> > > > On Fri, Oct 2, 2020 at 3:30 PM Guozhang Wang 
> > wrote:
> > > >
> > > > > Thanks Jose! +1 from me.
> > > > >
> > > > > On Fri, Oct 2, 2020 at 3:18 PM Jose Garcia Sancio <
> > > jsan...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start a vote on KIP-630.
> > > > > >
> > > > > > KIP: https://cwiki.apache.org/confluence/x/exV4CQ
> > > > > > Discussion Thread:
> > > > > >
> > > > > >
> > > > >
> > >
> > https://lists.apache.org/thread.html/r9468d1f276385695a2d6d48f6dfbdc504c445fc5745aaa606d138fed%40%3Cdev.kafka.apache.org%3E
> > > > > >
> > > > > > Thank you
> > > > > > --
> > > > > > -Jose
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > >
> >



-- 
-Jose


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #160

2020-10-09 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #128

2020-10-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9393) DeleteRecords may cause extreme lock contention for large partition directories

2020-10-09 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-9393.

Fix Version/s: 2.7.0
 Assignee: Gardner Vickers
   Resolution: Fixed

merged to trunk.

> DeleteRecords may cause extreme lock contention for large partition 
> directories
> ---
>
> Key: KAFKA-9393
> URL: https://issues.apache.org/jira/browse/KAFKA-9393
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Lucas Bradstreet
>Assignee: Gardner Vickers
>Priority: Major
> Fix For: 2.7.0
>
>
> DeleteRecords, frequently used by KStreams, triggers a 
> Log.maybeIncrementLogStartOffset call, which calls 
> kafka.log.ProducerStateManager.listSnapshotFiles, which in turn calls 
> java.io.File.listFiles on the partition dir. The time taken to list this 
> directory can be extreme for partitions with many small segments (e.g 2), 
> taking multiple seconds to finish. This causes lock contention for the log, 
> and if produce requests are also occurring for the same log, it can cause a 
> majority of request handler threads to become blocked waiting for the 
> DeleteRecords call to finish.
> I believe this is a problem going back to the initial implementation of the 
> transactional producer, but I need to confirm how far back it goes.
> One possible solution is to maintain a producer state snapshot aligned to the 
> log segment, and simply delete it whenever we delete a segment. This would 
> ensure that we never have to perform a directory scan.
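
As a purely illustrative sketch of that direction (not the actual
ProducerStateManager code), the idea is to index snapshot files by segment base
offset and keep the index in sync with segment rolls/deletes, so listing
snapshots no longer scans the directory:

```
import java.io.File
import java.util.concurrent.ConcurrentSkipListMap
import scala.jdk.CollectionConverters._

class SnapshotIndexSketch(partitionDir: File) {
  private val snapshots = new ConcurrentSkipListMap[java.lang.Long, File]()

  // Scan the directory once at startup, then keep the index in sync with the
  // segment lifecycle instead of re-listing on every DeleteRecords call.
  def load(): Unit =
    Option(partitionDir.listFiles((_, name) => name.endsWith(".snapshot")))
      .getOrElse(Array.empty[File])
      .foreach(f => snapshots.put(baseOffset(f.getName), f))

  def onSegmentRoll(segmentBaseOffset: Long, snapshotFile: File): Unit =
    snapshots.put(segmentBaseOffset, snapshotFile)

  // Deleting a segment also deletes its aligned snapshot file.
  def onSegmentDelete(segmentBaseOffset: Long): Unit =
    Option(snapshots.remove(segmentBaseOffset)).foreach(_.delete())

  def listSnapshotFiles: Iterable[File] = snapshots.values.asScala

  private def baseOffset(fileName: String): Long =
    fileName.stripSuffix(".snapshot").toLong
}
```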





[jira] [Resolved] (KAFKA-10509) Add metric to track throttle time due to hitting connection rate quota

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10509.
-
Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/9317], merged on 9/28.

> Add metric to track throttle time due to hitting connection rate quota
> --
>
> Key: KAFKA-10509
> URL: https://issues.apache.org/jira/browse/KAFKA-10509
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Major
> Fix For: 2.7.0
>
>
> See KIP-612.
>  
> kafka.network:type=socket-server-metrics,name=connection-accept-throttle-time,listener=\{listenerName}
>  * Type: SampledStat.Avg
>  * Description: Average throttle time due to violating per-listener or 
> broker-wide connection acceptance rate quota on a given listener.
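
For anyone who wants to poke at the new metric locally, a hedged sketch of
browsing it over JMX (the JMX port, and how JmxReporter lays out the attribute,
are assumptions; the pattern query is there precisely to avoid guessing):

```
import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}
import scala.jdk.CollectionConverters._

object ThrottleTimeProbe extends App {
  // Assumes the broker exposes remote JMX, e.g. started with JMX_PORT=9999.
  val url  = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi")
  val conn = JMXConnectorFactory.connect(url).getMBeanServerConnection

  // Pattern query so we don't have to guess the exact key order or attribute
  // layout used for connection-accept-throttle-time.
  val pattern = new ObjectName("kafka.network:type=socket-server-metrics,*")
  conn.queryNames(pattern, null).asScala.foreach { name =>
    println(name)
    conn.getMBeanInfo(name).getAttributes.foreach(a => println(s"  ${a.getName}"))
  }
}
```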





[jira] [Resolved] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6078.

Resolution: Fixed

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.7.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Resolved] (KAFKA-8940) Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8940.

Resolution: Fixed

> Flaky Test SmokeTestDriverIntegrationTest.shouldWorkWithRebalance
> -
>
> Key: KAFKA-8940
> URL: https://issues.apache.org/jira/browse/KAFKA-8940
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
>
> I lost the screenshot, unfortunately... it reports that the set of expected 
> records does not match the received records.





[jira] [Resolved] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-10140.
-
Resolution: Fixed

> Incremental config api excludes plugin config changes
> -
>
> Key: KAFKA-10140
> URL: https://issues.apache.org/jira/browse/KAFKA-10140
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Priority: Critical
> Fix For: 2.7.0
>
>
> I was trying to alter the JMX metric filters using the incremental alter 
> config API and hit this error:
> ```
> java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
>   at scala.collection.MapLike.default(MapLike.scala:235)
>   at scala.collection.MapLike.default$(MapLike.scala:234)
>   at scala.collection.AbstractMap.default(Map.scala:65)
>   at scala.collection.MapLike.apply(MapLike.scala:144)
>   at scala.collection.MapLike.apply$(MapLike.scala:143)
>   at scala.collection.AbstractMap.apply(Map.scala:65)
>   at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
>   at 
> kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
>   at 
> kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
>   at 
> kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:273)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at 
> kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
>   at 
> kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ```
> It looks like we are only allowing changes to the keys defined in 
> `KafkaConfig` through this API. This excludes config changes to any plugin 
> components such as `JmxReporter`. 
> Note that I was able to use the regular `alterConfig` API to change this 
> config.
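
For context, a minimal sketch of the incremental path being described (bootstrap
address, broker id and filter value are placeholders; the point is only which
Admin API is involved, not that this exact call succeeds):

```
import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AlterConfigOp, ConfigEntry}
import org.apache.kafka.common.config.ConfigResource

object IncrementalAlterSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // placeholder
  val admin = AdminClient.create(props)

  // Broker id "0" and the filter value are placeholders.
  val broker = new ConfigResource(ConfigResource.Type.BROKER, "0")
  val op     = new AlterConfigOp(new ConfigEntry("metrics.jmx.blacklist", "kafka.*"), AlterConfigOp.OpType.SET)
  val ops: java.util.Collection[AlterConfigOp] = Collections.singletonList(op)

  // Incremental path: per-key SET/DELETE ops. Per the report above, this path
  // only accepted keys defined in KafkaConfig, so plugin keys like this one failed.
  admin.incrementalAlterConfigs(Collections.singletonMap(broker, ops)).all().get()

  admin.close()
}
```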





[jira] [Resolved] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6824.

Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.





[jira] [Resolved] (KAFKA-8257) Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8257.

Resolution: Fixed

> Flaky Test DynamicConnectionQuotaTest#testDynamicListenerConnectionQuota
> 
>
> Key: KAFKA-8257
> URL: https://issues.apache.org/jira/browse/KAFKA-8257
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3566/tests]
> {quote}java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at kafka.server.BaseRequestTest.receiveResponse(BaseRequestTest.scala:87)
> at kafka.server.BaseRequestTest.sendAndReceive(BaseRequestTest.scala:148)
> at 
> kafka.network.DynamicConnectionQuotaTest.verifyConnection(DynamicConnectionQuotaTest.scala:229)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4(DynamicConnectionQuotaTest.scala:133)
> at 
> kafka.network.DynamicConnectionQuotaTest.$anonfun$testDynamicListenerConnectionQuota$4$adapted(DynamicConnectionQuotaTest.scala:133)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicListenerConnectionQuota(DynamicConnectionQuotaTest.scala:133){quote}





[jira] [Resolved] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8139.

Resolution: Fixed

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 

[jira] [Resolved] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8092.

Resolution: Fixed

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> 

[jira] [Resolved] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8076.

Resolution: Fixed

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}





[jira] [Resolved] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7647.

Resolution: Fixed

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.1.1, 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}





[jira] [Resolved] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8137.

Resolution: Fixed

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:49,255] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, 

[jira] [Resolved] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8084.

Resolution: Fixed

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
> CLIENT-ID foo 0 0 0 0 - - - TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG 
> CONSUMER-ID HOST CLIENT-ID foo 0 0 0 0 - - - COORDINATOR (ID) 
> ASSIGNMENT-STRATEGY STATE #MEMBERS localhost:45812 (0) Empty 0{quote}





[jira] [Resolved] (KAFKA-8108) Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8108.

Resolution: Fixed

> Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer
> 
>
> Key: KAFKA-8108
> URL: https://issues.apache.org/jira/browse/KAFKA-8108
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testThrottledProducerConsumer/





[jira] [Resolved] (KAFKA-8303) Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8303.

Resolution: Fixed

> Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint
> -
>
> Key: KAFKA-8303
> URL: https://issues.apache.org/jira/browse/KAFKA-8303
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/21274/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testLogStartOffsetCheckpoint$2.apply$mcZ$sp(AdminClientIntegrationTest.scala:820)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:789) at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:813){quote}





[jira] [Resolved] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7988.

Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}





[jira] [Resolved] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8079.

Resolution: Fixed

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY crc: 14988968 isvalid: true
> baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 282 CreateTime: 1552007784734 

[jira] [Resolved] (KAFKA-8113) Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8113.

Resolution: Fixed

> Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-8113
> URL: https://issues.apache.org/jira/browse/KAFKA-8113
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3468/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.ListOffsetsRequestTest.fetchOffsetAndEpoch$1(ListOffsetsRequestTest.scala:136)
> at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:151){quote}
> STDOUT
> {quote}[2019-03-15 17:16:13,029] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-15 17:16:13,231] ERROR [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ReplicaNotAvailableException: Partition 
> topic-0 is not available{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8087.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 

[jira] [Resolved] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8077.

Resolution: Fixed

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8075.

Resolution: Fixed

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-08 01:50:03,855] 
> ERROR [KafkaApi-0] Error when handling request: 

[jira] [Resolved] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8138.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8141.

Resolution: Fixed

> Flaky Test 
> FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
> -
>
> Key: KAFKA-8141
> URL: https://issues.apache.org/jira/browse/KAFKA-8141
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Request to get subscribed for Kafka developers mailing list

2020-10-09 Thread Matthias J. Sax
Subscribing is self-service: https://kafka.apache.org/contact

-Matthias

On 10/9/20 12:12 PM, Srinath Thota wrote:
> Hi,
> 
> I would like to subscribe to the Kafka developers mailing list. I am
> Srinath Thota, currently working as a consultant in India.
> 
> Thank you. Let me know if any additional information is needed.
> 
> 
> Thanks,
> Srinath Thota
> 


[jira] [Created] (KAFKA-10594) Enhance Raft exception handling

2020-10-09 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10594:
---

 Summary: Enhance Raft exception handling
 Key: KAFKA-10594
 URL: https://issues.apache.org/jira/browse/KAFKA-10594
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen
Assignee: Boyang Chen


The current exception handling in the Raft implementation is superficial; for 
example, we don't treat file-system exceptions and request-handling exceptions 
differently. It's necessary to decide which kinds of exceptions should be 
fatal, which should be returned to the client, and which could be retried.
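
For illustration, here is a minimal sketch of the kind of classification this asks for: mapping an 
exception to one of three actions (fail fast, return an error to the client, or retry internally). 
The action names and the example exception types below are assumptions made for the sketch, not the 
actual Raft design.

import java.io.IOException;
import java.util.concurrent.TimeoutException;

public class RaftExceptionClassifierSketch {

    enum Action { FATAL, RETURN_TO_CLIENT, RETRY }

    // Map a throwable to the action the Raft layer should take; the groupings below are
    // illustrative guesses, not the classification the implementation will actually use.
    static Action classify(Throwable t) {
        if (t instanceof IOException) {
            // e.g. file-system or log-segment failures: likely unrecoverable for this node
            return Action.FATAL;
        } else if (t instanceof TimeoutException) {
            // transient conditions that can be retried internally
            return Action.RETRY;
        } else if (t instanceof IllegalArgumentException) {
            // malformed requests should surface as an error response to the caller
            return Action.RETURN_TO_CLIENT;
        }
        // default to failing fast rather than silently swallowing unknown errors
        return Action.FATAL;
    }

    public static void main(String[] args) {
        System.out.println(classify(new IOException("log flush failed")));      // FATAL
        System.out.println(classify(new TimeoutException("election timeout"))); // RETRY
    }
}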



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8269) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8269.

Resolution: Duplicate

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8269
> URL: https://issues.apache.org/jira/browse/KAFKA-8269
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3573/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:659){quote}
> It's a long LOG. This might be interesting:
> {quote}[2019-04-20 21:30:37,936] ERROR [ReplicaFetcher replicaId=4, 
> leaderId=5, fetcherId=0] Error for partition 
> testCreateWithReplicaAssignment-0cpsXnG35w-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-20 21:30:48,600] WARN Unable to read additional data from client 
> sessionid 0x10510a59d3c0004, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-04-20 21:30:48,908] WARN Unable to read additional data from client 
> sessionid 0x10510a59d3c0003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-04-20 21:30:48,919] ERROR [RequestSendThread controllerId=0] Controller 
> 0 fails to send a request to broker localhost:43520 (id: 5 rack: rack3) 
> (kafka.controller.RequestSendThread:76)
> java.lang.InterruptedException
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
> at 
> kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-04-20 21:30:48,920] ERROR [RequestSendThread controllerId=0] Controller 
> 0 fails to send a request to broker localhost:33570 (id: 4 rack: rack3) 
> (kafka.controller.RequestSendThread:76)
> java.lang.InterruptedException
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
> at 
> kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-04-20 21:31:28,942] ERROR [ReplicaFetcher replicaId=3, leaderId=1, 
> fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-20 21:31:28,973] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #131

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Implement ApiError#equals and hashCode (#9390)


--
[...truncated 3.42 MB...]

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task 

Request to get subscribed for Kafka developers mailing list

2020-10-09 Thread Srinath Thota
Hi,

I would like to subscribe to the Kafka developers mailing list. I am Srinath 
Thota, currently working as a consultant in India.

Thank you. Let me know if any additional information is needed.


Thanks,
Srinath Thota


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #159

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: ACLs for secured cluster system tests (#9378)


--
[...truncated 6.84 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #127

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: ACLs for secured cluster system tests (#9378)


--
[...truncated 6.78 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #130

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: ACLs for secured cluster system tests (#9378)


--
[...truncated 3.43 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED


how to consumer dynamic listener topic

2020-10-09 Thread kolnick




kolnick
koln...@163.com



Re: [VOTE] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-10-09 Thread Boyang Chen
Thanks Jason for the great thoughts. Offline, we decided to shift gears toward
a limited impersonation approach.

The goal here is to simplify the handling logic by relying on the active
controller to do the actual authorization for the resources in the original
client request. We are also adding the `KafkaPrincipalSerde` type to provide
principal serialization/deserialization, so that the principal can be embedded
in the Envelope and sent to the active controller. Before 3.0, customized
principal builders may optionally implement the serde type; after 3.0 is
released it becomes required. Either way, the ability to serialize and
deserialize KafkaPrincipal becomes a prerequisite for enabling redirection, in
addition to the IBP requirement. Additionally, we add a forwardingPrincipal
field to the Authorizer context for authorization and audit-logging purposes,
instead of using tagged fields in the request header.

The KIP has been updated to reflect the current approach, thanks.
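
To make the serde idea concrete, here is a rough, self-contained sketch of the flow: the receiving
broker serializes the authenticated principal, embeds the bytes in the Envelope request, and the
active controller deserializes them so it can authorize the wrapped request as that principal. The
interface shape and the delimited string encoding below are illustrative assumptions for this
sketch, not the actual KafkaPrincipalSerde API or wire format defined by the KIP.

import java.nio.charset.StandardCharsets;

public class PrincipalSerdeSketch {

    // Minimal stand-in for a principal: a type (e.g. "User") and a name.
    static final class Principal {
        final String principalType;
        final String name;

        Principal(String principalType, String name) {
            this.principalType = principalType;
            this.name = name;
        }

        @Override
        public String toString() {
            return principalType + ":" + name;
        }
    }

    // Illustrative serde contract: turn the authenticated principal into bytes that can be
    // carried inside the Envelope request and reconstructed on the active controller.
    interface PrincipalSerde {
        byte[] serialize(Principal principal);
        Principal deserialize(byte[] bytes);
    }

    // Naive delimited encoding, assuming neither field contains ':'.
    static final PrincipalSerde NAIVE = new PrincipalSerde() {
        @Override
        public byte[] serialize(Principal principal) {
            return (principal.principalType + ":" + principal.name).getBytes(StandardCharsets.UTF_8);
        }

        @Override
        public Principal deserialize(byte[] bytes) {
            String[] parts = new String(bytes, StandardCharsets.UTF_8).split(":", 2);
            return new Principal(parts[0], parts[1]);
        }
    };

    public static void main(String[] args) {
        byte[] wireBytes = NAIVE.serialize(new Principal("User", "alice")); // embedded in the Envelope
        System.out.println(NAIVE.deserialize(wireBytes));                   // reconstructed on the controller
    }
}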



On Fri, Sep 25, 2020 at 5:55 PM Jason Gustafson  wrote:

> Hey All,
>
> So the main thing the EnvelopeRequest gives us is a way to avoid converting
> older API versions in order to attach the initial principal name and the
> clientId. It also saves the need to add the initial principal and client id
> as a tagged field to all of the forwarded protocols, which is nice. We
> still have the challenge of advertising API versions which are compatible
> with both the broker receiving the request and the controller that the
> request is ultimately forwarded to, but not sure I see a way around that.
>
> I realize I might be walking into a minefield here, but since the envelope
> is being revisited, it seems useful to compare the approach suggested above
> with the option relying on impersonation. I favor the use of impersonation
> because it makes forwarding simpler. As the proposal stands, we will have
> to maintain logic for each forwarded API to unpack, authorize, and repack
> any forwarded requests which flow through the broker. This is probably not
> a huge concern from an efficiency perspective as long as we are talking
> about just the Admin APIs, but it does have a big maintenance cost since
> we'll need to ensure that every new field gets properly carried through. It
> would be nice if we just didn't have to think about that. We also might
> eventually come up with reasons to extend forwarding to non-admin APIs, so
> it would be nice to start with an efficient approach.
>
> It seems like the main difference comes down to where the authorization is
> done. Suppose that broker B receives an AlterConfig request from the client
> in order to change topic configs and wants to forward to controller C.
>
> Option 1 (no impersonation): B authorizes AlterConfigs for the included
> topics with the client principal. Rejected topics are stripped out of the
> request.  Authorized topics are repackaged into a new request and sent in
> an envelope to C, which verifies ClusterAction permission with the broker
> principal and assumes authorization for the underlying request
> Option 2 (with impersonation): B authenticates the client, but does no
> authorization and forwards the request in an envelope to C containing the
> authenticated principal. C checks ClusterAction for the envelope request
> using the broker principal and AlterConfigs for the underlying request
> using the forwarded client principal.
>
> In either case, broker B implicitly gets AlterConfigs permission for the
> topic. This is true even without the envelope and seems like a reasonable
> requirement. The broker should itself be authorized to perform any action
> that it might have to forward requests for. As far as I know, all the
> proposals we have considered require this. The main question from a
> security perspective is whether any of these proposals require additional
> unnecessary access, which is probably the main doubt about impersonation.
> However, there are a couple ways we can restrict it:
>
> 1. We can restrict the principals that are allowed to be impersonated
> 2. We can restrict the actions that are possible through impersonation.
>
> Considering the first point, there's probably no reason to allow
> impersonation of superusers. Additionally, a custom authorizer could forbid
> impersonation outside of a particular group. To support this, it would be
> helpful to extend `KafkaPrincipal` or `AuthorizableRequestContext` so that
> it indicates whether a request is an impersonated request.
>
> Considering the second point, it doesn't make sense to allow arbitrary
> requests to be forwarded. We know exactly the set of forwardable APIs and
> we can reject any other APIs without even looking at the principal. This is
> the nice thing that the Envelope request gives us. I don't know if we would
> ever have finer-grained restrictions, but in principle I don't see why we
> couldn't.
>
> In the future, I guess we could go even further so that the broker itself
> wouldn't need the same permissions as the client. If 

[jira] [Created] (KAFKA-10593) Kafka Hazelcast Process down

2020-10-09 Thread Cihan YILDIZ (Jira)
Cihan YILDIZ created KAFKA-10593:


 Summary: Kafka Hazelcast Process down
 Key: KAFKA-10593
 URL: https://issues.apache.org/jira/browse/KAFKA-10593
 Project: Kafka
  Issue Type: Bug
Reporter: Cihan YILDIZ


Hi all,

We use the Kafka Hazelcast product, and sometimes the Hazelcast process goes 
down. When I checked the logs, they look as shown below. What could the problem be?

Please help me.

 

Regards

 

[2020-10-09 16:30:05,915] ERROR Uncaught exception in scheduled task 
'isr-expiration' (kafka.utils.KafkaScheduler)
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /brokers/topics/EcommerceExcelEvent/partitions/2/state
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:465)
 at kafka.zk.KafkaZkClient.conditionalUpdatePath(KafkaZkClient.scala:621)
 at kafka.utils.ReplicationUtils$.updateLeaderAndIsr(ReplicationUtils.scala:34)
 at kafka.cluster.Partition.updateIsr(Partition.scala:670)
 at kafka.cluster.Partition.$anonfun$maybeShrinkIsr$1(Partition.scala:513)
 at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
 at kafka.cluster.Partition.maybeShrinkIsr(Partition.scala:504)
 at 
kafka.server.ReplicaManager.$anonfun$maybeShrinkIsr$2(ReplicaManager.scala:1335)
 at 
kafka.server.ReplicaManager.$anonfun$maybeShrinkIsr$2$adapted(ReplicaManager.scala:1335)
 at scala.collection.Iterator.foreach(Iterator.scala:929)
 at scala.collection.Iterator.foreach$(Iterator.scala:929)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
 at kafka.server.ReplicaManager.maybeShrinkIsr(ReplicaManager.scala:1335)
 at kafka.server.ReplicaManager.$anonfun$startup$1(ReplicaManager.scala:322)
 at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10592) system tests not running after python3 merge

2020-10-09 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-10592:
-

 Summary: system tests not running after python3 merge
 Key: KAFKA-10592
 URL: https://issues.apache.org/jira/browse/KAFKA-10592
 Project: Kafka
  Issue Type: Task
  Components: system tests
Reporter: Ron Dagostino
Assignee: Nikolay Izhikov


We are seeing these errors on system tests due to the python3 merge:

[ERROR:2020-10-08 21:03:51,341]: Failed to import 
kafkatest.sanity_checks.test_performance_services, which may indicate a broken 
test that cannot be loaded: ImportError: No module named server
[ERROR:2020-10-08 21:03:51,351]: Failed to import 
kafkatest.benchmarks.core.benchmark_test, which may indicate a broken test that 
cannot be loaded: ImportError: No module named server
[ERROR:2020-10-08 21:03:51,501]: Failed to import 
kafkatest.tests.core.throttling_test, which may indicate a broken test that 
cannot be loaded: ImportError: No module named server
[ERROR:2020-10-08 21:03:51,598]: Failed to import 
kafkatest.tests.client.quota_test, which may indicate a broken test that cannot 
be loaded: ImportError: No module named server

I ran one of the system tests at the commit prior to the python3 merge 
(https://github.com/apache/kafka/commit/40a23cc0c2e1efa8632f59b093672221a3c03c36)
 and it ran fine:

http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/2020-10-09--001.1602255415--rondagostino--rtd_just_before_python3_merge--40a23cc0c/report.html

I ran the exact same test file at the next commit -- the python3 commit at 
https://github.com/apache/kafka/commit/4e65030e055104a7526e85b563a11890c61d6ddf 
-- and it failed with the import error.  The test results show no report.html 
file because nothing ran: 
http://testing.confluent.io/confluent-kafka-system-test-results/?prefix=2020-10-09--001.1602251990--apache--trunk--7947c18b5/

I'm not sure when this began, because I do see these tests running successfully 
during the development process as documented in 
https://issues.apache.org/jira/browse/KAFKA-10402 (`tests run:684` as 
recently as 9/20 in that ticket). But the PR build (rebased onto latest trunk) 
showed the above import errors and only 606 tests ran. I assume the 4 files 
mentioned account for the missing 78 tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: New release branch 2.7

2020-10-09 Thread Bill Bejeck
Hi Gwen,

I've looked it over, and the work involved is low-risk, so I'm fine with
you cherry-picking the PRs into 2.7.

Bill

On Thu, Oct 8, 2020 at 8:00 PM Gwen Shapira  wrote:

> Hey Bill!
>
> Thank you for driving this release!
>
> We are only now starting to work on KIP-629, so we missed feature
> freeze. However, the KIP is a collection of small changes (either a
> single method or literally just renames). We are wondering how you
> feel about us cherry-picking the PRs into the 2.7 branch?
> For reference:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
>
> Gwen
>
> On Thu, Oct 8, 2020 at 5:27 AM Bill Bejeck  wrote:
> >
> > Hello Kafka developers and friends,
> >
> > As promised, we now have a release branch for the 2.7 release (with 2.7.0
> > as the version).
> > Trunk has been bumped to 2.8.0-SNAPSHOT.
> >
> > I'll be going over the JIRAs to move every non-blocker from this release
> to
> > the next release.
> >
> > From this point, most changes should go to trunk.
> > *Blockers (existing and new that we discover while testing the release)
> > will be double-committed.* Please discuss with your reviewer whether your
> > PR should go to trunk or to trunk+release so they can merge accordingly.
> >
> > *Please help us test the release! *
> >
> > Thanks!
> >
> > Bill Bejeck
>
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [DISCUSSION] python code style checks

2020-10-09 Thread John Roesler
Thanks Nikolay,

Sounds good to me in principle. I think linters serve a useful purpose.

When it comes to the specific configuration, maybe you can open a POC PR so we 
can see what the formatter will do. 

Thanks,
John

On Fri, Oct 9, 2020, at 05:25, Brandon Brown wrote:
> I love that idea; black is a really great formatter (opinionated, with
> minimal config).
> 
> Brandon Brown
> 
> > On Oct 9, 2020, at 5:55 AM, Nikolay Izhikov  wrote:
> > 
> > Hello!
> > 
> > Kafka uses a relatively strict code style for Java code.
> > The code style is enforced during the project build.
> > 
> > But, for now, we don't check the Python test code style.
> > I've checked the system test code with the default pylint settings and got
> > the following result: "Your code has been rated at 5.98/10".
> > 
> > I propose to add Python code style checks to the codebase and the build
> > process, and to fix the existing code style issues.
> > 
> > What do you think?
>


Re: KIP-675: Convert KTable to a KStream using the previous value

2020-10-09 Thread Javier Freire Riobo
I see. It's great hidden functionality, and all that remains is to define a
good API and expose it. I think the easiest approach would be to expose a
first operator, and then add more operators as they are requested. I suppose
it is kept internal so as not to complicate the API.

Thank you

On Fri, Oct 9, 2020 at 7:26, Matthias J. Sax () wrote:

> I agree that there are cases when it is useful to get the old and new
> value. In fact, the DSL internally often tracks old and new value via a
> `Change` value type.
>
> We did have some discussion that it might be useful to expose this
> currently internal `Change` type in the public API. But if we do this,
> it would not be limited to a single operator.
>
>
> -Matthias
>
> On 10/8/20 1:41 PM, Javier Freire Riobo wrote:
> > You're right. The behavior is correct with the cache disabled.
> >
> > Anyway, I think the operator I propose can be useful. The need to
> > generate a value from the previous and current value of a record can be
> > quite common. I think the only way to implement it is through an
> > aggregate using a helper class. It is simpler and more natural to be
> > able to receive the previous and current values in a function.
> >
> > Anyway thank you very much. I have been working with Kafka for a short
> > time, but I find it an amazing tool. Congratulations.
> >
> > On Thu, Oct 8, 2020 at 21:10, Matthias J. Sax () wrote:
> >
> >> I guess I understand now.
> >>
> >> However, it seems to be an "issue" with record caching. Setting the
> >> commit interval to zero would flush the cache each time, but it is not
> >> the "right" config change. You should just disable the `KTable` cache
> >> instead.
> >>
> >> You can disable caching globally by setting `cache.max.bytes.buffering`
> >> configuration parameter to zero.
> >>
> >> Or you can disable caching for an individual KTable via
> >> `Materialized#withCachingDisabled()` that you can pass into your
> >> `aggregation()` operator.
> >>
> >> Thus, overall, I don't see the need for a new operator.
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 10/7/20 1:51 PM, Javier Freire Riobo wrote:
> >>> I have done a small demo example. I hope it serves as a clarification.
> >>>
> >>> https://github.com/javierfreire/KTableToKStreamTest
> >>>
> >>> Thank you very much
> >>>
> >>> On Wed, Oct 7, 2020 at 3:01, Matthias J. Sax () wrote:
> >>>
>  Thanks for the KIP.
> 
>  I am not sure if I understand the motivation. In particular the KIP
> >> says:
> 
> > The main problem, apart from needing more code, is that if the same
> > event is received twice at the same time and the commit time is not 0,
> > the difference is deleted and nothing is emitted.
> 
>  Can you elaborate? Maybe you can provide a concrete example? I don't
>  understand the relationship between "the same event is received twice"
>  and a "non-zero commit time".
> 
> 
>  -Matthias
> 
>  On 10/6/20 6:25 AM, Javier Freire Riobo wrote:
> > Hi all,
> >
> > I'd like to propose these changes to the Kafka Streams API.
> >
> >
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-675%3A+Convert+KTable+to+a+KStream+using+the+previous+value
> >
> > This is a proposal to convert a KTable to a KStream knowing the
> > previous value of the record.
> >
> > I also opened a proof-of-concept PR:
> >
> > PR#9381: https://github.com/apache/kafka/pull/9381
> >
> > What do you think?
> >
> > Cheers,
> > Javier Freire
> >
> 
> >>>
> >>
> >
>
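
For readers following the caching discussion above, here is a minimal,
self-contained sketch of the two options Matthias describes: disabling record
caching globally via cache.max.bytes.buffering, and disabling it for a single
store via Materialized#withCachingDisabled(). The topic names, the count()
aggregation, and the serdes are illustrative assumptions, not part of KIP-675.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

public class CachingDisabledSketch {

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "caching-disabled-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Option 1: disable record caching globally, so every KTable update is
        // forwarded downstream instead of only the latest cached value per key.
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

        final StreamsBuilder builder = new StreamsBuilder();

        // Option 2: disable caching for one store only; here the store backs a
        // simple count() over a hypothetical String-keyed "input" topic.
        final KTable<String, Long> counts = builder
            .stream("input", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store")
                .withCachingDisabled());

        // With caching off, every update (not just the latest one per commit
        // interval) reaches the output topic.
        counts.toStream().to("counts-output", Produced.with(Serdes.String(), Serdes.Long()));

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

With caching disabled, the aggregate-with-a-helper-class workaround Javier
mentions (carrying both the previous and the current value in the aggregation
result, with its own serde) would see and emit every intermediate update;
KIP-675 aims to make that pattern unnecessary.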


Re: [VOTE] KIP-676: Respect the logging hierarchy

2020-10-09 Thread Dongjin Lee
+1. (Non-binding)

Thanks,
Dongjin

On Fri, Oct 9, 2020 at 6:50 PM Tom Bentley  wrote:

> Hi all,
>
> KIP-676 is pretty trivial and the comments on the discussion thread seem to
> be favourable, so I'd like to start a vote on it.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
>
> Please take a look if you have time.
>
> Many thanks,
>
> Tom
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*




*github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin*


Re: [DISCUSSION] python code style checks

2020-10-09 Thread Brandon Brown
I love that idea; black is a really great formatter (opinionated, with minimal
config).

Brandon Brown

> On Oct 9, 2020, at 5:55 AM, Nikolay Izhikov  wrote:
> 
> Hello!
> 
> Kafka uses a relatively strict code style for Java code.
> The code style is enforced during the project build.
> 
> But, for now, we don't check the Python test code style.
> I've checked the system test code with the default pylint settings and got the
> following result: "Your code has been rated at 5.98/10".
> 
> I propose to add Python code style checks to the codebase and the build
> process, and to fix the existing code style issues.
> 
> What do you think?


Re: [VOTE] KIP-653: Upgrade log4j to log4j2

2020-10-09 Thread Tom Bentley
+1 non-binding.

Thanks for your efforts on this Dongjin.

Tom

On Wed, Oct 7, 2020 at 6:45 AM Dongjin Lee  wrote:

> As of present:
>
> - Binding: +2 (Gwen, John)
> - Non-binding: +1 (David)
>
> Now we need one more binding +1.
>
> Thanks,
> Dongjin
>
> On Wed, Oct 7, 2020 at 1:37 AM David Jacot  wrote:
>
> > Thanks for driving this, Dongjin!
> >
> > The KIP looks good to me. I’m +1 (non-binding).
> >
> > Best,
> > David
> >
> > On Tue, Oct 6, 2020 at 17:23, Dongjin Lee  wrote:
> >
> > > As of present:
> > >
> > > - Binding: +2 (Gwen, John)
> > > - Non-binding: 0
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Sat, Oct 3, 2020 at 10:51 AM John Roesler 
> > wrote:
> > >
> > > > Thanks for the KIP, Dongjin!
> > > >
> > > > I’ve just reviewed the KIP document, and it looks good to me.
> > > >
> > > > I’m +1 (binding)
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Fri, Oct 2, 2020, at 19:11, Gwen Shapira wrote:
> > > > > +1 (binding)
> > > > >
> > > > > A very welcome update :)
> > > > >
> > > > > On Tue, Sep 22, 2020 at 9:09 AM Dongjin Lee 
> > > wrote:
> > > > > >
> > > > > > Hi devs,
> > > > > >
> > > > > > Here I open the vote for KIP-653: Upgrade log4j to log4j2. It
> > > replaces
> > > > the
> > > > > > obsolete log4j logging library into the current standard, log4j2,
> > > with
> > > > > > maintaining backward-compatibility.
> > > > > >
> > > > > > Thanks,
> > > > > > Dongjin
> > > > > >
> > > > > > --
> > > > > > *Dongjin Lee*
> > > > > >
> > > > > > *A hitchhiker in the mathematical world.*
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > *github:  github.com/dongjinleekr
> > > > > > keybase:
> > > > https://keybase.io/dongjinleekr
> > > > > > linkedin:
> > > > kr.linkedin.com/in/dongjinleekr
> > > > > > speakerdeck:
> > > > speakerdeck.com/dongjin
> > > > > > *
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Gwen Shapira
> > > > > Engineering Manager | Confluent
> > > > > 650.450.2760 | @gwenshap
> > > > > Follow us: Twitter | blog
> > > > >
> > > >
> > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > >
> > >
> > >
> > > *github:  github.com/dongjinleekr
> > > keybase:
> > https://keybase.io/dongjinleekr
> > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > speakerdeck:
> > > speakerdeck.com/dongjin
> > > *
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


[DISCUSSION] python code style checks

2020-10-09 Thread Nikolay Izhikov
Hello!

Kafka uses a relatively strict code style for Java code.
The code style is enforced during the project build.

But, for now, we don't check the Python test code style.
I've checked the system test code with the default pylint settings and got the
following result: "Your code has been rated at 5.98/10".

I propose to add Python code style checks to the codebase and the build
process, and to fix the existing code style issues.

What do you think?

[VOTE] KIP-676: Respect the logging hierarchy

2020-10-09 Thread Tom Bentley
Hi all,

KIP-676 is pretty trivial and the comments on the discussion thread seem to
be favourable, so I'd like to start a vote on it.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy


Please take a look if you have time.

Many thanks,

Tom


[jira] [Resolved] (KAFKA-10591) kafka_2.13-2.6.0 vulnerabilities

2020-10-09 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-10591.
-
Resolution: Duplicate

See: [KAFKA-9366|https://issues.apache.org/jira/browse/KAFKA-9366]

> kafka_2.13-2.6.0 vulnerabilities
> 
>
> Key: KAFKA-10591
> URL: https://issues.apache.org/jira/browse/KAFKA-10591
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Manvindar
>Priority: Major
>  Labels: vulnerabilities
> Attachments: kafka.png
>
>
> I scanned the Kafka image in Anchore and got a few vulnerabilities. Is there
> a fix for them, or are they false positives?
> Two of them are in log4j-1.2.17; can we bump it to version 2?
>  
> !kafka.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #126

2020-10-09 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR update comments and docs to be gender-neutral


--
[...truncated 6.79 MB...]

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task 

[jira] [Created] (KAFKA-10591) kafka_2.13-2.6.0 vulnerabilities

2020-10-09 Thread Manvindar (Jira)
Manvindar created KAFKA-10591:
-

 Summary: kafka_2.13-2.6.0 vulnerabilities
 Key: KAFKA-10591
 URL: https://issues.apache.org/jira/browse/KAFKA-10591
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Manvindar
 Attachments: image (13).png, kafka.png

I scanned the Kafka image in Anchore and got a few vulnerabilities. Is there a
fix for them, or are they false positives?
Two of them are in log4j-1.2.17; can we bump it to version 2?

 

!kafka.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)