[jira] [Created] (KAFKA-9731) Increased fetch request rate with leader selector due to HW propagation

2020-03-17 Thread Vahid Hashemian (Jira)
Vahid Hashemian created KAFKA-9731:
--

 Summary: Increased fetch request rate with leader selector due to 
HW propagation
 Key: KAFKA-9731
 URL: https://issues.apache.org/jira/browse/KAFKA-9731
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.4.1, 2.4.0
Reporter: Vahid Hashemian
 Attachments: image-2020-03-17-10-19-08-987.png

KIP-392 adds high watermark propagation to followers as a means to better sync 
up the followers' HW with the leader's. The issue we have noticed after trying 
out 2.4.0 and 2.4.1 is a spike in the fetch request rate in the default selector 
case (leader), which does not really require this high watermark propagation:

!image-2020-03-17-10-19-08-987.png|width=811,height=354!

This spike causes an increase in resource usage (CPU) on the brokers.

An easy solution would be to disable this propagation (at least) for the 
default leader selector case to improve backward compatibility.
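
For reference, follower fetching under KIP-392 is opt-in through the broker-side 
replica.selector.class setting (unset by default, in which case consumers fetch 
only from the leader). A minimal sketch of the opt-in configuration, with an 
illustrative rack id:

{code}
# Broker: enable rack-aware replica selection (KIP-392)
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# Consumer: advertise the client's rack (illustrative value)
client.rack=us-east-1a
{code}

The fetch-rate spike above occurs without any of this opt-in configuration, 
which is why gating the propagation on the selector seems reasonable.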



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-19 Thread Vahid Hashemian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977592#comment-16977592
 ] 

Vahid Hashemian commented on KAFKA-9205:


[~sbellapu] The KIP process is not that difficult. If you have access to the wiki 
you can easily create one and start a discussion on it on the mailing list (and 
after enough discussion/time you call a vote). The KIP page has all the necessary 
info: 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals].
 You can also take some of the recent KIPs as examples.

Since there is an existing option for disabling rack-aware mode, this change 
should be designed in a way that either makes use of that option or works well 
alongside it (without causing confusion), while at the same time preserving 
backward compatibility (i.e. the existing default behavior should ideally not 
change).
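
For illustration, here is a minimal sketch of the kind of check such an option 
could run during --execute (all names are hypothetical, not actual tool code):

{code:java}
import java.util.*;

// Hypothetical helper: flag partitions whose proposed replica set places
// two or more replicas on the same rack.
public class RackAwareVerifier {
    // replicaSets: partition -> broker ids in the proposed assignment
    // brokerRacks: broker id -> rack (from each broker's broker.rack config)
    public static List<String> violations(Map<String, List<Integer>> replicaSets,
                                          Map<Integer, String> brokerRacks) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, List<Integer>> e : replicaSets.entrySet()) {
            Set<String> racks = new HashSet<>();
            for (int broker : e.getValue()) {
                // adding a rack twice means two replicas share it
                if (!racks.add(brokerRacks.get(broker))) {
                    bad.add(e.getKey());
                    break;
                }
            }
        }
        return bad;
    }
}
{code}

The tool would then fail the --execute run (instead of proceeding) whenever the 
returned list is non-empty.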

> Add an option to enforce rack-aware partition reassignment
> --
>
> Key: KAFKA-9205
> URL: https://issues.apache.org/jira/browse/KAFKA-9205
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, tools
>Reporter: Vahid Hashemian
>Priority: Minor
>
> One regularly used healing operation on Kafka clusters is replica 
> reassignment for topic partitions. For example, when there is a skew in the 
> inbound/outbound traffic of a broker, replica reassignment can be used to 
> move some leaders/followers off the broker; or if there is a skew in the disk 
> usage of brokers, replica reassignment can move some partitions to other 
> brokers that have more disk space available.
> In Kafka clusters that span multiple data centers (or availability zones), 
> high availability is a priority, in the sense that when a data center goes 
> offline the cluster should be able to resume normal operation by guaranteeing 
> partition replicas in all data centers.
> This guarantee is currently the responsibility of the on-call engineer that 
> performs the reassignment or the tool that automatically generates the 
> reassignment plan for improving the cluster health (e.g. by considering the 
> rack configuration value of each broker in the cluster). The former is quite 
> error-prone, and the latter would lead to duplicate code in all such admin 
> tools (which are not error-free either). Not all use cases can make use of 
> the default assignment strategy used by the --generate option, and the 
> current rack-aware enforcement applies to this option only.
> It would be great for the built-in replica assignment API and tool provided 
> by Kafka to support a rack-aware verification option for the --execute 
> scenario that would simply return an error when [some] brokers in any replica 
> set share a common rack.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-18 Thread Vahid Hashemian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976895#comment-16976895
 ] 

Vahid Hashemian commented on KAFKA-9205:


This will still likely require a KIP since the default behavior could change.

> Add an option to enforce rack-aware partition reassignment
> --
>
> Key: KAFKA-9205
> URL: https://issues.apache.org/jira/browse/KAFKA-9205
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, tools
>Reporter: Vahid Hashemian
>Priority: Minor
>
> One regularly used healing operation on Kafka clusters is replica 
> reassignment for topic partitions. For example, when there is a skew in the 
> inbound/outbound traffic of a broker, replica reassignment can be used to 
> move some leaders/followers off the broker; or if there is a skew in the disk 
> usage of brokers, replica reassignment can move some partitions to other 
> brokers that have more disk space available.
> In Kafka clusters that span multiple data centers (or availability zones), 
> high availability is a priority, in the sense that when a data center goes 
> offline the cluster should be able to resume normal operation by guaranteeing 
> partition replicas in all data centers.
> This guarantee is currently the responsibility of the on-call engineer that 
> performs the reassignment or the tool that automatically generates the 
> reassignment plan for improving the cluster health (e.g. by considering the 
> rack configuration value of each broker in the cluster). The former is quite 
> error-prone, and the latter would lead to duplicate code in all such admin 
> tools (which are not error-free either). Not all use cases can make use of 
> the default assignment strategy used by the --generate option, and the 
> current rack-aware enforcement applies to this option only.
> It would be great for the built-in replica assignment API and tool provided 
> by Kafka to support a rack-aware verification option for the --execute 
> scenario that would simply return an error when [some] brokers in any replica 
> set share a common rack.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-18 Thread Vahid Hashemian (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-9205:
---
Description: 
One regularly used healing operation on Kafka clusters is replica reassignment 
for topic partitions. For example, when there is a skew in the inbound/outbound 
traffic of a broker, replica reassignment can be used to move some 
leaders/followers off the broker; or if there is a skew in the disk usage of 
brokers, replica reassignment can move some partitions to other brokers that 
have more disk space available.

In Kafka clusters that span multiple data centers (or availability zones), high 
availability is a priority, in the sense that when a data center goes offline 
the cluster should be able to resume normal operation by guaranteeing partition 
replicas in all data centers.

This guarantee is currently the responsibility of the on-call engineer that 
performs the reassignment or the tool that automatically generates the 
reassignment plan for improving the cluster health (e.g. by considering the 
rack configuration value of each broker in the cluster). The former is quite 
error-prone, and the latter would lead to duplicate code in all such admin 
tools (which are not error-free either). Not all use cases can make use of the 
default assignment strategy used by the --generate option, and the current 
rack-aware enforcement applies to this option only.

It would be great for the built-in replica assignment API and tool provided by 
Kafka to support a rack-aware verification option for the --execute scenario 
that would simply return an error when [some] brokers in any replica set share 
a common rack.

  was:
One regularly used healing operation on Kafka clusters is replica reassignment 
for topic partitions. For example, when there is a skew in the inbound/outbound 
traffic of a broker, replica reassignment can be used to move some 
leaders/followers off the broker; or if there is a skew in the disk usage of 
brokers, replica reassignment can move some partitions to other brokers that 
have more disk space available.

In Kafka clusters that span multiple data centers (or availability zones), high 
availability is a priority, in the sense that when a data center goes offline 
the cluster should be able to resume normal operation by guaranteeing partition 
replicas in all data centers.

This guarantee is currently the responsibility of the on-call engineer that 
performs the reassignment or the tool that automatically generates the 
reassignment plan for improving the cluster health (e.g. by considering the 
rack configuration value of each broker in the cluster). The former is quite 
error-prone, and the latter would lead to duplicate code in all such admin 
tools (which are not error-free either).

It would be great for the built-in replica assignment API and tool provided by 
Kafka to support a rack-aware verification option that would simply return an 
error when [some] brokers in any replica set share a common rack.


> Add an option to enforce rack-aware partition reassignment
> --
>
> Key: KAFKA-9205
> URL: https://issues.apache.org/jira/browse/KAFKA-9205
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, tools
>Reporter: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> One regularly used healing operation on Kafka clusters is replica 
> reassignment for topic partitions. For example, when there is a skew in the 
> inbound/outbound traffic of a broker, replica reassignment can be used to 
> move some leaders/followers off the broker; or if there is a skew in the disk 
> usage of brokers, replica reassignment can move some partitions to other 
> brokers that have more disk space available.
> In Kafka clusters that span multiple data centers (or availability zones), 
> high availability is a priority, in the sense that when a data center goes 
> offline the cluster should be able to resume normal operation by guaranteeing 
> partition replicas in all data centers.
> This guarantee is currently the responsibility of the on-call engineer that 
> performs the reassignment or the tool that automatically generates the 
> reassignment plan for improving the cluster health (e.g. by considering the 
> rack configuration value of each broker in the cluster). The former is quite 
> error-prone, and the latter would lead to duplicate code in all such admin 
> tools (which are not error-free either). Not all use cases can make use of 
> the default assignment strategy used by the --generate option, and the 
> current rack-aware enforcement applies to this option only.
> It would be great for the built-in replica assignment API and tool provided 
> by Kafka to support a rack-aware verification option for the --execute 
> scenario that would simply return an error when [some] brokers in any replica 
> set share a common rack.

[jira] [Commented] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-18 Thread Vahid Hashemian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976888#comment-16976888
 ] 

Vahid Hashemian commented on KAFKA-9205:


Thanks [~sbellapu] for the pointer. KIP-36 and the current implementation 
enforce rack-aware assignment when generating an assignment (using the 
--generate option). If a custom reassignment algorithm is used to generate the 
assignment, or if the reassignment is manually generated on an ad-hoc basis, 
the tool does not enforce rack awareness when run with the --execute option. It 
would be great if enforcement could be implemented in the --execute scenario 
too. I have updated the description accordingly.

> Add an option to enforce rack-aware partition reassignment
> --
>
> Key: KAFKA-9205
> URL: https://issues.apache.org/jira/browse/KAFKA-9205
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, tools
>Reporter: Vahid Hashemian
>Priority: Minor
>
> One regularly used healing operation on Kafka clusters is replica 
> reassignment for topic partitions. For example, when there is a skew in the 
> inbound/outbound traffic of a broker, replica reassignment can be used to 
> move some leaders/followers off the broker; or if there is a skew in the disk 
> usage of brokers, replica reassignment can move some partitions to other 
> brokers that have more disk space available.
> In Kafka clusters that span multiple data centers (or availability zones), 
> high availability is a priority, in the sense that when a data center goes 
> offline the cluster should be able to resume normal operation by guaranteeing 
> partition replicas in all data centers.
> This guarantee is currently the responsibility of the on-call engineer that 
> performs the reassignment or the tool that automatically generates the 
> reassignment plan for improving the cluster health (e.g. by considering the 
> rack configuration value of each broker in the cluster). The former is quite 
> error-prone, and the latter would lead to duplicate code in all such admin 
> tools (which are not error-free either). Not all use cases can make use of 
> the default assignment strategy used by the --generate option, and the 
> current rack-aware enforcement applies to this option only.
> It would be great for the built-in replica assignment API and tool provided 
> by Kafka to support a rack-aware verification option for the --execute 
> scenario that would simply return an error when [some] brokers in any replica 
> set share a common rack.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-18 Thread Vahid Hashemian (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-9205:
---
Labels:   (was: needs-kip)

> Add an option to enforce rack-aware partition reassignment
> --
>
> Key: KAFKA-9205
> URL: https://issues.apache.org/jira/browse/KAFKA-9205
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, tools
>Reporter: Vahid Hashemian
>Priority: Minor
>
> One regularly used healing operation on Kafka clusters is replica 
> reassignment for topic partitions. For example, when there is a skew in the 
> inbound/outbound traffic of a broker, replica reassignment can be used to 
> move some leaders/followers off the broker; or if there is a skew in the disk 
> usage of brokers, replica reassignment can move some partitions to other 
> brokers that have more disk space available.
> In Kafka clusters that span multiple data centers (or availability zones), 
> high availability is a priority, in the sense that when a data center goes 
> offline the cluster should be able to resume normal operation by guaranteeing 
> partition replicas in all data centers.
> This guarantee is currently the responsibility of the on-call engineer that 
> performs the reassignment or the tool that automatically generates the 
> reassignment plan for improving the cluster health (e.g. by considering the 
> rack configuration value of each broker in the cluster). The former is quite 
> error-prone, and the latter would lead to duplicate code in all such admin 
> tools (which are not error-free either). Not all use cases can make use of 
> the default assignment strategy used by the --generate option, and the 
> current rack-aware enforcement applies to this option only.
> It would be great for the built-in replica assignment API and tool provided 
> by Kafka to support a rack-aware verification option for the --execute 
> scenario that would simply return an error when [some] brokers in any replica 
> set share a common rack.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9205) Add an option to enforce rack-aware partition reassignment

2019-11-18 Thread Vahid Hashemian (Jira)
Vahid Hashemian created KAFKA-9205:
--

 Summary: Add an option to enforce rack-aware partition reassignment
 Key: KAFKA-9205
 URL: https://issues.apache.org/jira/browse/KAFKA-9205
 Project: Kafka
  Issue Type: Improvement
  Components: admin, tools
Reporter: Vahid Hashemian


One regularly used healing operation on Kafka clusters is replica reassignment 
for topic partitions. For example, when there is a skew in the inbound/outbound 
traffic of a broker, replica reassignment can be used to move some 
leaders/followers off the broker; or if there is a skew in the disk usage of 
brokers, replica reassignment can move some partitions to other brokers that 
have more disk space available.

In Kafka clusters that span multiple data centers (or availability zones), high 
availability is a priority, in the sense that when a data center goes offline 
the cluster should be able to resume normal operation by guaranteeing partition 
replicas in all data centers.

This guarantee is currently the responsibility of the on-call engineer that 
performs the reassignment or the tool that automatically generates the 
reassignment plan for improving the cluster health (e.g. by considering the 
rack configuration value of each broker in the cluster). The former is quite 
error-prone, and the latter would lead to duplicate code in all such admin 
tools (which are not error-free either).

It would be great for the built-in replica assignment API and tool provided by 
Kafka to support a rack-aware verification option that would simply return an 
error when [some] brokers in any replica set share a common rack.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7026) Sticky assignor could assign a partition to multiple consumers (KIP-341)

2019-09-23 Thread Vahid Hashemian (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936147#comment-16936147
 ] 

Vahid Hashemian commented on KAFKA-7026:


[~redbrick9] Can you please provide detailed steps to reproduce? Thanks!

> Sticky assignor could assign a partition to multiple consumers (KIP-341)
> 
>
> Key: KAFKA-7026
> URL: https://issues.apache.org/jira/browse/KAFKA-7026
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Major
>  Labels: kip
> Fix For: 2.3.0
>
>
> In the following scenario sticky assignor assigns a topic partition to two 
> consumers in the group:
>  # Create a topic {{test}} with a single partition
>  # Start consumer {{c1}} in group {{sticky-group}} ({{c1}} becomes group 
> leader and gets {{test-0}})
>  # Start consumer {{c2}}  in group {{sticky-group}} ({{c1}} holds onto 
> {{test-0}}, {{c2}} does not get any partition) 
>  # Pause {{c1}} (e.g. using a Java debugger) ({{c2}} becomes leader and takes 
> over {{test-0}}, {{c1}} leaves the group)
>  # Resume {{c1}}
> At this point both {{c1}} and {{c2}} will have {{test-0}} assigned to them.
>  
> The reason is that {{c1}} has kept its previous assignment ({{test-0}}) from 
> the last assignment it received from the leader (itself) and did not get the 
> next round of assignments (when {{c2}} became leader) because it was paused. 
> Both {{c1}} and {{c2}} enter the rebalance supplying {{test-0}} as their 
> existing assignment. The sticky assignor code does not currently check for 
> and avoid this duplication.
>  
> Note: This issue was originally reported on 
> [StackOverflow|https://stackoverflow.com/questions/50761842/kafka-stickyassignor-breaking-delivery-to-single-consumer-in-the-group].
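
For anyone trying to reproduce: each of {{c1}} and {{c2}} is simply a consumer 
configured with the sticky assignor. A minimal member sketch, using the topic 
and group names from the steps above (broker address illustrative):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.StickyAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StickyGroupMember {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sticky-group");
        // the assignor under discussion in this ticket
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  StickyAssignor.class.getName());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1));  // joins/rebalances the group
            }
        }
    }
}
{code}

Pausing one such member in a debugger while the other rebalances reproduces the 
duplicate ownership of {{test-0}} described above.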



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8395) Add an ability to backup log segment files on truncation

2019-06-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8395:
---
Fix Version/s: (was: 2.2.1)
   2.2.2
   2.3.0

> Add an ability to backup log segment files on truncation
> 
>
> Key: KAFKA-8395
> URL: https://issues.apache.org/jira/browse/KAFKA-8395
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Michael Axiak
>Priority: Minor
> Fix For: 2.3.0, 2.2.2
>
>
> At HubSpot, we believe we hit a combination of bugs [1] [2], which may have 
> caused us to lose data. In this scenario, as part of metadata conflict 
> resolution a slowly starting-up broker recovered an offset of zero and 
> truncated segment files.
> As part of a belt-and-suspenders approach to reducing this risk in the 
> future, I propose adding the ability to rename/backup these files and 
> allowing Kafka to move on. Note that this breaks the ordering guarantees, but 
> allows one to recover the data and decide later how to approach it.
> This feature should be turned off by default but can be enabled with a 
> configuration option.
> (A pull request will follow soon on GitHub.)
> 1: https://issues.apache.org/jira/browse/KAFKA-2178
>  2: https://issues.apache.org/jira/browse/KAFKA-1120
>  
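
As a rough illustration of the proposed behavior (the flag and helper below are 
hypothetical, not the actual PR):

{code:java}
import java.io.IOException;
import java.nio.file.*;

public class SegmentBackup {
    // hypothetical config, e.g. log.segment.backup.on.truncation=true
    static final boolean BACKUP_ON_TRUNCATION = true;

    // Rename a segment file aside instead of deleting it during truncation,
    // so the data can be recovered and inspected later.
    static void backupOrDelete(Path segment) throws IOException {
        if (BACKUP_ON_TRUNCATION) {
            Path backup = segment.resolveSibling(segment.getFileName() + ".bak");
            Files.move(segment, backup, StandardCopyOption.ATOMIC_MOVE);
        } else {
            Files.delete(segment);
        }
    }
}
{code}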



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6951) Implement offset expiration semantics for unsubscribed topics

2019-05-14 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16839753#comment-16839753
 ] 

Vahid Hashemian commented on KAFKA-6951:


Hi [~apovzner]. Thanks for catching this. I added a comment in the KIP to point 
this out, but I feel that's not enough.

Perhaps a follow-up KIP for that remaining feature makes more sense?

> Implement offset expiration semantics for unsubscribed topics
> -
>
> Key: KAFKA-6951
> URL: https://issues.apache.org/jira/browse/KAFKA-6951
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Major
>
> [This 
> portion|https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets#KIP-211:ReviseExpirationSemanticsofConsumerGroupOffsets-UnsubscribingfromaTopic]
>  of KIP-211 will be implemented separately from the main PR.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6789) Add retry logic in AdminClient requests

2019-05-10 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837620#comment-16837620
 ] 

Vahid Hashemian commented on KAFKA-6789:


Updated.

> Add retry logic in AdminClient requests
> ---
>
> Key: KAFKA-6789
> URL: https://issues.apache.org/jira/browse/KAFKA-6789
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Guozhang Wang
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> In KafkaAdminClient, today we treat all error codes as fatal and set the 
> exception accordingly in the returned futures. But some error codes can be 
> retried internally, for example COORDINATOR_LOAD_IN_PROGRESS. We could 
> consider adding the retry logic internally in the admin client so that users 
> would not need to retry themselves.
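
Until such retries are internal, callers typically retry by hand. A sketch of 
the user-side pattern the internal retry would replace (the chosen call and the 
backoff are illustrative):

{code:java}
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.common.errors.RetriableException;

public class AdminRetry {
    public static ConsumerGroupDescription describeWithRetry(AdminClient admin, String group)
            throws InterruptedException, ExecutionException {
        while (true) {
            try {
                return admin.describeConsumerGroups(Collections.singleton(group))
                            .all().get().get(group);
            } catch (ExecutionException e) {
                // e.g. COORDINATOR_LOAD_IN_PROGRESS surfaces as a RetriableException subtype
                if (e.getCause() instanceof RetriableException) {
                    Thread.sleep(100);  // simple backoff; real code would cap retries
                    continue;
                }
                throw e;
            }
        }
    }
}
{code}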



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-10 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8240:
---
Fix Version/s: (was: 2.2.2)
   2.2.1

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> Reported on a PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.
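
The usual fix pattern here is a null-safe comparison. A sketch with a 
simplified Source-like class (not the actual Streams code):

{code:java}
import java.util.Objects;
import java.util.regex.Pattern;

// Simplified stand-in for InternalTopologyBuilder.Source
class Source {
    final String name;
    final Pattern topicPattern;  // may be null when a fixed topic list is used

    Source(String name, Pattern topicPattern) {
        this.name = name;
        this.topicPattern = topicPattern;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Source)) return false;
        Source other = (Source) o;
        // Objects.equals tolerates null; Pattern has no value-based equals,
        // so compare the pattern strings instead
        String p = topicPattern == null ? null : topicPattern.pattern();
        String q = other.topicPattern == null ? null : other.topicPattern.pattern();
        return Objects.equals(name, other.name) && Objects.equals(p, q);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, topicPattern == null ? null : topicPattern.pattern());
    }
}
{code}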



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6789) Add retry logic in AdminClient requests

2019-05-10 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6789:
---
Fix Version/s: (was: 2.2.2)
   2.2.1

> Add retry logic in AdminClient requests
> ---
>
> Key: KAFKA-6789
> URL: https://issues.apache.org/jira/browse/KAFKA-6789
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Guozhang Wang
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> In KafkaAdminClient, today we treat all error codes as fatal and set the 
> exception accordingly in the returned futures. But some error codes can be 
> retried internally, for example COORDINATOR_LOAD_IN_PROGRESS. We could 
> consider adding the retry logic internally in the admin client so that users 
> would not need to retry themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-10 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837618#comment-16837618
 ] 

Vahid Hashemian commented on KAFKA-8240:


[~mjsax] Sorry I missed it. Updated. Feel free to update any others that should 
also be included in RC1. Thanks.

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> Reported on a PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8348) Document of kafkaStreams improvement

2019-05-10 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8348:
---
Fix Version/s: (was: 2.2.2)
   2.2.1

> Document of kafkaStreams improvement
> 
>
> Key: KAFKA-8348
> URL: https://issues.apache.org/jira/browse/KAFKA-8348
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1
>Reporter: Lifei Chen
>Assignee: Lifei Chen
>Priority: Minor
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.3.0, 2.1.2, 2.2.1
>
>
> There is an out-of-date and erroneous example in KafkaStreams.java in the 
> current version.
>  * A Map is not supported for the initial StreamsConfig properties
>  * `int` does not support `toString`
> Related code:
> {code:java}
> // kafkaStreams.java
> * 
> * A simple example might look like this:
> * {@code
> * Properties props = new Properties();
> * props.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "my-stream-processing-application");
> * props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> * props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
> * props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
> *
> * StreamsBuilder builder = new StreamsBuilder();
> * builder.stream("my-input-topic").mapValues(value -> 
> String.valueOf(value.length())).to("my-output-topic");
> *
> * KafkaStreams streams = new KafkaStreams(builder.build(), props);
> * streams.start();
> * }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8348) Document of kafkaStreams improvement

2019-05-10 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16837314#comment-16837314
 ] 

Vahid Hashemian commented on KAFKA-8348:


Sounds good [~mjsax].

> Document of kafkaStreams improvement
> 
>
> Key: KAFKA-8348
> URL: https://issues.apache.org/jira/browse/KAFKA-8348
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1
>Reporter: Lifei Chen
>Assignee: Lifei Chen
>Priority: Minor
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.3.0, 2.1.2, 2.2.2
>
>
> There is an out-of-date and erroneous example in KafkaStreams.java in the 
> current version.
>  * A Map is not supported for the initial StreamsConfig properties
>  * `int` does not support `toString`
> Related code:
> {code:java}
> // kafkaStreams.java
> * 
> * A simple example might look like this:
> * {@code
> * Properties props = new Properties();
> * props.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "my-stream-processing-application");
> * props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> * props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
> * props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass());
> *
> * StreamsBuilder builder = new StreamsBuilder();
> * builder.stream("my-input-topic").mapValues(value -> 
> String.valueOf(value.length())).to("my-output-topic");
> *
> * KafkaStreams streams = new KafkaStreams(builder.build(), props);
> * streams.start();
> * }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-04 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-8289.

Resolution: Fixed

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one row, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> Time ticker A -> (4, A) / 25s and
> time ticker B -> (4, B) / 25s, all sent to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> 

[jira] [Resolved] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-03 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-7946.

   Resolution: Fixed
Fix Version/s: 2.2.1

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-03 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832206#comment-16832206
 ] 

Vahid Hashemian edited comment on KAFKA-7946 at 5/3/19 11:42 PM:
-

This has been fixed by [https://github.com/apache/kafka/pull/6312].

Resolving the ticket.


was (Author: vahid):
Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-03 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832582#comment-16832582
 ] 

Vahid Hashemian edited comment on KAFKA-8289 at 5/3/19 3:39 PM:


[~vvcephei] Since, according to your earlier comment, there seems to be a 
workaround for the reported issue, I didn't think this was blocking the bug fix 
release, and thought it could be included in a follow-up release. In any case, 
it seems that the fix is merged to trunk. Is there a reason this ticket is not 
resolved yet?


was (Author: vahid):
[~vvcephei] Since, according to your earlier comment, there seems to be a 
workaround for the reported issue. That's why I didn't think this was blocking 
the bug fix release, and can be included in a follow-up release. In any case, 
it seems that the fix is merged to trunk. Is there a reason this ticket is not 
resolved yet?

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one row, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> Time ticker A -> (4, A) / 25s and
> time ticker B -> (4, B) / 25s, all sent to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- 

[jira] [Commented] (KAFKA-8240) Source.equals() can fail with NPE

2019-05-03 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832585#comment-16832585
 ] 

Vahid Hashemian commented on KAFKA-8240:


[~mjsax] I see the PR is merged. Can this ticket be resolved?

> Source.equals() can fail with NPE
> -
>
> Key: KAFKA-8240
> URL: https://issues.apache.org/jira/browse/KAFKA-8240
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
>
> Reported on a PR: 
> [https://github.com/apache/kafka/pull/5284/files/1df6208f48b6b72091fea71323d94a16102ffd13#r270607795]
> InternalTopologyBuilder#Source.equals() might fail with NPE if 
> `topicPattern==null`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-03 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832582#comment-16832582
 ] 

Vahid Hashemian commented on KAFKA-8289:


[~vvcephei] According to your earlier comment, there seems to be a workaround 
for the reported issue. That's why I didn't think this was blocking the bug fix 
release, and it can be included in a follow-up release. In any case, it seems 
that the fix is merged to trunk. Is there a reason this ticket is not resolved 
yet?

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one row, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> Time ticker A -> (4, A) / 25s and
> time ticker B -> (4, B) / 25s, all sent to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   

[jira] [Updated] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8123:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,725] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=2, api=STOP_REPLICA, 

[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8289:
---
Fix Version/s: 2.2.2

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one row, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Result|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  from a session time window.
> Time ticker A -> (4, A) / 25s and
> time ticker B -> (4, B) / 25s, all sent to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{here is my log print}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> 

[jira] [Updated] (KAFKA-8229) Connect Sink Task updates nextCommit when commitRequest is true

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8229:
---
Fix Version/s: 2.2.2

> Connect Sink Task updates nextCommit when commitRequest is true
> ---
>
> Key: KAFKA-8229
> URL: https://issues.apache.org/jira/browse/KAFKA-8229
> Project: Kafka
>  Issue Type: Bug
>Reporter: Scott Reynolds
>Priority: Major
> Fix For: 2.3.0, 2.2.2
>
>
> Today, when a SinkTask calls context.requestCommit(), the next iteration of the
> WorkerSinkTask loop performs the commit. As part of the commit execution it also
> resets the nextCommit timestamp.
> This creates some odd behavior when a SinkTask calls context.requestCommit()
> multiple times. In our case, we were calling requestCommit whenever the number
> of Kafka records we had processed exceeded a threshold. This pushed nextCommit
> several days into the future, so the task only committed when the record
> threshold was reached.
> We expected the task to commit when the record threshold was reached OR when
> the timer went off.
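
A minimal sketch of the pattern described above, assuming a record-count
trigger; the class name and threshold are illustrative, not taken from the
connector in question:

{code}
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Hypothetical sink task: requests a commit once enough records have been seen.
public class ThresholdSinkTask extends SinkTask {
    private static final long RECORD_THRESHOLD = 10_000L; // illustrative value
    private long recordsSinceCommit = 0L;

    @Override
    public void put(Collection<SinkRecord> records) {
        recordsSinceCommit += records.size();
        if (recordsSinceCommit >= RECORD_THRESHOLD) {
            // Expected: commit on the next iteration, while the periodic timer
            // keeps firing independently. Observed: the commit also resets
            // nextCommit, so the timer-based commit drifts far into the future.
            context.requestCommit();
            recordsSinceCommit = 0L;
        }
    }

    @Override public void start(Map<String, String> props) { }
    @Override public void stop() { }
    @Override public String version() { return "0.0-sketch"; }
}
{code}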



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8140) Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8140:
---
Fix Version/s: 2.2.2

> Flaky Test SaslSslAdminClientIntegrationTest#testDescribeAndAlterConfigs
> 
>
> Key: KAFKA-8140
> URL: https://issues.apache.org/jira/browse/KAFKA-8140
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeAndAlterConfigs/]
> {quote}java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:74) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:73) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:56) at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:100)
>  at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81) 
> at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:79)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}
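
The failure above is environmental: the broker under test comes up with a SASL
listener but no JAAS login context, because the java.security.auth.login.config
system property was never set for the test JVM. For reference, a minimal sketch
of the kind of entry the broker looks up (login module and credentials below
are illustrative only), supplied via
-Djava.security.auth.login.config=/path/to/jaas.conf:

{code}
/* Hypothetical jaas.conf sketch: the broker looks for a "KafkaServer" entry,
   or a listener-prefixed one such as "sasl_ssl.KafkaServer". */
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
{code}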



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8141:
---
Fix Version/s: 2.2.2

> Flaky Test 
> FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
> -
>
> Key: KAFKA-8141
> URL: https://issues.apache.org/jira/browse/KAFKA-8141
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8139) Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8139:
---
Fix Version/s: 2.2.2

> Flaky Test SaslSslAdminClientIntegrationTest#testMetadataRefresh
> 
>
> Key: KAFKA-8139
> URL: https://issues.apache.org/jira/browse/KAFKA-8139
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testMetadataRefresh/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:334) at 
> java.util.concurrent.ForkJoinTask.doJoin(ForkJoinTask.java:391) at 
> java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:719) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync(Tasks.scala:379) at 
> scala.collection.parallel.ForkJoinTasks$WrappedTask.sync$(Tasks.scala:379) at 
> scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:440)
>  at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult(Tasks.scala:423) 
> at 
> scala.collection.parallel.ForkJoinTasks.executeAndWaitResult$(Tasks.scala:416)
>  at 
> scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:60)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTasks.executeAndWaitResult$(Tasks.scala:555)
>  at 
> scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:84)
>  at 
> scala.collection.parallel.ParIterableLike.foreach(ParIterableLike.scala:465) 
> at 
> scala.collection.parallel.ParIterableLike.foreach$(ParIterableLike.scala:464) 
> at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:58) at 
> kafka.utils.TestUtils$.shutdownServers(TestUtils.scala:201) at 
> kafka.integration.KafkaServerTestHarness.tearDown(KafkaServerTestHarness.scala:113)
>  at 
> kafka.api.IntegrationTestHarness.tearDown(IntegrationTestHarness.scala:134) 
> at 
> kafka.api.AdminClientIntegrationTest.tearDown(AdminClientIntegrationTest.scala:87)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:90){quote}
> STDOUT
> {quote}[2019-03-20 16:30:35,739] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer:159) 
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is not set at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
>  at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98) at 
> org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at kafka.network.Processor.<init>(SocketServer.scala:694) at 
> kafka.network.SocketServer.newProcessor(SocketServer.scala:344) at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:253)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:252) at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:216)
>  at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:214)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:214)
>  at kafka.network.SocketServer.startup(SocketServer.scala:114) at 
> kafka.server.KafkaServer.startup(KafkaServer.scala:253) at 
> kafka.utils.TestUtils$.createServer(TestUtils.scala:140) at 
> kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:101)
>  at scala.collection.Iterator.foreach(Iterator.scala:941) at 
> scala.collection.Iterator.foreach$(Iterator.scala:941) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1429) at 
> 

[jira] [Updated] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8138:
---
Fix Version/s: 2.2.2

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8136) Flaky Test MetadataRequestTest#testAllTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8136:
---
Fix Version/s: 2.2.2

> Flaky Test MetadataRequestTest#testAllTopicsRequest
> ---
>
> Key: KAFKA-8136
> URL: https://issues.apache.org/jira/browse/KAFKA-8136
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/78/testReport/junit/kafka.server/MetadataRequestTest/testAllTopicsRequest/]
> {quote}java.lang.AssertionError: Partition [t2,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.server.MetadataRequestTest.testAllTopicsRequest(MetadataRequestTest.scala:201){quote}
> STDOUT
> {quote}[2019-03-20 00:05:17,921] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:23,520] WARN Unable to 
> read additional data from client sessionid 0x10033b4d8c6, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:23,794] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] 
> Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:30,735] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> notInternal-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:31,156] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> notInternal-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:37,817] WARN Unable to 
> read additional data from client sessionid 0x10033b51c370002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-20 
> 00:05:51,571] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
> Error for partition t1-2 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:05:51,571] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,153] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:22,622] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> t2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 00:06:35,106] ERROR 
> 

[jira] [Updated] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8137:
---
Fix Version/s: 2.2.2

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,093] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:10,303] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,493] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:14,724] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,388] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:28:21,394] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,224] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:48,249] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:29:49,255] ERROR 
> 

[jira] [Updated] (KAFKA-8132) Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8132:
---
Fix Version/s: 2.2.2

> Flaky Test MirrorMakerIntegrationTest#testCommaSeparatedRegex
> --
>
> Key: KAFKA-8132
> URL: https://issues.apache.org/jira/browse/KAFKA-8132
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/150/tests]
> {quote}kafka.tools.MirrorMaker$NoRecordsException
> at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483)
> at 
> kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:742)
> at 
> kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90){quote}
> STDOUT (repeated output):
> {quote}[2019-03-19 21:47:06,115] ERROR [Consumer clientId=consumer-1029, 
> groupId=test-group] Offset commit failed on partition nonexistent-topic1-0 at 
> offset 0: This server does not host this topic-partition. 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812){quote} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8119) KafkaConfig listener accessors may fail during dynamic update

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8119:
---
Fix Version/s: 2.2.2

> KafkaConfig listener accessors may fail during dynamic update
> -
>
> Key: KAFKA-8119
> URL: https://issues.apache.org/jira/browse/KAFKA-8119
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.3.0, 2.2.2
>
>
> Noticed a test failure in DynamicBrokerReconfigurationTest where a test thread 
> accessing `KafkaConfig#listeners` during a dynamic update of listeners threw an 
> exception. In general, most dynamic configs can be updated independently, but 
> the listeners and the listener security protocol map need to be updated 
> together when a listener that is not in the map is added, or when an entry is 
> removed from the map along with its listener. We don't expect to see this 
> failure in the implementation code, because dynamic config updates run on a 
> single thread and SocketServer processes the full update together, validating 
> the full config before applying the changes. But we should ensure that 
> KafkaConfig#listeners, KafkaConfig#advertisedListeners, etc. work even if a 
> dynamic update occurs during the call, since these methods are used in tests 
> and could potentially be used in implementation code from different threads in 
> the future.
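
As a toy illustration (not Kafka code) of the hazard described above: two
coupled settings that are read without a consistent snapshot can be observed
mid-update by another thread, even though every completed update is valid.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Kafka source: a listener list and its
// security-protocol map are coupled, so a reader that resolves one against
// the other can fail if it runs between the two writes in addListener.
public class ListenerConfigSketch {
    private volatile String listeners = "PLAINTEXT://:9092";
    private final Map<String, String> protocolMap =
            new ConcurrentHashMap<>(Map.of("PLAINTEXT", "PLAINTEXT"));

    // Analogous to an accessor like KafkaConfig#listeners: every configured
    // listener name must resolve against the protocol map.
    public void validateListeners() {
        for (String endpoint : listeners.split(",")) {
            String name = endpoint.substring(0, endpoint.indexOf("://"));
            if (!protocolMap.containsKey(name))
                // Failure mode: the new listener string was visible to this
                // reader before the matching map entry was published.
                throw new IllegalArgumentException("No security protocol for " + name);
        }
    }

    // Dynamic update: valid before and after, but not in between.
    public void addListener(String name, int port, String protocol) {
        listeners = listeners + "," + name + "://:" + port; // step 1: listener
        protocolMap.put(name, protocol);                    // step 2: mapping
    }
}
{code}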



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8110) Flaky Test DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8110:
---
Fix Version/s: 2.2.2

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersWithConsumersWithoutAssignedPartitions
> --
>
> Key: KAFKA-8110
> URL: https://issues.apache.org/jira/browse/KAFKA-8110
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/67/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersWithConsumersWithoutAssignedPartitions/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersWithConsumersWithoutAssignedPartitions(DescribeConsumerGroupTest.scala:372){quote}
> STDOUT
> {quote}[2019-03-14 20:01:52,347] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748)
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)     ASSIGNMENT-STRATEGY  STATE  #MEMBERS
> localhost:44669 (0){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8133) Flaky Test MetadataRequestTest#testNoTopicsRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8133:
---
Fix Version/s: 2.2.2

> Flaky Test MetadataRequestTest#testNoTopicsRequest
> --
>
> Key: KAFKA-8133
> URL: https://issues.apache.org/jira/browse/KAFKA-8133
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.1-jdk8/detail/kafka-2.1-jdk8/151/tests]
> {quote}org.apache.kafka.common.errors.TopicExistsException: Topic 't1' 
> already exists.{quote}
> STDOUT:
> {quote}[2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition isr-after-broker-shutdown-0 at 
> offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:00,982] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition isr-after-broker-shutdown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:15,319] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition replicaDown-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:20,049] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition testAutoCreate_Topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,080] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition notInternal-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:27,538] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition notInternal-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:28,863] WARN Unable to read additional data from client 
> sessionid 0x102fbd81b150003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-03-20 03:49:40,478] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition t1-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,921] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition t2-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-20 03:49:40,922] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition t2-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host 

[jira] [Updated] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8084:
---
Fix Version/s: 2.2.2

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
> foo   0         0              0              0   -           -    -
> COORDINATOR (ID)     ASSIGNMENT-STRATEGY  STATE  #MEMBERS
> localhost:45812 (0)                       Empty  0{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8085) Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8085:
---
Fix Version/s: 2.2.2

> Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration
> --
>
> Key: KAFKA-8085
> URL: https://issues.apache.org/jira/browse/KAFKA-8085
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsByDuration/]
> {quote}java.lang.AssertionError: Expected that consumer group has consumed 
> all messages from topic/partition. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsByDuration(ResetConsumerGroupOffsetTest.scala:146){quote}
> STDOUT
> {quote}[2019-03-09 08:39:29,856] WARN Unable to read additional data from 
> client sessionid 0x105f6adb208, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-09 08:39:46,373] 
> WARN Unable to read additional data from client sessionid 0x105f6adf4c50001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8092) Flaky Test GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8092:
---
Fix Version/s: 2.2.2

> Flaky Test 
> GroupAuthorizerIntegrationTest#testSendOffsetsWithNoConsumerGroupDescribeAccess
> --
>
> Key: KAFKA-8092
> URL: https://issues.apache.org/jira/browse/KAFKA-8092
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/64/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testSendOffsetsWithNoConsumerGroupDescribeAccess/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-11 16:08:29,319] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38324,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38324-127.0.0.1:59458-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:29,933] ERROR [Consumer 
> clientId=consumer-99, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-11 16:08:29,933] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-11 16:08:31,370] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=33310,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:33310-127.0.0.1:49676-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:34,437] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=35999,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:35999-127.0.0.1:48268-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-11 16:08:40,978] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=38267,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:38267-127.0.0.1:53148-0, 
> 

[jira] [Updated] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8087:
---
Fix Version/s: 2.2.2

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, 

[jira] [Updated] (KAFKA-8086) Flaky Test GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8086:
---
Fix Version/s: 2.2.2

> Flaky Test 
> GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead
> --
>
> Key: KAFKA-8086
> URL: https://issues.apache.org/jira/browse/KAFKA-8086
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testPatternSubscriptionWithTopicAndGroupRead/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-09 08:40:34,220] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41020,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41020-127.0.0.1:52304-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:35,336] ERROR [Consumer 
> clientId=consumer-98, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-09 08:40:35,336] ERROR [Consumer clientId=consumer-98, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-09 08:40:41,649] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=36903,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:36903-127.0.0.1:44978-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:53,898] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41067,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41067-127.0.0.1:40882-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:42:07,717] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=46276,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:46276-127.0.0.1:41362-0, 
> 

[jira] [Updated] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8077:
---
Fix Version/s: 2.2.2

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0, 2.1.2, 2.2.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8082) Flaky Test ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8082:
---
Fix Version/s: 2.2.2

> Flaky Test 
> ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-8082
> URL: https://issues.apache.org/jira/browse/KAFKA-8082
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.api/ProducerFailureHandlingTest/testNotEnoughReplicasAfterBrokerShutdown/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException: 
> Messages are written to the log, but to fewer in-sync replicas than required. 
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
>  at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:270){quote}
> STDOUT
> {quote}[2019-03-09 03:59:24,897] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=1, fetcherId=0] Error for partition topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:28,028] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,046] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> minisrtest-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,245] ERROR 
> [ReplicaManager broker=1] Error processing append operation on partition 
> minisrtest-0 (kafka.server.ReplicaManager:76) 
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
> current ISR Set(1, 0) is insufficient to satisfy the min.isr requirement of 3 
> for partition minisrtest-0 [2019-03-09 04:00:01,212] ERROR [ReplicaFetcher 
> replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:02,214] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:03,216] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:23,144] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:24,146] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:25,148] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:44,607] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> minisrtest2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8083) Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8083:
---
Fix Version/s: 2.2.2

> Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests
> --
>
> Key: KAFKA-8083
> URL: https://issues.apache.org/jira/browse/KAFKA-8083
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.server/DelegationTokenRequestsTest/testDelegationTokenRequests/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.server.DelegationTokenRequestsTest.setUp(DelegationTokenRequestsTest.scala:46){quote}
> STDOUT
> {quote}[2019-03-09 04:01:31,789] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,789] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) 
> [2019-03-09 04:01:31,793] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,794] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832229#comment-16832229
 ] 

Vahid Hashemian commented on KAFKA-8075:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> 

[jira] [Updated] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8075:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) 

[jira] [Updated] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8076:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8073) Transient failure in kafka.api.UserQuotaTest.testThrottledProducerConsumer

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8073:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Transient failure in kafka.api.UserQuotaTest.testThrottledProducerConsumer
> --
>
> Key: KAFKA-8073
> URL: https://issues.apache.org/jira/browse/KAFKA-8073
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Bill Bejeck
>Priority: Critical
> Fix For: 2.3.0, 2.2.2
>
>
> Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20134/]
>  
> Stacktrace and STDOUT
> {noformat}
> Error Message
> java.lang.AssertionError: Client with id=QuotasTestProducer-1 should have 
> been throttled
> Stacktrace
> java.lang.AssertionError: Client with id=QuotasTestProducer-1 should have 
> been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> 

[jira] [Commented] (KAFKA-8073) Transient failure in kafka.api.UserQuotaTest.testThrottledProducerConsumer

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832227#comment-16832227
 ] 

Vahid Hashemian commented on KAFKA-8073:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Transient failure in kafka.api.UserQuotaTest.testThrottledProducerConsumer
> --
>
> Key: KAFKA-8073
> URL: https://issues.apache.org/jira/browse/KAFKA-8073
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Bill Bejeck
>Priority: Critical
> Fix For: 2.3.0, 2.2.2
>
>
> Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20134/]
>  
> Stacktrace and STDOUT
> {noformat}
> Error Message
> java.lang.AssertionError: Client with id=QuotasTestProducer-1 should have 
> been throttled
> Stacktrace
> java.lang.AssertionError: Client with id=QuotasTestProducer-1 should have 
> been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> 

[jira] [Commented] (KAFKA-8076) Flaky Test ProduceRequestTest#testSimpleProduceRequest

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832230#comment-16832230
 ] 

Vahid Hashemian commented on KAFKA-8076:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ProduceRequestTest#testSimpleProduceRequest
> --
>
> Key: KAFKA-8076
> URL: https://issues.apache.org/jira/browse/KAFKA-8076
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.server/ProduceRequestTest/testSimpleProduceRequest/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.server.ProduceRequestTest.createTopicAndFindPartitionWithLeader(ProduceRequestTest.scala:91)
>  at 
> kafka.server.ProduceRequestTest.testSimpleProduceRequest(ProduceRequestTest.scala:42)
> {quote}
> STDOUT
> {quote}[2019-03-08 01:42:24,797] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=2, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-08 01:42:38,287] WARN Unable to 
> read additional data from client sessionid 0x100712b09280002, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8064) Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832225#comment-16832225
 ] 

Vahid Hashemian commented on KAFKA-8064:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion
> --
>
> Key: KAFKA-8064
> URL: https://issues.apache.org/jira/browse/KAFKA-8064
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/54/testReport/junit/kafka.admin/DeleteTopicTest/testRecreateTopicAfterDeletion/]
> {quote}java.lang.AssertionError: Admin path /admin/delete_topic/test path not 
> deleted even after a replica is restarted at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1056) at 
> kafka.admin.DeleteTopicTest.testRecreateTopicAfterDeletion(DeleteTopicTest.scala:283){quote}
> STDOUT
> {quote}[2019-03-07 16:05:05,661] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition test-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:26,122] WARN Unable to 
> read additional data from client sessionid 0x1006f1dd1a60003, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 
> 16:05:36,511] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
> Error for partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:36,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:43,418] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:43,422] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:51,668] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. WARNING: If partitions are increased for 
> a topic that has a key, the partition logic or ordering of the messages will 
> be affected Adding partitions succeeded! [2019-03-07 16:05:56,135] WARN 
> Unable to read additional data from client sessionid 0x1006f1e2abb0006, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 16:06:00,286] 
> ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for 
> partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:06:00,357] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:06:01,289] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> 

[jira] [Commented] (KAFKA-8068) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832226#comment-16832226
 ] 

Vahid Hashemian commented on KAFKA-8068:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup
> ---
>
> Key: KAFKA-8068
> URL: https://issues.apache.org/jira/browse/KAFKA-8068
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/55/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroup/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroup(DescribeConsumerGroupTest.scala:154){quote}
>  
> STDOUT
> {quote}[2019-03-07 18:55:40,194] WARN Unable to read additional data from 
> client sessionid 0x1006fb9a65f0001, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) TOPIC PARTITION 
> CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID foo 0 0 0 0 - - 
> - TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
> CLIENT-ID foo 0 0 0 0 - - - COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE 
> #MEMBERS localhost:35213 (0) Empty 0 [2019-03-07 18:58:42,206] WARN Unable to 
> read additional data from client sessionid 0x1006fbc6962, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8068) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8068:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup
> ---
>
> Key: KAFKA-8068
> URL: https://issues.apache.org/jira/browse/KAFKA-8068
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/55/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroup/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroup(DescribeConsumerGroupTest.scala:154){quote}
>  
> STDOUT
> {quote}[2019-03-07 18:55:40,194] WARN Unable to read additional data from 
> client sessionid 0x1006fb9a65f0001, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) TOPIC PARTITION 
> CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID foo 0 0 0 0 - - 
> - TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST 
> CLIENT-ID foo 0 0 0 0 - - - COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE 
> #MEMBERS localhost:35213 (0) Empty 0 [2019-03-07 18:58:42,206] WARN Unable to 
> read additional data from client sessionid 0x1006fbc6962, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8064) Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8064:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion
> --
>
> Key: KAFKA-8064
> URL: https://issues.apache.org/jira/browse/KAFKA-8064
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/54/testReport/junit/kafka.admin/DeleteTopicTest/testRecreateTopicAfterDeletion/]
> {quote}java.lang.AssertionError: Admin path /admin/delete_topic/test path not 
> deleted even after a replica is restarted at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1056) at 
> kafka.admin.DeleteTopicTest.testRecreateTopicAfterDeletion(DeleteTopicTest.scala:283){quote}
> STDOUT
> {quote}[2019-03-07 16:05:05,661] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition test-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:26,122] WARN Unable to 
> read additional data from client sessionid 0x1006f1dd1a60003, likely client 
> has closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 
> 16:05:36,511] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] 
> Error for partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:36,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:43,418] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:43,422] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:05:51,668] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. WARNING: If partitions are increased for 
> a topic that has a key, the partition logic or ordering of the messages will 
> be affected Adding partitions succeeded! [2019-03-07 16:05:56,135] WARN 
> Unable to read additional data from client sessionid 0x1006f1e2abb0006, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 16:06:00,286] 
> ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for 
> partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:06:00,357] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-07 16:06:01,289] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> 

[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832224#comment-16832224
 ] 

Vahid Hashemian commented on KAFKA-8059:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8059:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8055) Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8055:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest
> -
>
> Key: KAFKA-8055
> URL: https://issues.apache.org/jira/browse/KAFKA-8055
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/49/]
> {quote}java.lang.AssertionError: log cleaner should have processed up to 
> offset 588, but lastCleaned=295 at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.log.LogCleanerParameterizedIntegrationTest.checkLastCleaned(LogCleanerParameterizedIntegrationTest.scala:284)
>  at 
> kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(LogCleanerParameterizedIntegrationTest.scala:77){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8051) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1683#comment-1683
 ] 

Vahid Hashemian commented on KAFKA-8051:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8051
> URL: https://issues.apache.org/jira/browse/KAFKA-8051
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker can run out of memory when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which creates an ever-growing 
> number of KafkaMbean instances on the broker.
> The root cause is that the broker does not remove the corresponding 
> KafkaMbean when it detects that the connection has closed.
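
For illustration, a minimal Java sketch (not from the ticket) of the leak pattern described above, assuming the broker keeps one metrics entry, backing a KafkaMbean, per clientId. The names ClientMetricsRegistry, onRequest, and onConnectionClosed are hypothetical and are not Kafka APIs; the point is only that without a removal step on connection close, a workload that reconnects with a fresh clientId each time grows the registry without bound.

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a registry that creates one metrics object per
// clientId. If entries are never removed when a connection closes, and
// every reconnect uses a fresh clientId, the map grows without bound --
// the OOM pattern described in this ticket.
public class ClientMetricsRegistry {

    static final class ClientMetrics {
        final String clientId;
        ClientMetrics(String clientId) { this.clientId = clientId; }
    }

    private final Map<String, ClientMetrics> metricsByClientId =
            new ConcurrentHashMap<>();

    // Called whenever a request from a (possibly new) clientId is seen:
    // creates and registers the per-client metrics on first use.
    ClientMetrics onRequest(String clientId) {
        return metricsByClientId.computeIfAbsent(clientId, ClientMetrics::new);
    }

    // The missing step the ticket points at: drop the per-client entry
    // (and its MBean) when the connection is detected as closed.
    void onConnectionClosed(String clientId) {
        metricsByClientId.remove(clientId);
    }

    int size() { return metricsByClientId.size(); }

    public static void main(String[] args) {
        ClientMetricsRegistry registry = new ClientMetricsRegistry();
        // Simulate clients that reconnect with a fresh clientId each time.
        for (int i = 0; i < 1_000; i++) {
            String clientId = "producer-" + i;
            registry.onRequest(clientId);
            // Comment out the next line to reproduce the leak: the registry
            // then retains all 1,000 entries instead of none.
            registry.onConnectionClosed(clientId);
        }
        System.out.println("entries retained: " + registry.size());
    }
}
{noformat}

Under this reading, the fix the ticket asks for amounts to invoking the removal path from the broker's connection-close handling, so that churned clientIds no longer pin their MBeans.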



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8055) Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832223#comment-16832223
 ] 

Vahid Hashemian commented on KAFKA-8055:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest
> -
>
> Key: KAFKA-8055
> URL: https://issues.apache.org/jira/browse/KAFKA-8055
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.2
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/49/]
> {quote}java.lang.AssertionError: log cleaner should have processed up to 
> offset 588, but lastCleaned=295 at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.log.LogCleanerParameterizedIntegrationTest.checkLastCleaned(LogCleanerParameterizedIntegrationTest.scala:284)
>  at 
> kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(LogCleanerParameterizedIntegrationTest.scala:77){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8050) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8050:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8050
> URL: https://issues.apache.org/jira/browse/KAFKA-8050
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker can run out of memory when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which creates an ever-growing 
> number of KafkaMbean instances on the broker.
> The root cause is that the broker does not remove the corresponding 
> KafkaMbean when it detects that the connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8049) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832220#comment-16832220
 ] 

Vahid Hashemian commented on KAFKA-8049:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8049
> URL: https://issues.apache.org/jira/browse/KAFKA-8049
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8050) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832221#comment-16832221
 ] 

Vahid Hashemian commented on KAFKA-8050:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8050
> URL: https://issues.apache.org/jira/browse/KAFKA-8050
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8051) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8051:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8051
> URL: https://issues.apache.org/jira/browse/KAFKA-8051
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8049) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8049:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8049
> URL: https://issues.apache.org/jira/browse/KAFKA-8049
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8048) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8048:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8048
> URL: https://issues.apache.org/jira/browse/KAFKA-8048
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8048) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832218#comment-16832218
 ] 

Vahid Hashemian commented on KAFKA-8048:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8048
> URL: https://issues.apache.org/jira/browse/KAFKA-8048
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8047) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832217#comment-16832217
 ] 

Vahid Hashemian commented on KAFKA-8047:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8047
> URL: https://issues.apache.org/jira/browse/KAFKA-8047
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8003) Flaky Test TransactionsTest #testFencingOnTransactionExpiration

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8003:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test TransactionsTest #testFencingOnTransactionExpiration
> ---
>
> Key: KAFKA-8003
> URL: https://issues.apache.org/jira/browse/KAFKA-8003
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/34/]
> {quote}java.lang.AssertionError: expected:<1> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.TransactionsTest.testFencingOnTransactionExpiration(TransactionsTest.scala:510){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8003) Flaky Test TransactionsTest #testFencingOnTransactionExpiration

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832215#comment-16832215
 ] 

Vahid Hashemian commented on KAFKA-8003:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test TransactionsTest #testFencingOnTransactionExpiration
> ---
>
> Key: KAFKA-8003
> URL: https://issues.apache.org/jira/browse/KAFKA-8003
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/34/]
> {quote}java.lang.AssertionError: expected:<1> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.TransactionsTest.testFencingOnTransactionExpiration(TransactionsTest.scala:510){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8047) remove KafkaMbean when network close

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8047:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> remove KafkaMbean when network close
> 
>
> Key: KAFKA-8047
> URL: https://issues.apache.org/jira/browse/KAFKA-8047
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: limeng
>Priority: Critical
> Fix For: 2.2.2
>
>
> The broker server will run out of memory (OOM) when:
>  * a large number of clients frequently close and reconnect, and
>  * the clientId changes on every reconnect, which gives rise to too many 
> KafkaMbeans on the broker.
> The reason is that the broker forgets to remove the KafkaMbean when it 
> detects that a connection has closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7978:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
> ---
>
> Key: KAFKA-7978
> URL: https://issues.apache.org/jira/browse/KAFKA-7978
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7988:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832212#comment-16832212
 ] 

Vahid Hashemian commented on KAFKA-7988:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7964) Flaky Test ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832209#comment-16832209
 ] 

Vahid Hashemian commented on KAFKA-7964:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> --
>
> Key: KAFKA-7964
> URL: https://issues.apache.org/jira/browse/KAFKA-7964
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: expected:<100> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.ConsumerBounceTest.receiveExactRecords(ConsumerBounceTest.scala:551)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2(ConsumerBounceTest.scala:409)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2$adapted(ConsumerBounceTest.scala:408)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize(ConsumerBounceTest.scala:408){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7969) Flaky Test DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7969:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-7969
> URL: https://issues.apache.org/jira/browse/KAFKA-7969
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/24/]
> {quote}java.lang.AssertionError: Expected no active member in describe group 
> results, state: Some(Empty), assignments: Some(List()) at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeOffsetsOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:278){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian closed KAFKA-7978.
--

Closing as duplicate.

> Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups
> ---
>
> Key: KAFKA-7978
> URL: https://issues.apache.org/jira/browse/KAFKA-7978
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7947) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832207#comment-16832207
 ] 

Vahid Hashemian commented on KAFKA-7947:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow
> 
>
> Key: KAFKA-7947
> URL: https://issues.apache.org/jira/browse/KAFKA-7947
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.AssertionError: expected: startOffset=0), EpochEntry(epoch=1, startOffset=1))> but 
> was: startOffset=1))> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:144) at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldFollowLeaderEpochBasicWorkflow(EpochDrivenReplicationProtocolAcceptanceTest.scala:101){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7957:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7964) Flaky Test ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7964:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> ConsumerBounceTest#testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> --
>
> Key: KAFKA-7964
> URL: https://issues.apache.org/jira/browse/KAFKA-7964
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: expected:<100> but was:<0> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.junit.Assert.assertEquals(Assert.java:631) at 
> kafka.api.ConsumerBounceTest.receiveExactRecords(ConsumerBounceTest.scala:551)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2(ConsumerBounceTest.scala:409)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize$2$adapted(ConsumerBounceTest.scala:408)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize(ConsumerBounceTest.scala:408){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832208#comment-16832208
 ] 

Vahid Hashemian commented on KAFKA-7957:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7946:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832206#comment-16832206
 ] 

Vahid Hashemian commented on KAFKA-7946:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
> ---
>
> Key: KAFKA-7946
> URL: https://issues.apache.org/jira/browse/KAFKA-7946
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.NullPointerException at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832205#comment-16832205
 ] 

Vahid Hashemian commented on KAFKA-7937:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}
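The CoordinatorNotAvailableException above is a retriable error: the group
coordinator simply has not been elected yet when the test issues the reset. A
minimal sketch of the kind of retry guard that makes such a call robust,
assuming only that the action throws Kafka's RetriableException hierarchy (the
helper name and parameters are made up for illustration):

{code}
import java.util.function.Supplier;
import org.apache.kafka.common.errors.RetriableException;

final class Retry {
    // Retries an action while it throws a retriable Kafka error
    // (CoordinatorNotAvailableException extends RetriableException).
    static <T> T untilAvailable(Supplier<T> action, int attempts, long backoffMs)
            throws InterruptedException {
        RetriableException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.get();
            } catch (RetriableException e) {
                last = e;
                Thread.sleep(backoffMs);
            }
        }
        throw last;  // give up after the configured number of attempts
    }
}
{code}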



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7947) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7947:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldFollowLeaderEpochBasicWorkflow
> 
>
> Key: KAFKA-7947
> URL: https://issues.apache.org/jira/browse/KAFKA-7947
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
> {quote}java.lang.AssertionError: expected: startOffset=0), EpochEntry(epoch=1, startOffset=1))> but 
> was: startOffset=1))> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:144) at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldFollowLeaderEpochBasicWorkflow(EpochDrivenReplicationProtocolAcceptanceTest.scala:101){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7540) Flaky Test ConsumerBounceTest#testClose

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832204#comment-16832204
 ] 

Vahid Hashemian commented on KAFKA-7540:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test ConsumerBounceTest#testClose
> ---
>
> Key: KAFKA-7540
> URL: https://issues.apache.org/jira/browse/KAFKA-7540
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: John Roesler
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed on Java 8: 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/17314/testReport/junit/kafka.api/ConsumerBounceTest/testClose/]
>  
> Stacktrace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at 
> kafka.integration.KafkaServerTestHarness.killBroker(KafkaServerTestHarness.scala:146)
>   at 
> kafka.api.ConsumerBounceTest.checkCloseWithCoordinatorFailure(ConsumerBounceTest.scala:238)
>   at kafka.api.ConsumerBounceTest.testClose(ConsumerBounceTest.scala:211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[jira] [Updated] (KAFKA-7540) Flaky Test ConsumerBounceTest#testClose

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7540:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test ConsumerBounceTest#testClose
> ---
>
> Key: KAFKA-7540
> URL: https://issues.apache.org/jira/browse/KAFKA-7540
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0
>Reporter: John Roesler
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed on Java 8: 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/17314/testReport/junit/kafka.api/ConsumerBounceTest/testClose/]
>  
> Stacktrace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at 
> kafka.integration.KafkaServerTestHarness.killBroker(KafkaServerTestHarness.scala:146)
>   at 
> kafka.api.ConsumerBounceTest.checkCloseWithCoordinatorFailure(ConsumerBounceTest.scala:238)
>   at kafka.api.ConsumerBounceTest.testClose(ConsumerBounceTest.scala:211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   

[jira] [Updated] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-7937:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.1.1, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets 
> for all observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832203#comment-16832203
 ] 

Vahid Hashemian commented on KAFKA-6824:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832202#comment-16832202
 ] 

Vahid Hashemian commented on KAFKA-2933:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, unit tests
>Affects Versions: 0.10.0.0, 2.2.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0, 2.2.2
>
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console
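The ConcurrentModificationException in the log is the consumer deliberately
rejecting cross-thread access: KafkaConsumer is single-threaded by design, and
wakeup() is the one method documented as safe to call from another thread. A
minimal sketch of that pattern, assuming String deserializers and a
hypothetical topic name:

{code}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class SingleThreadedConsumer implements Runnable {
    private final KafkaConsumer<String, String> consumer;

    SingleThreadedConsumer(Properties props) {
        this.consumer = new KafkaConsumer<>(props);
    }

    @Override
    public void run() {
        try {
            consumer.subscribe(Collections.singletonList("topic1"));
            while (true) {
                // poll() and every other consumer call stay on this thread
                ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value()));
            }
        } catch (WakeupException e) {
            // expected on shutdown: thrown inside poll() after wakeup()
        } finally {
            consumer.close();
        }
    }

    // wakeup() is the only KafkaConsumer method that may safely be
    // invoked from a different thread.
    public void shutdown() {
        consumer.wakeup();
    }
}
{code}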



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6824) Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-6824:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Flaky Test DynamicBrokerReconfigurationTest#testAddRemoveSslListener
> 
>
> Key: KAFKA-6824
> URL: https://issues.apache.org/jira/browse/KAFKA-6824
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Anna Povzner
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.2
>
>
> Observed two failures of this test (both in PR builds) :(
>  
> *Failure #1: (JDK 7 and Scala 2.11 )*
> *17:20:49* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *17:20:49*     java.lang.AssertionError: expected:<10> but was:<12>
> *17:20:49*         at org.junit.Assert.fail(Assert.java:88)
> *17:20:49*         at org.junit.Assert.failNotEquals(Assert.java:834)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:645)
> *17:20:49*         at org.junit.Assert.assertEquals(Assert.java:631)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:959)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:784)
> *17:20:49*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
>  
> *Failure #2: (JDK 8)*
> *18:46:23* kafka.server.DynamicBrokerReconfigurationTest > 
> testAddRemoveSslListener FAILED
> *18:46:23*     java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:77)
> *18:46:23*         at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$verifyProduceConsume$3(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
> *18:46:23*         at scala.collection.Iterator.foreach(Iterator.scala:929)
> *18:46:23*         at scala.collection.Iterator.foreach$(Iterator.scala:929)
> *18:46:23*         at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach(IterableLike.scala:71)
> *18:46:23*         at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:70)
> *18:46:23*         at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> *18:46:23*         at 
> scala.collection.TraversableLike.map(TraversableLike.scala:234)
> *18:46:23*         at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:227)
> *18:46:23*         at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyProduceConsume(DynamicBrokerReconfigurationTest.scala:953)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyRemoveListener(DynamicBrokerReconfigurationTest.scala:816)
> *18:46:23*         at 
> kafka.server.DynamicBrokerReconfigurationTest.testAddRemoveSslListener(DynamicBrokerReconfigurationTest.scala:705)
> *18:46:23*
> *18:46:23*         Caused by:
> *18:46:23*         
> org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
> not the leader for that topic-partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-2933:
---
Fix Version/s: (was: 2.2.1)
   2.2.2

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, unit tests
>Affects Versions: 0.10.0.0, 2.2.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0, 2.2.2
>
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832201#comment-16832201
 ] 

Vahid Hashemian commented on KAFKA-8077:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}
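
The failure reads like a metadata race rather than a broken assertion: UnknownTopicOrPartitionException is a retriable error that shows up when a produce request reaches a broker that has not yet learned about a freshly (re)created topic, and the InterruptException in STDERR appears to be the consumer thread being torn down afterwards. A minimal sketch of one way to wait out the propagation window before producing, assuming the Java AdminClient and a local broker (waitForTopic is an illustrative helper, not part of the test):

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicReadiness {
    // Illustrative helper: poll describeTopics() until the metadata is
    // visible (or a deadline passes) instead of racing the first send.
    static void waitForTopic(AdminClient admin, String topic, long timeoutMs)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            try {
                admin.describeTopics(Collections.singletonList(topic)).all().get();
                return; // metadata propagated: safe to produce
            } catch (ExecutionException e) {
                // typically UnknownTopicOrPartitionException while propagating
                if (System.currentTimeMillis() > deadline) throw e;
                Thread.sleep(100);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (AdminClient admin = AdminClient.create(props)) {
            waitForTopic(admin, "topic", 15_000);
        }
    }
}
{code}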



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8082) Flaky Test ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8082:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-8082
> URL: https://issues.apache.org/jira/browse/KAFKA-8082
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.api/ProducerFailureHandlingTest/testNotEnoughReplicasAfterBrokerShutdown/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException: 
> Messages are written to the log, but to fewer in-sync replicas than required. 
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
>  at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:270){quote}
> STDOUT
> {quote}[2019-03-09 03:59:24,897] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=1, fetcherId=0] Error for partition topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:28,028] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,046] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> minisrtest-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,245] ERROR 
> [ReplicaManager broker=1] Error processing append operation on partition 
> minisrtest-0 (kafka.server.ReplicaManager:76) 
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
> current ISR Set(1, 0) is insufficient to satisfy the min.isr requirement of 3 
> for partition minisrtest-0 [2019-03-09 04:00:01,212] ERROR [ReplicaFetcher 
> replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:02,214] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:03,216] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:23,144] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:24,146] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:25,148] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:44,607] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> minisrtest2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}
>  
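
The broker side is behaving as designed: the log spells out that the ISR Set(1, 0) has size 2 while the topic demands min.insync.replicas=3, so an acks=all append must be rejected once a broker is shut down. A minimal sketch of the client side of that contract, assuming a topic created with min.insync.replicas=3 and a local broker (the topic name is borrowed from the log; everything else is illustrative):

{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinIsrDemo {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // an ack requires the full min ISR
        props.put(ProducerConfig.RETRIES_CONFIG, 0);  // surface the error instead of retrying
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // Assumes "minisrtest" was created with min.insync.replicas=3; once a
        // broker is shut down the ISR has only 2 members and appends fail.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("minisrtest", "k", "v")).get();
        } catch (ExecutionException e) {
            // NotEnoughReplicas(AfterAppend)Exception, as in the failure above
            System.err.println("Append rejected: " + e.getCause());
        }
    }
}
{code}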



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8077:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0, 2.1.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8083) Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832199#comment-16832199
 ] 

Vahid Hashemian commented on KAFKA-8083:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests
> --
>
> Key: KAFKA-8083
> URL: https://issues.apache.org/jira/browse/KAFKA-8083
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.server/DelegationTokenRequestsTest/testDelegationTokenRequests/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.server.DelegationTokenRequestsTest.setUp(DelegationTokenRequestsTest.scala:46){quote}
> STDOUT
> {quote}[2019-03-09 04:01:31,789] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,789] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) 
> [2019-03-09 04:01:31,793] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,794] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74){quote}
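
The repeated SASL warnings are expected noise in this test: the generated JAAS file simply has no Client section, so the ZooKeeper client falls back to an unauthenticated connection. For context, a hypothetical JAAS file that would satisfy the lookup looks like the following; the login module and credentials are purely illustrative and are not what the test uses:

{code}
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};
{code}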



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8082) Flaky Test ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832200#comment-16832200
 ] 

Vahid Hashemian commented on KAFKA-8082:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> ProducerFailureHandlingTest#testNotEnoughReplicasAfterBrokerShutdown
> ---
>
> Key: KAFKA-8082
> URL: https://issues.apache.org/jira/browse/KAFKA-8082
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.api/ProducerFailureHandlingTest/testNotEnoughReplicasAfterBrokerShutdown/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NotEnoughReplicasAfterAppendException: 
> Messages are written to the log, but to fewer in-sync replicas than required. 
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:98)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:67)
>  at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
>  at 
> kafka.api.ProducerFailureHandlingTest.testNotEnoughReplicasAfterBrokerShutdown(ProducerFailureHandlingTest.scala:270){quote}
> STDOUT
> {quote}[2019-03-09 03:59:24,897] ERROR [ReplicaFetcher replicaId=0, 
> leaderId=1, fetcherId=0] Error for partition topic-1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:28,028] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,046] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> minisrtest-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 03:59:42,245] ERROR 
> [ReplicaManager broker=1] Error processing append operation on partition 
> minisrtest-0 (kafka.server.ReplicaManager:76) 
> org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the 
> current ISR Set(1, 0) is insufficient to satisfy the min.isr requirement of 3 
> for partition minisrtest-0 [2019-03-09 04:00:01,212] ERROR [ReplicaFetcher 
> replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:02,214] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:03,216] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:23,144] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:24,146] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:25,148] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition 
> topic-1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 04:00:44,607] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> minisrtest2-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (KAFKA-8083) Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8083:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test DelegationTokenRequestsTest#testDelegationTokenRequests
> --
>
> Key: KAFKA-8083
> URL: https://issues.apache.org/jira/browse/KAFKA-8083
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/61/testReport/junit/kafka.server/DelegationTokenRequestsTest/testDelegationTokenRequests/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.server.DelegationTokenRequestsTest.setUp(DelegationTokenRequestsTest.scala:46){quote}
> STDOUT
> {quote}[2019-03-09 04:01:31,789] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,789] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) 
> [2019-03-09 04:01:31,793] WARN SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/kafka1872564121337557452.tmp'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it. 
> (org.apache.zookeeper.ClientCnxn:1011) [2019-03-09 04:01:31,794] ERROR 
> [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8084:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
> COORDINATOR (ID)     ASSIGNMENT-STRATEGY  STATE  #MEMBERS
> localhost:45812 (0)                       Empty  0{quote}
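
The STDOUT above is just --describe output for a group that has committed offsets but no live members. A minimal sketch of the programmatic equivalent via the Java AdminClient, assuming a local broker (the group name is illustrative); for such a group it reports state Empty and an empty member list, matching the table:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class DescribeEmptyGroup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (AdminClient admin = AdminClient.create(props)) {
            ConsumerGroupDescription desc = admin
                    .describeConsumerGroups(Collections.singletonList("test.group"))
                    .all().get()
                    .get("test.group");
            // A group with committed offsets but no members reports
            // state=Empty and no members, matching the STDOUT table above.
            System.out.println("state=" + desc.state() + " members=" + desc.members());
        }
    }
}
{code}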



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2019-05-02 Thread Vahid Hashemian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832196#comment-16832196
 ] 

Vahid Hashemian commented on KAFKA-8084:


Removed Fix Version 2.2.1 as this issue is not blocking that release.

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
> COORDINATOR (ID)     ASSIGNMENT-STRATEGY  STATE  #MEMBERS
> localhost:45812 (0)                       Empty  0{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8085) Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8085:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test ResetConsumerGroupOffsetTest#testResetOffsetsByDuration
> --
>
> Key: KAFKA-8085
> URL: https://issues.apache.org/jira/browse/KAFKA-8085
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsByDuration/]
> {quote}java.lang.AssertionError: Expected that consumer group has consumed 
> all messages from topic/partition. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsByDuration(ResetConsumerGroupOffsetTest.scala:146){quote}
> STDOUT
> {quote}[2019-03-09 08:39:29,856] WARN Unable to read additional data from 
> client sessionid 0x105f6adb208, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-09 08:39:46,373] 
> WARN Unable to read additional data from client sessionid 0x105f6adf4c50001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376){quote}
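
For reference, --reset-offsets --by-duration resolves "now minus the duration" to a timestamp and commits the offsets of the earliest records at least that recent, which only succeeds once the group has no active members. A minimal sketch of the same idea with the plain consumer API, assuming a local broker (group and topic names are illustrative):

{code:java}
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class ResetByDuration {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "test.group");              // group being reset; must be empty
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Target timestamp: one minute ago (the "duration" being reset by).
        long target = System.currentTimeMillis() - Duration.ofMinutes(1).toMillis();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            Map<TopicPartition, Long> query = new HashMap<>();
            consumer.partitionsFor("foo").forEach(p ->
                    query.put(new TopicPartition(p.topic(), p.partition()), target));

            // Look up the earliest offset at or after the target timestamp.
            Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
            for (Map.Entry<TopicPartition, OffsetAndTimestamp> e :
                    consumer.offsetsForTimes(query).entrySet()) {
                if (e.getValue() != null) { // null when no record is that recent
                    toCommit.put(e.getKey(), new OffsetAndMetadata(e.getValue().offset()));
                }
            }
            consumer.commitSync(toCommit); // commit on behalf of the empty group
        }
    }
}
{code}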



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8086) Flaky Test GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead

2019-05-02 Thread Vahid Hashemian (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-8086:
---
Fix Version/s: (was: 2.2.1)

> Flaky Test 
> GroupAuthorizerIntegrationTest#testPatternSubscriptionWithTopicAndGroupRead
> --
>
> Key: KAFKA-8086
> URL: https://issues.apache.org/jira/browse/KAFKA-8086
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testPatternSubscriptionWithTopicAndGroupRead/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.AuthorizerIntegrationTest.setUp(AuthorizerIntegrationTest.scala:242){quote}
> STDOUT
> {quote}[2019-03-09 08:40:34,220] ERROR [KafkaApi-0] Error when handling 
> request: clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41020,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41020-127.0.0.1:52304-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:35,336] ERROR [Consumer 
> clientId=consumer-98, groupId=my-group] Offset commit failed on partition 
> topic-0 at offset 5: Not authorized to access topics: [Topic authorization 
> failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-09 08:40:35,336] ERROR [Consumer clientId=consumer-98, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-09 08:40:41,649] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=36903,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:36903-127.0.0.1:44978-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:40:53,898] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=41067,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:41067-127.0.0.1:40882-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-09 08:42:07,717] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=46276,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:46276-127.0.0.1:41362-0, 
> 
