[jira] [Assigned] (KAFKA-7293) Merge followed by groupByKey/join might violate co-partitioning

2019-02-04 Thread Lee Dongjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Dongjin reassigned KAFKA-7293:
--

Assignee: Lee Dongjin

> Merge followed by groupByKey/join might violate co-partitioning
> -
>
> Key: KAFKA-7293
> URL: https://issues.apache.org/jira/browse/KAFKA-7293
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Lee Dongjin
>Priority: Major
>
> The merge() operation can be applied to input KStreams that have a different 
> number of tasks (i.e., input topic partitions). In this case, the input topics 
> are not co-partitioned, and thus the result KStream is not correctly partitioned 
> even if each input KStream is partitioned on its own.
> Because no "repartitionRequired" flag is set on the input KStreams, the flag 
> is also not set on the output KStream. Hence, if a groupByKey() or join() 
> operation is applied to the output KStream, we don't insert a repartition topic, 
> even though repartitioning would be required because the KStream is not 
> correctly partitioned.
> We cannot detect this at compile time, because the number of partitions 
> is unknown, and thus we cannot decide whether repartitioning is required. 
> However, we can add a runtime check, similar to join(), that verifies the data 
> is correctly (co-)partitioned and, if not, raises a runtime exception.
> Note that, in contrast to join(), for merge() we should only check for 
> co-partitioning if the merge() is followed by a groupByKey() or join() 
> operation.
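
For illustration only (not part of the original ticket), a minimal Kafka Streams sketch of the kind of topology the description refers to; the topic names and serdes are hypothetical:

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class MergeCoPartitioningExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Two input topics that may have different partition counts.
        KStream<String, String> left =
            builder.stream("topic-a", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> right =
            builder.stream("topic-b", Consumed.with(Serdes.String(), Serdes.String()));

        // merge() does not set the repartitionRequired flag, so the groupByKey()
        // below will not insert a repartition topic even if "topic-a" and
        // "topic-b" are not co-partitioned.
        KTable<String, Long> counts = left.merge(right)
            .groupByKey()
            .count();

        System.out.println(builder.build().describe());
    }
}
{code}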



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7897) Invalid use of epoch cache following message format downgrade

2019-02-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760382#comment-16760382
 ] 

ASF GitHub Bot commented on KAFKA-7897:
---

hachikuji commented on pull request #6232: KAFKA-7897; Clear leader epoch cache 
after message format downgrade
URL: https://github.com/apache/kafka/pull/6232
 
 
   If the message format is downgraded, we should clear the leader epoch cache 
so that it is not mistakenly used for truncation. We want to revert to 
truncation by high watermark.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Invalid use of epoch cache following message format downgrade
> -
>
> Key: KAFKA-7897
> URL: https://issues.apache.org/jira/browse/KAFKA-7897
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Message format downgrades are not supported, but they generally work as long 
> as brokers/clients can at least continue to parse both message formats. After 
> a downgrade, the truncation logic should revert to using the high watermark, 
> but currently we use the existence of any cached epoch as the sole 
> prerequisite for leveraging OffsetsForLeaderEpoch. This has the effect 
> of causing a massive truncation after startup, which causes re-replication.
> I think our options to fix this are to either 1) clear the cache when we 
> notice a downgrade, or 2) forbid downgrades and raise an error.
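
As a rough, self-contained sketch of option 1 (hypothetical types, not the actual broker code): treat the epoch cache as unusable once the active message format precedes the epoch-aware format, so truncation falls back to the high watermark:

{code:java}
import java.util.Optional;

public class EpochCacheDowngradeSketch {
    // Hypothetical stand-ins for the broker's message format versions.
    enum MessageFormat { V0, V1, V2 }

    // Hypothetical stand-in for the leader epoch cache.
    static final class LeaderEpochCache {
        void clear() { System.out.println("epoch cache cleared"); }
    }

    // Option 1 from the description: if the configured format no longer
    // supports leader epochs (pre-V2), drop the cache so truncation uses
    // the high watermark instead of OffsetsForLeaderEpoch.
    static Optional<LeaderEpochCache> maybeClearOnDowngrade(
            MessageFormat configured, Optional<LeaderEpochCache> cache) {
        if (configured.compareTo(MessageFormat.V2) < 0) {
            cache.ifPresent(LeaderEpochCache::clear);
            return Optional.empty();
        }
        return cache;
    }

    public static void main(String[] args) {
        Optional<LeaderEpochCache> cache = Optional.of(new LeaderEpochCache());
        // After a downgrade to the V1 message format, the cache is discarded.
        System.out.println(maybeClearOnDowngrade(MessageFormat.V1, cache).isPresent());
    }
}
{code}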



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7897) Invalid use of epoch cache following message format downgrade

2019-02-04 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-7897:
---
Description: 
Message format downgrades are not supported, but they generally work as long as 
brokers/clients can at least continue to parse both message formats. After a 
downgrade, the truncation logic should revert to using the high watermark, but 
currently we use the existence of any cached epoch as the sole prerequisite for 
leveraging OffsetsForLeaderEpoch. This has the effect of causing a 
massive truncation after startup, which causes re-replication.

I think our options to fix this are to either 1) clear the cache when we notice 
a downgrade, or 2) forbid downgrades and raise an error.

  was:
Message format downgrades are not supported, but they generally work as long as 
brokers/clients can at least continue to parse both message formats. After a 
downgrade, the truncation logic should revert to using the high watermark, but 
currently we use the existence of any cached epoch as the requirement for 
leveraging OffsetsForLeaderEpoch. This has the effect of causing a massive 
truncation after startup, which causes re-replication.

I think our options to fix this are to either 1) clear the cache when we notice 
a downgrade, or 2) forbid downgrades and raise an error.


> Invalid use of epoch cache following message format downgrade
> -
>
> Key: KAFKA-7897
> URL: https://issues.apache.org/jira/browse/KAFKA-7897
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Message format downgrades are not supported, but they generally work as long 
> as brokers/clients can at least continue to parse both message formats. After 
> a downgrade, the truncation logic should revert to using the high watermark, 
> but currently we use the existence of any cached epoch as the sole 
> prerequisite for leveraging OffsetsForLeaderEpoch. This has the effect 
> of causing a massive truncation after startup, which causes re-replication.
> I think our options to fix this are to either 1) clear the cache when we 
> notice a downgrade, or 2) forbid downgrades and raise an error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7897) Invalid use of epoch cache following message format downgrade

2019-02-04 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7897:
--

 Summary: Invalid use of epoch cache following message format 
downgrade
 Key: KAFKA-7897
 URL: https://issues.apache.org/jira/browse/KAFKA-7897
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Message format downgrades are not supported, but they generally work as long as 
brokers/clients can at least continue to parse both message formats. After a 
downgrade, the truncation logic should revert to using the high watermark, but 
currently we use the existence of any cached epoch as the requirement for 
leveraging OffsetsForLeaderEpoch. This has the effect of causing a massive 
truncation after startup, which causes re-replication.

I think our options to fix this are to either 1) clear the cache when we notice 
a downgrade, or 2) forbid downgrades and raise an error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7834) Extend collected logs in system test services to include heap dumps

2019-02-04 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7834.
--
   Resolution: Fixed
Fix Version/s: (was: 1.1.2)
   (was: 3.0.0)
   (was: 1.0.3)
   2.3.0

Issue resolved by pull request 6158
[https://github.com/apache/kafka/pull/6158]

> Extend collected logs in system test services to include heap dumps
> ---
>
> Key: KAFKA-7834
> URL: https://issues.apache.org/jira/browse/KAFKA-7834
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.0, 2.2.0, 2.0.2
>
>
> Overall I'd suggest enabling by default: 
> {{-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="}}
> in the major system test services, so that a heap dump is captured on OOM. 
> Given these flags, we should also extend the set of collected logs in each 
> service to include the predetermined filename for the heap dump. 
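
For illustration only, one way these flags could be passed to a broker started by hand; the dump path is a placeholder, not the path the system test services actually use:

{code}
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/kafka/heap_dump.bin"
bin/kafka-server-start.sh config/server.properties
{code}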



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7834) Extend collected logs in system test services to include heap dumps

2019-02-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760356#comment-16760356
 ] 

ASF GitHub Bot commented on KAFKA-7834:
---

ewencp commented on pull request #6158: KAFKA-7834: Extend collected logs in 
system test services to include heap dumps
URL: https://github.com/apache/kafka/pull/6158
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Extend collected logs in system test services to include heap dumps
> ---
>
> Key: KAFKA-7834
> URL: https://issues.apache.org/jira/browse/KAFKA-7834
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 3.0.0, 2.2.0, 2.0.2
>
>
> Overall I'd suggest enabling by default: 
> {{-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="}}
> in the major system test services, so that a heap dump is captured on OOM. 
> Given these flags, we should also extend the set of collected logs in each 
> service to include the predetermined filename for the heap dump. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7804) Update the docs for KIP-377

2019-02-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760254#comment-16760254
 ] 

ASF GitHub Bot commented on KAFKA-7804:
---

omkreddy commented on pull request #6118: KAFKA-7804: Update docs for 
topic-command related KIP-377
URL: https://github.com/apache/kafka/pull/6118
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Update the docs for KIP-377
> ---
>
> Key: KAFKA-7804
> URL: https://issues.apache.org/jira/browse/KAFKA-7804
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Viktor Somogyi
>Assignee: Viktor Somogyi
>Priority: Major
> Fix For: 2.2.0
>
>
> KIP-377 introduced the {{--bootstrap-server}} option to the 
> {{kafka-topics.sh}} command. The documentation (examples and notable changes) 
> should be updated accordingly.
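
For example (illustrative only), the updated docs would show the new option alongside the older ZooKeeper-based form:

{code}
# new form added by KIP-377
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
# old form
bin/kafka-topics.sh --zookeeper localhost:2181 --list
{code}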



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7883) Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérémy Thulliez resolved KAFKA-7883.

Resolution: Workaround

> Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect
> --
>
> Key: KAFKA-7883
> URL: https://issues.apache.org/jira/browse/KAFKA-7883
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 2.1.0
>Reporter: Jérémy Thulliez
>Priority: Minor
>  Labels: features
>
> When using a connector with AvroConverter & SchemaRegistry, users should be 
> able to specify the namespace in the SMT.
> Currently, only "schema.version" and "schema.name" can be specified.
> This is needed because if not specified, generated classes (from avro schema) 
>  are in the default package and not accessible.
> Currently, the workaround is to add a Transformation implementation to the 
> connect classpath.
> It should be native.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7883) Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect

2019-02-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760107#comment-16760107
 ] 

Jérémy Thulliez commented on KAFKA-7883:


Actually, there is a native way to do it:

 "transforms": "AddNamespace",
 "transforms.AddNamespace.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
 "transforms.AddNamespace.schema.name": "my.namespace.NameOfTheSchema"

"my.namespace" will become the namespace, and "NameOfTheSchema" the name.

> Add schema.namespace support to SetSchemaMetadata SMT in Kafka Connect
> --
>
> Key: KAFKA-7883
> URL: https://issues.apache.org/jira/browse/KAFKA-7883
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 2.1.0
>Reporter: Jérémy Thulliez
>Priority: Minor
>  Labels: features
>
> When using a connector with AvroConverter & SchemaRegistry, users should be 
> able to specify the namespace in the SMT.
> Currently, only "schema.version" and "schema.name" can be specified.
> This is needed because if not specified, generated classes (from avro schema) 
>  are in the default package and not accessible.
> Currently, the workaround is to add a Transformation implementation to the 
> connect classpath.
> It should be native.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7832) Use automatic RPC generation in CreateTopics

2019-02-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760103#comment-16760103
 ] 

ASF GitHub Bot commented on KAFKA-7832:
---

cmccabe commented on pull request #5972: KAFKA-7832 Use automatic RPC 
generation in CreateTopics
URL: https://github.com/apache/kafka/pull/5972
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use automatic RPC generation in CreateTopics
> 
>
> Key: KAFKA-7832
> URL: https://issues.apache.org/jira/browse/KAFKA-7832
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> Use automatic RPC generation for the CreateTopics RPC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7890) Invalidate ClusterConnectionState cache for a broker if the hostname of the broker changes.

2019-02-04 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe reassigned KAFKA-7890:
--

Assignee: Rajini Sivaram  (was: Stanislav Kozlovski)

> Invalidate ClusterConnectionState cache for a broker if the hostname of the 
> broker changes.
> ---
>
> Key: KAFKA-7890
> URL: https://issues.apache.org/jira/browse/KAFKA-7890
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 2.1.0
>Reporter: Mark Cho
>Assignee: Rajini Sivaram
>Priority: Major
>
> We've run into an issue similar to this ticket: 
> [https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-7755]
> The fix for KAFKA-7755 doesn't work for this case, as the hostname is not 
> updated when resolving the addresses.
> The `ClusterConnectionStates::connecting` method assumes that a broker 
> ID will always map to the same hostname. In our case, when a broker is terminated 
> in AWS, it is replaced by a different instance under the same broker ID. 
> In this case, the consumer fails to connect to the right host when the broker 
> ID returns to the cluster. For example, we see the following line in DEBUG 
> logs:
> {code:java}
> Initiating connection to node 100.66.7.94:7101 (id: 1 rack: us-east-1c) using 
> address /100.66.14.165
> {code}
> It tries to connect to the new broker instance using the wrong (old) IP 
> address.
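
A minimal, self-contained sketch (hypothetical classes, not the actual NetworkClient code) of the behavior being asked for: when the resolved host for a broker ID changes, drop the cached connection state instead of reusing the stale addresses:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class ConnectionStateInvalidationSketch {
    // Hypothetical per-node connection state holding the host it was resolved for.
    static final class NodeState {
        final String host;
        NodeState(String host) { this.host = host; }
    }

    private final Map<String, NodeState> states = new HashMap<>();

    // If the broker ID is already cached but now advertises a different
    // hostname (e.g. the instance was replaced in AWS), invalidate the old
    // entry so addresses are re-resolved for the new host.
    NodeState connecting(String nodeId, String host) {
        NodeState existing = states.get(nodeId);
        if (existing != null && !existing.host.equals(host)) {
            states.remove(nodeId);
        }
        return states.computeIfAbsent(nodeId, id -> new NodeState(host));
    }

    public static void main(String[] args) {
        ConnectionStateInvalidationSketch s = new ConnectionStateInvalidationSketch();
        s.connecting("1", "old-host.example.com");
        // Broker 1 comes back under a new hostname: the stale state is dropped.
        System.out.println(s.connecting("1", "new-host.example.com").host);
    }
}
{code}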



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7890) Invalidate ClusterConnectionState cache for a broker if the hostname of the broker changes.

2019-02-04 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe reassigned KAFKA-7890:
--

Assignee: Stanislav Kozlovski

> Invalidate ClusterConnectionState cache for a broker if the hostname of the 
> broker changes.
> ---
>
> Key: KAFKA-7890
> URL: https://issues.apache.org/jira/browse/KAFKA-7890
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 2.1.0
>Reporter: Mark Cho
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> We've run into an issue similar to this ticket: 
> [https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-7755]
> The fix for KAFKA-7755 doesn't work for this case, as the hostname is not 
> updated when resolving the addresses.
> The `ClusterConnectionStates::connecting` method assumes that a broker 
> ID will always map to the same hostname. In our case, when a broker is terminated 
> in AWS, it is replaced by a different instance under the same broker ID. 
> In this case, the consumer fails to connect to the right host when the broker 
> ID returns to the cluster. For example, we see the following line in DEBUG 
> logs:
> {code:java}
> Initiating connection to node 100.66.7.94:7101 (id: 1 rack: us-east-1c) using 
> address /100.66.14.165
> {code}
> It tries to connect to the new broker instance using the wrong (old) IP 
> address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7896) Add some Log4J Kafka Properties for Producing to Secured Brokers

2019-02-04 Thread Rohan Desai (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760045#comment-16760045
 ] 

Rohan Desai commented on KAFKA-7896:


Thanks [~dongjin]! I did also file a KIP, which is here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-425%3A+Add+some+Log4J+Kafka+Appender+Properties+for+Producing+to+Secured+Brokers

> Add some Log4J Kafka Properties for Producing to Secured Brokers
> 
>
> Key: KAFKA-7896
> URL: https://issues.apache.org/jira/browse/KAFKA-7896
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rohan Desai
>Assignee: Rohan Desai
>Priority: Major
>
> The existing Log4J Kafka appender supports producing to brokers that use the 
> GSSAPI (Kerberos) SASL mechanism, and only supports configuring JAAS via a 
> JAAS config file. Filing this issue to cover extending this to include the 
> PLAIN mechanism and to support configuring JAAS via an in-line configuration.
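
As an illustration of what "in-line" JAAS configuration with the PLAIN mechanism looks like for an ordinary Kafka client (the appender property names added by the KIP may differ; the broker address and credentials are placeholders):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;

public class InlineJaasExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // In-line JAAS config, as opposed to pointing at a separate jaas file
        // via -Djava.security.auth.login.config.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
        System.out.println(props);
    }
}
{code}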



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7895) KTable suppress operator emitting more than one record for the same key per window

2019-02-04 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759979#comment-16759979
 ] 

John Roesler commented on KAFKA-7895:
-

Hi [~mjsax],

Thanks for thinking about that. I've been hacking on this issue for a little 
over a day now, with no luck so far.

Given that we don't know how long it will take to repro the issue or figure it 
out and fix it, I'd hold off on blocking the release.

If I have a breakthrough, I'll figure out what stage the release is in.

It is a serious report, though, so in either case, once we have a fix, I would 
request an immediate bugfix release.

Thanks,

-John

(cc [~cmccabe])

> KTable suppress operator emitting more than one record for the same key per 
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Major
>
> Hi, we are using Kafka Streams to get the aggregated counts per vendor (key) within 
> a specified window.
> Here's how we configured the suppress operator to emit one final record per 
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7885) Streams: TopologyDescription violates equals-hashCode contract.

2019-02-04 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759976#comment-16759976
 ] 

John Roesler commented on KAFKA-7885:
-

Hi [~MonCalamari],

Thanks for the report!

I've also commented on the PR, but could you please elaborate on how the 
current code violates the contract?

Thanks,

-John

> Streams: TopologyDescription violates equals-hashCode contract.
> ---
>
> Key: KAFKA-7885
> URL: https://issues.apache.org/jira/browse/KAFKA-7885
> Project: Kafka
>  Issue Type: Bug
>Reporter: Piotr Fras
>Assignee: Piotr Fras
>Priority: Minor
>
> As per JavaSE documentation:
> > If two objects are *equal* according to the *equals*(Object) method, then 
> > calling the *hashCode* method on each of the two objects must produce the 
> > same integer result.
>  
> This is not the case for TopologyDescription.
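
For reference, a tiny self-contained example (a toy class, not TopologyDescription itself) of why the contract matters: a class that overrides equals() without hashCode() breaks hash-based collections:

{code:java}
import java.util.HashSet;
import java.util.Set;

public class EqualsHashCodeContract {
    // Toy class: equals() is overridden but hashCode() is not.
    static final class NodeDesc {
        final String name;
        NodeDesc(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof NodeDesc && ((NodeDesc) o).name.equals(name);
        }
        // hashCode() intentionally left as the default Object.hashCode().
    }

    public static void main(String[] args) {
        Set<NodeDesc> set = new HashSet<>();
        set.add(new NodeDesc("source"));
        // equals() says the two objects are equal, but HashSet typically cannot
        // find the entry because they hash to different buckets.
        System.out.println(set.contains(new NodeDesc("source")));
    }
}
{code}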



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7329) Continuous warning message of LEADER_NOT_AVAILABLE

2019-02-04 Thread Axel Rose (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759751#comment-16759751
 ] 

Axel Rose edited comment on KAFKA-7329 at 2/4/19 2:17 PM:
--

same here, using dockerized kafka
{quote}$ docker-compose -f docker-compose-single-broker.yml up
 $ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic test
 $ bin/kafka-topics.sh --list --zookeeper localhost:2181
 test
 $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
 >abc
 [2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
 [2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
{quote}

It's OK, though, if I use a downloaded release.

Here is the difference, using a different test:

OK:
{quote}
~/w/kafka_2.11-2.1.0 ❯❯❯ bin/kafka-topics.sh --describe --zookeeper 
localhost:2181 --topic test
Topic:test  PartitionCount:1ReplicationFactor:1 Configs:
Topic: test Partition: 0Leader: 0   Replicas: 0 Isr: 0
{quote}

NOT ok with dockerized container:
{quote}
~/w/kafka_2.11-2.1.0 ❯❯❯ bin/kafka-topics.sh --describe --zookeeper 
localhost:2181 --topic test
Topic:test  PartitionCount:1ReplicationFactor:1 Configs:
Topic: test Partition: 0Leader: 1001Replicas: 1001  Isr: 
1001
{quote}



was (Author: axelrose):
same here, using dockerized kafka
{quote}$ docker-compose -f docker-compose-single-broker.yml up
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic test
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
[2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
{quote}

> Continuous warning message of LEADER_NOT_AVAILABLE
> --
>
> Key: KAFKA-7329
> URL: https://issues.apache.org/jira/browse/KAFKA-7329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.0.0
> Environment: macOS - High Sierra; Java 1.8
>Reporter: Vasudevan Seshadri
>Priority: Major
>
> I am running Kafka version kafka_2.11-2.0.0. I have followed the instructions 
> mentioned in the quick start and was able to run ZooKeeper and the server (broker 
> with id=0) without any issues. Note: I have NOT changed any config file 
> entries; everything is the same as in the downloaded zip file.
> I have also created two topics, "test" and "topic_test".
> Issue: Whenever I run a producer or consumer and try to publish or consume on 
> any of the above topics, the following error is thrown continuously/non-stop: 
> [2018-08-22 22:36:34,380] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 1 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,474] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 2 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,579] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 3 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,685] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 4 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Am I missing any settings?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-817) Implement a zookeeper path-based controlled shutdown tool

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-817.

Resolution: Won't Fix

I'll close this for now as I've received no feedback to the contrary.

> Implement a zookeeper path-based controlled shutdown tool
> -
>
> Key: KAFKA-817
> URL: https://issues.apache.org/jira/browse/KAFKA-817
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, tools
>Affects Versions: 0.8.1
>Reporter: Joel Koshy
>Assignee: Neha Narkhede
>Priority: Major
>
> The controlled shutdown tool currently depends on jmxremote.port being 
> exposed. Apparently, this is often not exposed in production environments and 
> makes the script unusable. We can move to a zk-based approach in which the 
> controller watches a path that lists shutting down brokers. This will also 
> make it consistent with the pattern used in some of the other 
> replication-related tools.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1015) documentation for inbuilt offset management

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-1015.
-
Resolution: Fixed

I'll close this as there was no conflicting feedback here or on the mailing 
list. Not sure what the correct Resolution here would be, so I'll keep it 
simple and go with "fixed".

> documentation for inbuilt offset management
> ---
>
> Key: KAFKA-1015
> URL: https://issues.apache.org/jira/browse/KAFKA-1015
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Tejas Patil
>Assignee: Tejas Patil
>Priority: Minor
>
> Add documentation for inbuilt offset management and update existing documents 
> if needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7895) KTable suppress operator emitting more than one record for the same key per window

2019-02-04 Thread Michael Bragg (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759763#comment-16759763
 ] 

Michael Bragg commented on KAFKA-7895:
--

I am experiencing what appears to be similar/related. I'm seeing quite a large number 
of duplicates being created when my topology comes back up after being 
restarted. (Caching is disabled and I'm using a short commit interval.) I'm 
using a groupBy key -> windowBy (time) -> aggregate -> suppress. The suppress 
is configured until the window closes, unbounded, with a window size of 1 hour and a grace of 1 minute. 
A smaller number of duplicates is also created while the topology is 
running, but it is much smaller compared to when it is restarted.

> KTable suppress operator emitting more than one record for the same key per 
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Major
>
> Hi, we are using Kafka Streams to get the aggregated counts per vendor (key) within 
> a specified window.
> Here's how we configured the suppress operator to emit one final record per 
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-659) Support request pipelining in the network server

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-659.

Resolution: Fixed

I'll close this for now since no-one objected here or on the mailing list, so 
I'll assume my understanding of this being fixed by now is correct.

> Support request pipelining in the network server
> 
>
> Key: KAFKA-659
> URL: https://issues.apache.org/jira/browse/KAFKA-659
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Jay Kreps
>Priority: Major
>
> Currently the network layer in kafka will only process a single request at a 
> time from a given connection. The protocol is designed to allow pipelining of 
> requests which would improve latency.
> There are two changes that would have to be made for this to work, in my 
> understanding:
> 1. Currently once a completed request is read from a socket the server does 
> not register for "read interest" again until a response is sent. The server 
> would have to register for read interest immediately to allow reading more 
> requests.
> 2. Currently the socket server adds all requests to a single "request 
> channel" that serves as a work queue for all the background i/o threads. One 
> requirement for Kafka is to do in order processing of requests from a given 
> socket. This is currently achieved by not reading any new requests from a 
> socket until the currently outstanding request is processed. To maintain this 
> guarantee we would have to guarantee that all requests from a particular 
> socket went to the same I/O thread. A simple way to do this would be to have 
> a work queue per I/O thread. One downside of this is that pinning requests to 
> I/O threads will add latency variance--if that thread stalls due to a slow 
> I/O no other thread can pick up the slack. So perhaps there is a better way 
> that isn't overly complex?
> Would be good to nail down the design for this as a first step.
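
A minimal sketch (hypothetical, not Kafka's SocketServer) of point 2: pinning all requests from one connection to the same I/O thread by hashing the connection onto a fixed queue, which preserves per-connection ordering while still allowing pipelining:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PerThreadRequestQueues {
    private final List<BlockingQueue<String>> queues = new ArrayList<>();

    PerThreadRequestQueues(int numIoThreads) {
        for (int i = 0; i < numIoThreads; i++) {
            queues.add(new LinkedBlockingQueue<>());
        }
    }

    // All requests from the same connection land on the same queue, so the
    // I/O thread draining that queue processes them in arrival order.
    void enqueue(String connectionId, String request) throws InterruptedException {
        int idx = Math.floorMod(connectionId.hashCode(), queues.size());
        queues.get(idx).put(request);
    }

    public static void main(String[] args) throws InterruptedException {
        PerThreadRequestQueues q = new PerThreadRequestQueues(4);
        // Two pipelined requests from the same connection stay ordered.
        q.enqueue("client-42", "produce-1");
        q.enqueue("client-42", "produce-2");
        System.out.println("queued without waiting for responses");
    }
}
{code}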



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7329) Continuous warning message of LEADER_NOT_AVAILABLE

2019-02-04 Thread Axel Rose (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759751#comment-16759751
 ] 

Axel Rose edited comment on KAFKA-7329 at 2/4/19 10:32 AM:
---

same here, using dockerized kafka

{{ }}
{{ $ docker-compose -f docker-compose-single-broker.yml up}}
{{ $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
--replication-factor 1 --partitions 1 --topic test}}
{{ $ bin/kafka-topics.sh --list --zookeeper localhost:2181}}
{{ test}}
{{ $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test}}
{{ >abc}}
{{ [2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)}}
{{ [2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)}}


was (Author: axelrose):
same here, using dockerized kafka

 

```
$ docker-compose -f docker-compose-single-broker.yml up
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic test
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
[2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)```

> Continuous warning message of LEADER_NOT_AVAILABLE
> --
>
> Key: KAFKA-7329
> URL: https://issues.apache.org/jira/browse/KAFKA-7329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.0.0
> Environment: macOS - High Sierra; Java 1.8
>Reporter: Vasudevan Seshadri
>Priority: Major
>
> I am running Kafka version kafka_2.11-2.0.0. I have followed the instructions 
> mentioned in the quick start and was able to run ZooKeeper and the server (broker 
> with id=0) without any issues. Note: I have NOT changed any config file 
> entries; everything is the same as in the downloaded zip file.
> I have also created two topics, "test" and "topic_test".
> Issue: Whenever I run a producer or consumer and try to publish or consume on 
> any of the above topics, the following error is thrown continuously/non-stop: 
> [2018-08-22 22:36:34,380] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 1 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,474] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 2 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,579] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 3 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,685] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 4 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Am I missing any settings?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-217) Client test suite

2019-02-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/KAFKA-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-217.

Resolution: Won't Fix

I'll close this for now since no-one objected here or on the mailing list. If 
we decide to do something like this later on we can always reopen or create a 
new issue.

> Client test suite
> -
>
> Key: KAFKA-217
> URL: https://issues.apache.org/jira/browse/KAFKA-217
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>Priority: Major
>
> It would be great to get a comprehensive test suite that we could run against 
> clients to certify them.
> The first step here would be work out a design approach that makes it easy to 
> certify the correctness of a client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7329) Continuous warning message of LEADER_NOT_AVAILABLE

2019-02-04 Thread Axel Rose (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759751#comment-16759751
 ] 

Axel Rose edited comment on KAFKA-7329 at 2/4/19 10:32 AM:
---

same here, using dockerized kafka
{quote}$ docker-compose -f docker-compose-single-broker.yml up
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic test
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
[2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
{quote}


was (Author: axelrose):
same here, using dockerized kafka

{{ }}
{{ $ docker-compose -f docker-compose-single-broker.yml up}}
{{ $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
--replication-factor 1 --partitions 1 --topic test}}
{{ $ bin/kafka-topics.sh --list --zookeeper localhost:2181}}
{{ test}}
{{ $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test}}
{{ >abc}}
{{ [2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)}}
{{ [2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error 
while fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)}}

> Continuous warning message of LEADER_NOT_AVAILABLE
> --
>
> Key: KAFKA-7329
> URL: https://issues.apache.org/jira/browse/KAFKA-7329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.0.0
> Environment: macOS - High Sierra; Java 1.8
>Reporter: Vasudevan Seshadri
>Priority: Major
>
> I am running Kafka version kafka_2.11-2.0.0. I have followed the instructions 
> mentioned in the quick start and was able to run ZooKeeper and the server (broker 
> with id=0) without any issues. Note: I have NOT changed any config file 
> entries; everything is the same as in the downloaded zip file.
> I have also created two topics, "test" and "topic_test".
> Issue: Whenever I run a producer or consumer and try to publish or consume on 
> any of the above topics, the following error is thrown continuously/non-stop: 
> [2018-08-22 22:36:34,380] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 1 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,474] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 2 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,579] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 3 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,685] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 4 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Am I missing any settings?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7329) Continuous warning message of LEADER_NOT_AVAILABLE

2019-02-04 Thread Axel Rose (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759751#comment-16759751
 ] 

Axel Rose edited comment on KAFKA-7329 at 2/4/19 10:31 AM:
---

same here, using dockerized kafka

 

```
$ docker-compose -f docker-compose-single-broker.yml up
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 
1 --partitions 1 --topic test
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>abc
[2019-02-04 11:26:42,769] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 1 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)
[2019-02-04 11:26:42,870] WARN [Producer clientId=console-producer] Error while 
fetching metadata with correlation id 2 : \{test=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient)```


was (Author: axelrose):
same here, using dockerized kafka

> Continuous warning message of LEADER_NOT_AVAILABLE
> --
>
> Key: KAFKA-7329
> URL: https://issues.apache.org/jira/browse/KAFKA-7329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.0.0
> Environment: macOS - High Sierra; Java 1.8
>Reporter: Vasudevan Seshadri
>Priority: Major
>
> I am running Kafka version kafka_2.11-2.0.0. I have followed the instructions 
> mentioned in the quick start and was able to run ZooKeeper and the server (broker 
> with id=0) without any issues. Note: I have NOT changed any config file 
> entries; everything is the same as in the downloaded zip file.
> I have also created two topics, "test" and "topic_test".
> Issue: Whenever I run a producer or consumer and try to publish or consume on 
> any of the above topics, the following error is thrown continuously/non-stop: 
> [2018-08-22 22:36:34,380] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 1 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,474] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 2 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,579] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 3 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,685] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 4 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Am I missing any settings?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7329) Continuous warning message of LEADER_NOT_AVAILABLE

2019-02-04 Thread Axel Rose (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759751#comment-16759751
 ] 

Axel Rose commented on KAFKA-7329:
--

same here, using dockerized kafka

> Continuous warning message of LEADER_NOT_AVAILABLE
> --
>
> Key: KAFKA-7329
> URL: https://issues.apache.org/jira/browse/KAFKA-7329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.0.0
> Environment: macOS - High Sierra; Java 1.8
>Reporter: Vasudevan Seshadri
>Priority: Major
>
> I am running Kafka version kafka_2.11-2.0.0. I have followed the instructions 
> mentioned in the quick start and was able to run ZooKeeper and the server (broker 
> with id=0) without any issues. Note: I have NOT changed any config file 
> entries; everything is the same as in the downloaded zip file.
> I have also created two topics, "test" and "topic_test".
> Issue: Whenever I run a producer or consumer and try to publish or consume on 
> any of the above topics, the following error is thrown continuously/non-stop: 
> [2018-08-22 22:36:34,380] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 1 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,474] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 2 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,579] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 3 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> [2018-08-22 22:36:34,685] WARN [Producer clientId=console-producer] Error 
> while fetching metadata with correlation id 4 : 
> \{topic_test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Am I missing any settings?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7896) Add some Log4J Kafka Properties for Producing to Secured Brokers

2019-02-04 Thread Lee Dongjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Dongjin reassigned KAFKA-7896:
--

Assignee: Rohan Desai

> Add some Log4J Kafka Properties for Producing to Secured Brokers
> 
>
> Key: KAFKA-7896
> URL: https://issues.apache.org/jira/browse/KAFKA-7896
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rohan Desai
>Assignee: Rohan Desai
>Priority: Major
>
> The existing Log4J Kafka appender supports producing to brokers that use the 
> GSSAPI (Kerberos) SASL mechanism, and only supports configuring JAAS via a 
> JAAS config file. Filing this issue to cover extending this to include the 
> PLAIN mechanism and to support configuring JAAS via an in-line configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7896) Add some Log4J Kafka Properties for Producing to Secured Brokers

2019-02-04 Thread Lee Dongjin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759731#comment-16759731
 ] 

Lee Dongjin commented on KAFKA-7896:


Here is the PR: 
[https://github.com/apache/kafka/pull/6216|https://github.com/apache/kafka/pull/6216]

> Add some Log4J Kafka Properties for Producing to Secured Brokers
> 
>
> Key: KAFKA-7896
> URL: https://issues.apache.org/jira/browse/KAFKA-7896
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rohan Desai
>Priority: Major
>
> The existing Log4J Kafka appender supports producing to brokers that use the 
> GSSAPI (Kerberos) SASL mechanism, and only supports configuring JAAS via a 
> JAAS config file. Filing this issue to cover extending this to include the 
> PLAIN mechanism and to support configuring JAAS via an in-line configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7888) kafka cluster not recovering - Shrinking ISR from 14,13 to 13 (kafka.cluster.Partition) continuously

2019-02-04 Thread Kemal ERDEN (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16759713#comment-16759713
 ] 

Kemal ERDEN commented on KAFKA-7888:


Thanks [~junrao], we'll do that once it's released. In the meantime we've 
increased the ZK connection and session timeouts to 30 seconds from the 
default 6 seconds.
We're also thinking of splitting the ZKs and the broker boxes (3 servers to 6). Do 
you see these changes helping solve the problem?
Do you have any other recommendations to rectify this or reduce the probability of 
it happening again?
Thanks.


> kafka cluster not recovering - Shrinking ISR from 14,13 to 13 
> (kafka.cluster.Partition) continuously
> ---
>
> Key: KAFKA-7888
> URL: https://issues.apache.org/jira/browse/KAFKA-7888
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, replication, zkclient
>Affects Versions: 2.1.0
> Environment: using kafka_2.12-2.1.0
> 3 ZKs 3 Broker cluster, using 3 boxes (1 ZK and 1 broker on each box), 
> default.replication factor: 2, 
> offset replication factor was 1 when the error happened, increased to 2 after 
> seeing this error by reassigning-partitions.
> compression: default (producer) on broker but sending gzip from producers.
> linux (redhat) etx4 kafka logs on single local disk
>Reporter: Kemal ERDEN
>Priority: Major
> Attachments: combined.log, producer.log
>
>
> We're seeing the following repeating logs on our Kafka cluster from time to 
> time, which seem to cause messages to expire on producers and the cluster 
> to go into a non-recoverable state. The only fix seems to be to restart the 
> brokers.
> {{Shrinking ISR from 14,13 to 13 (kafka.cluster.Partition)}}
>  {{Cached zkVersion [21] not equal to that in zookeeper, skip updating ISR 
> (kafka.cluster.Partition)}}
>  and later on the following log is repeated:
> {{Got user-level KeeperException when processing sessionid:0xe046aa4f8e6 
> type:setData cxid:0x2df zxid:0xa01fd txntype:-1 reqpath:n/a Error 
> Path:/brokers/topics/ucTrade/partitions/6/state Error:KeeperErrorCode = 
> BadVersion for /brokers/topics/ucTrade/partitions/6/state}}
> We haven't interfered with any of the brokers/zookeepers whilst this happened.
> I've attached a combined log which represents a combination of controller, 
> server and state change logs from each broker (ids 13,14 and 15, log files 
> have the suffix b13, b14, b15 respectively)
> We have increased the heaps from 1g to 6g for the brokers and from 512m to 4g 
> for the zookeepers since this happened, but we're not sure if that is relevant. The ZK 
> logs have unfortunately been overwritten, so we can't provide those.
> We produce varying message sizes and some messages are relatively large (6 MB), 
> but we use compression on the producers (set to gzip).
> I've attached some logs from one of our producers as well.
> producer.properties that we've changed:
> spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
> spring.kafka.producer.compression-type=gzip
> spring.kafka.producer.retries=5
> spring.kafka.producer.acks=-1
> spring.kafka.producer.batch-size=1048576
> spring.kafka.producer.properties.linger.ms=200
> spring.kafka.producer.properties.request.timeout.ms=60
> spring.kafka.producer.properties.max.block.ms=24
> spring.kafka.producer.properties.max.request.size=104857600
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)