[jira] [Created] (KAFKA-8203) plaintext connections to SSL secured broker can be handled more elegantly

2019-04-09 Thread Jorg Heymans (JIRA)
Jorg Heymans created KAFKA-8203:
---

 Summary: plaintext connections to SSL secured broker can be 
handled more elegantly
 Key: KAFKA-8203
 URL: https://issues.apache.org/jira/browse/KAFKA-8203
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.1.1
Reporter: Jorg Heymans


Mailing list thread: 
[https://lists.apache.org/thread.html/39935157351c0ad590e6cf02027816d664f1fd3724a25c1133a3bba6@%3Cusers.kafka.apache.org%3E]

Reproduced here:

We have our brokers secured with these standard properties

 
{code:java}
listeners=SSL://a.b.c:9030 
ssl.truststore.location=... 
ssl.truststore.password=... 
ssl.keystore.location=... 
ssl.keystore.password=... 
ssl.key.password=... 
ssl.client.auth=required 
ssl.enabled.protocols=TLSv1.2 {code}
It's a bit surprising that when a (Java) client attempts to connect without 
SSL configured, making a PLAINTEXT connection instead, it does not get a TLS 
exception indicating that SSL is required. I would have expected a hard 
transport-level exception making it clear that non-SSL connections are not 
allowed; instead the client sees this (when debug logging is enabled):


{code:java}
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 21234bee31165527
[main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Kafka consumer initialized
[main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Subscribed to topic(s): events
[main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=my-test-group] Sending FindCoordinator request to broker a.b.c:9030 (id: -1 rack: null)
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating connection to node a.b.c:9030 (id: -1 rack: null) using address /a.b.c
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
[main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Completed connection to node -1. Fetching API versions.
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating API versions fetch from node -1.
[main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Connection with /a.b.c disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at eu.europa.ec.han.TestConsumer.main(TestConsumer.java:22)
[main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Node -1 disconnected.
[main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Cancelled request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=2, clientId=consumer-1, correlationId=0) due to node -1 being disconnected
[main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer
{code}
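
For comparison, the EOFException disappears once the client is actually configured for SSL. A minimal consumer-side sketch (store paths and passwords are placeholders; the keystore entries are needed because the broker above sets ssl.client.auth=required):

{code:java}
import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", "a.b.c:9030");
props.put("security.protocol", "SSL");
props.put("ssl.truststore.location", "/path/to/client-truststore.jks");
props.put("ssl.truststore.password", "...");
// The broker requires client certificates (ssl.client.auth=required),
// so the client needs its own keystore as well.
props.put("ssl.keystore.location", "/path/to/client-keystore.jks");
props.put("ssl.keystore.password", "...");
props.put("ssl.key.password", "...");
{code}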

[jira] [Commented] (KAFKA-7685) Support loading trust stores from classpath

2019-04-11 Thread Jorg Heymans (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815466#comment-16815466
 ] 

Jorg Heymans commented on KAFKA-7685:
-

Alternatively, consider also accepting an InputStream supplying the truststore 
contents. The classpath is definitely not the only place people store 
certificate stores.
{code:java}
InputStream trustStoreInputStream = ...
props.put(SSL_TRUSTSTORE_LOCATION_CONFIG, trustStoreInputStream);{code}
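
A workaround available today, for what it's worth: copy the bundled truststore out of the classpath into a temp file and point the existing property at it. A rough sketch (the resource path and the MyApp class are hypothetical; error handling omitted):

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import org.apache.kafka.common.config.SslConfigs;

// Materialize the classpath resource as a real file, since
// ssl.truststore.location only accepts a filesystem path today.
Path tmp = Files.createTempFile("truststore", ".jks");
try (InputStream in = MyApp.class.getResourceAsStream("/certs/truststore.jks")) { // hypothetical resource
    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
}
// props: the client's configuration Properties
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, tmp.toString());
{code}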

> Support loading trust stores from classpath
> ---
>
> Key: KAFKA-7685
> URL: https://issues.apache.org/jira/browse/KAFKA-7685
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
>Reporter: Noa Resare
>Priority: Minor
>
> Certificate pinning as well as authenticating kafka brokers using a 
> non-public CA certificate maintained inside an organisation is desirable to a 
> lot of users. This can be accomplished today using the 
> {{ssl.truststore.location}} configuration property. Unfortunately, this value 
> is always interpreted as a filesystem path which makes distribution of such 
> an alternative truststore a needlessly cumbersome process. If we had the 
> ability to load a trust store from the classpath as well as from a file, the 
> trust store could be shipped in a jar that could be declared as a regular 
> maven style dependency.
> If we did this by supporting prefixing {{ssl.truststore.location}} with 
> {{classpath:}}, this could be a backwards-compatible change, one that builds 
> on prior design patterns established by, for example, the Spring project.
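
For illustration only (the classpath: prefix is the proposal above, not something Kafka supports today, and the path is made up), the resulting config would presumably read:

{code:java}
ssl.truststore.location=classpath:certs/internal-ca-truststore.jks
{code}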



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8573) kafka-topics.cmd OOM when connecting to a secure cluster without SSL properties

2019-06-20 Thread Jorg Heymans (JIRA)
Jorg Heymans created KAFKA-8573:
---

 Summary: kafka-topics.cmd OOM when connecting to a secure cluster 
without SSL properties
 Key: KAFKA-8573
 URL: https://issues.apache.org/jira/browse/KAFKA-8573
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.1
Reporter: Jorg Heymans


When using kafka-topics.cmd to connect to an SSL-secured cluster without 
specifying '--command-config=my-ssl.properties', an OOM is triggered:

 
{noformat}
[2019-06-20 14:25:07,998] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
at java.lang.Thread.run(Thread.java:748)
{noformat}
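
For reference, pointing the tool at an SSL properties file avoids this. Roughly (file name and contents are placeholders matching the broker's setup):

{noformat}
kafka-topics.cmd --bootstrap-server a.b.c:9030 --command-config my-ssl.properties --list

# my-ssl.properties
security.protocol=SSL
ssl.truststore.location=C:/path/to/truststore.jks
ssl.truststore.password=...
{noformat}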



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8573) kafka-topics.cmd OOM when connecting to a secure cluster without SSL properties

2019-06-20 Thread Jorg Heymans (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorg Heymans updated KAFKA-8573:

Environment: 
Windows 7

openjdk version "1.8.0_212-1-ojdkbuild"
OpenJDK Runtime Environment (build 1.8.0_212-1-ojdkbuild-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
Description: 
When using kafka-topics.cmd to connect to an SSL-secured cluster without 
specifying '--command-config=my-ssl.properties', an OOM is triggered:

 
{noformat}
[2019-06-20 14:25:07,998] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
at java.lang.Thread.run(Thread.java:748)
{noformat}
 

  was:
When using kafka-topics.cmd to connect to an SSL secured cluster, without 
specifying '--command-config=my-ssl.properties' on OOM is triggered:

 
{noformat}
[2019-06-20 14:25:07,998] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
at java.lang.Thread.run(Thread.java:748)
{noformat}


> kafka-topics.cmd OOM when connecting to a secure cluster without SSL 
> properties
> ---
>
> Key: KAFKA-8573
> URL: https://issues.apache.org/jira/browse/KAFKA-8573
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.2.1
> Environment: Windows 7
> openjdk version "1.8.0_212-1-ojdkbuild"
> OpenJDK Runtime Environment (build 1.8.0_212-1-ojdkbuild-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
>Reporter: Jorg Heymans
>Priority: Minor
>
> When using kafka-topics.cmd to connect to an SSL-secured cluster without 
> specifying '--command-config=my-ssl.properties', an OOM is triggered:
>  
> {noformat}
> [2019-06-20 14:25:07,998] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
> at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
> at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
> at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
> at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8573) kafka-topics.cmd OOM when connecting to a secure cluster without SSL properties

2019-07-14 Thread Jorg Heymans (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorg Heymans resolved KAFKA-8573.
-
Resolution: Duplicate

> kafka-topics.cmd OOM when connecting to a secure cluster without SSL 
> properties
> ---
>
> Key: KAFKA-8573
> URL: https://issues.apache.org/jira/browse/KAFKA-8573
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.2.1
> Environment: Windows 7
> openjdk version "1.8.0_212-1-ojdkbuild"
> OpenJDK Runtime Environment (build 1.8.0_212-1-ojdkbuild-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
>Reporter: Jorg Heymans
>Priority: Minor
>
> When using kafka-topics.cmd to connect to an SSL-secured cluster without 
> specifying '--command-config=my-ssl.properties', an OOM is triggered:
>  
> {noformat}
> [2019-06-20 14:25:07,998] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
> at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
> at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
> at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
> at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
> at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
> at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2020-03-05 Thread Jorg Heymans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorg Heymans updated KAFKA-3980:

Attachment: image-2020-03-05-13-21-32-924.png

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>Priority: Major
> Attachments: heap_img.png, histogram.PNG, 
> image-2020-03-05-13-21-32-924.png, issue.jpg, report.PNG
>
>
> I have some nodes in a Kafka cluster that occasionally run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node, which weighed in at about 20 MB, and a node 
> that had been running for 2 months, which was using over 700 MB of memory. 
> Looking at the heap dump, it looks like the JmxReporter is holding on to 
> metrics and causing them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance that restarting 
> the producers will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for, effectively 
> resetting the offset and rendering that partition unavailable. The Kafka 
> process then needs to be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time, or a way to turn certain high-cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2020-03-05 Thread Jorg Heymans (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorg Heymans updated KAFKA-3980:

Attachment: image-2020-03-05-13-23-19-376.png

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>Priority: Major
> Attachments: heap_img.png, histogram.PNG, 
> image-2020-03-05-13-21-32-924.png, image-2020-03-05-13-23-19-376.png, 
> issue.jpg, report.PNG
>
>
> I have some nodes in a Kafka cluster that occasionally run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node, which weighed in at about 20 MB, and a node 
> that had been running for 2 months, which was using over 700 MB of memory. 
> Looking at the heap dump, it looks like the JmxReporter is holding on to 
> metrics and causing them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance that restarting 
> the producers will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for, effectively 
> resetting the offset and rendering that partition unavailable. The Kafka 
> process then needs to be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time, or a way to turn certain high-cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2020-03-05 Thread Jorg Heymans (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17052080#comment-17052080
 ] 

Jorg Heymans commented on KAFKA-3980:
-

I would like to add that this does not seem to be just a broker issue. KafkaHQ 
([https://github.com/tchiotludo/kafkahq/]) is a web-based topic browser and 
cluster viewer. After a couple of days of usage we're seeing about 600 MB of 
memory taken up by JMX metrics in the application, which seems excessive. Is 
there anything that can be done to disable JMX reporting for kafka-clients?

 

!image-2020-03-05-13-21-32-924.png!

 

!image-2020-03-05-13-23-19-376.png!
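
Not a fix for the per-client-id buildup itself, but one thing that helps in client applications: a Kafka client's metrics (including the MBeans that JmxReporter registers) are unregistered when the client is closed, so short-lived clients should always be closed. A minimal sketch (broker address and topic are placeholders):

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "a.b.c:9092");
props.put("group.id", "my-test-group");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

// try-with-resources guarantees close(), which removes the client's
// metrics from all reporters, so its JMX MBeans do not linger.
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("events"));
    consumer.poll(Duration.ofMillis(100));
}
{code}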

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>Priority: Major
> Attachments: heap_img.png, histogram.PNG, 
> image-2020-03-05-13-21-32-924.png, image-2020-03-05-13-23-19-376.png, 
> issue.jpg, report.PNG
>
>
> I have some nodes in a Kafka cluster that occasionally run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node, which weighed in at about 20 MB, and a node 
> that had been running for 2 months, which was using over 700 MB of memory. 
> Looking at the heap dump, it looks like the JmxReporter is holding on to 
> metrics and causing them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance that restarting 
> the producers will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for, effectively 
> resetting the offset and rendering that partition unavailable. The Kafka 
> process then needs to be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time, or a way to turn certain high-cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9202) serde in ConsoleConsumer with access to headers

2019-11-17 Thread Jorg Heymans (Jira)
Jorg Heymans created KAFKA-9202:
---

 Summary: serde in ConsoleConsumer with access to headers
 Key: KAFKA-9202
 URL: https://issues.apache.org/jira/browse/KAFKA-9202
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 2.3.0
Reporter: Jorg Heymans


Mailing list thread: 
[https://lists.apache.org/thread.html/ab8c3094945cb9f9312fd3614a5b4454f24756cfa1a702ef5c739c8f@%3Cusers.kafka.apache.org%3E]

 

The Deserializer interface has two methods: one that gives access to the 
headers and one that does not. ConsoleConsumer.scala only calls the latter. It 
would be nice if it called the default method that provides header access 
instead, so that custom serdes that depend on headers become possible.

Currently it does this:

 
{code:scala}
deserializer.map(_.deserialize(topic, nonNullBytes).toString
  .getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
{code}
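
To make the request concrete, here is a sketch of the kind of serde this would enable: a Deserializer that overrides the headers-aware default method (the "encoding" header name is made up for the example):

{code:java}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Deserializer;

public class HeaderAwareDeserializer implements Deserializer<String> {

    // The two-argument variant ConsoleConsumer calls today: no header access.
    @Override
    public String deserialize(String topic, byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    // The headers-aware default method; only useful if the caller invokes it.
    @Override
    public String deserialize(String topic, Headers headers, byte[] data) {
        Header enc = headers.lastHeader("encoding"); // hypothetical header
        if (enc == null) {
            return deserialize(topic, data);
        }
        try {
            return new String(data, new String(enc.value(), StandardCharsets.UTF_8));
        } catch (UnsupportedEncodingException e) {
            throw new SerializationException("Unknown encoding header", e);
        }
    }
}
{code}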
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)