[jira] [Created] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-05-21 Thread Marcin Kuthan (JIRA)
Marcin Kuthan created KAFKA-6925:


 Summary: Memory leak in 
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl
 Key: KAFKA-6925
 URL: https://issues.apache.org/jira/browse/KAFKA-6925
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.1
Reporter: Marcin Kuthan


The retained heap of 
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl 
is surprisingly high for a long-running job: over 100MB of heap for every stream 
thread after a week of uptime, while the same application takes about 2MB a few 
hours after start.

In the problematic instance, the majority of the StreamsMetricsThreadImpl memory 
is occupied by hash map entries in parentSensors: over 8000 elements of 100+kB 
each. In a fresh instance there are fewer than 200 elements.

Below is a retained set report generated with Eclipse MAT. I'm not fully sure 
about its correctness due to the complex object graph in the metrics-related code.

 
{code:java}
Class Name | Objects | Shallow Heap
---
org.apache.kafka.common.metrics.KafkaMetric | 140,476 | 4,495,232
org.apache.kafka.common.MetricName | 140,476 | 4,495,232
org.apache.kafka.common.metrics.stats.SampledStat$Sample | 73,599 | 3,532,752
org.apache.kafka.common.metrics.stats.Meter | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Count | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Rate | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Total | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Max | 28,134 | 900,288
org.apache.kafka.common.metrics.stats.Avg | 28,134 | 900,288
org.apache.kafka.common.metrics.Sensor | 3,164 | 202,496
org.apache.kafka.common.metrics.Sensor[] | 3,164 | 71,088
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl|
 1 | 56
---
{code}
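
The report shows far more KafkaMetric/MetricName instances than live Sensors, 
which would be consistent with bookkeeping entries in parentSensors being added 
when per-task sensors are registered but never removed when the sensors are. The 
following standalone sketch is not the actual Kafka Streams code, only a 
hypothetical model (all names are made up) of how such a map can grow without 
bound:
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical model of the suspected pattern, NOT the Kafka Streams code:
// sensor removal cleans the registry but forgets the parent-tracking map.
public class ParentSensorsLeakSketch {

    private final Set<String> registry = new HashSet<>();
    private final Map<String, String> parentSensors = new HashMap<>();

    void addSensor(String name, String parent) {
        registry.add(name);
        parentSensors.put(name, parent);   // bookkeeping entry for the parent
    }

    void removeSensor(String name) {
        registry.remove(name);
        // Missing: parentSensors.remove(name); -> entries accumulate forever.
    }

    public static void main(String[] args) {
        ParentSensorsLeakSketch metrics = new ParentSensorsLeakSketch();
        // Simulate repeated task creation and closing, e.g. across rebalances.
        for (int generation = 0; generation < 10_000; generation++) {
            String sensor = "task-sensor-" + generation;
            metrics.addSensor(sensor, "thread-level-parent");
            metrics.removeSensor(sensor);
        }
        System.out.println("live sensors:          " + metrics.registry.size());      // 0
        System.out.println("parentSensors entries: " + metrics.parentSensors.size()); // 10000
    }
}
{code}
If that is indeed the pattern in StreamsMetricsThreadImpl, cleaning the map as 
part of sensor removal should cap the growth.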
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-05-21 Thread Marcin Kuthan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcin Kuthan updated KAFKA-6925:
-
Description: 
The retained heap of 
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl 
is surprisingly high for a long-running job: over 100MB of heap for every stream 
thread after a week of uptime, while the same application takes about 2MB a few 
hours after start.

In the problematic instance, the majority of the StreamsMetricsThreadImpl memory 
is occupied by hash map entries in parentSensors: over 8000 elements of 100+kB 
each. In a fresh instance there are fewer than 200 elements.

Below is a retained set report generated with Eclipse MAT. I'm not fully sure 
about its correctness due to the complex object graph in the metrics-related code.

 
{code:java}
Class Name | Objects | Shallow Heap
---
org.apache.kafka.common.metrics.KafkaMetric | 140,476 | 4,495,232
org.apache.kafka.common.MetricName | 140,476 | 4,495,232
org.apache.kafka.common.metrics.stats.SampledStat$Sample | 73,599 | 3,532,752
org.apache.kafka.common.metrics.stats.Meter | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Count | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Rate | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Total | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Max | 28,134 | 900,288
org.apache.kafka.common.metrics.stats.Avg | 28,134 | 900,288
org.apache.kafka.common.metrics.Sensor | 3,164 | 202,496
org.apache.kafka.common.metrics.Sensor[] | 3,164 | 71,088
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl|
 1 | 56
---
{code}
 



[jira] [Updated] (KAFKA-6925) Memory leak in org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl

2018-05-21 Thread Marcin Kuthan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcin Kuthan updated KAFKA-6925:
-
Description: 
The retained heap of 
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl 
is surprisingly high for a long-running job: over 100MB of heap for every stream 
thread after a week of uptime, while the same application takes about 2MB a few 
hours after start.

In the problematic instance, the majority of the StreamsMetricsThreadImpl memory 
is occupied by hash map entries in parentSensors: over 8000 elements of 100+kB 
each. In a fresh instance there are fewer than 200 elements.

Below is a retained set report generated with Eclipse MAT. I'm not fully sure 
about its correctness due to the complex object graph in the metrics-related 
code. The numbers are for a single StreamThread$StreamsMetricsThreadImpl instance.

 
{code:java}
Class Name | Objects | Shallow Heap
---
org.apache.kafka.common.metrics.KafkaMetric | 140,476 | 4,495,232
org.apache.kafka.common.MetricName | 140,476 | 4,495,232
org.apache.kafka.common.metrics.stats.SampledStat$Sample | 73,599 | 3,532,752
org.apache.kafka.common.metrics.stats.Meter | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Count | 42,104 | 1,347,328
org.apache.kafka.common.metrics.stats.Rate | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Total | 42,104 | 1,010,496
org.apache.kafka.common.metrics.stats.Max | 28,134 | 900,288
org.apache.kafka.common.metrics.stats.Avg | 28,134 | 900,288
org.apache.kafka.common.metrics.Sensor | 3,164 | 202,496
org.apache.kafka.common.metrics.Sensor[] | 3,164 | 71,088
org.apache.kafka.streams.processor.internals.StreamThread$StreamsMetricsThreadImpl|
 1 | 56
---
{code}
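
To confirm the growth at runtime without taking a heap dump, one could 
periodically sample the size of the metrics registry exposed by KafkaStreams. A 
minimal sketch; the topic names, bootstrap servers, application id and probe 
interval are placeholders, and the growing count is only expected if the leak is 
present:
{code:java}
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class MetricCountProbe {

    public static void main(String[] args) {
        // Placeholder topology, only for illustration.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "metric-count-probe");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Log the number of registered metrics once per hour; if metrics leak,
        // the count keeps growing as tasks are created and closed.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
            () -> System.out.println("registered metrics: " + streams.metrics().size()),
            0, 1, TimeUnit.HOURS);
    }
}
{code}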
 



[jira] [Created] (KAFKA-16057) Admin Client connections.max.idle.ms should be 9 minutes by default

2023-12-28 Thread Marcin Kuthan (Jira)
Marcin Kuthan created KAFKA-16057:
-

 Summary: Admin Client connections.max.idle.ms should be 9 minutes 
by default
 Key: KAFKA-16057
 URL: https://issues.apache.org/jira/browse/KAFKA-16057
 Project: Kafka
  Issue Type: Bug
Reporter: Marcin Kuthan


The producer and consumer default connections.max.idle.ms to 9 minutes, but the 
admin client uses 5 minutes by default.

When connections.max.idle.ms is equal to metadata.max.age.ms (5 minutes), the 
admin client disconnects frequently. I observe the following log in a Kafka 
Connect cluster:
{code:java}
[AdminClient clientId=MyClientName--shared-admin] Node XYVZ disconnected. {code}
The AdminClient tries to fetch metadata every 5 minutes, but by then the 
connection has already been closed because of connections.max.idle.ms.

As a workaround I set the connections.max.idle.ms property explicitly to 9 
minutes (540000 ms) in the Kafka Connect configuration, so the admin client, 
producers and consumers all use the same value.

I'm wondering why the admin client uses a different default for 
connections.max.idle.ms than the consumer and producer. Bug or feature?
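
For reference, the same override can be set programmatically on a standalone 
admin client; a minimal sketch, where the bootstrap address is a placeholder and 
the 9 minute value mirrors the producer/consumer default (in Kafka Connect the 
property goes into the worker configuration instead, as described above):
{code:java}
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminClientIdleConfig {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap servers, for illustration only.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Keep idle connections for 9 minutes instead of the 5 minute admin
        // default, so they outlive the 5 minute metadata refresh interval.
        props.put(AdminClientConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, 9 * 60 * 1000);

        try (Admin admin = Admin.create(props)) {
            // Use the admin client as usual.
            System.out.println("cluster id: " + admin.describeCluster().clusterId().get());
        }
    }
}
{code}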



--
This message was sent by Atlassian Jira
(v8.20.10#820010)