[
https://issues.apache.org/jira/browse/KAFKA-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Greg Harris resolved KAFKA-12818.
---------------------------------
Resolution: Duplicate
> Memory leakage when kafka connect 2.7 uses directory config provider
> --------------------------------------------------------------------
>
> Key: KAFKA-12818
> URL: https://issues.apache.org/jira/browse/KAFKA-12818
> Project: Kafka
> Issue Type: Bug
> Components: connect
> Affects Versions: 2.7.0
> Environment: Azure AKS / Kubernetes v1.20
> Reporter: Viktor Utkin
> Priority: Critical
> Attachments: Screenshot 2021-05-20 at 14.53.05.png
>
>
> Hi, we noticed a memory leak when Kafka Connect 2.7 uses the directory
> config provider. We get an OOM in our Kubernetes environment: K8s kills the
> container when the memory limit is reached. At the same time, no OOM is
> thrown in Java, and a heap dump did not show us anything interesting.
> JVM config:
> {code:java}
> -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/tmp/
> -XX:+UseContainerSupport
> -XX:+OptimizeStringConcat
> -XX:MaxRAMPercentage=75.0
> -XX:InitialRAMPercentage=50.0
> -XX:MaxMetaspaceSize=256M
> -XX:MaxDirectMemorySize=256M
> -XX:+UseStringDeduplication
> -XX:+AlwaysActAsServerClassMachine{code}
>
> Kafka Connect config:
> {code:java}
> "config.providers": "directory"
> "config.providers.directory.class":
> "org.apache.kafka.common.config.provider.DirectoryConfigProvider"{code}
>
> Kubernetes pod resources limits:
> {code:java}
> resources:
>   requests:
>     cpu: 1500m
>     memory: 2Gi
>   limits:
>     cpu: 3000m
>     memory: 3Gi
> {code}
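>
> For reference, a rough memory budget implied by the JVM flags above against
> the 3Gi container limit (approximate; assumes the JVM sizes itself from the
> container limit via -XX:+UseContainerSupport):
> {code:java}
> heap max       ~ 75% of 3Gi ~ 2.25Gi   (-XX:MaxRAMPercentage=75.0)
> metaspace max  = 256M                  (-XX:MaxMetaspaceSize=256M)
> direct memory  = 256M                  (-XX:MaxDirectMemorySize=256M)
> total          ~ 2.75Gi before thread stacks and other native allocations{code}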
>
> Docker image used: confluentinc/cp-kafka-connect:6.1.1