Thanks for the reply. Actually, my Lambda consumers read batched messages from a Kinesis stream in AWS, process them, and send the results to Kafka. Even with 'reserved concurrency', AWS frequently stops and re-initializes/re-invokes the function for different batches, which results in the producers being recreated. I hope there is a solution for this; otherwise I could not use Lambda or Spot Instances in AWS with Kafka.
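For what it's worth, producer recreation can often be reduced (though not eliminated) by creating the producer at module scope, so warm invocations of the same container reuse it. A minimal sketch of that pattern, assuming kafka-python as the client library; the bootstrap address, topic name, and the injectable `factory` parameter are illustrative, not part of any real deployment:

```python
import base64

_producer = None  # module scope survives across warm Lambda invocations


def get_producer(factory=None):
    """Create the Kafka producer once and reuse it while the container is warm.

    `factory` is injectable so the pattern can be exercised without a broker;
    by default it would construct a real KafkaProducer (assumed address below).
    """
    global _producer
    if _producer is None:
        if factory is None:
            from kafka import KafkaProducer  # real client library, assumed installed
            factory = lambda: KafkaProducer(
                bootstrap_servers="broker1:9092",  # hypothetical broker address
                client_id="lambda-producer",       # distinct ids per function help debugging
            )
        _producer = factory()
    return _producer


def handler(event, context):
    """Kinesis-triggered handler: decode each record and forward it to Kafka."""
    producer = get_producer()  # same instance on warm starts
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])  # Kinesis payloads are base64
        producer.send("results-topic", payload)  # hypothetical topic name
    producer.flush()  # flush, but do NOT close: keep the producer for the next invocation
```

This only helps for warm starts; cold starts will still create a fresh producer, so the churn is reduced rather than removed.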
On Thursday, 15 August 2019, 09:28:17 CEST, Jörn Franke wrote:
Even if it is not a memory leak, it is not good practice. You could put the messages on SQS and have a Lambda function with reserved concurrency listening to the SQS queue to put them on Kafka.
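The relay architecture described here (SQS → Lambda with reserved concurrency → Kafka) could be sketched roughly as below. This is an editorial illustration, not code from the thread: the topic name, event shape, and the injectable `producer` parameter are assumptions for testability; a real deployment would construct a module-level KafkaProducer once and cap reserved concurrency so only a bounded number of producers exist at a time:

```python
def extract_bodies(sqs_event):
    """Pull the message payloads out of an SQS-triggered Lambda event."""
    return [rec["body"] for rec in sqs_event.get("Records", [])]


def relay_handler(event, context, producer=None):
    """Forward each SQS message to Kafka and return how many were relayed.

    With reserved concurrency capped (say, at a handful of concurrent
    executions), at most that many producer instances exist simultaneously,
    which limits connection/metric churn on the brokers.
    """
    bodies = extract_bodies(event)
    if producer is not None:
        for body in bodies:
            producer.send("results-topic", body.encode())  # hypothetical topic
        producer.flush()
    return len(bodies)
```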
> On 15.08.2019 at 08:52, Tianning Zhang wrote:
>
> Dear all,
>
> I am using Amazon AWS Lambda functions to produce messages to a Kafka
> cluster. Since I cannot control how frequently a Lambda function is
> initialized/invoked, and I cannot share objects between invocations, I have
> to create a new Kafka producer for each invocation and clean it up after the
> invocation finishes. Each producer is also set to the same "client.id".
> I noticed that after deploying the Lambda functions, the heap size on the
> brokers increases quickly, which eventually resulted in GC problems and
> failures on the brokers. It is very likely that this increase is connected
> to the Lambda producers.
> I know it is recommended to reuse a single producer instance for message
> production, but in this case (with AWS Lambda) that is not possible.
> My question is: can a high number of producer creations/cleanups lead to
> memory leaks on the brokers?
> I am using a Kafka cluster with 5 brokers, version 1.0.1. The Kafka client
> library was tested with versions 0.11.0.3, 1.0.1, and 2.3.0.
> Thanks in advance
> Tianning Zhang
>
> T: +49 (30) 509691-8301
> M: +49 172 7095686
> E: tianning.zh...@awin.com
>
> Eichhornstraße 3, 10785 Berlin
> www.awin.com
>