If you're using Structured Streaming, you can pass producer settings as
kafka.&lt;option&gt; entries in the options map, as documented. If you're using
Spark in batch form, you'll want to do a foreach/foreachPartition that writes
through a KafkaProducer, sharing its config via a Broadcast variable.

All KafkaProducer-specific options
<https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html>
need to be prefixed with *kafka.*

https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html
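As a minimal sketch of the prefixing rule (the config values here are illustrative, not recommendations):

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaOptionPrefix {

    // Add the "kafka." prefix the Spark Kafka data source expects, so that
    // producer configs are forwarded to the underlying KafkaProducer.
    static Map<String, String> withKafkaPrefix(Map<String, String> producerConfigs) {
        Map<String, String> options = new HashMap<>();
        for (Map.Entry<String, String> e : producerConfigs.entrySet()) {
            options.put("kafka." + e.getKey(), e.getValue());
        }
        return options;
    }

    public static void main(String[] args) {
        Map<String, String> producerConfigs = new HashMap<>();
        producerConfigs.put("linger.ms", "500");    // illustrative values only
        producerConfigs.put("batch.size", "65536");

        Map<String, String> options = withKafkaPrefix(producerConfigs);
        System.out.println(options);
        // Then, per the docs above:
        // dataset.write().format("kafka").options(options).save();
    }
}
```

Options passed without the prefix are silently ignored by the Spark Kafka sink, which is why the unprefixed *linger.ms* and *batch.size* never show up in the ProducerConfig log output.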


On Wed, Mar 26, 2025 at 4:11 PM Jungtaek Lim <kabhwan.opensou...@gmail.com>
wrote:

> Sorry I missed this. Did you make sure that you add "kafka." as prefix on
> kafka side config when specifying Kafka source/sink option?
>
> On Mon, Feb 24, 2025 at 10:31 PM Abhishek Singla <
> abhisheksingla...@gmail.com> wrote:
>
>> Hi Team,
>>
>> I am using spark to read from S3 and write to Kafka.
>>
>> Spark Version: 3.1.2
>> Scala Version: 2.12
>> Spark Kafka connector: spark-sql-kafka-0-10_2.12
>>
>> I want to throttle the kafka producer. I tried using the *linger.ms* and
>> *batch.size* configs, but I can see in the *ProducerConfig values* logged
>> at runtime that they are not being set. Is there something I am missing?
>> Is there any other way to throttle kafka writes?
>>
>> *dataset.write().format("kafka").options(options).save();*
>>
>> Regards,
>> Abhishek Singla
>>

-- 
-dan
