[ 
https://issues.apache.org/jira/browse/SPARK-27042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated SPARK-27042:
----------------------------------
    Description: If a task fails because of a corrupted cached KafkaProducer 
and is retried on the same executor, it keeps getting the same KafkaProducer 
instance unless that instance is invalidated by the timeout configured with 
"spark.kafka.producer.cache.timeout", which is unlikely to happen within the 
retry window. After several retries the query stops.  (was: If a task is failing 
due to a cached Kafka producer and the task is retried in the same executor 
then the task is getting the same KafkaProducer over and over again unless it's 
invalidated with the timeout configured by "spark.kafka.producer.cache.timeout" 
which is not really probable. After several retries the query stops.)
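For context, here is a minimal sketch of the caching behaviour described above. It is not Spark's actual CachedKafkaProducer code; the names ProducerCacheSketch and FakeProducer are made up for illustration, and the 10-minute timeout is assumed to mirror the default of "spark.kafka.producer.cache.timeout". The point it shows is that the cache is keyed only by the producer parameters and entries are evicted only by the time-based timeout, so a broken instance keeps being handed out.

{code:scala}
import scala.collection.mutable

object ProducerCacheSketch {
  // Stand-in for org.apache.kafka.clients.producer.KafkaProducer; purely illustrative.
  final class FakeProducer(val params: Map[String, String]) {
    var corrupted: Boolean = false      // e.g. the underlying broker connection broke
    def send(record: String): Unit =
      if (corrupted) throw new IllegalStateException("producer is corrupted")
  }

  private case class Entry(producer: FakeProducer, lastAccessMs: Long)

  // Assumed to mirror the default of spark.kafka.producer.cache.timeout (10 minutes).
  private val timeoutMs = 10L * 60 * 1000
  private val cache = mutable.Map.empty[Map[String, String], Entry]

  // The cache key is only the producer configuration, so every task (including
  // retries) with the same Kafka parameters gets the same instance back until
  // the time-based eviction fires, corrupted or not.
  def getOrCreate(params: Map[String, String]): FakeProducer = synchronized {
    val now = System.currentTimeMillis()
    cache.get(params) match {
      case Some(e) if now - e.lastAccessMs < timeoutMs =>
        cache.update(params, e.copy(lastAccessMs = now))
        e.producer
      case _ =>
        val p = new FakeProducer(params)
        cache.update(params, Entry(p, now))
        p
    }
  }
}
{code}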

> Query fails if task is failing due to corrupt cached Kafka producer
> -------------------------------------------------------------------
>
>                 Key: SPARK-27042
>                 URL: https://issues.apache.org/jira/browse/SPARK-27042
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.1.3, 2.2.3, 2.3.3, 2.4.0, 3.0.0
>            Reporter: Gabor Somogyi
>            Priority: Major
>
> If a task fails because of a corrupted cached KafkaProducer and is retried 
> on the same executor, it keeps getting the same KafkaProducer instance unless 
> that instance is invalidated by the timeout configured with 
> "spark.kafka.producer.cache.timeout", which is unlikely to happen within the 
> retry window. After several retries the query stops.
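Using the hypothetical ProducerCacheSketch above, the failure pattern in the report looks roughly like this: each retry asks the cache again, receives the identical corrupted instance, fails, and after enough failed attempts the query stops.

{code:scala}
object RetrySimulation extends App {
  import ProducerCacheSketch._

  val params = Map("bootstrap.servers" -> "broker:9092")

  val first = getOrCreate(params)
  first.corrupted = true    // simulate the producer ending up in a broken state

  // Each retry on the same executor receives the very same cached object,
  // so every attempt fails until the task-retry limit aborts the query.
  (1 to 4).foreach { attempt =>
    val p = getOrCreate(params)
    assert(p eq first)      // identical instance, not a fresh producer
    try p.send(s"record-$attempt")
    catch {
      case e: IllegalStateException =>
        println(s"attempt $attempt failed: ${e.getMessage}")
    }
  }
}
{code}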



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
