AFAIK, Spark does not pass this config to the consumer on purpose...
It's not a Kafka issue -- IIRC, there is a Spark JIRA ticket for this.
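
For context, a minimal sketch of the driver-side setup (names like the broker address, group id, and topic are assumptions here): spark-streaming-kafka-0-10 honors `auto.offset.reset` on the driver, but deliberately rewrites it to "none" on the executors, so an executor asked to fetch an offset that retention has already deleted throws OffsetOutOfRangeException no matter what you set.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

// Driver-side Kafka params. Spark rewrites auto.offset.reset to "none"
// for the executor consumers, so "earliest" only affects where the
// driver starts -- not what happens when cached offsets fall out of range.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",   // assumption: adjust for your cluster
  "key.deserializer"  -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"          -> "example-group",    // assumption: any group id
  "auto.offset.reset" -> "earliest"          // honored on the driver only
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,                                       // an existing StreamingContext
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("topic1"), kafkaParams)
)
```

So the usual trigger is a batch that references offsets already deleted by retention (or a stale checkpoint); the setting itself is not being ignored by mistake.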


On 2/12/18 11:04 AM, Mina Aslani wrote:
> Hi,
> I am getting the below error
> Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException:
> Offsets out of range with no configured reset policy for partitions:
> {topic1-0=304337}
> as soon as I submit a spark app to my cluster.
> I am using the dependency
> name: 'spark-streaming-kafka-0-10_2.11', version: '2.2.0', and setting the
> consumer's reset config (e.g. AUTO_OFFSET_RESET_CONFIG) to "earliest".
> As per the docs, the exception
> should be thrown only when the consumer's reset config has not been set
> (i.e. the default, none).
> Wondering what the cause is and how to fix it.
> Best regards,
> Mina
