Hi Matthias,
Are you referring to https://issues.apache.org/jira/browse/SPARK-19976?
It looks like the JIRA was not fixed (Resolution: "Not a Problem").
So, is there any suggested workaround?
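
For reference, one commonly suggested workaround when committed offsets have fallen out of range (e.g. after retention deleted the data) is to start the job under a fresh consumer group, so the driver resolves starting offsets via auto.offset.reset instead of the stale commits. This is a minimal sketch only; the group id, broker address, and class name below are placeholders, not values from this thread:

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {

    // Build consumer params as they would be passed to
    // KafkaUtils.createDirectStream(...) in spark-streaming-kafka-0-10.
    static Map<String, Object> kafkaParams() {
        Map<String, Object> params = new HashMap<>();
        params.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        params.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        params.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        // A fresh group.id has no committed (possibly out-of-range) offsets,
        // so starting offsets are resolved via auto.offset.reset instead.
        params.put("group.id", "my-app-fresh-group"); // placeholder group
        params.put("auto.offset.reset", "earliest");
        params.put("enable.auto.commit", false);
        return params;
    }

    public static void main(String[] args) {
        // Passing these to createDirectStream requires Spark on the classpath
        // and is not shown here.
        System.out.println(kafkaParams().get("auto.offset.reset"));
    }
}
```

Note that, as Matthias points out below, Spark may still override the reset policy on the executors; the fresh group only affects how the driver picks its starting offsets.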

Regards,
Mina


On Mon, Feb 12, 2018 at 3:03 PM, Matthias J. Sax <matth...@confluent.io>
wrote:

> AFAIK, Spark does not pass this config to the consumer on purpose...
> It's not a Kafka issue -- IIRC, there is a Spark JIRA ticket for this.
>
> -Matthias
>
> On 2/12/18 11:04 AM, Mina Aslani wrote:
> > Hi,
> >
> > I am getting the error below:
> > Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException:
> > Offsets out of range with no configured reset policy for partitions:
> > {topic1-0=304337}
> > as soon as I submit a Spark app to my cluster.
> >
> > I am using the dependency
> > name: 'spark-streaming-kafka-0-10_2.11', version: '2.2.0', and setting the
> > consumer's reset config (AUTO_OFFSET_RESET_CONFIG) to "earliest".
> > As per https://kafka.apache.org/0110/documentation.html, the exception
> > should be thrown only when the consumer's reset config has not been set
> > (default=none).
> > I am wondering what the cause is and how to fix it.
> >
> > Best regards,
> > Mina
> >
>
>
