To work around an out-of-space issue in a Direct Kafka Streaming
application, we create topics with a low retention policy (retention.ms=30),
which works fine from the Kafka perspective. However, this results in an
OffsetOutOfRangeException in the Spark job. Is there any way to handle this
and not have my Spark job crash? I have no option of increasing the Kafka
retention period.

I tried to have the DStream returned by createDirectStream() wrapped in a
Try construct, but since the exception happens in the executor, the Try
construct didn't take effect. Do you have any ideas of how to handle this?
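For reference, this is roughly what we tried; a minimal sketch against the
spark-streaming-kafka-0-10 API, where ssc is the StreamingContext and the
topic name and Kafka params are placeholders:

    import scala.util.Try
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker:9092",            // placeholder
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example-group")          // placeholder

    // This Try only guards the lazy stream *definition* on the driver.
    // The OffsetOutOfRangeException is thrown much later, inside executor
    // tasks that actually fetch from Kafka, so it is never caught here.
    val maybeStream = Try {
      KafkaUtils.createDirectStream[String, String](
        ssc, PreferConsistent,
        Subscribe[String, String](Seq("events"), kafkaParams))
    }

So the exception has to be handled wherever the job actually runs, not
where the stream is defined.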
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-gracefully-handle-Kafka-OffsetOutOfRangeException-tp26534.html
If you have a reproduction you should open a JIRA; it would be great if
there were a fix. I'm just saying that I know a similar issue does not
exist in Structured Streaming.
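For comparison, a minimal sketch of the Structured Streaming path, assuming
the spark-sql-kafka-0-10 source (broker and topic are placeholders); the
source's failOnDataLoss option is what controls this behavior:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("kafka-structured").getOrCreate()

    // With failOnDataLoss=false the Kafka source logs a warning and keeps
    // going when the requested offsets have been aged out, instead of
    // failing the query (the default is true).
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .option("failOnDataLoss", "false")
      .load()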
On Fri, Mar 10, 2017 at 7:46 AM, Justin Miller <
justin.mil...@protectwise.com> wrote:

> Hi Michael,
>
> I'm experiencing a similar issue. Eventually, when the data is aged off,
> I get the OffsetOutOfRangeException from Kafka, as we would expect. As we
> work towards more efficient processing of that topic, or get more
> resources, I'd like to be able to log the error and continue the
> application without failing. Is there a place where I can catch that
> error before it gets …
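One DStream-side workaround is to catch the failure at the action instead
of at stream creation: the batch's job runs when an action is called inside
foreachRDD, and a task failure surfaces there, on the driver, as an
exception that Try can catch. A minimal sketch, assuming `stream` comes
from createDirectStream and `process` is a hypothetical per-record handler:

    import scala.util.{Failure, Success, Try}

    stream.foreachRDD { rdd =>
      // The Kafka fetch happens while this job runs, so an
      // OffsetOutOfRangeException in an executor comes back to the
      // driver here as a failed job, which Try can catch.
      Try(rdd.foreachPartition(_.foreach(process))) match {
        case Success(_) => () // e.g. commit offsets for this batch
        case Failure(e) => println(s"Skipping failed batch: $e")
      }
    }

Note that the skipped batch's data is lost, and if the stored offsets
remain out of range subsequent batches may fail the same way, so this is a
stopgap rather than a fix.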
A later follow-up in the thread asked:

> Did you find out how?