[
https://issues.apache.org/jira/browse/SPARK-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335369#comment-15335369
]
Jinxia Liu edited comment on SPARK-12177 at 6/17/16 7:27 AM:
-------------------------------------------------------------
Thanks Cody, I checked the example you gave. One possible problem is that, if
a new Kafka partition is added to a given topic during connector downtime,
that change won't be caught without polling, thus leaving the newly added
partition unconsumed.
But if we use poll(0) as the start/compute methods do, there will be a
NoOffsetForPartitionException (thrown by Fetcher::resetOffset).
I am wondering whether, if the use case is to consume from the earliest
offset when no committed offset is found, I should set auto.offset.reset to
"earliest" instead.
Perhaps once KAFKA-3370 gets fixed, we can adopt it to avoid the extra
handling for initialization.
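A minimal sketch of the configuration I have in mind. The broker address,
group id, and deserializer classes below are placeholders for illustration,
not actual connector settings; the point is only the auto.offset.reset line:

```java
import java.util.Properties;

public class ConsumerDefaults {

    // Build consumer properties that fall back to the earliest offset
    // when no committed offset exists for a partition (e.g. a partition
    // added during connector downtime).
    static Properties consumerProps(String bootstrapServers, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("group.id", groupId);
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // With "earliest", the first poll() on an uncommitted partition
        // starts from the beginning of the log instead of throwing
        // NoOffsetForPartitionException, which is what "none" would do.
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        Properties p = consumerProps("localhost:9092", "spark-streaming-group");
        System.out.println(p.getProperty("auto.offset.reset"));
    }
}
```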
was (Author: [email protected]):
Thanks Cody, I will check your example.
I can move the extra handling logic to DriverConsumer. Perhaps once
KAFKA-3370 gets fixed, we can adopt it to avoid the extra handling for
initialization.
> Update KafkaDStreams to new Kafka 0.10 Consumer API
> ---------------------------------------------------
>
> Key: SPARK-12177
> URL: https://issues.apache.org/jira/browse/SPARK-12177
> Project: Spark
> Issue Type: Improvement
> Components: Streaming
> Affects Versions: 1.6.0
> Reporter: Nikita Tarasenko
> Labels: consumer, kafka
>
> Kafka 0.9 has already been released, and it introduces a new consumer API
> that is not compatible with the old one. So, I added the new consumer API. I
> made separate classes in package org.apache.spark.streaming.kafka.v09 with
> the changed API. I didn't remove the old classes, for backward
> compatibility. Users will not need to change their old Spark applications
> when they upgrade to the new Spark version.
> Please review my changes.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]