[
https://issues.apache.org/jira/browse/SPARK-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15453289#comment-15453289
]
Ofir Manor commented on SPARK-15406:
------------------------------------
Cody, why do you think Structured Streaming support for Kafka requires that
specific feature (time-indexing)?
Personally, I have a couple of objections:
# Technically, I think starting a stream from a timestamp is really nice, but
definitely optional. We could start by letting the user choose between latest,
oldest and user-provided Kafka offset object (offset per partition per topic),
like every other Kafka consumer today. We could definitely add timestamp as
another option once that feature is released.
For me, starting from latest or from a user-provided Kafka offset is what I'd
like to use, though I can see myself wanting to start from a timestamp in some
cases.
# Non-technically, I think this Kafka source would be very popular. So, I think
supporting only the next (future) release of Kafka is counter-productive, as it
won't work with most Kafka clusters out there, even after that release ships.
Of course, supporting currently deployed Kafka clusters likely means 0.8.2.x
support, which means the old consumer API... So it is additional, duplicate
work, but I think it is critical.
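To make the first point concrete, here is a minimal sketch (in Python, purely illustrative -- not Spark's or Kafka's actual API; the function name and the {(topic, partition): offset} representation are my own) of what resolving the three starting-position options could look like:

```python
# Hypothetical sketch of resolving a starting position for a Kafka source.
# position is "latest", "earliest", or a dict {(topic, partition): offset}.
# beginning_offsets / end_offsets are the per-partition offsets a broker
# would report for the start and end of each log.

def resolve_starting_offsets(position, beginning_offsets, end_offsets):
    """Return a {(topic, partition): offset} map for where to begin reading."""
    if position == "latest":
        return dict(end_offsets)
    if position == "earliest":
        return dict(beginning_offsets)
    # Explicit offsets: fall back to "latest" for any partition the user
    # did not mention, so unlisted partitions are still consumed.
    resolved = dict(end_offsets)
    resolved.update(position)
    return resolved

beginning = {("t", 0): 0, ("t", 1): 0}
end = {("t", 0): 100, ("t", 1): 250}
print(resolve_starting_offsets("earliest", beginning, end))
print(resolve_starting_offsets({("t", 0): 42}, beginning, end))
```

A timestamp option would slot in the same way once time-indexed lookup exists broker-side: resolve the timestamp to a per-partition offset map first, then proceed identically.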
WDYT?
> Structured streaming support for consuming from Kafka
> -----------------------------------------------------
>
> Key: SPARK-15406
> URL: https://issues.apache.org/jira/browse/SPARK-15406
> Project: Spark
> Issue Type: New Feature
> Reporter: Cody Koeninger
>
> Structured streaming doesn't have support for Kafka yet. I personally feel
> like time-based indexing would make for a much better interface, but it's
> been pushed back to Kafka 0.10.1:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)