gaborgsomogyi commented on a change in pull request #23747: [SPARK-26848][SQL]
Introduce new option to Kafka source: offset by timestamp (starting/ending)
URL: https://github.com/apache/spark/pull/23747#discussion_r266813089
##########
File path:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/JsonUtils.scala
##########
@@ -76,6 +76,16 @@ private object JsonUtils {
     }
   }
+  def topicTimestamps(str: String): Map[String, Long] = {
+    try {
+      Serialization.read[Map[String, Long]](str)
+    } catch {
+      case NonFatal(x) =>
+        throw new IllegalArgumentException(
+          s"""Expected e.g. {"topicA": 1549597128110,"topicB": 1549597120110}, got $str""")
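
For context, a minimal, self-contained sketch of what the parsing in the hunk above does, runnable outside Spark with json4s on the classpath (the `TopicTimestampsSketch` object, the explicit `formats` value, and the `main` driver are illustrative additions, not part of the PR):

```scala
import org.json4s.NoTypeHints
import org.json4s.jackson.Serialization
import scala.util.control.NonFatal

object TopicTimestampsSketch {
  // Serialization.read requires an implicit Formats in scope
  // (the surrounding JsonUtils object already provides one).
  implicit val formats: org.json4s.Formats = Serialization.formats(NoTypeHints)

  // Parse a JSON object mapping topic name -> epoch-millis timestamp.
  def topicTimestamps(str: String): Map[String, Long] = {
    try {
      Serialization.read[Map[String, Long]](str)
    } catch {
      case NonFatal(_) =>
        throw new IllegalArgumentException(
          s"""Expected e.g. {"topicA": 1549597128110,"topicB": 1549597120110}, got $str""")
    }
  }

  def main(args: Array[String]): Unit = {
    val parsed = topicTimestamps("""{"topicA": 1549597128110,"topicB": 1549597120110}""")
    println(parsed) // Map(topicA -> 1549597128110, topicB -> 1549597120110)
  }
}
```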
Review comment:
> I think referring to offsets was mostly needed when Kafka didn't have timestamp as an index

Agreed, but a scenario like the following would still force the user to fall back to offsets: a cluster with no clock synchronization because the admins forgot to set it up. A huge number of messages is produced with wrong local timestamps, though consistently within each partition. The user knows the time differences, but in such a case offsets still have to be used. Let's hear from others whether it's worth it.
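
To make the trade-off concrete, a hedged sketch of the two ways a user could pin a starting position is below. `startingOffsets` with a per-partition JSON spec already exists in the Kafka source; the timestamp-based option name and its per-topic JSON shape are assumptions that simply mirror this revision of the PR and may change before merge.

```scala
import org.apache.spark.sql.SparkSession

object KafkaStartingPositionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-start-sketch").getOrCreate()

    // Fallback path from the scenario above: explicit per-partition offsets,
    // needed e.g. when broker clocks were never synchronized (-2 = earliest).
    val byOffsets = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092")
      .option("subscribe", "topicA")
      .option("startingOffsets", """{"topicA":{"0":23,"1":-2}}""")
      .load()

    // Proposed path: per-topic timestamps (epoch millis), resolved to offsets
    // via Kafka's timestamp index. Option name is assumed, mirroring this PR.
    val byTimestamp = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092")
      .option("subscribe", "topicA")
      .option("startingOffsetsByTimestamp", """{"topicA": 1549597128110}""")
      .load()

    byOffsets.printSchema()
    byTimestamp.printSchema()
  }
}
```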