gaborgsomogyi commented on a change in pull request #23747: [SPARK-26848][SQL]
Introduce new option to Kafka source: offset by timestamp (starting/ending)
URL: https://github.com/apache/spark/pull/23747#discussion_r266783048
##########
File path:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala
##########
@@ -135,23 +135,79 @@ private[kafka010] class KafkaOffsetReader(
def fetchSpecificOffsets(
partitionOffsets: Map[TopicPartition, Long],
reportDataLoss: String => Unit): KafkaSourceOffset = {
 -    val fetched = runUninterruptibly {
 -      withRetriesWithoutInterrupt {
 -        // Poll to get the latest assigned partitions
 -        consumer.poll(0)
 -        val partitions = consumer.assignment()
 +    val fnAssertParametersWithPartitions: ju.Set[TopicPartition] => Unit = { partitions =>
 +      assert(partitions.asScala == partitionOffsets.keySet,
 +        "If startingOffsets contains specific offsets, you must specify all TopicPartitions.\n" +
 +          "Use -1 for latest, -2 for earliest, if you don't care.\n" +
 +          s"Specified: ${partitionOffsets.keySet} Assigned: ${partitions.asScala}")
 +      logDebug(s"Partitions assigned to consumer: $partitions. Seeking to $partitionOffsets")
 +    }
 
 -        // Call `position` to wait until the potential offset request triggered by `poll(0)` is
 -        // done. This is a workaround for KAFKA-7703, which an async `seekToBeginning` triggered by
 -        // `poll(0)` may reset offsets that should have been set by another request.
 -        partitions.asScala.map(p => p -> consumer.position(p)).foreach(_ => {})
 +    val fnRetrievePartitionOffsets: ju.Set[TopicPartition] => Map[TopicPartition, Long] = { _ =>
 +      partitionOffsets
 +    }
 +
 +    val fnAssertFetchedOffsets: Map[TopicPartition, Long] => Unit = { fetched =>
 +      partitionOffsets.foreach {
 +        case (tp, off) if off != KafkaOffsetRangeLimit.LATEST &&
 +            off != KafkaOffsetRangeLimit.EARLIEST =>
 +          if (fetched(tp) != off) {
 +            reportDataLoss(
 +              s"startingOffsets for $tp was $off but consumer reset to ${fetched(tp)}")
 +          }
 +        case _ =>
 +          // no real way to check that beginning or end is reasonable
 +      }
 +    }
 +
 +    fetchSpecificOffsets0(fnAssertParametersWithPartitions, fnRetrievePartitionOffsets,
 +      fnAssertFetchedOffsets)
 +  }
 +
 +  def fetchSpecificTimestampBasedOffsets(topicTimestamps: Map[String, Long]): KafkaSourceOffset = {
 +    val fnAssertParametersWithPartitions: ju.Set[TopicPartition] => Unit = { partitions =>
 +      val assignedTopics = partitions.asScala.map(_.topic())
 +      assert(assignedTopics == topicTimestamps.keySet,
 +        "If starting/endingOffsetsByTimestamp contains specific offsets, you must specify all " +
 +          s"topics. Specified: ${topicTimestamps.keySet} Assigned: $assignedTopics")
 +      logDebug(s"Partitions assigned to consumer: $partitions. Seeking to $topicTimestamps")
 +    }
 +
 +    val fnRetrievePartitionOffsets: ju.Set[TopicPartition] => Map[TopicPartition, Long] = {
 +      partitions => {
 +        val partitionTimestamps: ju.Map[TopicPartition, java.lang.Long] =
 +          partitions.asScala.map { topicAndPartition =>
 +            topicAndPartition -> java.lang.Long.valueOf(topicTimestamps(topicAndPartition.topic()))
 +          }.toMap.asJava
 +
 +        val offsetForTime: ju.Map[TopicPartition, OffsetAndTimestamp] =
 +          consumer.offsetsForTimes(partitionTimestamps)
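For context on the fan-out step above: every partition assigned for a topic inherits that topic's timestamp before the map is handed to `consumer.offsetsForTimes`. A standalone sketch of that mapping, using a hypothetical stand-in for Kafka's `TopicPartition` so it runs without a broker or the Kafka client on the classpath:

```scala
import scala.collection.JavaConverters._
import java.{util => ju}

// Hypothetical stand-in for org.apache.kafka.common.TopicPartition,
// only so the fan-out logic can be exercised in isolation.
final case class TopicPartition(topic: String, partition: Int)

// Every assigned partition of a topic gets that topic's timestamp,
// mirroring the map step inside fnRetrievePartitionOffsets.
def partitionTimestamps(
    partitions: ju.Set[TopicPartition],
    topicTimestamps: Map[String, Long]): ju.Map[TopicPartition, java.lang.Long] = {
  partitions.asScala.map { tp =>
    tp -> java.lang.Long.valueOf(topicTimestamps(tp.topic))
  }.toMap.asJava
}

val parts = Set(
  TopicPartition("t1", 0), TopicPartition("t1", 1), TopicPartition("t2", 0)).asJava
val result = partitionTimestamps(parts, Map("t1" -> 1000L, "t2" -> 2000L))
```

Note this is exactly why the assert on assigned topics matters: a missing key in `topicTimestamps` would throw a `NoSuchElementException` here instead of giving a readable error.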
Review comment:
 > For the first case I think LATEST should be provided, and due to lack of information we have to apply this to second case as well.
 
 I've taken a deeper look at the Kafka consumer and it works as we've discussed. It's fine.
 
 > we run consumer.poll and consumer.assignment
 
 The actual implementation is fine as long as the user cannot provide TopicPartitions without some control. I've added this note here because of my [comment](https://github.com/apache/spark/pull/23747#discussion_r266412216). If we agree on accepting only a topic, and not a TopicPartition, as the parameter, then this discussion can be resolved.
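 For readers following the diff: the `fnAssertFetchedOffsets` check reports data loss when a user-requested concrete offset did not survive the consumer's seek, while the `-1`/`-2` sentinels cannot be validated. A minimal standalone sketch of that logic (plain `Long` sentinels standing in for `KafkaOffsetRangeLimit.LATEST`/`EARLIEST`, topics keyed by `String` for simplicity):
 
 ```scala
 // Sentinels mirroring KafkaOffsetRangeLimit.LATEST / EARLIEST
 // ("-1 for latest, -2 for earliest", per the assert message in the diff).
 val LATEST = -1L
 val EARLIEST = -2L
 
 // Collect a data-loss message for every concrete requested offset that
 // the consumer reset to something else; sentinel offsets are skipped
 // because "there is no real way to check that beginning or end is reasonable".
 def lostOffsets(
     requested: Map[String, Long],
     fetched: Map[String, Long]): Seq[String] = {
   requested.toSeq.collect {
     case (tp, off) if off != LATEST && off != EARLIEST && fetched(tp) != off =>
       s"startingOffsets for $tp was $off but consumer reset to ${fetched(tp)}"
   }
 }
 ```
 
 In the PR itself these messages go through `reportDataLoss` rather than being collected, but the filtering condition is the same.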
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]