Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22042#discussion_r210423180

    --- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala ---
    @@ -91,6 +90,17 @@ private[kafka010] case class InternalKafkaConsumer(
         kafkaParams: ju.Map[String, Object]) extends Logging {
       import InternalKafkaConsumer._

    +  /**
    +   * The internal object returned by the `fetchData` method. If `record` is empty, it means it is
    +   * invisible (either a transaction message, or an aborted message when the consumer's
    +   * `isolation.level` is `read_committed`), and the caller should use `nextOffsetToFetch` to fetch
    +   * instead.
    +   */
    +  private case class FetchedRecord(
    +      record: Option[ConsumerRecord[Array[Byte], Array[Byte]]],
    --- End diff --

    Can't we reuse the objects here? And do we need to have an Option, thus creating a lot of Option objects all the time?
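One way to address the reviewer's suggestion is to make `FetchedRecord` a plain mutable holder that the consumer allocates once and updates in place, with a nullable `record` field standing in for `Option`. The sketch below is a hypothetical illustration of that pattern, not the actual Spark patch; `ConsumerRecordStub` is a stand-in for Kafka's `ConsumerRecord`, and the names `withRecord` and `fetchData` are invented for the example.

```scala
// Hypothetical sketch of the "reuse one object, drop Option" idea.
// ConsumerRecordStub stands in for Kafka's ConsumerRecord[Array[Byte], Array[Byte]].
case class ConsumerRecordStub(offset: Long, value: Array[Byte])

// Mutable holder: `record` is null when the offset is invisible (e.g. a
// transaction marker or an aborted message under `read_committed`), in which
// case the caller should fetch `nextOffsetToFetch` instead.
class FetchedRecord(
    var record: ConsumerRecordStub,
    var nextOffsetToFetch: Long) {

  /** Update this instance in place so one object serves every fetch. */
  def withRecord(record: ConsumerRecordStub, nextOffsetToFetch: Long): FetchedRecord = {
    this.record = record
    this.nextOffsetToFetch = nextOffsetToFetch
    this
  }
}

object FetchedRecordDemo {
  // A single instance owned by the consumer, reused across fetchData calls,
  // so no FetchedRecord or Option is allocated per record.
  private val fetchedRecord = new FetchedRecord(null, -1L)

  def fetchData(offset: Long): FetchedRecord = {
    // Simulated broker behavior for the demo: even offsets are invisible.
    if (offset % 2 == 0) fetchedRecord.withRecord(null, offset + 1)
    else fetchedRecord.withRecord(ConsumerRecordStub(offset, Array[Byte](1)), offset + 1)
  }

  def main(args: Array[String]): Unit = {
    val invisible = fetchData(4)
    assert(invisible.record == null && invisible.nextOffsetToFetch == 5L)
    val visible = fetchData(5)
    assert(visible.record != null)
    assert(invisible eq visible) // same reused instance: no per-call allocation
    println("ok")
  }
}
```

The trade-off is the usual one: the reused holder must never escape the consumer (callers have to copy anything they keep), which is why the real class is `private` to the consumer in the diff above.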
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org