karuppayya commented on code in PR #37205:
URL: https://github.com/apache/spark/pull/37205#discussion_r926973808


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/BatchScanExec.scala:
##########
@@ -109,13 +109,15 @@ case class BatchScanExec(
   override lazy val readerFactory: PartitionReaderFactory = batch.createReaderFactory()
 
   override lazy val inputRDD: RDD[InternalRow] = {
-    if (filteredPartitions.isEmpty && outputPartitioning == SinglePartition) {
+    val rdd = if (filteredPartitions.isEmpty && outputPartitioning == SinglePartition) {
       // return an empty RDD with 1 partition if dynamic filtering removed the only split
       sparkContext.parallelize(Array.empty[InternalRow], 1)
     } else {
       new DataSourceRDD(
         sparkContext, filteredPartitions, readerFactory, supportsColumnar, customMetrics)
     }
+    postDriverMetrics()
+    rdd
   }

Review Comment:
   @viirya @cloud-fan I will create a follow-up PR for the write paths.
   As for the read paths, I have updated ContinuousScanExec and MicroBatchScanExec in the same way as BatchScanExec; the shared pattern is sketched below.
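   A minimal sketch of that pattern, assuming a `postDriverMetrics()` hook shaped like FileSourceScanExec's driver-metrics posting; `partitions` stands in for each node's own partition field, so this is illustrative rather than the exact code in this PR:
   ```scala
   override lazy val inputRDD: RDD[InternalRow] = {
     // Build the RDD first so driver-side metric values (e.g. partition
     // counts after runtime filtering) are populated before being posted.
     val rdd = new DataSourceRDD(
       sparkContext, partitions, readerFactory, supportsColumnar, customMetrics)
     postDriverMetrics()
     rdd
   }

   // Plausible body for the hook (an assumption, not the PR's actual code):
   // post the node's metric values for the current SQL execution so they
   // show up in the UI.
   protected def postDriverMetrics(): Unit = {
     val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)
     SQLMetrics.postDriverMetricUpdates(
       sparkContext, executionId, metrics.values.toSeq)
   }
   ```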
   Could you please point me to a test case that I can run to verify that the behavior is as expected? A rough sketch of the kind of check I have in mind follows.
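   (The data source name below is illustrative, not an existing suite.)
   ```scala
   // Run a query against a DSv2 source and inspect the scan node's metrics.
   val df = spark.read.format("org.example.TestScanSource").load()
   df.collect()
   val scan = df.queryExecution.executedPlan.collectFirst {
     case b: BatchScanExec => b
   }.get
   // After execution, the driver-posted metric values should be reflected
   // on the scan node.
   assert(scan.metrics.values.exists(_.value > 0))
   ```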


