richardc-db commented on code in PR #50849:
URL: https://github.com/apache/spark/pull/50849#discussion_r2107913492


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala:
##########
@@ -319,3 +319,43 @@ case class RDDScanExec(
 
   override def getStream: Option[SparkDataStream] = stream
 }
+
+/**
+ * A physical plan node for `OneRowRelation`, used for scans with no 'FROM' clause.
+ *
+ * We do not extend `RDDScanExec` in order to avoid complexity due to `TreeNode.makeCopy`
+ * and `TreeNode`'s general use of reflection.
+ */
+case class OneRowRelationExec() extends LeafExecNode
+  with InputRDDCodegen {
+
+  override val nodeName: String = s"Scan OneRowRelation"
+
+  override val output: Seq[Attribute] = Nil
+
+  private val rdd: RDD[InternalRow] =
+    session.sparkContext.parallelize(Seq(InternalRow.empty), 1)

Review Comment:
   done, thanks for the help! ended up doing 
   ```scala
   private val rdd: RDD[InternalRow] = {
     val numOutputRows = longMetric("numOutputRows")
     session
       .sparkContext
       .parallelize(Seq(InternalRow()), 1)
       .mapPartitionsInternal { _ =>
         val proj = UnsafeProjection.create(Seq.empty[Expression])
         Iterator(proj.apply(InternalRow.empty)).map { r =>
           numOutputRows += 1
           r
         }
       }
   }
   ```
   to ensure the metrics are filled properly.
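
   For anyone following along: the key idea in the snippet above is that the metric is incremented inside the iterator's `map`, so it only counts rows as they are actually consumed. A minimal sketch of that pattern in plain Scala (no Spark; `Metric` is a hypothetical stand-in for Spark's `SQLMetric`) looks like this:

   ```scala
   // Stand-in for Spark's SQLMetric accumulator (hypothetical, for illustration).
   class Metric {
     private var v = 0L
     def +=(n: Long): Unit = v += n
     def value: Long = v
   }

   object MetricSketch {
     // Wrap an iterator so the metric is bumped lazily, once per row consumed.
     def countedRows[T](rows: Iterator[T], numOutputRows: Metric): Iterator[T] =
       rows.map { r =>
         numOutputRows += 1
         r
       }
   }
   ```

   Because `Iterator.map` is lazy, nothing is counted until a downstream consumer actually pulls rows, which mirrors how `mapPartitionsInternal` fills `numOutputRows` only for rows that flow through the scan.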



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

