hvanhovell commented on code in PR #48829:
URL: https://github.com/apache/spark/pull/48829#discussion_r1847039697


##########
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -95,13 +95,12 @@ private[sql] object Dataset {
   def ofRows(sparkSession: SparkSession, logicalPlan: LogicalPlan): DataFrame =
     sparkSession.withActive {
       val qe = sparkSession.sessionState.executePlan(logicalPlan)
-      val encoder = if (qe.isLazyAnalysis) {
-        RowEncoder.encoderFor(new StructType())
+      if (qe.isLazyAnalysis) {

Review Comment:
   You don't have to create a RowEncoder at this point, since a row encoder 
should work with any schema. For normal (typed) encoders we do have to bind 
them, because the underlying schema can be incompatible with the encoder. So, 
long story short, @cloud-fan's suggestion is absolutely fine.
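   For context, a rough sketch of the shape being discussed. This is not the 
code from this PR: `qe.isLazyAnalysis` comes from this change, the rest mirrors 
the pre-existing `ofRows`, and the lazy branch only illustrates the point above 
-- a placeholder row encoder built from an empty StructType is enough because 
row encoders are not bound to one particular schema, while the eager branch 
still ties the encoder to the analyzed schema.

       // Sketch only; assumes it sits inside `private[sql] object Dataset`, so the
       // package-private Dataset constructor and SparkSession.withActive are in scope.
       import org.apache.spark.sql.catalyst.encoders.{ExpressionEncoder, RowEncoder}
       import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
       import org.apache.spark.sql.types.StructType

       def ofRows(sparkSession: SparkSession, logicalPlan: LogicalPlan): DataFrame =
         sparkSession.withActive {
           val qe = sparkSession.sessionState.executePlan(logicalPlan)
           if (qe.isLazyAnalysis) {
             // Lazy analysis: do not touch qe.analyzed. The empty-schema row encoder
             // is only a placeholder; it is not bound to any particular schema.
             new Dataset[Row](qe, ExpressionEncoder(RowEncoder.encoderFor(new StructType())))
           } else {
             // Eager analysis: bind the encoder to the analyzed schema, as before.
             qe.assertAnalyzed()
             new Dataset[Row](qe, ExpressionEncoder(RowEncoder.encoderFor(qe.analyzed.schema)))
           }
         }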


