HyukjinKwon commented on a change in pull request #34359:
URL: https://github.com/apache/spark/pull/34359#discussion_r747132197



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
##########
@@ -511,6 +511,20 @@ class SparkSession private(
     createDataset(data.asScala.toSeq)
   }
 
+  /**
+   * Creates a [[Dataset]] from an RDD of spark.sql.catalyst.InternalRow. This method
+   * allows the caller to create the InternalRow set externally, as well as to define
+   * the schema externally.
+   *
+   * @since 3.3.0
+   */
+  def createDataset(data: RDD[InternalRow], schema: StructType): DataFrame = {

Review comment:
       I was thinking about making it a developer API, so that users would be able to do:
   
   ```scala
   val rdd: RDD[InternalRow] = ...
   val attributes = schema.fields.map(f =>
     AttributeReference(f.name, f.dataType, f.nullable, f.metadata)())
   Dataset.ofRows(spark, org.apache.spark.sql.execution.LogicalRDD(attributes, rdd)(spark))
   ```
   
   But I am not sure this is the best approach. We should probably raise this
   discussion on the Spark dev mailing list ([email protected]) and discuss further.
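   
   For reference, here is a rough, self-contained sketch of that workaround
   (untested; the schema and data are made up for illustration, and since
   `Dataset.ofRows` is `private[sql]`, the code would need to live under the
   `org.apache.spark.sql` package, which is itself one argument for exposing a
   proper developer API):
   
   ```scala
   // Illustrative sketch only: the object name, schema, and data are made up.
   // Dataset.ofRows is private[sql], so this file must be compiled in the
   // org.apache.spark.sql package.
   package org.apache.spark.sql
   
   import org.apache.spark.rdd.RDD
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.catalyst.expressions.AttributeReference
   import org.apache.spark.sql.execution.LogicalRDD
   import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
   import org.apache.spark.unsafe.types.UTF8String
   
   object InternalRowToDataFrameExample {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder().master("local[*]").getOrCreate()
   
       val schema = StructType(Seq(
         StructField("id", IntegerType, nullable = false),
         StructField("name", StringType, nullable = true)))
   
       // InternalRow holds Catalyst's internal representations,
       // e.g. UTF8String instead of java.lang.String.
       val rdd: RDD[InternalRow] = spark.sparkContext.parallelize(Seq(
         InternalRow(1, UTF8String.fromString("a")),
         InternalRow(2, UTF8String.fromString("b"))))
   
       val attributes = schema.fields.map(f =>
         AttributeReference(f.name, f.dataType, f.nullable, f.metadata)())
   
       val df = Dataset.ofRows(spark, LogicalRDD(attributes, rdd)(spark))
       df.show()
     }
   }
   ```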




