ravwojdyla commented on code in PR #36430:
URL: https://github.com/apache/spark/pull/36430#discussion_r887126464


##########
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1593,6 +1593,35 @@ class Dataset[T] private[sql](
   @scala.annotation.varargs
   def select(col: String, cols: String*): DataFrame = select((col +: cols).map(Column(_)) : _*)
 
+  /**
+   * Selects a set of columns via schema object.
+   */
+  def select(schema: StructType): DataFrame = {
+    val attrs = logicalPlan.output
+    val attrs_map = attrs.map { a => (a.name, a) }.toMap
+    val new_attrs = AttributeMap(schema.map { f =>

Review Comment:
   @HyukjinKwon @jiangxb1987 one thing we could do is validate that the schema we select/swap by is compatible with the current schema (including nested columns). Would that be an acceptable solution for you?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

