cloud-fan commented on a change in pull request #27499: [SPARK-30590][SQL] Untyped select API cannot take typed column expression

 File path: 
 @@ -394,4 +403,21 @@ class DatasetAggregatorSuite extends QueryTest with SharedSparkSession {
     checkAnswer(group, Row("bob", Row(true, 3)) :: Nil)
    checkDataset(group.as[OptionBooleanIntData], OptionBooleanIntData("bob", Some((true, 3))))
+  test("SPARK-30590: select multiple typed column expressions") {
+    val df = Seq((1, 2, 3, 4, 5, 6)).toDF("a", "b", "c", "d", "e", "f")
+    val fooAgg = (i: Int) => FooAgg(i)"foo_agg_$i")
+    val agg1 =, fooAgg(2), fooAgg(3), fooAgg(4), fooAgg(5))
+    checkDataset(agg1, (3, 5, 7, 9, 11))
+    val agg2 = df.selectUntyped(fooAgg(1), fooAgg(2), fooAgg(3), fooAgg(4), fooAgg(5), fooAgg(6))
+      .asInstanceOf[Dataset[(Int, Int, Int, Int, Int, Int)]]
 Review comment:
   Is this really a good use case? It looks to me like `selectUntyped` is a bad user-facing API that is hard to use. Maybe we should follow other places and add more overloads that take up to 22 columns?
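The per-arity overload pattern the reviewer suggests can be sketched in plain Scala. This is a toy illustration only: `TypedCol` and `Frame` are hypothetical stand-ins for Spark's `TypedColumn` and `Dataset`, and the point is that one overload per arity lets the compiler preserve the result tuple type, which a single untyped method cannot do.

```scala
// Toy sketch of arity-based overloads (hypothetical types, not Spark code).
// Each `select` overload returns a statically typed tuple of the column
// result types, mirroring how Dataset.select's typed overloads work.
final case class TypedCol[U](eval: Map[String, Int] => U)

final case class Frame(row: Map[String, Int]) {
  // One overload per arity; Spark would extend this pattern up to 22.
  def select[U1](c1: TypedCol[U1]): U1 = c1.eval(row)

  def select[U1, U2](c1: TypedCol[U1], c2: TypedCol[U2]): (U1, U2) =
    (c1.eval(row), c2.eval(row))

  def select[U1, U2, U3](c1: TypedCol[U1], c2: TypedCol[U2], c3: TypedCol[U3]): (U1, U2, U3) =
    (c1.eval(row), c2.eval(row), c3.eval(row))
}

object OverloadDemo extends App {
  val frame = Frame(Map("a" -> 1, "b" -> 2, "c" -> 3))
  val plusOne = (name: String) => TypedCol[Int](m => m(name) + 1)

  // The chosen overload fixes the result type at compile time:
  println("a")))                              // an Int
  println("a"), plusOne("b"), plusOne("c"))) // an (Int, Int, Int)
}
```

The trade-off is boilerplate (22 near-identical method bodies) in exchange for full type safety at the call site, which is why Scala libraries commonly cap such overloads at the tuple arity limit of 22.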
