Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19480#discussion_r145264550
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala ---
@@ -2103,4 +2103,16 @@ class DataFrameSuite extends QueryTest with SharedSQLContext {
       testData2.select(lit(7), 'a, 'b).orderBy(lit(1), lit(2), lit(3)),
       Seq(Row(7, 1, 1), Row(7, 1, 2), Row(7, 2, 1), Row(7, 2, 2), Row(7, 3, 1), Row(7, 3, 2)))
   }
+
+  test("SPARK-22226: splitExpressions should not generate codes beyond 64KB") {
+    val colNumber = 10000
--- End diff ---
@mgaido91 .
I'm wondering whether this is now the largest number of columns tested in Spark.
After your patch, what is the minimum value of `colNumber` that would cause a failure in Spark?
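
For reference, here is a minimal sketch of the kind of wide-column test being discussed. Since the diff above is truncated after `val colNumber = 10000`, everything beyond that line (the schema, the cast projection, and the assertion) is an assumption for illustration, not the actual patch code:

```scala
// Illustrative sketch only: build a one-row DataFrame with colNumber integer
// columns and project a cast over every column, which yields a very large
// generated projection of the kind splitExpressions must keep under 64KB.
// (These imports are already in scope inside DataFrameSuite.)
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

test("wide-column codegen sketch") {
  val colNumber = 10000
  val schema = StructType((1 to colNumber).map(i => StructField(s"col$i", IntegerType)))
  val row = Row.fromSeq(1 to colNumber)
  val df = spark.createDataFrame(spark.sparkContext.parallelize(Seq(row)), schema)
  // Projecting all 10000 columns forces codegen to emit one expression per
  // column; before SPARK-22226 this could exceed the 64KB JVM method limit.
  val projected = df.select((1 to colNumber).map(i => col(s"col$i").cast("double")): _*)
  assert(projected.collect().length === 1)
}
```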
---