advancedxy commented on code in PR #109:
URL: https://github.com/apache/arrow-datafusion-comet/pull/109#discussion_r1505265438


##########
spark/src/test/scala/org/apache/comet/exec/CometAggregateSuite.scala:
##########
@@ -537,23 +537,22 @@ class CometAggregateSuite extends CometTestBase with AdaptiveSparkPlanHelper {
              withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {
 
                // Test all combinations of different aggregation & group-by types
-                (1 to 4).foreach { col =>
-                  (1 to 14).foreach { gCol =>
+                (1 to 14).foreach { gCol =>
+                  (1 to 4).foreach { col =>
                     withView("v") {
                       sql(s"CREATE TEMP VIEW v AS SELECT _g$gCol, _$col FROM tbl ORDER BY _$col")
                       checkSparkAnswer(s"SELECT _g$gCol, FIRST(_$col) FROM v GROUP BY _g$gCol")

Review Comment:
   Instead of multiple `checkSparkAnswer` calls, which will submit two Spark jobs, how about:
   
   ```scala
   checkSparkAnswer(s"SELECT _g$gCol, FIRST(_1), FIRST(_2), FIRST(_3), FIRST(_4) FROM v GROUP BY _g$gCol")
   ```



##########
spark/src/test/scala/org/apache/comet/exec/CometAggregateSuite.scala:
##########
@@ -537,23 +537,22 @@ class CometAggregateSuite extends CometTestBase with AdaptiveSparkPlanHelper {
              withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {
 
                // Test all combinations of different aggregation & group-by types
-                (1 to 4).foreach { col =>
-                  (1 to 14).foreach { gCol =>
+                (1 to 14).foreach { gCol =>
+                  (1 to 4).foreach { col =>
                     withView("v") {
                       sql(s"CREATE TEMP VIEW v AS SELECT _g$gCol, _$col FROM tbl ORDER BY _$col")
                       checkSparkAnswer(s"SELECT _g$gCol, FIRST(_$col) FROM v GROUP BY _g$gCol")

Review Comment:
   Hmmm. Something like this:
   
   ```scala
   // Test all combinations of different aggregation & group-by types
   (1 to 14).foreach { gCol =>
     withView("v") {
       sql(s"CREATE TEMP VIEW v AS SELECT _g$gCol, _1, _2, _3, _4 FROM tbl ORDER BY _1, _2, _3, _4")
       checkSparkAnswer(s"SELECT _g$gCol, FIRST(_1), FIRST(_2), FIRST(_3), FIRST(_4), LAST(_1), LAST(_2), LAST(_3), LAST(_4) FROM v GROUP BY _g$gCol")
     }
     checkSparkAnswer(s"SELECT _g$gCol, SUM(_1), SUM(_2), COUNT(_3), COUNT(_4), MIN(_1), AVG(_2), AVG(_3), MAX(_4) FROM tbl GROUP BY _g$gCol")
     checkSparkAnswer(s"SELECT _g$gCol, SUM(DISTINCT _3) FROM tbl GROUP BY _g$gCol")
     checkSparkAnswer(s"SELECT _g$gCol, COUNT(DISTINCT _1) FROM tbl GROUP BY _g$gCol")
   }
   ```



##########
spark/src/test/scala/org/apache/comet/exec/CometAggregateSuite.scala:
##########
@@ -537,23 +537,22 @@ class CometAggregateSuite extends CometTestBase with AdaptiveSparkPlanHelper {
              withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {
 
                // Test all combinations of different aggregation & group-by types
-                (1 to 4).foreach { col =>
-                  (1 to 14).foreach { gCol =>
+                (1 to 14).foreach { gCol =>
+                  (1 to 4).foreach { col =>
                     withView("v") {
                       sql(s"CREATE TEMP VIEW v AS SELECT _g$gCol, _$col FROM tbl ORDER BY _$col")
                       checkSparkAnswer(s"SELECT _g$gCol, FIRST(_$col) FROM v GROUP BY _g$gCol")
                       checkSparkAnswer(s"SELECT _g$gCol, LAST(_$col) FROM v GROUP BY _g$gCol")
                     }
-                    checkSparkAnswer(s"SELECT _g$gCol, SUM(_$col) FROM tbl GROUP BY _g$gCol")
-                    checkSparkAnswer(
-                      s"SELECT _g$gCol, SUM(DISTINCT _$col) FROM tbl GROUP BY _g$gCol")
-                    checkSparkAnswer(s"SELECT _g$gCol, COUNT(_$col) FROM tbl GROUP BY _g$gCol")
-                    checkSparkAnswer(
-                      s"SELECT _g$gCol, COUNT(DISTINCT _$col) FROM tbl GROUP BY _g$gCol")
-                    checkSparkAnswer(
-                      s"SELECT _g$gCol, MIN(_$col), MAX(_$col) FROM tbl GROUP BY _g$gCol")
-                    checkSparkAnswer(s"SELECT _g$gCol, AVG(_$col) FROM tbl GROUP BY _g$gCol")
                   }
+                  checkSparkAnswer(s"SELECT _g$gCol, SUM(_1), SUM(_2) FROM tbl GROUP BY _g$gCol")

Review Comment:
   Similar for `sum`, `count`, `min`, `avg` and `max`.
   
   `count(DISTINCT xx)` and `sum(DISTINCT xx)` are different; those might still have to be iterated over the 4 columns.
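   
   A rough sketch of that shape (a suggestion only, reusing the suite's existing `tbl`, data columns `_1` to `_4`, and group-by columns `_g1` to `_g14`): batch the non-distinct aggregates into one query per group-by column, and keep iterating per column for the DISTINCT ones.
   
   ```scala
   (1 to 14).foreach { gCol =>
     // One Spark job covering sum/count/min/avg/max over the four data columns.
     checkSparkAnswer(
       s"SELECT _g$gCol, SUM(_1), SUM(_2), COUNT(_3), COUNT(_4), " +
         s"MIN(_1), AVG(_2), AVG(_3), MAX(_4) FROM tbl GROUP BY _g$gCol")
     // Multiple DISTINCT aggregates over different columns are rewritten by
     // Spark's planner (via Expand), so keep these per-column as noted above.
     (1 to 4).foreach { col =>
       checkSparkAnswer(s"SELECT _g$gCol, SUM(DISTINCT _$col) FROM tbl GROUP BY _g$gCol")
       checkSparkAnswer(s"SELECT _g$gCol, COUNT(DISTINCT _$col) FROM tbl GROUP BY _g$gCol")
     }
   }
   ```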



##########
spark/src/test/scala/org/apache/comet/exec/CometAggregateSuite.scala:
##########
@@ -537,23 +537,22 @@ class CometAggregateSuite extends CometTestBase with AdaptiveSparkPlanHelper {
              withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {
 
                // Test all combinations of different aggregation & group-by types
-                (1 to 4).foreach { col =>
-                  (1 to 14).foreach { gCol =>
+                (1 to 14).foreach { gCol =>
+                  (1 to 4).foreach { col =>

Review Comment:
   Another unrelated question: why `1 to 4`? It seems like `_1` to `_4` are all integer types.
   
   We probably want to test other types like float/double, decimal, etc.?
   
   But this should be addressed in another PR.
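   
   Purely as an illustration of that follow-up (the extra column indices and their types are assumptions here, not the actual schema of `tbl`), the loops could be widened along these lines:
   
   ```scala
   // Hypothetical: indices 5 to 7 standing in for float/double/decimal columns;
   // a real follow-up PR would use whatever indices the test schema defines.
   val aggCols = (1 to 4) ++ (5 to 7)
   aggCols.foreach { col =>
     (1 to 14).foreach { gCol =>
       checkSparkAnswer(s"SELECT _g$gCol, SUM(_$col), AVG(_$col) FROM tbl GROUP BY _g$gCol")
     }
   }
   ```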



##########
spark/src/test/scala/org/apache/comet/exec/CometNativeShuffleSuite.scala:
##########
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.comet.exec
+
+import org.scalactic.source.Position
+import org.scalatest.Tag
+
+import org.apache.hadoop.fs.Path
+import org.apache.spark.sql.CometTestBase
+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec
+import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper
+import org.apache.spark.sql.functions.col
+
+import org.apache.comet.CometConf
+import org.apache.comet.CometSparkSessionExtensions.isSpark34Plus
+
+class CometNativeShuffleSuite extends CometTestBase with AdaptiveSparkPlanHelper {
+  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit
+      pos: Position): Unit = {
+    super.test(testName, testTags: _*) {
+      withSQLConf(
+        CometConf.COMET_EXEC_ENABLED.key -> "true",
+        CometConf.COMET_COLUMNAR_SHUFFLE_ENABLED.key -> "false",
+        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> "true") {
+        testFun
+      }
+    }
+  }
+
+  import testImplicits._
+
+  // TODO: this test takes ~5mins to run, we should reduce the test time.
+  test("fix: Too many task completion listener of ArrowReaderIterator causes OOM") {
+    withSQLConf(CometConf.COMET_BATCH_SIZE.key -> "1") {
+      withParquetTable((0 until 100000).map(i => (1, (i + 1).toLong)), "tbl") {
+        assert(
+          sql("SELECT * FROM tbl").repartition(201, $"_1").count() == sql("SELECT * FROM tbl")
+            .count())
+      }
+    }
+  }
+
+  test("native shuffle: different data type") {
+    Seq(true, false).foreach { dictionaryEnabled =>
+      withTempDir { dir =>
+        val path = new Path(dir.toURI.toString, "test.parquet")
+        makeParquetFileAllTypes(path, dictionaryEnabled = dictionaryEnabled, 1000)
+        var allTypes: Seq[Int] = (1 to 20)
+        if (isSpark34Plus) {
+          allTypes = allTypes.filterNot(Set(14, 17).contains)
+        }
+        allTypes.map(i => s"_$i").foreach { c =>
+          withSQLConf("parquet.enable.dictionary" -> dictionaryEnabled.toString) {
+            readParquetFile(path.toString) { df =>
+              val shuffled = df
+                .select($"_1")
+                .repartition(10, col(c))
+              checkCometExchange(shuffled, 1, true)
+              checkSparkAnswerAndOperator(shuffled)
+            }
+          }
+        }
+      }
+    }
+  }
+
+  test("hash-based native shuffle") {
+    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), "tbl") {
+      val df = sql("SELECT * FROM tbl").sortWithinPartitions($"_1".desc)
+      val shuffled1 = df.repartition(10, $"_1")
+
+      checkCometExchange(shuffled1, 1, true)
+      checkSparkAnswer(shuffled1)
+
+      val shuffled2 = df.repartition(10, $"_1", $"_2")
+
+      checkCometExchange(shuffled2, 1, true)
+      checkSparkAnswer(shuffled2)
+
+      val shuffled3 = df.repartition(10, $"_2", $"_1")
+
+      checkCometExchange(shuffled3, 1, true)
+      checkSparkAnswer(shuffled3)
+    }
+  }
+
+  test("native shuffle: single partition") {
+    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), "tbl") {
+      val df = sql("SELECT * FROM tbl").sortWithinPartitions($"_1".desc)
+
+      val shuffled = df.repartition(1)
+
+      checkCometExchange(shuffled, 1, true)
+      checkSparkAnswer(shuffled)
+    }
+  }
+
+  test("native operator after native shuffle") {
+    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), "tbl") {
+      val df = sql("SELECT * FROM tbl")
+
+      val shuffled1 = df
+        .repartition(10, $"_2")
+        .select($"_1", $"_1" + 1, $"_2" + 2)
+        .repartition(10, $"_1")
+        .filter($"_1" > 1)
+
+      // 2 Comet shuffle exchanges are expected
+      checkCometExchange(shuffled1, 2, true)
+      checkSparkAnswer(shuffled1)
+
+      val shuffled2 = df
+        .repartitionByRange(10, $"_2")
+        .select($"_1", $"_1" + 1, $"_2" + 2)
+        .repartition(10, $"_1")
+        .filter($"_1" > 1)
+
+      // Because the first exchange from the bottom is a range exchange, which
+      // native shuffle doesn't support, Comet exec operators stop before the
+      // first exchange and thus there is no Comet exchange.
+      checkCometExchange(shuffled2, 0, true)
+      checkSparkAnswer(shuffled2)
+    }
+  }
+
+  test("grouped aggregate: native shuffle") {
+    withParquetTable((0 until 5).map(i => (i, i + 1)), "tbl") {
+      val df = sql("SELECT count(_2), sum(_2) FROM tbl GROUP BY _1")
+      checkCometExchange(df, 1, true)
+      checkSparkAnswerAndOperator(df)
+    }
+  }
+
+  test("native shuffle metrics") {
+    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), "tbl") {
+      val df = sql("SELECT * FROM tbl").sortWithinPartitions($"_1".desc)
+      val shuffled = df.repartition(10, $"_1")
+
+      checkCometExchange(shuffled, 1, true)
+      checkSparkAnswer(shuffled)
+
+      // Materialize the shuffled data
+      shuffled.collect()
+      val metrics = find(shuffled.queryExecution.executedPlan) {
+        case _: CometShuffleExchangeExec => true
+        case _ => false
+      }.map(_.metrics).get
+
+      assert(metrics.contains("shuffleRecordsWritten"))
+      assert(metrics("shuffleRecordsWritten").value == 5L)
+    }
+  }
+
+  test("fix: Dictionary arrays imported from native should not be overridden") {
+    Seq(10, 201).foreach { numPartitions =>
+      withSQLConf(
+        CometConf.COMET_BATCH_SIZE.key -> "10",
+        CometConf.COMET_EXEC_ALL_OPERATOR_ENABLED.key -> "true",
+        CometConf.COMET_EXEC_ALL_EXPR_ENABLED.key -> "true") {
+        withParquetTable((0 until 50).map(i => (1.toString, 2.toString, (i + 1).toLong)), "tbl") {
+          val df = sql("SELECT * FROM tbl")
+            .filter($"_1" === 1.toString)
+            .repartition(numPartitions, $"_1", $"_2")
+            .sortWithinPartitions($"_1")
+          checkSparkAnswerAndOperator(df)
+        }
+      }
+    }
+  }
+
+  test("fix: comet native shuffle with binary data") {

Review Comment:
   Seems like this test case has already been covered by `test("native shuffle: different data type")`?
   
   Let me do a refactor in a follow-up PR?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
