This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 6e371e1df50e [SPARK-47617][SQL] Add TPC-DS testing infrastructure for collations
6e371e1df50e is described below

commit 6e371e1df50e35d807065015525772c3c02a5995
Author: Nikola Mandic <nikola.man...@databricks.com>
AuthorDate: Thu Apr 11 21:08:17 2024 +0800

    [SPARK-47617][SQL] Add TPC-DS testing infrastructure for collations
    
    ### What changes were proposed in this pull request?
    
    We can utilize the TPC-DS testing infrastructure already present in Spark. The idea is to
    vary the TPC-DS table string columns by adding multiple collations with different ordering
    rules and case sensitivity, producing new tables. These tables should yield the same results
    against predefined TPC-DS queries for certain batches of collations. For example, when
    comparing query runs on a table whose columns are first collated as `UTF8_BINARY` and then as
    `UTF8_BINARY_LCASE`, we should be getting the same results.
    
    Introduce a new query suite that tests the described behavior with the available collations
    (`UTF8_BINARY` and `UNICODE`) combined with case conversions (lowercase, uppercase, and
    randomized case for fuzzy testing).
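
    A minimal SQL sketch of the idea (illustrative only, not part of this patch; the table and
    column names are hypothetical):

    ```sql
    -- The same filter should match on a lowercased UTF8_BINARY table and on a
    -- mixed-case UTF8_BINARY_LCASE table.
    CREATE TABLE customer_binary (c_first_name STRING COLLATE UTF8_BINARY) USING parquet;
    CREATE TABLE customer_lcase (c_first_name STRING COLLATE UTF8_BINARY_LCASE) USING parquet;

    INSERT INTO customer_binary VALUES ('john');  -- lowercase-converted data
    INSERT INTO customer_lcase VALUES ('JoHn');   -- randomized-case data

    -- Both queries should return one row, so their outputs compare equal ignoring case.
    SELECT count(*) FROM customer_binary WHERE c_first_name = 'john';
    SELECT count(*) FROM customer_lcase WHERE c_first_name = 'john';
    ```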
    
    ### Why are the changes needed?
    
    Improve test coverage for collations.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Added TPC-DS collations query suite.
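
    The new suite can be run locally against SF=1 TPC-DS data using the command from its scaladoc:

    ```
    SPARK_TPCDS_DATA=<path of TPCDS SF=1 data> build/sbt "sql/testOnly *TPCDSCollationQueryTestSuite"
    ```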
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #45739 from nikolamand-db/SPARK-47617.
    
    Lead-authored-by: Nikola Mandic <nikola.man...@databricks.com>
    Co-authored-by: Stefan Kandic <stefan.kan...@databricks.com>
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
---
 .github/workflows/build_and_test.yml               |   3 +
 .../scala/org/apache/spark/sql/TPCDSBase.scala     |   2 +-
 .../spark/sql/TPCDSCollationQueryTestSuite.scala   | 262 +++++++++++++++++++++
 .../scala/org/apache/spark/sql/TPCDSSchema.scala   |   3 +-
 4 files changed, 268 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml
index e505be7d4d98..832826333f09 100644
--- a/.github/workflows/build_and_test.yml
+++ b/.github/workflows/build_and_test.yml
@@ -937,6 +937,9 @@ jobs:
         SPARK_TPCDS_JOIN_CONF: |
           spark.sql.autoBroadcastJoinThreshold=-1
           spark.sql.join.forceApplyShuffledHashJoin=true
+    - name: Run TPC-DS queries on collated data
+      run: |
+        SPARK_TPCDS_DATA=`pwd`/tpcds-sf-1 build/sbt "sql/testOnly org.apache.spark.sql.TPCDSCollationQueryTestSuite"
     - name: Upload test results to report
       if: always()
       uses: actions/upload-artifact@v4
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSBase.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSBase.scala
index b6d46d279f4c..d4b70ae0d478 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSBase.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSBase.scala
@@ -34,7 +34,7 @@ trait TPCDSBase extends TPCBase with TPCDSSchema {
     "q81", "q82", "q83", "q84", "q85", "q86", "q87", "q88", "q89", "q90",
     "q91", "q92", "q93", "q94", "q95", "q96", "q97", "q98", "q99")
 
-  protected val excludedTpcdsQueries: Set[String] = if (regenerateGoldenFiles) {
+  protected def excludedTpcdsQueries: Set[String] = if (regenerateGoldenFiles) {
     Set()
   } else {
     // Since `tpcdsQueriesV2_7_0` has almost the same queries with these ones below,
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSCollationQueryTestSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSCollationQueryTestSuite.scala
new file mode 100644
index 000000000000..a84dd9645bcc
--- /dev/null
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSCollationQueryTestSuite.scala
@@ -0,0 +1,262 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql
+
+import java.nio.file.{Files, Paths}
+import java.util.Locale
+
+import org.apache.spark.{SparkConf, SparkContext}
+import org.apache.spark.sql.catalyst.util.resourceToString
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.test.TestSparkSession
+import org.apache.spark.tags.ExtendedSQLTest
+import org.apache.spark.util.Utils
+
+/**
+ * End-to-end tests to validate TPC-DS query results against collation-aware
+ * modified data and queries.
+ *
+ * For each collation, table schemas are replicated into two databases in such a way that in the
+ * first DB all table columns are collated with the specified collation, while in the second DB
+ * table columns are collated with a case-insensitive version of that collation. Tables in the
+ * first DB are then populated with lowercase-converted data from the TPC-DS kit, and tables in
+ * the second DB are populated with randomized-case data.
+ *
+ * When running an arbitrary SQL query, we convert the query to lowercase so that all string
+ * literals it contains are lowercased before execution against the first DB; this ensures the
+ * results are equivalent to those under case-insensitive collations when queries contain
+ * uppercase string literals. The second DB receives the original, unmodified query. The results
+ * should compare equal, ignoring case. We use this method to validate that collations work with
+ * arbitrary standard SQL constructs.
+ *
+ * Additionally, we trim string data to properly convert it from CharType to StringType, and we
+ * run sanity checks to verify that results are non-empty where expected.
+ *
+ * To run this test suite:
+ * {{{
+ *   SPARK_TPCDS_DATA=<path of TPCDS SF=1 data>
+ *     build/sbt "sql/testOnly *TPCDSCollationQueryTestSuite"
+ * }}}
+ *
+ * To run a single test file upon change:
+ * {{{
+ *   SPARK_TPCDS_DATA=<path of TPCDS SF=1 data>
+ *     build/sbt "sql/testOnly *TPCDSCollationQueryTestSuite -- -z q79"
+ * }}}
+ */
+@ExtendedSQLTest
+class TPCDSCollationQueryTestSuite extends QueryTest with TPCDSBase with SQLQueryTestHelper {
+
+  private val tpcdsDataPath = sys.env.get("SPARK_TPCDS_DATA")
+
+  // To make output results deterministic
+  override protected def sparkConf: SparkConf = super.sparkConf
+    .set(SQLConf.SHUFFLE_PARTITIONS.key, "1")
+
+  protected override def createSparkSession: TestSparkSession = {
+    new TestSparkSession(new SparkContext("local[1]", this.getClass.getSimpleName, sparkConf))
+  }
+
+  if (tpcdsDataPath.nonEmpty) {
+    val nonExistentTables = tableColumns.keys.filterNot { tableName =>
+      Files.exists(Paths.get(s"${tpcdsDataPath.get}/$tableName"))
+    }
+    if (nonExistentTables.nonEmpty) {
+      fail(s"Non-existent TPCDS table paths found in ${tpcdsDataPath.get}: " +
+        nonExistentTables.mkString(", "))
+    }
+  }
+
+  private def withDB[T](dbName: String)(fun: => T): T = {
+    Utils.tryWithSafeFinally({
+      spark.sql(s"USE `$dbName`")
+      fun
+    }) {
+      spark.sql("USE DEFAULT")
+    }
+  }
+
+  abstract class CollationCheck(
+      val dbName: String,
+      val collation: String,
+      val columnTransform: String) {
+
+    def queryTransform: String => String
+  }
+
+  case class CaseInsensitiveCollationCheck(
+      override val dbName: String,
+      override val collation: String,
+      override val columnTransform: String)
+    extends CollationCheck(dbName, collation, columnTransform) {
+
+    override def queryTransform: String => String = identity
+  }
+
+  case class CaseSensitiveCollationCheck(
+      override val dbName: String,
+      override val collation: String,
+      override val columnTransform: String)
+    extends CollationCheck(dbName, collation, columnTransform) {
+
+    override def queryTransform: String => String = _.toLowerCase(Locale.ROOT)
+  }
+
+  val randomizeCase = "RANDOMIZE_CASE"
+
+  // List of batches of runs which should yield the same result when run on a query
+  val checks: Seq[Seq[CollationCheck]] = Seq(
+    Seq(
+      CaseSensitiveCollationCheck("tpcds_utf8", "UTF8_BINARY", "lower"),
+      CaseInsensitiveCollationCheck("tpcds_utf8_random", "UTF8_BINARY_LCASE", 
randomizeCase)
+    ),
+    Seq(
+      CaseSensitiveCollationCheck("tpcds_unicode", "UNICODE", "lower"),
+      CaseInsensitiveCollationCheck("tpcds_unicode_random", "UNICODE_CI", 
randomizeCase)
+    )
+  )
+
+  override def createTables(): Unit = {
+    spark.udf.register(
+      randomizeCase,
+      functions.udf((s: String) => {
+        s match {
+          case null => null
+          case _ =>
+            val random = new scala.util.Random()
+            s.map(c => if (random.nextBoolean()) c.toUpper else c.toLower)
+        }
+      }).asNondeterministic())
+    checks.flatten.foreach(check => {
+      spark.sql(s"CREATE DATABASE `${check.dbName}`")
+      withDB(check.dbName) {
+        tableNames.foreach(tableName => {
+          val columns = tableColumns(tableName)
+            .split("\n")
+            .filter(_.trim.nonEmpty)
+            .map { column =>
+              if (column.trim.split("\\s+").length != 2) {
+                throw new IllegalArgumentException(s"Invalid column definition: $column")
+              }
+              val Array(name, colType) = column.trim.split("\\s+")
+              (name, colType.replaceAll(",$", ""))
+            }
+
+          spark.sql(
+            s"""
+               |CREATE TABLE `$tableName` (${collateStringColumns(columns, check.collation)})
+               |USING parquet
+               |""".stripMargin)
+
+          val transformedColumns = columns.map { case (name, colType) =>
+            if (isTextColumn(colType)) {
+              // trim to support conversions from CharType
+              s"${check.columnTransform}(trim(both from $name)) AS $name"
+            } else {
+              name
+            }
+          }.mkString(", ")
+
+          spark.sql(
+            s"""
+               |INSERT INTO TABLE `$tableName`
+               |SELECT $transformedColumns
+               |FROM parquet.`${tpcdsDataPath.get}/$tableName`
+               |""".stripMargin)
+        })
+      }
+    })
+  }
+
+  override def dropTables(): Unit =
+    checks.flatten.foreach(check => {
+      withDB(check.dbName)(super.dropTables())
+      spark.sql(s"DROP DATABASE `${check.dbName}`")
+    })
+
+  private def collateStringColumns(
+      columns: Array[(String, String)],
+      collation: String): String = {
+    columns
+      .map { case (name, colType) =>
+        if (isTextColumn(colType)) {
+          s"$name STRING COLLATE $collation"
+        } else {
+          s"$name $colType"
+        }
+      }
+      .mkString(",\n")
+  }
+
+  private def isTextColumn(columnType: String): Boolean = {
+    columnType.toUpperCase(Locale.ROOT).contains("CHAR")
+  }
+
+  private def runQuery(query: String, conf: Map[String, String], emptyResult: Boolean): Unit = {
+    withSQLConf(conf.toSeq: _*) {
+      try {
+        checks.foreach(batch => {
+          val res = batch.map(check =>
+            withDB(check.dbName)(getQueryOutput(check.queryTransform(query)).toLowerCase())
+          if (!emptyResult) {
+            res.map(queryOutput => assert(queryOutput.nonEmpty))
+          }
+          if (res.nonEmpty) {
+            res.foreach(currRes => assertResult(currRes)(res.head))
+          }
+        })
+      } catch {
+        case e: Throwable =>
+          val configs = conf.map { case (k, v) =>
+            s"$k=$v"
+          }
+          throw new Exception(s"${e.getMessage}\nError using configs:\n${configs.mkString("\n")}")
+      }
+    }
+  }
+
+  private def getQueryOutput(query: String): String = {
+    val (_, output) = handleExceptions(getNormalizedQueryExecutionResult(spark, query))
+    output.mkString("\n").replaceAll("\\s+$", "")
+  }
+
+  // Skip q91 due to its use of a LIKE expression, which is not yet supported with collations
+  override def excludedTpcdsQueries: Set[String] = super.excludedTpcdsQueries ++ Set("q91")
+
+  // Skip checks on queries which produce empty set of rows
+  val emptyResults: Set[String] = Set("q17", "q23b", "q24a", "q24b", "q25", "q54")
+  val emptyResultsV2_7_0: Set[String] = Set("q24", "q78")
+
+  if (tpcdsDataPath.nonEmpty) {
+    tpcdsQueries.foreach { name =>
+      val queryString = resourceToString(
+        s"tpcds/$name.sql",
+        classLoader = Thread.currentThread().getContextClassLoader)
+      test(name)(runQuery(queryString, Map.empty, emptyResults.contains(name)))
+    }
+
+    tpcdsQueriesV2_7_0.foreach { name =>
+      val queryString = resourceToString(
+        s"tpcds-v2.7.0/$name.sql",
+        classLoader = Thread.currentThread().getContextClassLoader)
+      test(s"$name-v2.7")(runQuery(queryString, Map.empty, 
emptyResultsV2_7_0.contains(name)))
+    }
+  } else {
+    ignore("skipped because env 'SPARK_TPCDS_DATA' is not set") {}
+  }
+}
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala
index 7b2ed8d28274..203b2cf32321 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala
@@ -147,7 +147,8 @@ trait TPCDSSchema {
         |`cr_catalog_page_sk` INT,
         |`cr_ship_mode_sk` INT,
         |`cr_warehouse_sk` INT,
-        |`cr_reason_sk` INT,`cr_order_number` INT,
+        |`cr_reason_sk` INT,
+        |`cr_order_number` INT,
         |`cr_return_quantity` INT,
         |`cr_return_amount` DECIMAL(7,2),
         |`cr_return_tax` DECIMAL(7,2),

