kazuyukitanimura commented on code in PR #37096:
URL: https://github.com/apache/spark/pull/37096#discussion_r914297412


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala:
##########
@@ -65,23 +67,19 @@ object TPCDSQueryBenchmark extends SqlBasedBenchmark with Logging {
     "web_returns", "web_site", "reason", "call_center", "warehouse", "ship_mode", "income_band",
     "time_dim", "web_page")
 
-  def setupTables(dataLocation: String, createTempView: Boolean): Map[String, Long] = {
+  def setupTables(dataLocation: String, tableColumns: Map[String, StructType]): Map[String, Long] =
     tables.map { tableName =>
-      if (createTempView) {

Review Comment:
   It was initially introduced by #31011.
   My understanding is that the temp view was disabled for CBO because CBO runs
   `spark.sql(s"ANALYZE TABLE $tableName COMPUTE STATISTICS FOR ALL COLUMNS")`
   
https://github.com/apache/spark/pull/31011/files#diff-f0ef9be2f138cb947253f07c4285a8cf6b054355cf53beb2ba70ce82a380356bR173
   
   `ANALYZE TABLE` requires an actual table rather than a temp view. Otherwise, it throws `org.apache.spark.sql.AnalysisException: Temporary view is not cached for analyzing columns.`
   
   @maropu please chime in if I said anything wrong.
   
   With this PR, all input data are loaded into tables created with the forced schema.
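   For illustration, the behavior described above can be sketched roughly as follows (not code from the PR; assumes a local SparkSession, and the table/view names are hypothetical):

   ```scala
   import org.apache.spark.sql.SparkSession

   // Hedged sketch: why ANALYZE TABLE works on a real table but not on an
   // uncached temp view.
   val spark = SparkSession.builder()
     .master("local[*]")
     .appName("analyze-table-demo")
     .getOrCreate()

   val df = spark.range(10).toDF("id")

   // Works: a real table has a catalog entry where column stats can be stored.
   df.write.mode("overwrite").saveAsTable("demo_table")
   spark.sql("ANALYZE TABLE demo_table COMPUTE STATISTICS FOR ALL COLUMNS")

   // Fails: an uncached temp view has no catalog-backed storage for column stats.
   df.createOrReplaceTempView("demo_view")
   // spark.sql("ANALYZE TABLE demo_view COMPUTE STATISTICS FOR ALL COLUMNS")
   // => org.apache.spark.sql.AnalysisException:
   //    Temporary view is not cached for analyzing columns.
   ```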



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
