Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19344#discussion_r141667464
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala ---
    @@ -67,10 +67,8 @@ object TPCDSQueryBenchmark extends Logging {
     
           // This is an indirect hack to estimate the size of each query's input by traversing the
           // logical plan and adding up the sizes of all tables that appear in the plan. Note that this
    -      // currently doesn't take WITH subqueries into account which might lead to fairly inaccurate
    -      // per-row processing time for those cases.
           val queryRelations = scala.collection.mutable.HashSet[String]()
    -      spark.sql(queryString).queryExecution.logical.map {
    +      spark.sql(queryString).queryExecution.analyzed.map {
             case UnresolvedRelation(t: TableIdentifier) =>
    --- End diff --
    
    If the plan is successfully analyzed, `UnresolvedRelation` should no longer exist in the plan, so this pattern match will never fire.
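    For illustration only (not necessarily the exact change this PR ends up with), a sketch of collecting the input tables from the analyzed plan, where unresolved relations have already been replaced by resolved nodes such as `LogicalRelation` or `HiveTableRelation` (exact node types depend on the Spark version); it assumes the surrounding benchmark context (`spark`, `queryString`) from `TPCDSQueryBenchmark.scala`:
    
    ```scala
    import org.apache.spark.sql.catalyst.catalog.HiveTableRelation
    import org.apache.spark.sql.execution.datasources.LogicalRelation
    
    val queryRelations = scala.collection.mutable.HashSet[String]()
    spark.sql(queryString).queryExecution.analyzed.foreach {
      // After analysis, relations are resolved, so match on resolved node types
      // rather than UnresolvedRelation.
      case rel: LogicalRelation =>
        rel.catalogTable.foreach(t => queryRelations.add(t.identifier.table))
      case rel: HiveTableRelation =>
        queryRelations.add(rel.tableMeta.identifier.table)
      case _ =>
    }
    ```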


---
