xuanyuanking commented on a change in pull request #23371: [SPARK-26223][SQL] 
Track metastore operation time in scan node
URL: https://github.com/apache/spark/pull/23371#discussion_r243943003
 
 

 ##########
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
 ##########
 @@ -249,6 +251,14 @@ case class CatalogTable(
 
   import CatalogTable._
 
+  /** Records the phase summaries of all operations between this table and the metastore. */
+  private val _metastoreOpsPhaseSummaries: ArrayBuffer[PhaseSummary] =
+    ArrayBuffer.empty
+
+  def metastoreOpsPhaseSummaries: Seq[PhaseSummary] = 
_metastoreOpsPhaseSummaries.toSeq
 
 Review comment:
   ```
   using table metadata to carry this information is not a good idea.
   ```
   Agree. Since the CatalogTable case class is used as the cache key, I also hit
some problems with this before:
   f82b355 b908ecb.
   ```
   Can we track it at the caller side?
   ```
   How about tracking them with QueryPlanningTracker, using the case class as
the key of the phase map?
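
   As a minimal, self-contained sketch of the caller-side pattern suggested above (this is a hypothetical stand-in, not Spark's actual QueryPlanningTracker; the class and method names here are illustrative): timings live in a map owned by the tracker, keyed by an identifier, rather than being stored inside the table metadata itself.
   ```scala
   import scala.collection.mutable

   // Illustrative phase summary: start/end timestamps in nanoseconds.
   case class PhaseSummary(startNs: Long, endNs: Long) {
     def durationNs: Long = endNs - startNs
   }

   // Hypothetical tracker: records how long each keyed phase took,
   // on the caller side, instead of mutating CatalogTable.
   class SimplePhaseTracker[K] {
     private val phases = mutable.Map.empty[K, PhaseSummary]

     // Run the block, record its timing under `key`, and return its result.
     def measurePhase[T](key: K)(f: => T): T = {
       val start = System.nanoTime()
       val result = f
       phases(key) = PhaseSummary(start, System.nanoTime())
       result
     }

     def summary(key: K): Option[PhaseSummary] = phases.get(key)
   }

   object TrackerDemo {
     def main(args: Array[String]): Unit = {
       val tracker = new SimplePhaseTracker[String]
       // Stand-in for an actual metastore call being timed.
       val tables = tracker.measurePhase("metastore.listTables") {
         Seq("t1", "t2")
       }
       println(tables.size) // 2
       println(tracker.summary("metastore.listTables").isDefined) // true
     }
   }
   ```
   With the real QueryPlanningTracker, the key could be the CatalogTable case class itself, as suggested.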

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
