heyihong commented on code in PR #53801:
URL: https://github.com/apache/spark/pull/53801#discussion_r2694402835


##########
sql/api/src/main/scala/org/apache/spark/sql/Observation.scala:
##########
@@ -58,12 +57,15 @@ class Observation(val name: String) {
 
   private val isRegistered = new AtomicBoolean()
 
-  private val promise = Promise[Row]()
+  private val promise = Promise[() => Row]()
+
+  private val lazyMetricsFuture: Future[() => Row] = promise.future
 
   /**
-   * Future holding the (yet to be completed) observation.
+   * Future holding the (yet to be completed) observation. Lazy to avoid collecting the metrics
+   * until it is needed.
    */
-  val future: Future[Row] = promise.future
+  lazy val future: Future[Row] = lazyMetricsFuture.map(_())(ExecutionContext.global)

Review Comment:
   IMHO, the metrics collection tasks are CPU-bound and involve no I/O or blocking
system calls, so ExecutionContext.global should be sufficient: by default it sets the
parallelism level of its underlying fork-join pool to the number of available processors
(Runtime.getRuntime.availableProcessors), per
https://docs.scala-lang.org/overviews/core/futures.html#the-global-execution-context
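
   For illustration, here is a minimal, Spark-free sketch of the same deferred-evaluation
pattern. The object name LazyMetricsSketch and the use of Map[String, Long] in place of
Row are stand-ins chosen for this sketch, not the PR's actual code; it only shows the
shape of the change: complete the promise with a thunk, and force the thunk on
ExecutionContext.global when the mapped future is first accessed.

```scala
import scala.concurrent.{Await, ExecutionContext, Future, Promise}
import scala.concurrent.duration.Duration

// Hypothetical stand-in for the Observation pattern: Map[String, Long]
// replaces Spark's Row so the sketch has no Spark dependency.
object LazyMetricsSketch {

  // The promise is completed with a thunk, not a value, so the
  // (potentially expensive) metrics collection is deferred.
  private val promise = Promise[() => Map[String, Long]]()

  private val lazyMetricsFuture: Future[() => Map[String, Long]] = promise.future

  // Forcing the thunk happens on ExecutionContext.global, whose underlying
  // fork-join pool defaults to a parallelism equal to the number of
  // available processors.
  lazy val future: Future[Map[String, Long]] =
    lazyMetricsFuture.map(thunk => thunk())(ExecutionContext.global)

  def main(args: Array[String]): Unit = {
    // Complete with a thunk; nothing is computed yet.
    promise.success(() => Map("rowCount" -> 42L))
    // Accessing `future` maps over the thunk and runs it on the global pool.
    println(Await.result(future, Duration.Inf))
  }
}
```

   Completing the promise with a thunk keeps the producer side cheap; the cost of
materializing the metrics is paid only when future is first read, which matches the
"Lazy to avoid collecting the metrics until it is needed" comment in the diff.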



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
