Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13472#discussion_r65620010
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Benchmark.scala ---
    @@ -97,6 +111,39 @@ private[spark] class Benchmark(
         println
         // scalastyle:on
       }
    +
    +  /**
    +   * Runs a single function `f` for iters, returning the average time the function took and
    +   * the rate of the function.
    +   */
    +  def measure(num: Long, overrideNumIters: Int)(f: Timer => Unit): Result = {
    +    System.gc()  // ensures garbage from previous cases don't impact this one
    +    val minIters = if (overrideNumIters != 0) overrideNumIters else minNumIters
    +    val minDuration = if (overrideNumIters != 0) 0.seconds.fromNow else minTime.fromNow
    +    val runTimes = ArrayBuffer[Long]()
    +    var i = 0
    +    while (i < minIters || !minDuration.isOverdue) {
    +      val timer = new Benchmark.Timer(i)
    +      f(timer)
    +      val runTime = timer.totalTime()
    +      if (i > 0) {
    +        runTimes += runTime
    --- End diff --
    
    It is quite likely that we will also add unoptimized results here. Is that a problem? Or are we only interested in the `best` runtime?
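    To make the concern concrete, here is a minimal sketch (with made-up timings, not measurements from this PR) of how a few slow, not-yet-JIT-compiled early iterations pull up the average while leaving the best time untouched:

    ```scala
    // Hypothetical illustration: early "unoptimized" runs skew avg but not best.
    object AvgVsBest {
      def main(args: Array[String]): Unit = {
        // Simulated per-iteration times in ns: the first two are slow
        // (interpreted / not yet JIT-compiled), the rest are steady-state.
        val runTimes = Seq(900L, 850L, 210L, 205L, 202L, 201L)
        val avg = runTimes.sum / runTimes.size // pulled up by the slow runs
        val best = runTimes.min                // unaffected by them
        println(s"avg = $avg ns, best = $best ns") // prints "avg = 428 ns, best = 201 ns"
      }
    }
    ```

    So if the reported metric is `best`, a few unoptimized entries in `runTimes` are harmless; if it is the average, they matter.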

