tkonolige commented on code in PR #11465:
URL: https://github.com/apache/tvm/pull/11465#discussion_r904005004


##########
include/tvm/runtime/profiling.h:
##########
@@ -554,10 +554,16 @@ PackedFunc ProfileFunction(Module mod, std::string func_name, int device_type, i
  *        minimum duration requirement of one `repeat`.
  *        i.e., When the run time of one `repeat` falls below this time,
  *        the `number` parameter will be automatically increased.
- * \param f_preproc The function to be executed before we excetute time evaluator.
+ * \param cooldown_interval_ms The cooldown interval in milliseconds between the number of repeats
+ *        defined by `repeats_to_cooldown`.
+ * \param repeats_to_cooldown The number of repeats before the
+ *        cooldown is activated.
+ * \param f_preproc The function to be executed before we excetute time

Review Comment:
   ```suggestion
    * \param f_preproc The function to be executed before we execute time
   ```



##########
src/runtime/crt/common/crt_runtime_api.c:
##########
@@ -528,23 +539,34 @@ tvm_crt_error_t RunTimeEvaluator(tvm_function_index_t function_index, TVMValue*
   }
   result_byte_arr->data = NULL;
   size_t data_size = sizeof(double) * g_time_evaluator_state.repeat;
-  err = TVMPlatformMemoryAllocate(data_size, result_byte_dev, (void*)&result_byte_arr->data);
+  err = TVMPlatformMemoryAllocate(data_size, result_byte_dev, (void**)&result_byte_arr->data);
   if (err != kTvmErrorNoError) {
     goto release_and_return;
   }
   result_byte_arr->size = data_size;
+
+  // skip first time call, to activate lazy compilation components.
+  err = TVMFuncCall(g_time_evaluator_state.func_to_time, args, type_codes, num_args, ret_val,
+                    ret_type_code);
+  if (err != kTvmErrorNoError) {
+    goto release_and_return;
+  }
+
   double min_repeat_seconds = ((double)g_time_evaluator_state.min_repeat_ms) / 1000;
   double* iter = (double*)result_byte_arr->data;
   for (int i = 0; i < g_time_evaluator_state.repeat; i++) {
-    double repeat_res_seconds = 0.0;
-    int exec_count = 0;
+    double curr_res_seconds = 0.0;
     // do-while structure ensures we run even when `min_repeat_ms` isn't set (i.e., is 0).
     do {
+      if (curr_res_seconds > 0.0) {
+        double a = (min_repeat_seconds / (curr_res_seconds / g_time_evaluator_state.number) + 1);
+        double b = g_time_evaluator_state.number * 1.618;
+        g_time_evaluator_state.number = (int64_t)(a > b ? a : b);  // 1.618 is chosen by random

Review Comment:
   ```suggestion
           g_time_evaluator_state.number = (int64_t)(a > b ? a : b);
   ```
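
   Side note for readers of this thread: the update rule quoted in the diff grows `number` to whichever is larger, the count projected to reach `min_repeat_ms`, or roughly 1.618x geometric growth. A minimal standalone sketch of that rule (the function name `NextNumber` is illustrative, not TVM's API):

   ```cpp
   #include <algorithm>
   #include <cassert>
   #include <cstdint>

   // Illustrative sketch of the `number` update rule in the diff above.
   // `curr_res_seconds` is the measured duration of the last `number` runs.
   int64_t NextNumber(double min_repeat_seconds, double curr_res_seconds, int64_t number) {
     // Runs needed to reach min_repeat_seconds at the observed per-run cost, plus one.
     double projected = min_repeat_seconds / (curr_res_seconds / number) + 1;
     // Geometric fallback so `number` keeps growing even for very cheap functions.
     double geometric = number * 1.618;
     return static_cast<int64_t>(std::max(projected, geometric));
   }
   ```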



##########
src/runtime/graph_executor/debug/graph_executor_debug.cc:
##########
@@ -55,48 +56,54 @@ class GraphExecutorDebug : public GraphExecutor {
   *        By default, one `repeat` contains `number` runs. If this parameter is set,
    *        the parameters `number` will be dynamically adjusted to meet the
    *        minimum duration requirement of one `repeat`.
-   * \return Comma seperated string containing the elapsed time per op for the last
-   *         iteration only, because returning a long string over rpc can be expensive.
+   * \param cooldown_interval_ms The cooldown interval in milliseconds between the number of repeats
+   *        defined by `repeats_to_cooldown`.
+   * \param repeats_to_cooldown The number of repeats before the
+   *        cooldown is activated.
+   * \return Comma separated string containing the elapsed time per op for
+   *         the last iteration only, because returning a long string over rpc can be expensive.

Review Comment:
   It seems like you've changed this to return all elapsed times per op instead of just the last one? The comment seems to indicate this may be problematic?



##########
include/tvm/runtime/profiling.h:
##########
@@ -554,10 +554,16 @@ PackedFunc ProfileFunction(Module mod, std::string func_name, int device_type, i
  *        minimum duration requirement of one `repeat`.
  *        i.e., When the run time of one `repeat` falls below this time,
  *        the `number` parameter will be automatically increased.
- * \param f_preproc The function to be executed before we excetute time evaluator.
+ * \param cooldown_interval_ms The cooldown interval in milliseconds between the number of repeats
+ *        defined by `repeats_to_cooldown`.
+ * \param repeats_to_cooldown The number of repeats before the
+ *        cooldown is activated.

Review Comment:
   With these two new parameters, this function is getting pretty complicated. Could you add some pseudocode to the comment here that shows how all the parameters interact? Maybe something like:
   ```
   f() // warmup
   for i in 0..repeat {
     start timer
     for j in 0..number {
       f()
     }
     stop timer
     if i % repeats_to_cooldown == 0 {
       sleep(cooldown_interval_ms)
     }
   }
   ```
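
   For readers of this thread, that pseudocode can be rendered as a rough C++ sketch (names like `TimeWithCooldown` and the callable `f` are hypothetical stand-ins, not TVM's actual API; `repeats_to_cooldown` is assumed nonzero):

   ```cpp
   #include <chrono>
   #include <functional>
   #include <thread>
   #include <vector>

   // Rough rendering of the pseudocode above: warm up once, time `number` runs
   // per repeat, and sleep for `cooldown_interval_ms` every `repeats_to_cooldown`
   // repeats (following the pseudocode's `i % repeats_to_cooldown == 0` check).
   std::vector<double> TimeWithCooldown(const std::function<void()>& f, int repeat, int number,
                                        int repeats_to_cooldown, int cooldown_interval_ms) {
     std::vector<double> per_repeat_seconds;
     f();  // warmup: trigger lazy compilation etc. outside the timed region
     for (int i = 0; i < repeat; ++i) {
       auto start = std::chrono::steady_clock::now();
       for (int j = 0; j < number; ++j) f();
       std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
       per_repeat_seconds.push_back(elapsed.count());
       if (i % repeats_to_cooldown == 0) {
         std::this_thread::sleep_for(std::chrono::milliseconds(cooldown_interval_ms));
       }
     }
     return per_repeat_seconds;
   }
   ```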



##########
src/runtime/graph_executor/debug/graph_executor_debug.cc:
##########
@@ -176,10 +185,18 @@ class GraphExecutorDebug : public GraphExecutor {
     }
     TVMRetValue rv;
     time_eval.CallPacked(TVMArgs(values.get(), type_codes.get(), num_flat_args), &rv);
-    std::string results = rv.operator std::string();
-    const double* results_arr = reinterpret_cast<const 
double*>(results.data());
-    LOG(INFO) << "Got op timing: " << results_arr[0];
-    return results_arr[0];
+    std::string results_str = rv.operator std::string();
+    const double* blob_ptr = reinterpret_cast<const double*>(results_str.data());
+    for (int i = 0; i < repeat; ++i, ++blob_ptr) {
+      results[i] = *blob_ptr;
+    }
+
+    std::ostringstream os;
+    for (auto& repeat_data : results) {
+      os << std::to_string(repeat_data) + ", ";

Review Comment:
   ```suggestion
         os << std::to_string(repeat_data) << ", ";
   ```
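
   For context, the edited block decodes the time evaluator's return value, a raw blob of `repeat` doubles packed into a `std::string`, and re-serializes it as comma-separated text. A self-contained sketch of that decode step (the name `BlobToCsv` is illustrative, not TVM's API):

   ```cpp
   #include <cstring>
   #include <sstream>
   #include <string>
   #include <vector>

   // Decode `repeat` doubles packed byte-for-byte into `blob`, storing them in
   // `results` and returning them as comma-separated text.
   std::string BlobToCsv(const std::string& blob, int repeat, std::vector<double>* results) {
     results->resize(repeat);
     std::memcpy(results->data(), blob.data(), sizeof(double) * repeat);
     std::ostringstream os;
     for (double repeat_data : *results) os << std::to_string(repeat_data) << ", ";
     return os.str();
   }
   ```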



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
