abhishekagarwal87 commented on code in PR #12923:
URL: https://github.com/apache/druid/pull/12923#discussion_r950157225
##########
sql/src/main/java/org/apache/druid/sql/SqlExecutionReporter.java:
##########
@@ -105,9 +111,16 @@ public void emit()
if (bytesWritten >= 0) {
emitter.emit(metricBuilder.build("sqlQuery/bytes", bytesWritten));
}
+ if (planningTimeNanos >= 0) {
+ emitter.emit(metricBuilder.build(
+ "sqlQuery/planningTimeMs",
+ TimeUnit.NANOSECONDS.toMillis(planningTimeNanos)
+ ));
+ }
final Map<String, Object> statsMap = new LinkedHashMap<>();
    statsMap.put("sqlQuery/time", TimeUnit.NANOSECONDS.toMillis(queryTimeNs));
+   statsMap.put("sqlQuery/planningTimeMs", TimeUnit.NANOSECONDS.toMillis(planningTimeNanos));
Review Comment:
FWIW, I have seen many queries take significant planning time. For example, we
have seen this in particular with queries that have a large IN clause or many
UNION operators. When debugging a slow query, we often don't have direct
information about the time spent in planning; we tend to infer it by looking at
other metrics and comparing them with the overall execution time. Having this
metric available firsthand would help with that troubleshooting: if planning
time is too high, we can either fix it through a code change or adjust the
query in some way.
So IMO there is definitely utility in having this metric.
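As a self-contained illustration of the logic the diff adds (hypothetical class and method names, not Druid's actual `SqlExecutionReporter`): planning time is tracked in nanoseconds, a negative value signals "not measured" and suppresses the stat, and the value is converted to milliseconds only at emit time.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class PlanningTimeExample
{
  /**
   * Builds the per-query stats map, mirroring the patch: always report
   * sqlQuery/time, and report sqlQuery/planningTimeMs only when planning
   * time was actually measured (non-negative).
   */
  public static Map<String, Object> buildStats(long queryTimeNs, long planningTimeNanos)
  {
    final Map<String, Object> statsMap = new LinkedHashMap<>();
    statsMap.put("sqlQuery/time", TimeUnit.NANOSECONDS.toMillis(queryTimeNs));
    if (planningTimeNanos >= 0) {
      // A negative value means planning time was never recorded; skip it.
      statsMap.put("sqlQuery/planningTimeMs", TimeUnit.NANOSECONDS.toMillis(planningTimeNanos));
    }
    return statsMap;
  }

  public static void main(String[] args)
  {
    // 25 ms of planning inside a 100 ms query.
    Map<String, Object> stats = buildStats(100_000_000L, 25_000_000L);
    System.out.println(stats.get("sqlQuery/time"));           // 100
    System.out.println(stats.get("sqlQuery/planningTimeMs")); // 25

    // Planning time unmeasured: the stat is absent rather than misleading.
    System.out.println(buildStats(100_000_000L, -1L).containsKey("sqlQuery/planningTimeMs")); // false
  }
}
```

Keeping the raw value in nanoseconds and converting at the edge avoids losing precision if the metric's unit ever changes.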
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]