gianm commented on a change in pull request #12163:
URL: https://github.com/apache/druid/pull/12163#discussion_r801035605
##########
File path:
sql/src/main/java/org/apache/druid/sql/calcite/parser/DruidSqlParserUtils.java
##########
@@ -38,20 +39,22 @@
public class DruidSqlParserUtils
{
+
+ private static final Logger log = new Logger(DruidSqlParserUtils.class);
+
   /**
    * Delegates to {@code convertSqlNodeToGranularity} and converts the exceptions to {@link ParseException}
    * with the underlying message
    */
   public static Granularity convertSqlNodeToGranularityThrowingParseExceptions(SqlNode sqlNode) throws ParseException
   {
-    Granularity g;
     try {
-      g = convertSqlNodeToGranularity(sqlNode);
+      return convertSqlNodeToGranularity(sqlNode);
     }
     catch (Exception e) {
+      log.error(e, StringUtils.format("Unable to convert %s to a valid granularity.", sqlNode.toString()));
Review comment:
If this error occurs, it's due to user error on one particular query, so
it shouldn't be an ERROR that appears in the server logs. That should be
reserved for really important stuff that cluster operators need to worry about.
However, there should be _some_ way to get the stack trace, and we generally
_don't_ want to return it directly to users of the SQL API.
So in a situation like this I'd do `log.debug` instead of `log.error`, so
server admins can see the stack trace if they really need to, but won't see it
by default. That balances all of the relevant desires: server admins shouldn't
see "user error" stuff by default, and users shouldn't see stack traces as part
of normal error messages.
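The debug-vs-error distinction can be sketched with a small, self-contained example using `java.util.logging` as a stand-in for Druid's own `Logger` (the class and method names below are illustrative, not Druid's API): a debug-level message is suppressed under the default configuration and only becomes visible once an admin explicitly opts in.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelDemo {
    // Returns {visibleByDefault, visibleAfterOptIn} for debug-level (FINE) messages.
    static boolean[] check() {
        Logger log = Logger.getLogger("demo");
        // Under the JDK's default configuration the effective level is INFO,
        // so FINE ("debug") messages are suppressed.
        boolean visibleByDefault = log.isLoggable(Level.FINE);
        // A server admin can opt in to debug output by lowering the level.
        log.setLevel(Level.FINE);
        boolean visibleAfterOptIn = log.isLoggable(Level.FINE);
        return new boolean[]{visibleByDefault, visibleAfterOptIn};
    }

    public static void main(String[] args) {
        boolean[] r = check();
        System.out.println("debug visible by default: " + r[0]);
        System.out.println("debug visible after opt-in: " + r[1]);
    }
}
```

With `log.debug`, the stack trace stays out of both the default server logs and the user-facing error message, but remains recoverable on demand.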
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]