fsk119 commented on a change in pull request #15986:
URL: https://github.com/apache/flink/pull/15986#discussion_r638593315
##########
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##########
@@ -844,16 +843,19 @@ private Operation convertShowViews(SqlShowViews sqlShowViews) {
return new ShowViewsOperation();
}
- /** Convert EXPLAIN statement. */
- private Operation convertExplain(SqlExplain sqlExplain) {
- Operation operation = convertSqlQuery(sqlExplain.getExplicandum());
-
- if (sqlExplain.getDetailLevel() != SqlExplainLevel.EXPPLAN_ATTRIBUTES
- || sqlExplain.getDepth() != SqlExplain.Depth.PHYSICAL
- || sqlExplain.getFormat() != SqlExplainFormat.TEXT) {
- throw new TableException("Only default behavior is supported now, EXPLAIN PLAN FOR xx");
+ /** Convert RICH EXPLAIN statement. */
+ private Operation convertRichExplain(SqlRichExplain sqlExplain) {
+ Operation operation;
+ SqlNode sqlNode = sqlExplain.getStatement();
+ if (sqlNode instanceof RichSqlInsert) {
+ operation = convertSqlInsert((RichSqlInsert) sqlNode);
+ } else if (sqlNode instanceof SqlSelect) {
+ operation = convertSqlQuery(sqlExplain.getStatement());
Review comment:
Flink has its own logic for validating INSERT statements, which differs from Calcite's. Here we only validate the query part of the INSERT statement and, when the statement is an INSERT, additionally check whether the sink schema is the same as the query schema.
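To illustrate the kind of check described above, here is a minimal, hypothetical sketch of a sink-vs-query schema comparison. It does not use Flink's actual `TableSchema`/`ResolvedSchema` APIs; the `Field` record and `validateInsertSchema` helper are invented for this example, and the real validation in `SqlToOperationConverter` is more involved.

```java
import java.util.List;

public class SchemaCheckSketch {
    // Hypothetical field descriptor; Flink's real code uses its own schema classes.
    record Field(String name, String type) {}

    // Illustrative check: an INSERT is accepted only when the sink schema
    // matches the schema produced by the query, field by field.
    static void validateInsertSchema(List<Field> sinkSchema, List<Field> querySchema) {
        if (!sinkSchema.equals(querySchema)) {
            throw new IllegalStateException(
                    "Column types of query result and sink do not match: sink="
                            + sinkSchema + ", query=" + querySchema);
        }
    }

    public static void main(String[] args) {
        List<Field> sink = List.of(new Field("id", "INT"), new Field("name", "STRING"));
        List<Field> query = List.of(new Field("id", "INT"), new Field("name", "STRING"));
        validateInsertSchema(sink, query); // matching schemas: no exception
        System.out.println("schemas match");
    }
}
```

The point of the sketch is only the shape of the check: validate the query first, then compare its output schema against the sink's declared schema before accepting the INSERT.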