abhishekagarwal87 commented on a change in pull request #12163:
URL: https://github.com/apache/druid/pull/12163#discussion_r792364174
##########
File path: sql/pom.xml
##########
@@ -255,6 +260,140 @@
</execution>
</executions>
</plugin>
+
+ <plugin>
Review comment:
nit - are these plugins arranged in order of execution? It would be nice
if they were, since that makes the build easier to follow.
##########
File path: sql/pom.xml
##########
@@ -255,6 +260,140 @@
</execution>
</executions>
</plugin>
+
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-dependency-plugin</artifactId>
+ <executions>
+ <execution>
+ <!-- Extract parser grammar template from Apache Calcite and put
Review comment:
can you add similar comments to other plugins that are added in this PR?
##########
File path: sql/src/main/codegen/config.fmpp
##########
@@ -0,0 +1,433 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
Review comment:
In this file, can you add comments around the custom changes that we made?
##########
File path: sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
##########
@@ -765,13 +785,53 @@ static ParsedNodes create(final SqlNode node) throws ValidationException
if (query.getKind() == SqlKind.INSERT) {
insert = (SqlInsert) query;
query = insert.getSource();
+
+ // Processing to be done when the original query has either of the PARTITION BY or CLUSTER BY clause
+ if (insert instanceof DruidSqlInsert) {
+ DruidSqlInsert druidSqlInsert = (DruidSqlInsert) insert;
+
+ ingestionGranularity = druidSqlInsert.getPartitionBy();
+
+ if (druidSqlInsert.getClusterBy() != null) {
+ // If we have a CLUSTER BY clause, extract the information in that CLUSTER BY and create a new SqlOrderBy node
+ SqlNode offset = null;
+ SqlNode fetch = null;
+ SqlNodeList orderByList = null;
+
+ if (query instanceof SqlOrderBy) {
+ SqlOrderBy sqlOrderBy = (SqlOrderBy) query;
+ // Extract the query present inside the SqlOrderBy (which is free of ORDER BY, OFFSET and FETCH clauses)
+ query = sqlOrderBy.query;
+
+ offset = sqlOrderBy.offset;
+ fetch = sqlOrderBy.fetch;
+ orderByList = sqlOrderBy.orderList;
+ // If the orderList is non-empty (i.e. there existed an ORDER BY clause in the query) and CLUSTER BY clause
+ // is also non-empty, throw an error
+ if (!(orderByList == null || orderByList.equals(SqlNodeList.EMPTY))
+ && druidSqlInsert.getClusterBy() != null) {
+ throw new ValidationException(
+ "Cannot have both ORDER BY and CLUSTER BY clauses in the same INSERT query");
+ }
+ }
+ // Creates a new SqlOrderBy query, which may have our CLUSTER BY overwritten
+ query = new SqlOrderBy(
Review comment:
I am wondering whether the SQL layer is the right place to do this
transformation, or whether we should just leave it to the native layer to use
orderBy and clusterBy together. My assumption behind this transformation is
that we would like the query results to be ordered on the same dimensions that
we want to use to arrange data in segments. Maybe it's something that
QueryMaker should decide?
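For illustration only (plain Java; the class and method names are hypothetical, not Druid API): if the native layer were to reconcile the two clauses itself instead of the SQL layer rewriting the query into a `SqlOrderBy` node, one simple policy would be to append the CLUSTER BY keys after any explicit ORDER BY keys, de-duplicating repeats:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch, not Druid code: one way a native-layer component
// (e.g. QueryMaker) could combine ORDER BY and CLUSTER BY keys.
class OrderingKeys
{
  // Appends clusterBy keys after the explicit orderBy keys, dropping duplicates
  // while preserving first-seen order.
  static List<String> combine(List<String> orderBy, List<String> clusterBy)
  {
    LinkedHashSet<String> keys = new LinkedHashSet<>(orderBy);
    keys.addAll(clusterBy);
    return new ArrayList<>(keys);
  }
}
```

Under this policy, an explicit ORDER BY would still win on the leading keys, and CLUSTER BY would only refine the tail ordering, which is one possible answer to the "who decides" question raised above.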
##########
File path: sql/src/main/java/org/apache/druid/sql/calcite/parser/DruidSqlInsert.java
##########
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.sql.calcite.parser;
+
+import org.apache.calcite.sql.SqlInsert;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlWriter;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+/**
+ * Extends the INSERT call to hold custom parameters specific to Druid, i.e. PARTITION BY and CLUSTER BY.
+ * This class extends {@link SqlInsert} so that the node can be used for further conversion.
+ */
+public class DruidSqlInsert extends SqlInsert
+{
+ // Unsure if this should be kept as is, but this allows reusing super.unparse
+ public static final SqlOperator OPERATOR = SqlInsert.OPERATOR;
+
+ private final SqlNode partitionBy;
+ private final SqlNodeList clusterBy;
+
+ public DruidSqlInsert(
+ @Nonnull SqlInsert insertNode,
+ @Nullable SqlNode partitionBy,
+ @Nullable SqlNodeList clusterBy
+ )
+ {
+ super(
+ insertNode.getParserPosition(),
+ (SqlNodeList) insertNode.getOperandList().get(0), // No better getter to extract this
+ insertNode.getTargetTable(),
+ insertNode.getSource(),
+ insertNode.getTargetColumnList()
+ );
+ this.partitionBy = partitionBy;
+ this.clusterBy = clusterBy;
+ }
+
+ @Nullable
+ public SqlNodeList getClusterBy()
+ {
+ return clusterBy;
+ }
+
+ @Nullable
+ public String getPartitionBy()
+ {
+ if (partitionBy == null) {
+ return null;
+ }
+ return SqlLiteral.unchain(partitionBy).toValue();
+ }
+
+ @Nonnull
+ @Override
+ public SqlOperator getOperator()
+ {
+ return OPERATOR;
+ }
+
+ @Override
+ public void unparse(SqlWriter writer, int leftPrec, int rightPrec)
+ {
+ super.unparse(writer, leftPrec, rightPrec);
+ if (partitionBy != null) {
+ writer.keyword("PARTITION");
+ writer.keyword("BY");
+ writer.keyword(getPartitionBy());
+ }
+ if (clusterBy != null) {
+ writer.sep("CLUSTER BY");
+ SqlWriter.Frame frame = writer.startList("", "");
+ for (SqlNode clusterByOpts : clusterBy.getList()) {
+ clusterByOpts.unparse(writer, leftPrec, rightPrec);
+ }
+ writer.endList(frame);
+ }
+ }
Review comment:
that's not needed, since the superclass `SqlNode` already implements
`toString()` as below:
```
public String toString() {
return toSqlString(null).getSql();
}
```
`toSqlString` will eventually call the `unparse` method implemented here.
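A minimal, self-contained sketch of the pattern the reviewer is pointing at (plain Java, no Calcite dependency; the class names `Node` and `InsertNode` are illustrative): the base class implements `toString()` once by delegating to the rendering method, so a subclass like `DruidSqlInsert` only needs to override `unparse`:

```java
// Illustrative only: mirrors how SqlNode.toString() delegates to
// toSqlString()/unparse(), so subclasses need not override toString().
abstract class Node
{
  @Override
  public final String toString()
  {
    StringBuilder writer = new StringBuilder();
    unparse(writer);
    return writer.toString();
  }

  // Subclasses render themselves here, analogous to SqlNode.unparse().
  abstract void unparse(StringBuilder writer);
}

class InsertNode extends Node
{
  @Override
  void unparse(StringBuilder writer)
  {
    writer.append("INSERT INTO dst SELECT * FROM src PARTITION BY 'day'");
  }
}
```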
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]