holdenk commented on a change in pull request #1948:
URL: https://github.com/apache/iceberg/pull/1948#discussion_r545474190



##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/parser/extensions/IcebergSparkSqlExtensionsParser.scala
##########
@@ -94,13 +94,20 @@ class IcebergSparkSqlExtensionsParser(delegate: ParserInterface) extends ParserI
    */
   override def parsePlan(sqlText: String): LogicalPlan = {
     val sqlTextAfterSubstitution = substitutor.substitute(sqlText)
-    if (sqlTextAfterSubstitution.toLowerCase(Locale.ROOT).trim().startsWith("call")) {
+    if (isIcebergCommand(sqlTextAfterSubstitution)) {
       parse(sqlTextAfterSubstitution) { parser => astBuilder.visit(parser.singleStatement()) }.asInstanceOf[LogicalPlan]
     } else {
       delegate.parsePlan(sqlText)
     }
   }
 
+  private def isIcebergCommand(sqlText: String): Boolean = {
+    val normalized = sqlText.toLowerCase(Locale.ROOT).trim()
+    normalized.startsWith("call") ||
+        (normalized.startsWith("alter table") && (
+            normalized.contains("add partition field") || normalized.contains("drop partition field")))
+  }

Review comment:
       So, this seems really unlikely and I can't think of a good way around
it, but if someone used "add partition field" as a column name in a Spark
ALTER TABLE statement, this check would give a false positive. That probably
doesn't matter, though, since even then the Iceberg parser is mostly a
superset of the delegate parser. Does that sound right, or am I off base
with that assumption?
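
       A minimal sketch of the scenario I have in mind (the table and column
names here are made up for illustration, and the helper mirrors the check in
the diff above rather than the exact code in the PR):

```scala
import java.util.Locale

object FalsePositiveSketch {
  // Same substring-based routing check as in the diff above.
  private def isIcebergCommand(sqlText: String): Boolean = {
    val normalized = sqlText.toLowerCase(Locale.ROOT).trim()
    normalized.startsWith("call") ||
        (normalized.startsWith("alter table") && (
            normalized.contains("add partition field") ||
            normalized.contains("drop partition field")))
  }

  def main(args: Array[String]): Unit = {
    // Plain Spark DDL on a contrived column literally named "add partition field".
    val sql = "ALTER TABLE db.t RENAME COLUMN `add partition field` TO renamed"
    // The substring check matches, so this statement would be routed to the
    // Iceberg extensions parser instead of the delegate Spark parser.
    assert(isIcebergCommand(sql))
  }
}
```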



