[jira] [Commented] (SPARK-19131) Support "alter table drop partition [if exists]"
[ https://issues.apache.org/jira/browse/SPARK-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940030#comment-15940030 ]

Goun Na commented on SPARK-19131:
---------------------------------

I can reproduce it when "if exists" is omitted.

{code}
sql("alter table t drop partition (p=1)")
{code}

> Support "alter table drop partition [if exists]"
> ------------------------------------------------
>
>                 Key: SPARK-19131
>                 URL: https://issues.apache.org/jira/browse/SPARK-19131
>             Project: Spark
>          Issue Type: New Feature
>    Affects Versions: 2.1.0
>            Reporter: lichenglin
>
> {code}
> val parts = client.getPartitions(hiveTable, s.asJava).asScala
> if (parts.isEmpty && !ignoreIfNotExists) {
>   throw new AnalysisException(
>     s"No partition is dropped. One partition spec '$s' does not exist in table '$table' " +
>     s"database '$db'")
> }
> parts.map(_.getValues)
> {code}
> As of 2.1.0, drop partition throws an exception when there is no partition to drop.
> I notice there is a parameter named ignoreIfNotExists, but I don't know how to set it.
> Maybe we can implement "alter table drop partition [if exists]".

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-19131) Support "alter table drop partition [if exists]"
[ https://issues.apache.org/jira/browse/SPARK-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810768#comment-15810768 ]

lichenglin commented on SPARK-19131:
------------------------------------

Yes
[jira] [Commented] (SPARK-19131) Support "alter table drop partition [if exists]"
[ https://issues.apache.org/jira/browse/SPARK-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810740#comment-15810740 ]

Dongjoon Hyun commented on SPARK-19131:
---------------------------------------

Maybe, do you mean the following Hive syntax?

{code}
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
  [IGNORE PROTECTION] [PURGE];
-- (Note: PURGE available in Hive 1.2.0 and later,
--  IGNORE PROTECTION not available 2.0.0 and later)
{code}

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-DropPartitions
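The behavior the quoted snippet implements can be illustrated without Spark. Below is a minimal, self-contained sketch of the same guard: dropping by a partition spec fails when nothing matches unless {{ignoreIfNotExists}} (what {{IF EXISTS}} would set) is true. The names {{PartitionSpec}} and {{dropPartitions}} are illustrative, not Spark's actual API, and a plain {{IllegalArgumentException}} stands in for {{AnalysisException}}.

{code}
// Hypothetical stand-in for a Hive partition: column -> value.
case class PartitionSpec(values: Map[String, String])

// Mirrors the quoted guard: if no partition matches the spec and
// ignoreIfNotExists is false, fail; otherwise drop the matches.
def dropPartitions(
    existing: Seq[PartitionSpec],
    spec: Map[String, String],
    ignoreIfNotExists: Boolean): Seq[PartitionSpec] = {
  val matching = existing.filter { p =>
    spec.forall { case (k, v) => p.values.get(k).contains(v) }
  }
  if (matching.isEmpty && !ignoreIfNotExists) {
    throw new IllegalArgumentException(
      s"No partition is dropped. Partition spec '$spec' does not exist")
  }
  existing.filterNot(matching.contains)
}
{code}

With {{ignoreIfNotExists = true}}, a spec that matches nothing is a no-op; with {{false}}, it throws, which is what users hit today when {{IF EXISTS}} is omitted.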