[GitHub] spark pull request #13822: [SPARK-16115][SQL] Change output schema to be par...
Github user skambha commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79265065

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
```diff
@@ -660,6 +662,10 @@ case class ShowPartitionsCommand(
   override def run(sparkSession: SparkSession): Seq[Row] = {
     val catalog = sparkSession.sessionState.catalog
+    if (!catalog.tableExists(table)) {
+      throw new AnalysisException(s" Table does not exist")
```
--- End diff --

Changes done. I used `table.unquotedString` (similar to what was used in the nearby code).

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user skambha commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79265007

--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
```diff
@@ -595,6 +595,19 @@ class HiveDDLSuite
     }
   }

+  test("test show partitions") {
+    val message = intercept[AnalysisException] {
+      sql("SHOW PARTITIONS default.nonexistentTable")
+    }.getMessage
+    assert(message.contains("Table does not exist"))
+
+    withTable("t1") {
+      sql("CREATE TABLE t1 (key STRING, value STRING) PARTITIONED BY (ds STRING)")
+      sql("ALTER TABLE t1 ADD PARTITION (ds = '1')")
+      assert(sql(" SHOW PARTITIONS t1").schema.getFieldIndex("partition") == Some(0))
```
--- End diff --

My original reasoning for adding this test was to exercise the code change that makes the output attribute reference name for the SHOW PARTITIONS command "partition"; this test covers that scenario. Without the fix it was "result"; with the fix it is "partition". It turns out another PR already made the code change, but no tests were added in that PR, so I have kept this as a separate test for now. If you prefer, we can remove it.
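As an aside, the `getFieldIndex` assertion in the diff above (locating the output column by name in an ordered schema) can be illustrated with a self-contained sketch. `ToySchema` is a hypothetical stand-in for Spark's `StructType`, defined here only so the example runs on its own:

```scala
// ToySchema is a hypothetical stand-in for Spark's StructType, used only to
// illustrate what getFieldIndex("partition") checks in the test above.
case class ToySchema(fieldNames: Seq[String]) {
  // Return the zero-based position of a column, or None if it is absent.
  def getFieldIndex(name: String): Option[Int] = {
    val i = fieldNames.indexOf(name)
    if (i >= 0) Some(i) else None
  }
}

object SchemaDemo {
  def main(args: Array[String]): Unit = {
    // With the fix, the SHOW PARTITIONS output column is named "partition".
    val schema = ToySchema(Seq("partition"))
    println(schema.getFieldIndex("partition")) // Some(0)
    println(schema.getFieldIndex("result"))    // None: the old name is gone
  }
}
```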
Github user skambha commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79264648

--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
```diff
@@ -595,6 +595,19 @@ class HiveDDLSuite
     }
   }

+  test("test show partitions") {
```
--- End diff --

Done. I renamed it per your suggestion.
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79247914

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
```diff
@@ -660,6 +662,10 @@ case class ShowPartitionsCommand(
   override def run(sparkSession: SparkSession): Seq[Row] = {
     val catalog = sparkSession.sessionState.catalog
+    if (!catalog.tableExists(table)) {
+      throw new AnalysisException(s" Table does not exist")
```
--- End diff --

Can you include the table name in this message? Also, please remove the stray space at the beginning. Something like:

```
s"Table $table does not exist"
```
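For illustration, a runnable sketch of how the suggested message reads once the identifier is interpolated. `TableIdentifier` below is a minimal local stand-in for Spark's class of the same name, defined here only so the example compiles on its own:

```scala
// Minimal stand-in for org.apache.spark.sql.catalyst.TableIdentifier,
// defined locally so this sketch is self-contained.
case class TableIdentifier(table: String, database: Option[String] = None) {
  def unquotedString: String = database.map(_ + "." + table).getOrElse(table)
}

object MessageDemo {
  // The reviewer's suggestion: interpolate the table name, no leading space.
  def errorMessage(table: TableIdentifier): String =
    s"Table ${table.unquotedString} does not exist"

  def main(args: Array[String]): Unit = {
    println(errorMessage(TableIdentifier("t1", Some("default"))))
    // prints: Table default.t1 does not exist
  }
}
```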
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79247651

--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
```diff
@@ -595,6 +595,19 @@ class HiveDDLSuite
     }
   }

+  test("test show partitions") {
```
--- End diff --

Can you call this something more specific, like `show partitions on non-existent table`?
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/13822#discussion_r79247588

--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
```diff
@@ -595,6 +595,19 @@ class HiveDDLSuite
     }
   }

+  test("test show partitions") {
+    val message = intercept[AnalysisException] {
+      sql("SHOW PARTITIONS default.nonexistentTable")
+    }.getMessage
+    assert(message.contains("Table does not exist"))
+
+    withTable("t1") {
+      sql("CREATE TABLE t1 (key STRING, value STRING) PARTITIONED BY (ds STRING)")
+      sql("ALTER TABLE t1 ADD PARTITION (ds = '1')")
+      assert(sql(" SHOW PARTITIONS t1").schema.getFieldIndex("partition") == Some(0))
```
--- End diff --

I'm not sure this is worth testing, since this is just Hive behavior.
GitHub user skambha opened a pull request: https://github.com/apache/spark/pull/13822

[SPARK-16115][SQL] Change output schema to be partition for SHOW PARTITIONS command and …

## What changes were proposed in this pull request?

Changes include:
1. For the SHOW PARTITIONS command, the column name in the output is changed from 'result' to 'partition', which is more descriptive and matches Hive.
2. Corner case: improve the error message when calling SHOW PARTITIONS on a non-existent table, by adding a check for whether the table exists before any other analysis.

Without the fix:
```
scala> spark.sql("show partitions t1")
org.apache.spark.sql.AnalysisException: SHOW PARTITIONS is not allowed on a table that is not partitioned: default.t1;
```

With the fix:
```
scala> spark.sql("SHOW PARTITIONS T1").show
org.apache.spark.sql.AnalysisException: Table does not exist;
```

## How was this patch tested?

- Added unit tests to cover these two scenarios.
- The following test suites were run successfully: hive/test, sql/test, catalyst/test.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/skambha/spark showpart16115

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/13822.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #13822
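As a rough sketch of the corner-case fix described above (fail fast when the table is missing), here is a self-contained example with a toy in-memory catalog in place of `sparkSession.sessionState.catalog`; all names are illustrative, not Spark's actual API:

```scala
// Toy stand-ins: a local AnalysisException and an in-memory catalog take the
// place of Spark's real classes so this sketch runs on its own.
class AnalysisException(msg: String) extends Exception(msg)

class ToyCatalog(tables: Map[String, Seq[String]]) {
  def tableExists(name: String): Boolean = tables.contains(name)
  def listPartitions(name: String): Seq[String] = tables(name)
}

object ShowPartitionsDemo {
  // Mirrors the shape of the fix to ShowPartitionsCommand.run:
  // check existence first, then list partitions.
  def run(catalog: ToyCatalog, table: String): Seq[String] = {
    if (!catalog.tableExists(table)) {
      throw new AnalysisException(s"Table $table does not exist")
    }
    catalog.listPartitions(table)
  }

  def main(args: Array[String]): Unit = {
    val catalog = new ToyCatalog(Map("t1" -> Seq("ds=1")))
    println(run(catalog, "t1").mkString(", ")) // ds=1
    try run(catalog, "t2")
    catch { case e: AnalysisException => println(e.getMessage) }
  }
}
```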