[https://issues.apache.org/jira/browse/SPARK-35531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843619#comment-17843619]
Sandeep Katta commented on SPARK-35531:
---------------------------------------
[~angerszhuuu], I see the same issue in the ALTER TABLE command. I tested on
Spark 3.5.0 and the issue still exists:
{code:java}
CREATE TABLE TEST1(
V1 BIGINT,
S1 INT)
PARTITIONED BY (PK BIGINT)
CLUSTERED BY (V1)
SORTED BY (S1)
INTO 200 BUCKETS
STORED AS PARQUET;
ALTER TABLE test1 SET TBLPROPERTIES ('comment' = 'This is a new comment.');
{code}
This fails with:
{code:java}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not part of the table columns ([FieldSchema(name:v1, type:bigint, comment:null), FieldSchema(name:s1, type:int, comment:null)]
	at org.apache.hadoop.hive.ql.metadata.Table.setBucketCols(Table.java:552)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1145)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$alterTable$1(HiveClientImpl.scala:594)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:303)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:234)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:233)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:283)
	at org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:587)
	at org.apache.spark.sql.hive.client.HiveClient.alterTable(HiveClient.scala:124)
	at org.apache.spark.sql.hive.client.HiveClient.alterTable$(HiveClient.scala:123)
	at org.apache.spark.sql.hive.client.HiveClientImpl.alterTable(HiveClientImpl.scala:93)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$alterTable$1(HiveExternalCatalog.scala:687)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
	... 62 more
{code}
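From the stack trace, the failure happens when Spark converts its catalog table to a Hive Table: the metastore stores the field schema with lower-cased names (v1, s1), while the bucket spec still carries the upper-case V1 from the original DDL, so Hive's case-sensitive membership check rejects it. A minimal, self-contained sketch of that mismatch (hypothetical class and variable names, not Spark's or Hive's actual code):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the case mismatch behind the HiveException;
// simplified for clarity, not Spark's actual implementation.
public class BucketCaseMismatch {
    public static void main(String[] args) {
        // Field names as stored by the Hive metastore (lower-cased).
        List<String> tableColumns = Arrays.asList("v1", "s1");
        // Bucket column as written in the CREATE TABLE statement.
        String bucketCol = "V1";

        // Case-sensitive check (as in Hive's Table.setBucketCols): fails.
        boolean strictMatch = tableColumns.contains(bucketCol);
        // Case-insensitive check: succeeds, matching SQL identifier semantics.
        boolean lenientMatch = tableColumns.stream()
                .anyMatch(c -> c.equalsIgnoreCase(bucketCol));

        System.out.println("strict=" + strictMatch + " lenient=" + lenientMatch);
        // prints "strict=false lenient=true"
    }
}
```

A practical workaround until the fix reaches your version is to declare the table with lower-case column names so the bucket spec and the stored schema agree.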
> Can not insert into hive bucket table if create table with upper case schema
> ----------------------------------------------------------------------------
>
> Key: SPARK-35531
> URL: https://issues.apache.org/jira/browse/SPARK-35531
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.0.0, 3.1.1, 3.2.0
> Reporter: Hongyi Zhang
> Assignee: angerszhu
> Priority: Major
> Fix For: 3.3.0, 3.1.4
>
>
> create table TEST1(
> V1 BIGINT,
> S1 INT)
> partitioned by (PK BIGINT)
> clustered by (V1)
> sorted by (S1)
> into 200 buckets
> STORED AS PARQUET;
>
> insert into test1
> select
> * from values(1,1,1);
>
>
> org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not
> part of the table columns ([FieldSchema(name:v1, type:bigint, comment:null),
> FieldSchema(name:s1, type:int, comment:null)]
> org.apache.spark.sql.AnalysisException:
> org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not
> part of the table columns ([FieldSchema(name:v1, type:bigint, comment:null),
> FieldSchema(name:s1, type:int, comment:null)]
--
This message was sent by Atlassian Jira
(v8.20.10#820010)