coolderli opened a new issue #4176:
URL: https://github.com/apache/iceberg/issues/4176


   I want to alter a column to make it NOT NULL, but I got the following exception:
   ```
   2022-02-21 10:07:53,952 INFO  [SparkSQLSessionManager-exec-pool: Thread-31932] org.apache.kyuubi.Logging:60  - Processing lipeidian:10034:workspace_10034.admin's query[b1087ff1-241f-410c-aacb-248a68d4526a]: RUNNING_STATE -> ERROR_STATE, statement: alter table monitor_sms_gateway_record_with_time_partition alter column id set not null, time taken: 0.123 seconds
   2022-02-21 10:07:53,953 ERROR [SparkSQLSessionManager-exec-pool: Thread-31932] org.apache.kyuubi.Logging:78  - Error operating EXECUTE_STATEMENT: org.apache.spark.sql.AnalysisException: Cannot change nullable column to non-nullable: id; line 1 pos 0;
   AlterTable org.apache.iceberg.spark.SparkCatalog@b0a970f, monitor_falcon.monitor_sms_gateway_record_with_time_partition, RelationV2[id#9255, task_id#9256, username#9257, token#9258, mobile#9259, content#9260, provider#9261, status#9262, count#9263, created#9264, type#9265, receive_time#9266, operator_rec_time#9267, start_time#9268, status_desc#9269, last_task_id#9270, record_date#9271] iceberg_zjyprc_hadoop.monitor_falcon.monitor_sms_gateway_record_with_time_partition, [org.apache.spark.sql.connector.catalog.TableChange$UpdateColumnNullability@a6c6]

        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$33(CheckAnalysis.scala:564)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:504)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:95)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:184)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:95)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:92)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:155)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:176)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:228)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:173)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:147)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:147)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:84)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:112)
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:78)
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:122)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   ```
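   For reference, this is a minimal reproduction of the failing statement, together with the opposite (constraint-relaxing) change, which — assuming I read the Spark DDL docs correctly — is the direction Spark does accept. The table name is taken from the log above:

   ```sql
   -- The statement from the log above; Spark's analyzer rejects it with
   -- "Cannot change nullable column to non-nullable":
   ALTER TABLE monitor_sms_gateway_record_with_time_partition
       ALTER COLUMN id SET NOT NULL;

   -- Widening the constraint (required -> nullable) is safe for existing
   -- data and appears to be the supported direction:
   ALTER TABLE monitor_sms_gateway_record_with_time_partition
       ALTER COLUMN id DROP NOT NULL;
   ```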
   
   I found the [doc](https://iceberg.apache.org/docs/latest/spark-ddl/#alter-table--alter-column), and the syntax appears to be supported there. However, I also found that the exception is in line with expectations in the test suite: https://github.com/apache/iceberg/blob/master/spark/v3.1/spark/src/test/java/org/apache/iceberg/spark/sql/TestAlterTable.java#L214 asserts exactly this failure. Is the documentation inconsistent with the actual behavior?
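   As a possible workaround (a sketch only — it assumes the Iceberg Java API's `UpdateSchema.requireColumn` together with `allowIncompatibleChanges()` behaves as documented, and that the `catalog` variable and class name here are placeholders of my own), it may be possible to bypass Spark DDL and make the column required through the table API directly:

   ```java
   import org.apache.iceberg.Table;
   import org.apache.iceberg.catalog.Catalog;
   import org.apache.iceberg.catalog.TableIdentifier;

   public class MakeColumnRequired {
       // "catalog" is whichever Catalog implementation you use
       // (HiveCatalog, HadoopCatalog, ...); shown as a parameter here.
       public static void makeIdRequired(Catalog catalog) {
           Table table = catalog.loadTable(
               TableIdentifier.of("monitor_falcon",
                   "monitor_sms_gateway_record_with_time_partition"));

           // Making an optional column required is an incompatible change,
           // so Iceberg requires opting in explicitly.
           table.updateSchema()
               .allowIncompatibleChanges()
               .requireColumn("id")
               .commit();
       }
   }
   ```

   Note that this only updates the schema metadata; existing data files could still contain nulls in `id`, so you would want to verify the data first.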


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


