[ https://issues.apache.org/jira/browse/IMPALA-14643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18047541#comment-18047541 ]
Fang-Yu Rao commented on IMPALA-14643:
--------------------------------------
Not sure whether we have seen this in the past, [~topicfun] and [~noemi].
> ALTER TABLE against iceberg tables could fail due to an IOException
> -------------------------------------------------------------------
>
> Key: IMPALA-14643
> URL: https://issues.apache.org/jira/browse/IMPALA-14643
> Project: IMPALA
> Issue Type: Improvement
> Reporter: Fang-Yu Rao
> Priority: Major
>
> We found that test_partitioned_insert_v2() in
> [https://github.com/apache/impala/blob/master/tests/query_test/test_iceberg.py]
> could fail due to an IOException ("The stream is closed") thrown from
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java]
> when internalWrite() in
> [https://github.com/apache/iceberg/blob/main/core/src/main/java/org/apache/iceberg/TableMetadataParser.java]
> writes the new table metadata file.
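> For illustration only, below is a minimal, hypothetical sketch (not the
> actual Iceberg source; the class and method names are made up) of how such a
> failure surfaces: the JSON writer flushes to an HDFS output stream, and an
> IOException from the underlying socket is rethrown as RuntimeIOException,
> which aborts the commit.
> {code:java}
> import java.io.IOException;
> import java.io.OutputStream;
> import java.io.OutputStreamWriter;
> import java.nio.charset.StandardCharsets;
> import org.apache.iceberg.exceptions.RuntimeIOException;
>
> class MetadataWriteSketch {
>   // Hypothetical stand-in for the metadata write path; only the wrapping
>   // pattern (IOException -> RuntimeIOException) mirrors the log below.
>   static void writeJson(String json, OutputStream out, String location) {
>     try (OutputStreamWriter writer =
>         new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
>       writer.write(json); // close()/flush() fails if the HDFS stream died
>     } catch (IOException e) {
>       throw new RuntimeIOException(e, "Failed to write json to file: %s", location);
>     }
>   }
> }
> {code}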
> The command that failed seems to be "{{alter table special_char_partitions
> set partition spec (s2)}}" in
> https://github.com/apache/impala/blob/master/testdata/workloads/functional-query/queries/QueryTest/iceberg-partitioned-insert-v2.test.
> {code:java}
> E20251225 11:17:54.079689 718842 JniUtil.java:184] c94e672a5eb11678:c57e3b1f00000000] Error in ALTER_TABLE test_partitioned_insert_v2_4dd0ae48.special_char_partitions SET_PARTITION_SPEC issued by ubuntu. Time spe
> I20251225 11:17:54.080186 718842 jni-util.cc:321] c94e672a5eb11678:c57e3b1f00000000] org.apache.iceberg.exceptions.RuntimeIOException: Failed to write json to file: hdfs://localhost:20500/test-warehouse/test_part
>   at org.apache.iceberg.TableMetadataParser.internalWrite(TableMetadataParser.java:133)
>   at org.apache.iceberg.TableMetadataParser.overwrite(TableMetadataParser.java:115)
>   at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadata(BaseMetastoreTableOperations.java:170)
>   at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadataIfRequired(BaseMetastoreTableOperations.java:160)
>   at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:173)
>   at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:135)
>   at org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$3(BaseTransaction.java:427)
>   at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
>   at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
>   at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
>   at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
>   at org.apache.iceberg.BaseTransaction.commitSimpleTransaction(BaseTransaction.java:423)
>   at org.apache.iceberg.BaseTransaction.commitTransaction(BaseTransaction.java:318)
>   at org.apache.impala.service.CatalogOpExecutor.alterIcebergTable(CatalogOpExecutor.java:1721)
>   at org.apache.impala.service.CatalogOpExecutor.alterTable(CatalogOpExecutor.java:1263)
>   at org.apache.impala.service.CatalogOpExecutor.execDdlRequest(CatalogOpExecutor.java:481)
>   at org.apache.impala.service.JniCatalog.lambda$execDdl$3(JniCatalog.java:318)
>   at org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
>   at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
>   at org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
>   at org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:100)
>   at org.apache.impala.service.JniCatalog.execAndSerialize(JniCatalog.java:243)
>   at org.apache.impala.service.JniCatalog.execAndSerialize(JniCatalog.java:257)
>   at org.apache.impala.service.JniCatalog.execDdl(JniCatalog.java:317)
> Caused by: java.io.IOException: The stream is closed
>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
>   at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>   at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>   at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
>   at org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:1021)
>   at org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:854)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:849)
>   Suppressed: java.io.IOException: The stream is closed
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>     at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
>     at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
> {code}
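> Since the root cause ("The stream is closed") looks like a transient HDFS
> write failure, one possible direction, purely as a sketch and not a concrete
> proposal, would be to retry the flaky write using Iceberg's own Tasks
> utility (the same class visible in the stack trace above). The retry count
> and backoff values below are made up, and whether a retry is actually safe
> at this point in the commit path is a separate question.
> {code:java}
> import org.apache.iceberg.exceptions.RuntimeIOException;
> import org.apache.iceberg.util.Tasks;
>
> class RetrySketch {
>   // Hypothetical wrapper: retries a flaky operation up to 3 times with
>   // exponential backoff, only on RuntimeIOException. The Runnable "write"
>   // is a placeholder for the real metadata write.
>   static void writeWithRetry(Runnable write) {
>     Tasks.foreach("metadata-write") // single dummy work item
>         .retry(3)
>         .exponentialBackoff(100 /* min ms */, 2000 /* max ms */,
>             10000 /* total ms */, 2.0 /* scale */)
>         .onlyRetryOn(RuntimeIOException.class)
>         .run(item -> write.run());
>   }
> }
> {code}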