Joe McDonnell created IMPALA-13144:
--------------------------------------
Summary: TestIcebergTable.test_migrated_table_field_id_resolution fails with Disk I/O error
Key: IMPALA-13144
URL: https://issues.apache.org/jira/browse/IMPALA-13144
Project: IMPALA
Issue Type: Bug
Components: Backend
Affects Versions: Impala 4.5.0
Reporter: Joe McDonnell
A couple of test jobs hit a failure in TestIcebergTable.test_migrated_table_field_id_resolution:
{noformat}
query_test/test_iceberg.py:270: in test_migrated_table_field_id_resolution
    vector, unique_database)
common/impala_test_suite.py:725: in run_test_case
    result = exec_fn(query, user=test_section.get('USER', '').strip() or None)
common/impala_test_suite.py:660: in __exec_in_impala
    result = self.__execute_query(target_impalad_client, query, user=user)
common/impala_test_suite.py:1013: in __execute_query
    return impalad_client.execute(query, user=user)
common/impala_connection.py:216: in execute
    fetch_profile_after_close=fetch_profile_after_close)
beeswax/impala_beeswax.py:191: in execute
    handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:384: in __execute_query
    self.wait_for_finished(handle)
beeswax/impala_beeswax.py:405: in wait_for_finished
    raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
E    Query aborted:Disk I/O error on impala-ec2-centos79-m6i-4xlarge-xldisk-153e.vpc.cloudera.com:27000: Failed to open HDFS file hdfs://localhost:20500/test-warehouse/iceberg_migrated_alter_test/000000_0
E   Error(2): No such file or directory
E   Root cause: RemoteException: File does not exist: /test-warehouse/iceberg_migrated_alter_test/000000_0
E       at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:87)
E       at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:77)
E       at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:159)
E       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2040)
E       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
E       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:454)
E       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
E       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
E       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
E       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
E       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
E       at java.security.AccessController.doPrivileged(Native Method)
E       at javax.security.auth.Subject.doAs(Subject.java:422)
E       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
E       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2899)
{noformat}