Joe McDonnell created IMPALA-13442:
--------------------------------------

             Summary: TestAcidInsertsBasic.test_read_hive_inserts failed with Disk I/O error
                 Key: IMPALA-13442
                 URL: https://issues.apache.org/jira/browse/IMPALA-13442
             Project: IMPALA
          Issue Type: Bug
          Components: Frontend
    Affects Versions: Impala 4.5.0
            Reporter: Joe McDonnell


This error has been seen more than once in exhaustive release runs:
{noformat}
tests/stress/test_acid_stress.py:172: in test_read_hive_inserts
    self._run_test_read_hive_inserts(unique_database, is_partitioned)
tests/stress/test_acid_stress.py:150: in _run_test_read_hive_inserts
    sleep_seconds=3)])
stress/stress_util.py:46: in run_tasks
    pool.map_async(Task.run, tasks).get(timeout_seconds)
Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/multiprocessing/pool.py:572: in get
    raise self._value
E   ImpalaBeeswaxException: Query da4c111cf317b467:0a99f70100000000 failed:
E   Disk I/O error on hostname:27000: Failed to open HDFS file hdfs://localhost:20500/test-warehouse/managed/test_read_hive_inserts_8bb30aca.db/test_read_hive_inserts/-tmp.base_0000001_0/000000_0.manifest
E   Error(2): No such file or directory
E   Root cause: RemoteException: File does not exist: /test-warehouse/managed/test_read_hive_inserts_8bb30aca.db/test_read_hive_inserts/-tmp.base_0000001_0/000000_0.manifest
E       at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:87)
E       at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:77)
E       at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:159)
E       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2040)
E       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
E       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:454)
E       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
E       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
E       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
E       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
E       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
E       at java.security.AccessController.doPrivileged(Native Method)
E       at javax.security.auth.Subject.doAs(Subject.java:422)
E       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
E       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2899){noformat}
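For context on the shape of the traceback: stress/stress_util.py's run_tasks drives the concurrent readers/writers through a multiprocessing Pool, and Pool.map_async(...).get(timeout) re-raises in the parent process the first exception raised inside a worker. That is why the ImpalaBeeswaxException from the failed query surfaces at the "raise self._value" frame in pool.py above. A minimal sketch of that propagation (run_query and the error text are hypothetical stand-ins, not the actual test code):

```python
# Sketch of how Pool.map_async(...).get() surfaces a worker's exception in
# the parent, as seen in this traceback. run_query stands in for Task.run;
# the error message is made up to mirror the real failure.
from multiprocessing import Pool


def run_query(i):
    # One simulated query fails, mimicking the Disk I/O error hit by the test.
    if i == 1:
        raise RuntimeError("Disk I/O error: Failed to open HDFS file")
    return i


caught = None
pool = Pool(2)  # assumes the fork start method (the Linux default)
try:
    # The exception is pickled in the worker and re-raised here in the
    # parent by get() -- the pool.py "raise self._value" frame.
    pool.map_async(run_query, [0, 1]).get(30)
except RuntimeError as e:
    caught = e
finally:
    pool.close()
    pool.join()

print("re-raised in parent: %s" % caught)
```

One consequence of this design is that only the first worker failure is reported; any other tasks in the batch may have failed (or succeeded) without that being visible in the test output.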



--
This message was sent by Atlassian Jira
(v8.20.10#820010)