Have you tried reloading the alltypesnopart_insert table?

bin/load-data.py -f -w functional-query --table_names=alltypesnopart_insert

You may have to run this first:

bin/create_testdata.sh
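In case it helps, here is the whole sequence as one script. This is a sketch only: it assumes IMPALA_HOME points at an Impala dev checkout with the test minicluster running, and each step is guarded so the script is a no-op outside that environment.

```shell
#!/bin/sh
# Sketch of the suggested recovery steps. Assumes IMPALA_HOME points at an
# Impala dev checkout with the test minicluster running; each step is
# guarded so the script degrades gracefully elsewhere.
cd "${IMPALA_HOME:-.}" || exit 1

# Regenerate the generated test data files first, if the script is present.
if [ -x bin/create_testdata.sh ]; then
  bin/create_testdata.sh
fi

# Then force-reload (-f) just the affected table in the functional-query
# workload.
if [ -x bin/load-data.py ]; then
  bin/load-data.py -f -w functional-query --table_names=alltypesnopart_insert
fi

echo "reload steps attempted"
```

The -f flag forces the reload even if the table already appears loaded; falling back to the current directory when IMPALA_HOME is unset is just a convenience for the sketch.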


On Sat, May 13, 2017 at 2:44 PM, Lars Volker <[email protected]> wrote:

> I cannot run test_insert.py anymore on master. I tried clean.sh, rebuilt
> from scratch, removed the whole toolchain, but it still won't work. On
> first glance it looks like the test setup code tries to drop the Hive
> default partition but cannot find a file for it. Has anyone seen this error
> before? Could this be related to the cdh_components update? Thanks, Lars
>
> -- executing against localhost:21000
> select count(*) from alltypesnopart_insert;
>
> FAILED-- closing connection to: localhost:21000
>
> ========================================================= short test summary info =========================================================
> FAIL tests/query_test/test_insert.py::TestInsertQueries::()::test_insert[exec_option: {'batch_size': 0, 'num_nodes': 0, 'sync_ddl': 0, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: text/none]
>
> ================================================================ FAILURES =================================================================
>  TestInsertQueries.test_insert[exec_option: {'batch_size': 0, 'num_nodes': 0, 'sync_ddl': 0, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: text/none]
> tests/query_test/test_insert.py:119: in test_insert
>     multiple_impalad=vector.get_value('exec_option')['sync_ddl'] == 1)
> tests/common/impala_test_suite.py:332: in run_test_case
>     self.execute_test_case_setup(test_section['SETUP'], table_format_info)
> tests/common/impala_test_suite.py:448: in execute_test_case_setup
>     self.__drop_partitions(db_name, table_name)
> tests/common/impala_test_suite.py:596: in __drop_partitions
>     partition, True), 'Could not drop partition: %s' % partition
> shell/gen-py/hive_metastore/ThriftHiveMetastore.py:2513: in drop_partition_by_name
>     return self.recv_drop_partition_by_name()
> shell/gen-py/hive_metastore/ThriftHiveMetastore.py:2541: in recv_drop_partition_by_name
>     raise result.o2
> E   MetaException: MetaException(_message='File does not exist: /test-warehouse/functional.db/alltypesinsert/year=__HIVE_DEFAULT_PARTITION__
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2296)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4545)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
>     at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:563)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:873)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)')
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> =================================================== 1 failed, 1 passed in 22.29 seconds ===================================================
>
