[ https://issues.apache.org/jira/browse/IMPALA-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sahil Takiar resolved IMPALA-10126.
-----------------------------------
Resolution: Duplicate
Duplicate of IMPALA-9058
> asf-master-core-s3 test_aggregation.TestWideAggregationQueries.test_many_grouping_columns failed
> -------------------------------------------------------------------------------------------------
>
> Key: IMPALA-10126
> URL: https://issues.apache.org/jira/browse/IMPALA-10126
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Reporter: Yongzhi Chen
> Priority: Major
>
> query_test.test_aggregation.TestWideAggregationQueries.test_many_grouping_columns[protocol: beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from pytest)
> {noformat}
> Error Message
> query_test/test_aggregation.py:453: in test_many_grouping_columns
> result = self.execute_query(query, exec_option, table_format=table_format)
> common/impala_test_suite.py:811: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:843: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:909: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:205: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:187: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:365: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:386: in wait_for_finished
> raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E ImpalaBeeswaxException: ImpalaBeeswaxException:
> E Query aborted:Disk I/O error on impala-ec2-centos74-m5-4xlarge-ondemand-1129.vpc.cloudera.com:22001: Failed to open HDFS file s3a://impala-test-uswest2-1/test-warehouse/widetable_1000_cols_parquet/1f4ec08992b6e3f9-6fd9a17d00000000_1482052561_data.0.parq
> E Error(2): No such file or directory
> E Root cause: ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 1HMMG39MJ9GP2JEENAUFVFDVA3VV4KQNSO5AEMVJF66Q9ASUAAJG)
> Stacktrace
> query_test/test_aggregation.py:453: in test_many_grouping_columns
> result = self.execute_query(query, exec_option, table_format=table_format)
> common/impala_test_suite.py:811: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:843: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:909: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:205: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:187: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:365: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:386: in wait_for_finished
> raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E ImpalaBeeswaxException: ImpalaBeeswaxException:
> E Query aborted:Disk I/O error on impala-ec2-centos74-m5-4xlarge-ondemand-1129.vpc.cloudera.com:22001: Failed to open HDFS file s3a://impala-test-uswest2-1/test-warehouse/widetable_1000_cols_parquet/1f4ec08992b6e3f9-6fd9a17d00000000_1482052561_data.0.parq
> E Error(2): No such file or directory
> E Root cause: ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 1HMMG39MJ9GP2JEENAUFVFDVA3VV4KQNSO5AEMVJF66Q9ASUAAJG)
> {noformat}