[ 
https://issues.apache.org/jira/browse/IMPALA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikramjeet Vig resolved IMPALA-7925.
------------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.2.0

Resolved the duplicate issue IMPALA-7925.

> test_bloom_filters and test_hdfs_scanner_profile running out of memory during 
> exhaustive tests
> ----------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-7925
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7925
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 3.2.0
>            Reporter: Lars Volker
>            Assignee: Bikramjeet Vig
>            Priority: Critical
>              Labels: broken-build, flaky, resource-management
>             Fix For: Impala 3.2.0
>
>
> {noformat}
> 00:12:56  TestBloomFilters.test_bloom_filters[protocol: beeswax | 
> exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: rc/gzip/block] 
> 00:12:56 [gw6] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-asf-master-exhaustive-release/repos/Impala/bin/../infra/python/env/bin/python
> 00:12:56 query_test/test_runtime_filters.py:87: in test_bloom_filters
> 00:12:56     self.run_test_case('QueryTest/bloom_filters', vector)
> 00:12:56 common/impala_test_suite.py:467: in run_test_case
> 00:12:56     result = self.__execute_query(target_impalad_client, query, 
> user=user)
> 00:12:56 common/impala_test_suite.py:688: in __execute_query
> 00:12:56     return impalad_client.execute(query, user=user)
> 00:12:56 common/impala_connection.py:170: in execute
> 00:12:56     return self.__beeswax_client.execute(sql_stmt, user=user)
> 00:12:56 beeswax/impala_beeswax.py:182: in execute
> 00:12:56     handle = self.__execute_query(query_string.strip(), user=user)
> 00:12:56 beeswax/impala_beeswax.py:356: in __execute_query
> 00:12:56     self.wait_for_finished(handle)
> 00:12:56 beeswax/impala_beeswax.py:377: in wait_for_finished
> 00:12:56     raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> 00:12:56 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> 00:12:56 E    Query aborted:ExecQueryFInstances rpc 
> query_id=a8434897f14b07d4:d4eb584700000000 failed: Failed to get minimum 
> memory reservation of 24.62 MB on daemon jenkins-worker:22000 for query 
> a8434897f14b07d4:d4eb584700000000 due to following error: Memory limit 
> exceeded: Could not allocate memory while trying to increase reservation.
> 00:12:56 E   Query(a8434897f14b07d4:d4eb584700000000) could not allocate 
> 24.62 MB without exceeding limit.
> 00:12:56 E   Error occurred on backend jenkins-worker:22000
> 00:12:56 E   Memory left in process limit: 894.84 MB
> 00:12:56 E   Query(a8434897f14b07d4:d4eb584700000000): Reservation=0 
> ReservationLimit=9.60 GB OtherMemory=0 Total=0 Peak=0
> 00:12:56 E   Memory is likely oversubscribed. Reducing query concurrency or 
> configuring admission control may help avoid this error.
> ...
> 00:12:56  TestScannersAllTableFormats.test_hdfs_scanner_profile[batch_size: 
> 16 | debug_action: -1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0 | protocol: 
> beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: text/bzip/block] 
> 00:12:56 [gw5] linux2 -- Python 2.7.5 
> /data/jenkins/workspace/impala-asf-master-exhaustive-release/repos/Impala/bin/../infra/python/env/bin/python
> 00:12:56 query_test/test_scanners.py:109: in test_hdfs_scanner_profile
> 00:12:56     self.run_test_case('QueryTest/hdfs_scanner_profile', vector)
> 00:12:56 common/impala_test_suite.py:467: in run_test_case
> 00:12:56     result = self.__execute_query(target_impalad_client, query, 
> user=user)
> 00:12:56 common/impala_test_suite.py:688: in __execute_query
> 00:12:56     return impalad_client.execute(query, user=user)
> 00:12:56 common/impala_connection.py:170: in execute
> 00:12:56     return self.__beeswax_client.execute(sql_stmt, user=user)
> 00:12:56 beeswax/impala_beeswax.py:182: in execute
> 00:12:56     handle = self.__execute_query(query_string.strip(), user=user)
> 00:12:56 beeswax/impala_beeswax.py:356: in __execute_query
> 00:12:56     self.wait_for_finished(handle)
> 00:12:56 beeswax/impala_beeswax.py:377: in wait_for_finished
> 00:12:56     raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> 00:12:56 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> 00:12:56 E    Query aborted:ExecQueryFInstances rpc 
> query_id=e844898a9e09e17c:e43433ca00000000 failed: Failed to get minimum 
> memory reservation of 3.06 MB on daemon jenkins-worker:22000 for query 
> e844898a9e09e17c:e43433ca00000000 due to following error: Memory limit 
> exceeded: Could not allocate memory while trying to increase reservation.
> 00:12:56 E   Query(e844898a9e09e17c:e43433ca00000000) could not allocate 3.06 
> MB without exceeding limit.
> 00:12:56 E   Error occurred on backend jenkins-worker:22000
> 00:12:56 E   Memory left in process limit: 890.85 MB
> 00:12:56 E   Query(e844898a9e09e17c:e43433ca00000000): Reservation=0 
> ReservationLimit=9.60 GB OtherMemory=0 Total=0 Peak=0
> 00:12:56 E   Memory is likely oversubscribed. Reducing query concurrency or 
> configuring admission control may help avoid this error.
> {noformat}
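
The error output above suggests reducing query concurrency or configuring admission control. As a purely illustrative sketch (not part of this report or its fix), the snippet below assumes the impyla client and a locally running impalad HS2 endpoint on port 21050, and shows one related knob: capping per-query memory with the standard MEM_LIMIT query option before running a statement, so a query's reservation is bounded rather than left to compete for the process limit.

{code:python}
# Illustrative sketch only; impyla, localhost:21050, and the example table are
# assumptions and not part of this report.
from impala.dbapi import connect

conn = connect(host='localhost', port=21050)  # assumed impalad HS2 endpoint
cur = conn.cursor()

# Bound this session's per-query memory so the limit is enforced up front
# instead of the query competing unbounded for the process memory limit.
cur.execute("SET MEM_LIMIT=2g")

cur.execute("SELECT count(*) FROM functional.alltypes")  # example test table
print(cur.fetchall())
{code}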


