[ https://issues.apache.org/jira/browse/IMPALA-12429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778484#comment-17778484 ]

ASF subversion and git services commented on IMPALA-12429:
----------------------------------------------------------

Commit 379038f7639731605bca4356337616fa69f35f9d in impala's branch 
refs/heads/master from Riza Suminto
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=379038f76 ]

IMPALA-12429: Reduce parallelism for TPC-DS q51a and q67a tests.

TestTpcdsQueryWithProcessingCost.test_tpcds_q51a and
TestTpcdsQuery.test_tpcds_q67a have been failing intermittently with
memory oversubscription errors. The test minicluster starts three
impalads on a single host, which likely makes admission control less
effective at preventing these queries from running in parallel with
others.

This patch keeps both tests but reduces max_fragment_instances_per_node
from 4 to 2 to lower their memory requirements.
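
For illustration, here is a minimal sketch of the kind of change
involved. The class and helper names mirror the Impala test framework
visible in the stack trace below; the exact mechanics of the real patch
may differ (the option could, for instance, be set in the .test file
instead):

{code:python}
# Illustrative sketch only, not the literal patch. It shows the usual
# Impala test-suite pattern for overriding a query option in one test:
# halving per-node fragment parallelism (4 -> 2) lowers the query's
# minimum memory reservation.
from tests.common.impala_test_suite import ImpalaTestSuite


class TestTpcdsQueryWithProcessingCost(ImpalaTestSuite):

  def test_tpcds_q51a(self, vector):
    # Override the option in this test's exec_option vector, then run the
    # query file via run_test_case (the same entry point seen in the
    # stack trace below).
    vector.get_value('exec_option')['max_fragment_instances_per_node'] = 2
    self.run_test_case(self.get_workload() + '-q51a', vector)
{code}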

Before patch:

q51a
Max Per-Host Resource Reservation: Memory=3.08GB Threads=129
Per-Host Resource Estimates: Memory=124.24GB
Per Host Min Memory Reservation: localhost:27001(2.93 GB) localhost:27002(1.97 GB) localhost:27000(2.82 GB)
Per Host Number of Fragment Instances: localhost:27001(115) localhost:27002(79) localhost:27000(119)
Admission result: Admitted immediately
Cluster Memory Admitted: 33.00 GB
Per Node Peak Memory Usage: localhost:27000(2.84 GB) localhost:27002(1.99 GB) localhost:27001(2.95 GB)
Per Node Bytes Read: localhost:27000(62.08 MB) localhost:27002(45.71 MB) localhost:27001(47.39 MB)

q67a
Max Per-Host Resource Reservation: Memory=2.15GB Threads=105
Per-Host Resource Estimates: Memory=4.48GB
Per Host Min Memory Reservation: localhost:27001(2.13 GB) localhost:27002(2.13 GB) localhost:27000(2.15 GB)
Per Host Number of Fragment Instances: localhost:27001(76) localhost:27002(76) localhost:27000(105)
Cluster Memory Admitted: 13.44 GB
Per Node Peak Memory Usage: localhost:27000(2.24 GB) localhost:27002(2.21 GB) localhost:27001(2.21 GB)
Per Node Bytes Read: localhost:27000(112.79 MB) localhost:27002(109.57 MB) localhost:27001(105.16 MB)

After patch:

q51a
Max Per-Host Resource Reservation: Memory=2.00GB Threads=79
Per-Host Resource Estimates: Memory=118.75GB
Per Host Min Memory Reservation: localhost:27001(1.84 GB) localhost:27002(1.28 GB) localhost:27000(1.86 GB)
Per Host Number of Fragment Instances: localhost:27001(65) localhost:27002(46) localhost:27000(74)
Cluster Memory Admitted: 33.00 GB
Per Node Peak Memory Usage: localhost:27000(1.88 GB) localhost:27002(1.31 GB) localhost:27001(1.88 GB)
Per Node Bytes Read: localhost:27000(62.08 MB) localhost:27002(45.71 MB) localhost:27001(47.39 MB)

q67a
Max Per-Host Resource Reservation: Memory=1.31GB Threads=85
Per-Host Resource Estimates: Memory=3.76GB
Per Host Min Memory Reservation: localhost:27001(1.29 GB) localhost:27002(1.29 GB) localhost:27000(1.31 GB)
Per Host Number of Fragment Instances: localhost:27001(56) localhost:27002(56) localhost:27000(85)
Cluster Memory Admitted: 11.28 GB
Per Node Peak Memory Usage: localhost:27000(1.35 GB) localhost:27002(1.32 GB) localhost:27001(1.33 GB)
Per Node Bytes Read: localhost:27000(112.79 MB) localhost:27002(109.57 MB) localhost:27001(105.16 MB)
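
A quick back-of-envelope check on the q51a numbers above (plain
arithmetic on figures from the profiles, not part of the patch) shows
why three co-located daemons oversubscribed the host, and how much
headroom the change buys:

{code:python}
# Per-daemon minimum memory reservations for q51a (GB), copied from the
# profiles above. All three daemons share one host, so the host must
# supply their sum just to admit this single query.
before = {"27000": 2.82, "27001": 2.93, "27002": 1.97}
after = {"27000": 1.86, "27001": 1.84, "27002": 1.28}

print("host-wide min reservation before: %.2f GB" % sum(before.values()))  # 7.72
print("host-wide min reservation after:  %.2f GB" % sum(after.values()))   # 4.98

# With other tests running concurrently on the same host, the pre-patch
# total collides with the process memory limit: the failure below reports
# only 3.63 GB left on one daemon when q51a asked for 2.82 GB.
{code}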

Testing:
- Passed test_tpcds_queries.py on a local machine.

Change-Id: I6ae5aeb97a8353d5eaa4d85e3f600513f42f7cf4
Reviewed-on: http://gerrit.cloudera.org:8080/20581
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>


> TestTpcdsQueryWithProcessingCost.test_tpcds_q51a and 
> TestTpcdsQuery.test_tpcds_q67a failed
> ------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-12429
>                 URL: https://issues.apache.org/jira/browse/IMPALA-12429
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Frontend
>            Reporter: Wenzhe Zhou
>            Assignee: Riza Suminto
>            Priority: Critical
>
> The tests started failing after the patch for IMPALA-12408 (Optimize
> HdfsScanNode.computeScanRangeLocations) was merged; that change may be
> related.
> Stacktrace
> {code:python}
> query_test/test_tpcds_queries.py:196: in test_tpcds_q51a
>     self.run_test_case(self.get_workload() + '-q51a', vector)
> common/impala_test_suite.py:718: in run_test_case
>     result = exec_fn(query, user=test_section.get('USER', '').strip() or None)
> common/impala_test_suite.py:656: in __exec_in_impala
>     result = self.__execute_query(target_impalad_client, query, user=user)
> common/impala_test_suite.py:992: in __execute_query
>     return impalad_client.execute(query, user=user)
> common/impala_connection.py:214: in execute
>     return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:191: in execute
>     handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:369: in __execute_query
>     self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:390: in wait_for_finished
>     raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> E    Query aborted:Failed to get minimum memory reservation of 2.82 GB on daemon impala-ec2-centos79-m6i-4xlarge-ondemand-0c5c.vpc.cloudera.com:27000 for query 95482fe28499fbef:6bed6d8400000000 due to following error: Memory limit exceeded: Could not allocate memory while trying to increase reservation.
> E   Query(95482fe28499fbef:6bed6d8400000000) could not allocate 2.82 GB without exceeding limit.
> E   Error occurred on backend impala-ec2-centos79-m6i-4xlarge-ondemand-0c5c.vpc.cloudera.com:27000
> E   Memory left in process limit: 3.63 GB
> E   Query(95482fe28499fbef:6bed6d8400000000): Reservation=0 ReservationLimit=9.60 GB OtherMemory=0 Total=0 Peak=0
> E   Memory is likely oversubscribed. Reducing query concurrency or configuring admission control may help avoid this error.
> {code}


