[ 
https://issues.apache.org/jira/browse/IMPALA-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yida Wu updated IMPALA-14493:
-----------------------------
    Description: 
The admission service ({{admissiond}}) can OOM under high workload because its 
process memory tracker is inaccurate and does not account for all memory 
allocations.

Ensuring that the memory tracker accurately accounts for every allocation could 
be difficult; a simpler solution is to introduce a hard memory cap based on 
{{tcmalloc}} statistics, which accurately reflect the true process memory 
usage. If a new query is submitted while the {{tcmalloc}}-reported memory usage 
is over the process limit, the query is rejected immediately to protect against 
OOM.
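
A minimal sketch of what such a hard-cap check could look like, assuming 
gperftools' {{MallocExtension}} API; the 2 GB limit, the function name, and the 
way physical memory is approximated below are illustrative only, not the actual 
implementation:
{code:cpp}
// Illustrative sketch only, not the actual patch: reject new admission work
// when tcmalloc reports that the process is over a hard memory cap.
// Requires linking against gperftools (-ltcmalloc).
#include <gperftools/malloc_extension.h>

#include <cstddef>
#include <cstdio>

// Hypothetical hard cap on admissiond process memory (2 GB for this demo).
static const size_t kProcessMemLimitBytes = 2ULL * 1024 * 1024 * 1024;

// Returns true if tcmalloc's view of process memory exceeds the cap.
// "generic.heap_size" is the address space tcmalloc has reserved; subtracting
// "tcmalloc.pageheap_unmapped_bytes" approximates memory actually backed by
// physical pages, which tracks true process usage more closely than the
// per-allocation MemTracker.
bool OverProcessMemLimit() {
  size_t heap_size = 0;
  size_t unmapped = 0;
  MallocExtension* ext = MallocExtension::instance();
  if (!ext->GetNumericProperty("generic.heap_size", &heap_size) ||
      !ext->GetNumericProperty("tcmalloc.pageheap_unmapped_bytes", &unmapped)) {
    return false;  // Stats unavailable; fall back to normal admission.
  }
  return heap_size - unmapped > kProcessMemLimitBytes;
}

int main() {
  if (OverProcessMemLimit()) {
    std::printf("reject: admissiond memory usage is over the process limit\n");
  } else {
    std::printf("admit\n");
  }
  return 0;
}
{code}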

Most of admissiond's memory usage appears to come from deserializing and 
holding the sidecar payloads, as shown in the heap profile below. This seems to 
be related to [IMPALA-14499|http://issues.apache.org/jira/browse/IMPALA-14499], 
as the memory tracker does not currently function correctly in admissiond:
{code:java}
Total: 631.5 MB
 200.0 31.7% 31.7% 200.0 31.7% std::vector::_M_default_append (inline)
 100.0 15.8% 47.5% 100.0 15.8% google::protobuf::Arena::CreateMaybeMessage 
(inline)
 ...
 0.0 0.0% 100.0% 217.0 34.4% impala::AdmissionControlService::AdmitQuery
 ...
 0.0 0.0% 100.0% 217.0 34.4% impala::GetSidecar
 ...
 0.0 0.0% 100.0% 217.0 34.4% impala::TQueryExecRequest::read
 ...
 0.0 0.0% 100.0% 199.0 31.5% impala::TScanRangeSpec::read
 0.0 0.0% 100.0% 187.0 29.6% impala::TScanRangeSpec::read (inline){code}
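
For illustration only, a rough sketch of where such a check could sit, i.e. 
rejecting up front before the large sidecar is deserialized, since 
{{TQueryExecRequest::read}} dominates the profile above; all types and names 
below are hypothetical stand-ins rather than Impala's actual interfaces:
{code:cpp}
#include <cstdio>
#include <string>

// Stand-in request/response types; the real RPC structures are more complex.
struct AdmitQueryRequestPB { std::string query_id; };
struct AdmitQueryResponsePB { bool admitted = false; std::string reason; };

// Trivial stub here; a real check could use the tcmalloc-based sketch above.
bool OverProcessMemLimit() { return false; }

// Hypothetical admission entry point: enforce the hard cap up front so that a
// query is rejected before paying the memory cost of deserializing the large
// TQueryExecRequest sidecar (the dominant allocation in the profile above).
void AdmitQuery(const AdmitQueryRequestPB& req, AdmitQueryResponsePB* resp) {
  if (OverProcessMemLimit()) {
    resp->admitted = false;
    resp->reason = "rejecting " + req.query_id +
                   ": admissiond memory usage is over the process limit";
    return;  // Rejected without touching the sidecar.
  }
  // ... deserialize the TQueryExecRequest sidecar and run admission control ...
  resp->admitted = true;
}

int main() {
  AdmitQueryRequestPB req{"query_1"};
  AdmitQueryResponsePB resp;
  AdmitQuery(req, &resp);
  std::printf("admitted=%d\n", resp.admitted);
  return 0;
}
{code}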

  was:
The admission service ({{admissiond}}) can OOM under high workload because its 
process memory tracker is inaccurate and does not account for all memory 
allocations.

Ensuring that the memory tracker accurately accounts for every allocation could 
be difficult; a simpler solution is to introduce a hard memory cap based on 
{{tcmalloc}} statistics, which accurately reflect the true process memory 
usage. If a new query is submitted while the {{tcmalloc}}-reported memory usage 
is over the process limit, the query is rejected immediately to protect against 
OOM.


> Capping Memory Usage of Global Admission Service 
> -------------------------------------------------
>
>                 Key: IMPALA-14493
>                 URL: https://issues.apache.org/jira/browse/IMPALA-14493
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Yida Wu
>            Assignee: Yida Wu
>            Priority: Major
>             Fix For: Impala 5.0.0
>
>


