Hi Donishka,

It seems like Impala underestimated the cardinality of the 01:SCAN KUDU
operator. If the stats are up to date, then it means Impala overestimated
the filtering efficiency of the given predicates.
As a workaround you can try setting the 'mem_limit' query option to a
higher value.
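To see how far off the estimate is, you can compare the planner's per-operator cardinality against the actual row count in the profile. A minimal sketch (the table name and predicate below are placeholders, not from your query):

```sql
-- Show detailed per-operator estimates in the plan.
SET EXPLAIN_LEVEL=2;

-- Hypothetical query shape; substitute your own statement.
EXPLAIN SELECT * FROM my_kudu_table WHERE status = 'active';

-- In the EXPLAIN output, note the 'cardinality=' value on 01:SCAN KUDU,
-- then compare it with the actual rows returned by that operator in the
-- query profile after execution. A large gap confirms a misestimate.
```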
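For example, in impala-shell (2g below is just an illustrative value; tune it to your workload and pool limits):

```sql
-- Raise the per-query memory limit above the planner's estimate,
-- then re-run the failing query in the same session.
SET MEM_LIMIT=2g;
```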

You can also try a newer Impala release, which might calculate the estimates
more precisely.

Cheers,
    Zoltan



On Mon, Dec 5, 2022 at 7:27 AM Donishka Tharindu <
donishka.thari...@gmail.com> wrote:

> Hi,
>
> I executed COMPUTE STATS and it still shows the same error. I have attached
> the query profile for your reference.
>
> Thanks,
> Donishka
>
> On Tue, Nov 15, 2022 at 3:54 PM Zoltán Borók-Nagy <borokna...@cloudera.com>
> wrote:
>
>> Hi Donishka,
>>
>> Could you please share the query profile of your query? Did you execute
>> COMPUTE STATS on all the tables that participate in the query?
>> You can also try setting the mem_limit query option to a higher value if
>> Impala underestimates the memory requirements for your query.
>>
>> Cheers,
>>     Zoltan
>>
>>
>>
>> On Tue, Nov 15, 2022 at 8:39 AM Donishka Tharindu <
>> donishka.thari...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I enabled Impala admission control on a cluster which has two Impala
>>> coordinators and six Impala executors.
>>>
>>> Max Memory Multiple : 8GB
>>> Minimum Query Memory Limit : 1GB
>>> Maximum Query Memory Limit : 5GB
>>>
>>> impalad version 3.4.0
>>>
>>> Some queries were admitted to the cluster and allocated memory within
>>> that range by Impala admission control, but failed with the below
>>> exception while executing:
>>>
>>> Status: Memory limit exceeded: Error occurred on backend
>>> Memory left in process limit: 22.63 GB
>>> Memory left in query limit: -352.28 KB
>>> Query(7c4b6e3d86b1cb83:56c29eef00000000): memory limit exceeded. Limit=1.00 GB
>>>   Reservation=818.00 MB ReservationLimit=819.40 MB OtherMemory=206.59 MB Total=1.00 GB Peak=1.00 GB
>>>   Fragment 7c4b6e3d86b1cb83: Reservation=818.00 MB OtherMemory=206.59 MB Total=1.00 GB Peak=1.00 GB
>>>     SORT_NODE (id=6): Total=197.51 MB Peak=197.51 MB
>>>     SELECT_NODE (id=5): Total=20.00 KB Peak=8.02 MB
>>>       Exprs: Total=4.00 KB Peak=4.00 KB
>>>     ANALYTIC_EVAL_NODE (id=4): Reservation=4.00 MB OtherMemory=9.01 MB Total=13.01 MB Peak=14.04 MB
>>>       Exprs: Total=4.00 KB Peak=4.00 KB
>>>     SORT_NODE (id=3): Reservation=814.00 MB OtherMemory=16.00 KB Total=814.02 MB Peak=814.02 MB
>>>     EXCHANGE_NODE (id=7): OtherMemory=0 Total=0 Peak=21.28 MB
>>>       KrpcDeferredRpcs: Total=0 Peak=26.29 KB
>>>     KrpcDataStreamSender (dst_id=8): Total=168.00 B Peak=168.00 B
>>>     CodeGen: Total=28.24 KB Peak=5.23 MB
>>>   Fragment 7c4b6e3d86b1cb83:56c29eef00000005: Reservation=0 OtherMemory=0 Total=0 Peak=20.90 MB
>>>     UNION_NODE (id=0): Total=0 Peak=1.39 MB
>>>     KUDU_SCAN_NODE (id=1): Total=0 Peak=18.08 MB
>>>     HDFS_SCAN_NODE (id=2): Reservation=0 ReservationLimit=0 OtherMemory=0 Total=0 Peak=4.00 KB
>>>     KrpcDataStreamSender (dst_id=7): Total=0 Peak=168.98 KB
>>>     CodeGen: Total=0 Peak=4.58 MB
>>>
>>>
>>> Can you assist me in finding a solution for this issue?
>>>
>>> Thanks & Regards
>>> Donishka
>>>
>>
>
>