[ https://issues.apache.org/jira/browse/IMPALA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong resolved IMPALA-5234.
-----------------------------------
    Resolution: Later

This has been dormant for a while. Not sure that it's a real problem in practice.

> Get rid of redundant LogError() messages
> ----------------------------------------
>
>                 Key: IMPALA-5234
>                 URL: https://issues.apache.org/jira/browse/IMPALA-5234
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 2.8.0
>            Reporter: Sailesh Mukil
>            Priority: Major
>              Labels: errorhandling
>
> In a few places in the codebase, there are redundant LogError() calls that 
> add error statuses to the error_log AND return the same error status up the 
> call stack. This results in the same error message being sent back twice to 
> the client. We need to find all such cases and remove these redundant 
> LogError() calls.
> Repro:
> {code}
> set mem_limit=1m;
> select * from tpch.lineitem;
> {code}
> Output:
> {code}
> [localhost:21000] > select * from tpch.lineitem;
> Query: select * from tpch.lineitem
> Query submitted at: 2017-04-20 12:04:22 (Coordinator: http://localhost:25000)
> Query progress can be monitored at: 
> http://localhost:25000/query_plan?query_id=6048492f67282f78:ef0f2bd400000000
> WARNINGS: Memory limit exceeded: Failed to allocate tuple buffer
> HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
> Error occurred on backend localhost:22000 by fragment 
> 6048492f67282f78:ef0f2bd400000003
> Memory left in process limit: 8.24 GB
> Memory left in query limit: -7369392.00 B
> Query(6048492f67282f78:ef0f2bd400000000): memory limit exceeded. Limit=1.00 
> MB Total=8.03 MB Peak=8.03 MB
>   Fragment 6048492f67282f78:ef0f2bd400000000: Total=8.00 KB Peak=8.00 KB
>     EXCHANGE_NODE (id=1): Total=0 Peak=0
>     DataStreamRecvr: Total=0 Peak=0
>     PLAN_ROOT_SINK: Total=0 Peak=0
>     CodeGen: Total=0 Peak=0
>   Block Manager: Total=0 Peak=0
>   Fragment 6048492f67282f78:ef0f2bd400000003: Total=8.02 MB Peak=8.02 MB
>     HDFS_SCAN_NODE (id=0): Total=8.01 MB Peak=8.01 MB
>     DataStreamSender (dst_id=1): Total=688.00 B Peak=688.00 B
>     CodeGen: Total=0 Peak=0
> Memory limit exceeded: Failed to allocate tuple buffer
> HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
> Error occurred on backend localhost:22000 by fragment 
> 6048492f67282f78:ef0f2bd400000003
> Memory left in process limit: 8.24 GB
> Memory left in query limit: -7369392.00 B
> Query(6048492f67282f78:ef0f2bd400000000): memory limit exceeded. Limit=1.00 
> MB Total=8.03 MB Peak=8.03 MB
>   Fragment 6048492f67282f78:ef0f2bd400000000: Total=8.00 KB Peak=8.00 KB
>     EXCHANGE_NODE (id=1): Total=0 Peak=0
>     DataStreamRecvr: Total=0 Peak=0
>     PLAN_ROOT_SINK: Total=0 Peak=0
>     CodeGen: Total=0 Peak=0
>   Block Manager: Total=0 Peak=0
>   Fragment 6048492f67282f78:ef0f2bd400000003: Total=8.02 MB Peak=8.02 MB
>     HDFS_SCAN_NODE (id=0): Total=8.01 MB Peak=8.01 MB
>     DataStreamSender (dst_id=1): Total=688.00 B Peak=688.00 B
>     CodeGen: Total=0 Peak=0
> {code}
> This can be traced back to:
> https://github.com/apache/incubator-impala/blob/a50c344077f6c9bbea3d3cbaa2e9146ba20ac9a9/be/src/runtime/row-batch.cc#L462
> https://github.com/apache/incubator-impala/blob/master/be/src/runtime/mem-tracker.cc#L319-L320
> There are more such examples that need to be taken care of too.
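> The pattern boils down to a callee that both records the error in the query's error_log and returns the same Status to its caller, so the client sees the message twice. A minimal sketch of the anti-pattern and the intended fix, using simplified stand-ins for the backend's Status and RuntimeState types (not the actual Impala classes):
> {code}
> #include <string>
> #include <vector>
>
> // Simplified stand-in for impala::Status (hypothetical, not the real class).
> struct Status {
>   std::string msg;
>   bool ok() const { return msg.empty(); }
> };
>
> // Simplified stand-in for the per-query error log kept on RuntimeState.
> struct RuntimeState {
>   std::vector<std::string> error_log;
>   void LogError(const std::string& msg) { error_log.push_back(msg); }
> };
>
> // Anti-pattern: the callee both logs the error and returns it, so the same
> // message reaches the client once via the error_log and again via the
> // propagated Status.
> Status AllocateTupleBuffer(RuntimeState* state) {
>   Status status{"Memory limit exceeded: Failed to allocate tuple buffer"};
>   state->LogError(status.msg);  // copy #1: added to the error_log
>   return status;                // copy #2: returned up the call stack
> }
>
> // Fix: only return the Status; let a single layer decide whether it also
> // belongs in the error_log.
> Status AllocateTupleBufferFixed(RuntimeState* /*state*/) {
>   return Status{"Memory limit exceeded: Failed to allocate tuple buffer"};
> }
> {code}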



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
