[ https://issues.apache.org/jira/browse/IMPALA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549857#comment-16549857 ]
Tim Armstrong commented on IMPALA-6626:
---------------------------------------

I can't reproduce this
{noformat}
[localhost:21000] default> set mem_limit=1; explain select * from tpch_parquet.lineitem where concat(l_returnflag,'test') = 'test';
MEM_LIMIT set to 1
Query: explain select * from tpch_parquet.lineitem where concat(l_returnflag,'test') = 'test'
+----------------------------------------------------------------------------------------+
| Explain String                                                                          |
+----------------------------------------------------------------------------------------+
| Max Per-Host Resource Reservation: Memory=40.00MB Threads=3                             |
| Per-Host Resource Estimates: Memory=640.00MB                                            |
|                                                                                         |
| F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1                                   |
| |  Per-Host Resources: mem-estimate=0B mem-reservation=0B thread-reservation=1          |
| PLAN-ROOT SINK                                                                          |
| |  mem-estimate=0B mem-reservation=0B thread-reservation=0                              |
| |                                                                                       |
| 01:EXCHANGE [UNPARTITIONED]                                                             |
| |  mem-estimate=0B mem-reservation=0B thread-reservation=0                              |
| |  tuple-ids=0 row-size=263B cardinality=600122                                         |
| |                                                                                       |
| F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3                                          |
| Per-Host Resources: mem-estimate=640.00MB mem-reservation=40.00MB thread-reservation=2  |
| 00:SCAN HDFS [tpch_parquet.lineitem, RANDOM]                                            |
|    partitions=1/1 files=3 size=193.71MB                                                 |
|    predicates: concat(l_returnflag, 'test') = 'test'                                    |
|    stored statistics:                                                                   |
|      table: rows=6001215 size=193.71MB                                                  |
|      columns: all                                                                       |
|    extrapolated-rows=disabled max-scan-range-rows=2141802                               |
|    mem-estimate=640.00MB mem-reservation=40.00MB thread-reservation=1                   |
|    tuple-ids=0 row-size=263B cardinality=600122                                         |
+----------------------------------------------------------------------------------------+
Fetched 23 row(s) in 0.01s
{noformat}
It did fail in the backend, but behaved as described in the JIRA.
{noformat}
W0719 13:50:58.339025  7834 HdfsScanNode.java:703] Skipping dictionary filter because backend evaluation failed: concat(l_returnflag, 'test') = 'test'
Java exception follows:
org.apache.impala.common.InternalException: Memory limit exceeded: Could not allocate constant expression value
Query(8545a96e025fc278:80c20ce400000000) could not allocate 16.00 B without exceeding limit.
Error occurred on backend tarmstrong-box:22000 by fragment 0:0
Memory left in process limit: 6.81 GB
Memory left in query limit: 1.00 B
Query(8545a96e025fc278:80c20ce400000000): Limit=1.00 B Total=0 Peak=0
  <unnamed>: Total=0 Peak=0
    at org.apache.impala.service.FeSupport.NativeEvalExprsWithoutRow(Native Method)
    at org.apache.impala.service.FeSupport.EvalExprsWithoutRow(FeSupport.java:208)
    at org.apache.impala.service.FeSupport.EvalExprWithoutRow(FeSupport.java:163)
    at org.apache.impala.service.FeSupport.EvalPredicate(FeSupport.java:221)
    at org.apache.impala.analysis.Analyzer.isTrueWithNullSlots(Analyzer.java:1919)
    at org.apache.impala.planner.HdfsScanNode.addDictionaryFilter(HdfsScanNode.java:699)
    at org.apache.impala.planner.HdfsScanNode.computeDictionaryFilterConjuncts(HdfsScanNode.java:725)
    at org.apache.impala.planner.HdfsScanNode.init(HdfsScanNode.java:386)
    at org.apache.impala.planner.SingleNodePlanner.createHdfsScanPlan(SingleNodePlanner.java:1261)
    at org.apache.impala.planner.SingleNodePlanner.createScanNode(SingleNodePlanner.java:1305)
    at org.apache.impala.planner.SingleNodePlanner.createTableRefNode(SingleNodePlanner.java:1513)
    at org.apache.impala.planner.SingleNodePlanner.createTableRefsPlan(SingleNodePlanner.java:776)
    at org.apache.impala.planner.SingleNodePlanner.createSelectPlan(SingleNodePlanner.java:614)
    at org.apache.impala.planner.SingleNodePlanner.createQueryPlan(SingleNodePlanner.java:257)
    at org.apache.impala.planner.SingleNodePlanner.createSingleNodePlan(SingleNodePlanner.java:147)
    at org.apache.impala.planner.Planner.createPlan(Planner.java:101)
    at org.apache.impala.service.Frontend.createExecRequest(Frontend.java:969)
    at org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1090)
    at org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:165)
{noformat}

> Failure to assign dictionary predicates should not result in query failure
> --------------------------------------------------------------------------
>
>                 Key: IMPALA-6626
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6626
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Frontend
>    Affects Versions: Impala 2.9.0, Impala 2.10.0, Impala 2.11.0
>            Reporter: Alexander Behm
>            Priority: Major
>
> Assigning dictionary predicates to Parquet scans may involve evaluation of
> expressions in the BE which could fail for various reasons. Such failures
> should lead to non-assignment of dictionary predicates, but not to query
> failure.
> See HdfsScanNode:
> {code}
> private void addDictionaryFilter(...) {
>   ...
>   try {
>     if (analyzer.isTrueWithNullSlots(conjunct)) return;
>   } catch (InternalException e) {  // <--- does not handle Exception, which will cause the query to fail
>     // Expr evaluation failed in the backend. Skip this conjunct since we cannot
>     // determine whether it is safe to apply it against a dictionary.
>     LOG.warn("Skipping dictionary filter because backend evaluation failed: "
>         + conjunct.toSql(), e);
>     return;
>   }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org
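The fix direction the description implies (degrading gracefully when conjunct evaluation fails for any reason, not only InternalException) can be sketched in a small self-contained toy. This is not the actual Impala patch; every name here is a hypothetical stand-in that merely mirrors the shape of the snippet quoted above:

```java
// Self-contained sketch, NOT Impala code: stand-ins mirroring the quoted snippet.
public class DictionaryFilterSketch {
    // Stand-in for Analyzer.isTrueWithNullSlots(): backend constant-expression
    // evaluation that can fail, e.g. under an absurdly small mem_limit.
    static boolean isTrueWithNullSlots(String conjunct) throws Exception {
        throw new Exception(
            "Memory limit exceeded: Could not allocate constant expression value");
    }

    // Mirrors the shape of HdfsScanNode.addDictionaryFilter(), but catches
    // Exception broadly: any evaluation failure merely skips assigning the
    // dictionary filter (returns false) instead of failing the whole query.
    static boolean addDictionaryFilter(String conjunct) {
        try {
            // A conjunct that is true with all-NULL slots is unsafe to apply
            // against a dictionary, so skip it.
            if (isTrueWithNullSlots(conjunct)) return false;
        } catch (Exception e) {
            System.out.println(
                "Skipping dictionary filter because backend evaluation failed: "
                    + conjunct);
            return false; // lose the optimization, keep planning
        }
        return true; // safe to use as a dictionary filter
    }

    public static void main(String[] args) {
        boolean assigned =
            addDictionaryFilter("concat(l_returnflag, 'test') = 'test'");
        System.out.println("filter assigned: " + assigned);
    }
}
```

With the stub always throwing, the planner-side caller only loses the dictionary-filter optimization; the query itself proceeds, which is the behavior the issue asks for.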