[ https://issues.apache.org/jira/browse/IMPALA-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565671#comment-16565671 ]

ASF subversion and git services commented on IMPALA-6625:
---------------------------------------------------------

Commit 672a271fd0966bd77f38eda9b6f1e768415bac04 in impala's branch 
refs/heads/master from poojanilangekar
[ https://git-wip-us.apache.org/repos/asf?p=impala.git;h=672a271 ]

IMPALA-7234: Improve memory estimates produced by the Planner

Previously, the planner used getMajorityFormat() to estimate
the memory requirements of a scan's partitions. Additionally,
before IMPALA-6625 was merged, the majority format for a
multi-format table with no numerical majority was computed using
a HashMap, producing non-deterministic results. This change
ensures that the memory estimate is deterministic and always
based on the partition with the maximum memory requirement.

Testing: Ran all PlannerTests. Also, modified plans of scans with
multiple partitions to ensure that the memory estimate produced
corresponds to the partition with the maximum requirement.

Change-Id: I0666ae3d45fbd8615d3fa9a8626ebd29cf94fb4b
Reviewed-on: http://gerrit.cloudera.org:8080/11001
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
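The deterministic estimate described in the commit message can be sketched as follows. This is a hypothetical illustration, not Impala's actual code: instead of deriving the estimate from a "majority" file format (whose tie-breaking depended on HashMap iteration order), it takes the maximum over per-partition estimates, which is deterministic regardless of iteration order. The class and method names are illustrative.

```java
import java.util.Arrays;
import java.util.List;

public class MemEstimate {
  // Return the largest per-partition memory estimate (bytes). Unlike a
  // majority-format vote with HashMap-order tie-breaking, max() yields the
  // same answer no matter how the partitions are ordered.
  static long maxPartitionEstimate(List<Long> perPartitionBytes) {
    long max = 0;
    for (long b : perPartitionBytes) max = Math.max(max, b);
    return max;
  }

  public static void main(String[] args) {
    // Three partitions with 64 MB, 128 MB, and 32 MB estimates.
    List<Long> estimates = Arrays.asList(64L << 20, 128L << 20, 32L << 20);
    System.out.println(maxPartitionEstimate(estimates)); // prints the largest estimate
  }
}
```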


> Skip dictionary and collection conjunct assignment for non-Parquet scans.
> -------------------------------------------------------------------------
>
>                 Key: IMPALA-6625
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6625
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Frontend
>    Affects Versions: Impala 2.9.0, Impala 2.10.0, Impala 2.11.0
>            Reporter: Alexander Behm
>            Assignee: Pooja Nilangekar
>            Priority: Critical
>              Labels: perf, planner
>
> In HdfsScanNode.init() we try to assign dictionary and collection conjuncts 
> even for non-Parquet scans. Such predicates only make sense for Parquet 
> scans, so there is no point in collecting them for other scans.
> The current behavior is undesirable because:
> * init() can be substantially slower because assigning dictionary filters may 
> involve evaluating exprs in the BE which can be expensive
> * the explain plan of non-Parquet scans may have a section "parquet 
> dictionary predicates" which is confusing/misleading
> Relevant code snippet from HdfsScanNode:
> {code}
> @Override
>   public void init(Analyzer analyzer) throws ImpalaException {
>     conjuncts_ = orderConjunctsByCost(conjuncts_);
>     checkForSupportedFileFormats();
>     assignCollectionConjuncts(analyzer);
>     computeDictionaryFilterConjuncts(analyzer);
>     // compute scan range locations with optional sampling
>     Set<HdfsFileFormat> fileFormats = computeScanRangeLocations(analyzer);
> ...
>     if (fileFormats.contains(HdfsFileFormat.PARQUET)) { // <-- assignment should go in here
>       computeMinMaxTupleAndConjuncts(analyzer);
>     }
> ...
> }
> {code}
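The restructuring suggested in the snippet above can be sketched as a small standalone example. This is a hypothetical illustration, not Impala's actual implementation: the scan's file formats are computed first, and the Parquet-only steps (dictionary and collection conjunct assignment, min/max conjuncts) run only when Parquet is present. The method names mirror the snippet; returning the executed steps as a list is purely for demonstration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ScanInitSketch {
  enum HdfsFileFormat { PARQUET, TEXT, AVRO }

  // Sketch of init(): format-agnostic work runs unconditionally, while
  // Parquet-only conjunct assignment is gated on the scan's file formats.
  static List<String> init(Set<HdfsFileFormat> fileFormats) {
    List<String> steps = new ArrayList<>();
    steps.add("orderConjunctsByCost");
    steps.add("checkForSupportedFileFormats");
    steps.add("computeScanRangeLocations");
    if (fileFormats.contains(HdfsFileFormat.PARQUET)) {
      // These predicates only make sense for Parquet, so non-Parquet scans
      // skip them entirely.
      steps.add("assignCollectionConjuncts");
      steps.add("computeDictionaryFilterConjuncts");
      steps.add("computeMinMaxTupleAndConjuncts");
    }
    return steps;
  }
}
```

With this gating, a text-only scan never pays the cost of evaluating dictionary-filter exprs, and its explain plan never shows a "parquet dictionary predicates" section.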



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
