isidentical opened a new pull request, #3787:
URL: https://github.com/apache/arrow-datafusion/pull/3787
# Which issue does this PR close?
<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases. You can
link an issue to this PR using the GitHub syntax. For example `Closes #123`
indicates that this PR will close issue #123.
-->
This PR is an initial step towards #128, although it does not close it (more
info below).
# Rationale for this change
<!--
Why are you proposing this change? If this is already explained clearly in
the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand your
changes and offer better suggestions for fixes.
-->
DataFusion's statistics can be helpful when making cost-based optimizations
(e.g. the probe ordering for hash joins uses statistics to determine which
side is 'heavier'). However, a few operations currently produce no statistics
at all, because their output is hard to predict up front and we do not want to
wait for the full results before optimizing (the physical optimizer pass, for
example, is not adaptive; it runs once at planning time with whatever
statistics can be derived statically). There has been extensive academic and
practical work on this kind of estimation, ranging from very complex
algorithms to much simpler ones. This PR is just an introduction towards
implementing the basic principles of join cardinality computation from Spark's
Catalyst optimizer (which AFAIK is heavily inspired by Hive's Optiq CBO), an
approach that is simple and heavily battle-tested ([initially shared by
@Dandandan & @alamb on #128](https://github.com/apache/arrow-datafusion/issues/128#issuecomment-826832822)).
# What changes are included in this PR?
<!--
There is no need to duplicate the description in the issue here but it is
sometimes worth providing a summary of the individual changes in this PR.
-->
The Catalyst optimizer is very extensive in terms of cost calculation, so this
PR takes just the initial steps towards something similar. It implements
cardinality estimation (without filter selectivity, which would probably be
the next follow-up) for inner joins, as well as for the other join types whose
cardinality can be derived from the inner join estimate. More detail on the
algorithms is available in the code, as well as in the [blog
post](https://databricks.com/blog/2017/08/31/cost-based-optimizer-in-apache-spark-2-2.html)
shared by the Spark/Databricks team.
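To illustrate the core idea, here is a minimal sketch of the classic inner-join cardinality estimate that Catalyst popularized: `|A ⋈ B| ≈ |A| * |B| / max(ndv(A.k), ndv(B.k))`, where `ndv` is the number of distinct values of the join key on each side. The function name and signature below are hypothetical for illustration, not this PR's actual API:

```rust
/// Estimate the output row count of an equi inner join, given row counts and
/// the number of distinct join-key values (ndv) on each side.
/// Returns `None` when no distinct-value statistics are available.
/// (Hypothetical sketch; not DataFusion's real interface.)
fn estimate_inner_join_rows(
    left_rows: usize,
    right_rows: usize,
    left_ndv: usize,  // distinct join-key values on the left
    right_ndv: usize, // distinct join-key values on the right
) -> Option<usize> {
    let max_ndv = left_ndv.max(right_ndv);
    if max_ndv == 0 {
        return None; // cannot estimate without ndv statistics
    }
    // Assumes uniform key distribution and containment of the smaller
    // key domain in the larger one, as in the Catalyst/Hive approach.
    Some(left_rows * right_rows / max_ndv)
}

fn main() {
    // 1_000 x 10_000 rows joined on a key with at most 500 distinct values:
    // 1_000 * 10_000 / 500 = 20_000 estimated output rows.
    assert_eq!(
        estimate_inner_join_rows(1_000, 10_000, 200, 500),
        Some(20_000)
    );
}
```

Other join types can then bound their estimates against this value (e.g. a left outer join produces at least `left_rows` rows).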
# Are there any user-facing changes?
<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->
Yes, this brings more statistics and more potential for optimization.
<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->