isidentical opened a new pull request, #3837:
URL: https://github.com/apache/arrow-datafusion/pull/3837

   # Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Part of #3813.
   
   # Rationale for this change
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.  
   -->
   This came up during review of the initial join cardinality computation PR 
([link](https://github.com/apache/arrow-datafusion/pull/3787#discussion_r992751749)):
 the logic only produced an estimate when the distinct count was available 
directly in the statistics. That works for statistics where the distinct count 
has already been computed (e.g. statistics propagated from aggregates), but 
statistics originating from initial user input rarely carry a `distinct_count` 
(e.g. there is no way to store a distinct count when exporting a Parquet file 
from pandas; neither of the official backends [pyarrow/fastparquet] supports 
it in their write APIs). One thing we can do instead is use the `min`/`max` 
values, which are nearly universal at this point, to calculate the maximum 
possible distinct count (which is what we actually need for selectivity).
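   For context, the distinct count feeds the textbook equi-join cardinality 
formula, |R ⋈ S| ≈ |R|·|S| / max(V(R,a), V(S,a)). A minimal sketch of that 
formula (function and parameter names here are illustrative, not DataFusion's 
actual API):

```rust
/// Classic equi-join cardinality estimate:
/// |R join S| ~= |R| * |S| / max(V(R), V(S)).
/// Names are illustrative, not DataFusion's actual code.
fn estimate_join_rows(
    left_rows: u64,
    right_rows: u64,
    left_distinct: u64,
    right_distinct: u64,
) -> u64 {
    // Guard against division by zero when no distinct-count info is known.
    let denominator = left_distinct.max(right_distinct).max(1);
    left_rows * right_rows / denominator
}

fn main() {
    // 1000 x 500 rows joined on a key with up to 100 distinct values.
    assert_eq!(estimate_join_rows(1000, 500, 100, 50), 5000);
}
```

   Without a distinct count the denominator cannot be formed, which is exactly 
why a fallback bound is useful.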
   
   # What changes are included in this PR?
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   A fallback option for inferring the maximum distinct count when the actual 
distinct count is not available. At this point it only works with numeric 
values (more specifically, integers). We could technically determine the range 
for timestamps or floats as well, but the resulting bound would not be 
meaningful: it would amount to counting every representable value within the 
precision boundaries, which is very unlikely to reflect real-world data. This 
is open for discussion.
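   The fallback boils down to a simple bound: an integer column whose values 
lie in `[min, max]` can have at most `max - min + 1` distinct values, and 
never more distinct values than rows. A hedged sketch of that idea (the 
function name is hypothetical, not the PR's actual code):

```rust
/// Upper bound on the distinct count of an integer column, derived from its
/// `min`/`max` statistics. Hypothetical sketch, not DataFusion's actual code.
fn max_distinct_from_range(min: i64, max: i64, num_rows: u64) -> Option<u64> {
    if max < min {
        return None; // inconsistent statistics; no bound can be derived
    }
    // At most (max - min + 1) distinct integers fit in the closed range;
    // widen to i128 so the subtraction cannot overflow for extreme bounds.
    let range = u64::try_from(max as i128 - min as i128 + 1).unwrap_or(u64::MAX);
    // A column can never hold more distinct values than it has rows.
    Some(range.min(num_rows))
}

fn main() {
    // Narrow range dominates: at most 10 distinct values in [1, 10].
    assert_eq!(max_distinct_from_range(1, 10, 100), Some(10));
    // Row count dominates: 50 rows cannot produce more than 50 distinct values.
    assert_eq!(max_distinct_from_range(0, 1_000_000, 50), Some(50));
}
```

   The same `range = max - min + 1` trick is exactly what breaks down for 
floats and timestamps, since their "range" counts every representable value 
at full precision.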
   
   # Are there any user-facing changes?
   <!--
   If there are user-facing changes then we may require documentation to be 
updated before approving the PR.
   -->
   No backwards-incompatible changes.
   
   <!--
   If there are any breaking changes to public APIs, please add the `api 
change` label.
   -->


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
