[
https://issues.apache.org/jira/browse/HIVE-22239?focusedWorklogId=324951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324951
]
ASF GitHub Bot logged work on HIVE-22239:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 08/Oct/19 08:47
Start Date: 08/Oct/19 08:47
Worklog Time Spent: 10m
Work Description: kgyrtkirk commented on pull request #787: HIVE-22239
URL: https://github.com/apache/hive/pull/787#discussion_r332377711
##########
File path:
standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift
##########
@@ -562,14 +562,27 @@ struct DateColumnStatsData {
5: optional binary bitVectors
}
+struct Timestamp {
+1: required i64 secondsSinceEpoch
Review comment:
I'm afraid there is a downside to throwing away precision, and it may get us into trouble later:
If we truncate to seconds, we may not be able to extend the timestamp logic to the stats optimizer, since we would no longer be working with the real values.
Consider the following:
```sql
select '2019-11-11 11:11:11.400' < '2019-11-11 11:11:11.300'
```
If we round to seconds, and the left-hand side comes from a table as the column's max value, the stats optimizer could incorrectly deduce "true" for the above.
Would it complicate things much to use a non-rounded timestamp and retain milliseconds/microseconds as well?
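As an illustration of the concern (a standalone sketch, not Hive code), truncating the stored max value to whole seconds can flip the outcome of the comparison above:
```java
import java.sql.Timestamp;

public class TruncationDemo {
  public static void main(String[] args) {
    // The real max value recorded for a timestamp column.
    Timestamp realMax = Timestamp.valueOf("2019-11-11 11:11:11.400");
    // What the stats would hold if only whole seconds were kept.
    Timestamp truncatedMax = new Timestamp((realMax.getTime() / 1000) * 1000);
    // The constant on the right-hand side of the comparison.
    Timestamp literal = Timestamp.valueOf("2019-11-11 11:11:11.300");

    // With full precision the comparison is false: 11.400 is not before 11.300.
    System.out.println("full precision: " + realMax.before(literal));      // false
    // After truncation it becomes true: 11.000 is before 11.300, so a
    // stats-based rewrite working from the truncated value could go wrong.
    System.out.println("truncated:      " + truncatedMax.before(literal)); // true
  }
}
```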
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 324951)
Time Spent: 1h 50m (was: 1h 40m)
> Scale data size using column value ranges
> -----------------------------------------
>
> Key: HIVE-22239
> URL: https://issues.apache.org/jira/browse/HIVE-22239
> Project: Hive
> Issue Type: Improvement
> Components: Physical Optimizer
> Reporter: Jesus Camacho Rodriguez
> Assignee: Jesus Camacho Rodriguez
> Priority: Major
> Labels: pull-request-available
> Attachments: HIVE-22239.01.patch, HIVE-22239.02.patch,
> HIVE-22239.patch
>
> Time Spent: 1h 50m
> Remaining Estimate: 0h
>
> Currently, min/max values for columns are only used to determine whether a
> certain range filter falls out of range and thus filters all rows or none at
> all. If it does not, we just use a heuristic that the condition will filter
> 1/3 of the input rows. Instead of using that heuristic, we can use another
> one that assumes that data will be uniformly distributed across that range,
> and calculate the selectivity for the condition accordingly.
> This patch also includes the propagation of min/max column values from
> statistics to the optimizer for timestamp type.
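As a rough sketch of the idea in the description above (an illustration only, not the actual HIVE-22239 patch), the selectivity of a predicate such as `col < c` under the uniform-distribution assumption could be estimated like this:
```java
// Illustration only: estimate the selectivity of "col < c" by assuming the
// column's values are uniformly distributed between its min and max stats.
public class RangeSelectivity {

  /** Fraction of rows expected to satisfy col < c, given min/max column stats. */
  static double selectivityLessThan(double min, double max, double c) {
    if (c <= min) {
      return 0.0;   // the filter removes all rows
    }
    if (c > max) {
      return 1.0;   // the filter removes no rows
    }
    // Uniform-distribution assumption: the fraction of the [min, max] range
    // that lies below the constant.
    return (c - min) / (max - min);
  }

  public static void main(String[] args) {
    // The old heuristic would return a fixed 1/3 regardless of the constant.
    System.out.println(selectivityLessThan(0, 100, 25));  // 0.25
    System.out.println(selectivityLessThan(0, 100, 90));  // 0.9
  }
}
```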
--
This message was sent by Atlassian Jira
(v8.3.4#803005)