[ 
https://issues.apache.org/jira/browse/CALCITE-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037588#comment-16037588
 ] 

Joshua Walters edited comment on CALCITE-1787 at 6/5/17 9:06 PM:
-----------------------------------------------------------------

[~julianhyde] There is a problem with this approach due to how Druid schemas 
are usually designed. This link explains why: 
http://druid.io/docs/latest/ingestion/schema-design.html#high-cardinality-dimensions-e-g-unique-ids

In Druid, you don't want to store very high-cardinality columns like 
{{user_id}} as dimensions; you want to store them as aggregates (sketches). 
This is because Druid stores a rollup row for each distinct combination of 
dimension values. If a dimension column has cardinality in the billions, Druid 
will have to store billions of rows. In practice, if a dimension has a 
cardinality above a few hundred thousand, it should be a metric.

In summary, if you have a column like {{user_id}} in Druid, you store it only 
as a metric, never as a dimension. You can't filter on it; it can only be an 
output metric.
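
To illustrate, a minimal ingestion-spec fragment might declare {{user_id}} as a 
theta sketch metric rather than a dimension; this is a sketch under assumptions, 
and the dimension and metric names here are illustrative, not from the issue:

{code:json}
{
  "dimensionsSpec": {
    "dimensions": ["country", "device"]
  },
  "metricsSpec": [
    {"type": "count", "name": "count"},
    {
      "type": "thetaSketch",
      "name": "user_id_sketch",
      "fieldName": "user_id"
    }
  ]
}
{code}

With a spec like this, rollup only multiplies across the low-cardinality 
dimensions, and {{user_id}} survives only inside the sketch, where it can be 
aggregated for approximate distinct counts but not filtered or grouped on.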



> thetaSketch Support for Druid Adapter
> -------------------------------------
>
>                 Key: CALCITE-1787
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1787
>             Project: Calcite
>          Issue Type: New Feature
>          Components: druid
>    Affects Versions: 1.12.0
>            Reporter: Zain Humayun
>            Assignee: Zain Humayun
>            Priority: Minor
>
> Currently, the Druid adapter does not support the 
> [thetaSketch|http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html]
>  aggregate type, which is used to quickly estimate the cardinality of a 
> column. Many Druid instances support theta sketches, so I think it would be 
> a nice feature to have.
>
> I've been looking at the Druid adapter, and propose we add a new DruidType 
> called {{thetaSketch}} and then add logic to the {{getJsonAggregation}} 
> method in class {{DruidQuery}} to generate the {{thetaSketch}} aggregate. 
> This will require access to column type information, so that the 
> thetaSketch aggregate is only produced when the column's type is 
> {{thetaSketch}}.
>
> Also, I've noticed that a {{hyperUnique}} DruidType is currently defined, 
> but a {{hyperUnique}} aggregate is never produced. Since both are 
> approximate aggregators, I could also add the logic for {{hyperUnique}}.
>
> I'd love to hear your thoughts on my approach, and any suggestions you have 
> for this feature.
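
For reference, the JSON that {{getJsonAggregation}} would need to emit for a 
sketch column is a Druid thetaSketch aggregator of roughly the following 
shape; the datasource, interval, and field names below are assumptions for 
illustration only:

{code:json}
{
  "queryType": "timeseries",
  "dataSource": "events",
  "granularity": "all",
  "intervals": ["2017-01-01/2017-06-01"],
  "aggregations": [
    {
      "type": "thetaSketch",
      "name": "unique_users",
      "fieldName": "user_id_sketch"
    }
  ]
}
{code}

Here {{fieldName}} refers to the sketch metric created at ingestion time, 
which is why the adapter needs column type information to know when this 
aggregator (rather than a plain count or sum) is the right one to generate.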



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
