[jira] [Commented] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-16 Thread Slim Bouguerra (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136760#comment-17136760
 ] 

Slim Bouguerra commented on CALCITE-4065:
-

[~danny0405] has suggested using
{code:java}
org.apache.calcite.rel.type.StructKind#PEEK_FIELDS
{code}
when building the struct. I have tried his suggestion, but I am still facing the same 
issue; see [^test_cases_CALCITE-4065.patch].

[~danny0405], please let me know if I am missing something, and thanks for your 
help.
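For readers following the thread, the rewrite that the flattener fails to perform can be illustrated with a dependency-free sketch. This is plain Java over strings, not Calcite's {{RexNode}} API; the class and method names are hypothetical and only model the idea that a field access applied directly to a ROW constructor (e.g. {{ROW(1, ROW(2)).EXPR$1}}) can be reduced to the matching operand:

```java
import java.util.Arrays;
import java.util.List;

public class RowFieldAccessSketch {

    /**
     * Reduces ROW(op0, op1, ...).EXPR$i to operand i. Field names follow
     * Calcite's default EXPR$&lt;index&gt; naming for anonymous row fields.
     * Purely illustrative; Calcite's real flattener works on RexNode trees.
     */
    public static String simplifyFieldAccess(List<String> rowOperands, String fieldName) {
        // "EXPR$1" -> index 1
        int index = Integer.parseInt(fieldName.substring("EXPR$".length()));
        return rowOperands.get(index);
    }

    public static void main(String[] args) {
        // Models ROW(1, ROW(2)) with its two operands as strings.
        List<String> operands = Arrays.asList("1", "ROW(2)");
        // The nested-row projection from the failing query:
        System.out.println(simplifyFieldAccess(operands, "EXPR$1")); // prints ROW(2)
    }
}
```

In Calcite terms, {{RelStructuredTypeFlattener.flattenProjection}} would need an analogous case for {{RexFieldAccess}} over a {{RexCall}} to the ROW operator instead of falling through to {{Util.needToImplement}}.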

> Projecting a nested Row fails with 
> org.apache.calcite.util.Util.needToImplement
> ---
>
> Key: CALCITE-4065
> URL: https://issues.apache.org/jira/browse/CALCITE-4065
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.22.0
>Reporter: Slim Bouguerra
>Priority: Major
> Attachments: test_cases_CALCITE-4065.patch
>
>
> The Calcite Row operator does not support projecting a nested row.
> Take this example, where the goal is to select the inner row
> {code:java}
> ROW(2)
> {code}
> The full query is
> {code:java}
> select row(1,row(2))."EXPR$1" from emp
> {code}
> The query fails to plan with the exception listed below:
> {code:java}
> class org.apache.calcite.rex.RexFieldAccess: ROW(1, ROW(2)).EXPR$1
> java.lang.UnsupportedOperationException: class 
> org.apache.calcite.rex.RexFieldAccess: ROW(1, ROW(2)).EXPR$1
>   at org.apache.calcite.util.Util.needToImplement(Util.java:967)
>   at 
> org.apache.calcite.sql2rel.RelStructuredTypeFlattener.flattenProjection(RelStructuredTypeFlattener.java:699)
>   at 
> org.apache.calcite.sql2rel.RelStructuredTypeFlattener.flattenProjections(RelStructuredTypeFlattener.java:601)
>   at 
> org.apache.calcite.sql2rel.RelStructuredTypeFlattener.rewriteRel(RelStructuredTypeFlattener.java:521)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.calcite.util.ReflectUtil.invokeVisitorInternal(ReflectUtil.java:257)
>   at 
> org.apache.calcite.util.ReflectUtil.invokeVisitor(ReflectUtil.java:214)
>   at 
> org.apache.calcite.util.ReflectUtil$1.invokeVisitor(ReflectUtil.java:464)
>   at 
> org.apache.calcite.sql2rel.RelStructuredTypeFlattener$RewriteRelVisitor.visit(RelStructuredTypeFlattener.java:831)
>   at 
> org.apache.calcite.sql2rel.RelStructuredTypeFlattener.rewrite(RelStructuredTypeFlattener.java:198)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.flattenTypes(SqlToRelConverter.java:473)
>   at 
> org.apache.calcite.test.SqlToRelTestBase$TesterImpl.convertSqlToRel(SqlToRelTestBase.java:634)
>   at 
> org.apache.calcite.test.SqlToRelTestBase$TesterImpl.assertConvertsTo(SqlToRelTestBase.java:749)
>   at 
> org.apache.calcite.test.SqlToRelConverterTest$Sql.convertsTo(SqlToRelConverterTest.java:3864)
>   at 
> org.apache.calcite.test.SqlToRelConverterTest$Sql.ok(SqlToRelConverterTest.java:3856)
>   at 
> org.apache.calcite.test.SqlToRelConverterTest.testDotLiteralAfterRow(SqlToRelConverterTest.java:98)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:675)
>   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>   
> // org.apache.calcite.test.SqlToRelConverterTest#testDotLiteralAfterRow
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-16 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Attachment: test_cases_CALCITE-4065.patch



[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Description: 

[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Description: 

[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Component/s: core


[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Environment: (was: {code:java}
// code placeholder
{code})


[jira] [Updated] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated CALCITE-4065:

Affects Version/s: 1.22.0


[jira] [Created] (CALCITE-4065) Projecting a nested Row fails with org.apache.calcite.util.Util.needToImplement

2020-06-15 Thread Slim Bouguerra (Jira)
Slim Bouguerra created CALCITE-4065:
---

 Summary: Projecting a nested Row fails with 
org.apache.calcite.util.Util.needToImplement
 Key: CALCITE-4065
 URL: https://issues.apache.org/jira/browse/CALCITE-4065
 Project: Calcite
  Issue Type: Bug
Reporter: Slim Bouguerra


Not sure if I am missing something, but it seems the Calcite ROW operator does 
not support projecting a nested row.

Take this example where the goal is to project ROW(2)
{code:java}
select row(1,row(2)).\"EXPR$1\" from emp
{code}
The query fails to plan with the exception listed below
{code:java}
class org.apache.calcite.rex.RexFieldAccess: ROW(1, ROW(2)).EXPR$1
java.lang.UnsupportedOperationException: class 
org.apache.calcite.rex.RexFieldAccess: ROW(1, ROW(2)).EXPR$1
at org.apache.calcite.util.Util.needToImplement(Util.java:967)
at 
org.apache.calcite.sql2rel.RelStructuredTypeFlattener.flattenProjection(RelStructuredTypeFlattener.java:699)
at 
org.apache.calcite.sql2rel.RelStructuredTypeFlattener.flattenProjections(RelStructuredTypeFlattener.java:601)
at 
org.apache.calcite.sql2rel.RelStructuredTypeFlattener.rewriteRel(RelStructuredTypeFlattener.java:521)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.calcite.util.ReflectUtil.invokeVisitorInternal(ReflectUtil.java:257)
at 
org.apache.calcite.util.ReflectUtil.invokeVisitor(ReflectUtil.java:214)
at 
org.apache.calcite.util.ReflectUtil$1.invokeVisitor(ReflectUtil.java:464)
at 
org.apache.calcite.sql2rel.RelStructuredTypeFlattener$RewriteRelVisitor.visit(RelStructuredTypeFlattener.java:831)
at 
org.apache.calcite.sql2rel.RelStructuredTypeFlattener.rewrite(RelStructuredTypeFlattener.java:198)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.flattenTypes(SqlToRelConverter.java:473)
at 
org.apache.calcite.test.SqlToRelTestBase$TesterImpl.convertSqlToRel(SqlToRelTestBase.java:634)
at 
org.apache.calcite.test.SqlToRelTestBase$TesterImpl.assertConvertsTo(SqlToRelTestBase.java:749)
at 
org.apache.calcite.test.SqlToRelConverterTest$Sql.convertsTo(SqlToRelConverterTest.java:3864)
at 
org.apache.calcite.test.SqlToRelConverterTest$Sql.ok(SqlToRelConverterTest.java:3856)
at 
org.apache.calcite.test.SqlToRelConverterTest.testDotLiteralAfterRow(SqlToRelConverterTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:675)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:125)
at 
org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:46)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:139)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:131)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:81)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:104)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:62)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:43)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:35)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:202)
at 
org.junit.platform.engine.support.hie
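For context, what the flattener has to do for an expression like {code}ROW(1, ROW(2)).EXPR$1{code} is map a nested field access onto a contiguous slice of the flattened field list. A toy sketch of that offset computation (a simplified model with hypothetical types, not Calcite's actual RelStructuredTypeFlattener):

```java
import java.util.List;

public class FlattenSketch {
  /** Toy type: a primitive, or a struct with a list of field types. */
  record Type(String name, List<Type> fields) {
    static Type primitive(String name) {
      return new Type(name, null);
    }
    boolean isStruct() {
      return fields != null;
    }
    /** Number of primitive slots this type occupies once flattened. */
    int flatSize() {
      return isStruct()
          ? fields.stream().mapToInt(Type::flatSize).sum()
          : 1;
    }
  }

  /** Offset of field {@code ordinal} in the flattened representation. */
  static int flatOffset(Type struct, int ordinal) {
    int offset = 0;
    for (int i = 0; i < ordinal; i++) {
      offset += struct.fields().get(i).flatSize();
    }
    return offset;
  }

  public static void main(String[] args) {
    // Model of ROW(1, ROW(2)): field 0 is an INT, field 1 is ROW(INT).
    Type inner = new Type("ROW", List.of(Type.primitive("INT")));
    Type outer = new Type("ROW", List.of(Type.primitive("INT"), inner));
    // Accessing .EXPR$1 selects inner.flatSize() fields starting here:
    System.out.println(flatOffset(outer, 1)); // 1
    System.out.println(inner.flatSize());     // 1
  }
}
```

Presumably the flattener needs an analogous computation for a RexFieldAccess over a constructed row, instead of falling through to needToImplement.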

[jira] [Commented] (CALCITE-2358) Use null literal instead of empty string as argument for timestamp_parse Druid expression

2018-06-15 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/CALCITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513499#comment-16513499
 ] 

slim bouguerra commented on CALCITE-2358:
-

[~nishantbangarwa] thanks, here is a new commit to address the comments: 
https://github.com/apache/calcite/pull/732.


> Use null literal instead of empty string as argument for timestamp_parse 
> Druid expression
> -
>
> Key: CALCITE-2358
> URL: https://issues.apache.org/jira/browse/CALCITE-2358
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> With the new ability to support null values in Druid, an empty string is no 
> longer equal to null.
> To enable the auto-format parser, the Druid {code}timestamp_parse{code} 
> function expects a null literal as the format argument.
> I have added a connection config parameter to allow a smooth transition from 
> older Druid versions to version 0.13.0 and above.
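As an illustration of the described change, a minimal sketch (hypothetical helper names and config flag, not the actual patch) of rendering the format argument as a null literal rather than an empty string, gated by a connection config parameter:

```java
public class TimestampParseFormat {

  /**
   * Renders the format argument of Druid's timestamp_parse expression.
   * Older Druid treated an empty string as null, so the legacy mode emits
   * ''; Druid 0.13.0+ expects an explicit null literal to enable the
   * auto-format parser.
   */
  static String formatArg(String format, boolean legacyNullHandling) {
    if (format != null) {
      return "'" + format + "'";
    }
    return legacyNullHandling ? "''" : "null";
  }

  static String timestampParse(String column, String format,
      boolean legacyNullHandling) {
    return "timestamp_parse(\"" + column + "\","
        + formatArg(format, legacyNullHandling) + ")";
  }

  public static void main(String[] args) {
    System.out.println(timestampParse("ts", null, true));  // timestamp_parse("ts",'')
    System.out.println(timestampParse("ts", null, false)); // timestamp_parse("ts",null)
  }
}
```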



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CALCITE-2358) Use null literal instead of empty string as argument for timestamp_parse Druid expression

2018-06-12 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/CALCITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2358:

Description: 
With the new ability to support null values in Druid, an empty string is no 
longer equal to null.
To enable the auto-format parser, the Druid {code}timestamp_parse{code} 
function expects a null literal as the format argument.
I have added a connection config parameter to allow a smooth transition from 
older Druid versions to version 0.13.0 and above.


  was:
With the new ability to support null values in Druid, empty string is not equal 
to null anymore.
To enable auto format parser {code}timestamp_parse{code} Druid function expects 
null literal as format argument.



> Use null literal instead of empty string as argument for timestamp_parse 
> Druid expression
> -
>
> Key: CALCITE-2358
> URL: https://issues.apache.org/jira/browse/CALCITE-2358
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> With the new ability to support null values in Druid, an empty string is no 
> longer equal to null.
> To enable the auto-format parser, the Druid {code}timestamp_parse{code} 
> function expects a null literal as the format argument.
> I have added a connection config parameter to allow a smooth transition from 
> older Druid versions to version 0.13.0 and above.





[jira] [Updated] (CALCITE-2358) Use null literal instead of empty string as argument for timestamp_parse Druid expression

2018-06-12 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/CALCITE-2358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2358:

Description: 
With the new ability to support null values in Druid, empty string is not equal 
to null anymore.
To enable auto format parser {code}timestamp_parse{code} Druid function expects 
null literal as format argument.


  was:
With the new ability to support null values in Druid, empty string is not equal 
to null anymore.
To enable auto format parser {code}timestamp_parser{code} Druid function 
expects null literal as format argument.



> Use null literal instead of empty string as argument for timestamp_parse 
> Druid expression
> -
>
> Key: CALCITE-2358
> URL: https://issues.apache.org/jira/browse/CALCITE-2358
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> With the new ability to support null values in Druid, an empty string is no 
> longer equal to null.
> To enable the auto-format parser, the Druid {code}timestamp_parse{code} 
> function expects a null literal as the format argument.





[jira] [Created] (CALCITE-2358) Use null literal instead of empty string as argument for timestamp_parse Druid expression

2018-06-12 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2358:
---

 Summary: Use null literal instead of empty string as argument for 
timestamp_parse Druid expression
 Key: CALCITE-2358
 URL: https://issues.apache.org/jira/browse/CALCITE-2358
 Project: Calcite
  Issue Type: Bug
  Components: druid
Reporter: slim bouguerra
Assignee: slim bouguerra


With the new ability to support null values in Druid, empty string is not equal 
to null anymore.
To enable auto format parser {code}timestamp_parser{code} Druid function 
expects null literal as format argument.






[jira] [Commented] (CALCITE-1591) Druid adapter: Use "groupBy" query with extractionFn for time dimension

2018-04-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443028#comment-16443028
 ] 

slim bouguerra commented on CALCITE-1591:
-

I think this one can be closed too; we are using extraction functions to 
project expressions on top of Druid columns.

> Druid adapter: Use "groupBy" query with extractionFn for time dimension
> ---
>
> Key: CALCITE-1591
> URL: https://issues.apache.org/jira/browse/CALCITE-1591
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>Priority: Major
>
> For queries that aggregate on the time dimension, or a function of it such as 
> {{FLOOR(__time TO DAY)}}, as of the fix for CALCITE-1579 we generate a 
> "groupBy" query that does not sort or apply limit. It would be better (in the 
> sense that Druid is doing more of the work, and Hive is doing less work) if 
> we use an extractionFn to create a dimension that we can sort on.
> In CALCITE-1578, [~nishantbangarwa] gives the following example query:
> {code}
> {
>   "queryType": "groupBy",
>   "dataSource": "druid_tpcds_ss_sold_time_subset",
>   "granularity": "ALL",
>   "dimensions": [
> "i_brand_id",
> {
>   "type" : "extraction",
>   "dimension" : "__time",
>   "outputName" :  "year",
>   "extractionFn" : {
> "type" : "timeFormat",
> "granularity" : "YEAR"
>   }
> }
>   ],
>   "limitSpec": {
> "type": "default",
> "limit": 10,
> "columns": [
>   {
> "dimension": "$f3",
> "direction": "ascending"
>   }
> ]
>   },
>   "aggregations": [
> {
>   "type": "longMax",
>   "name": "$f2",
>   "fieldName": "ss_quantity"
> },
> {
>   "type": "doubleSum",
>   "name": "$f3",
>   "fieldName": "ss_wholesale_cost"
> }
>   ],
>   "intervals": [
> "1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"
>   ]
> }
> {code}
> and for {{DruidAdapterIT.testGroupByDaySortDescLimit}}, [~bslim] suggests
> {code}
> {
>   "queryType": "groupBy",
>   "dataSource": "foodmart",
>   "granularity": "all",
>   "dimensions": [
> "brand_name",
> {
>   "type": "extraction",
>   "dimension": "__time",
>   "outputName": "day",
>   "extractionFn": {
> "type": "timeFormat",
> "granularity": "DAY"
>   }
> }
>   ],
>   "aggregations": [
> {
>   "type": "longSum",
>   "name": "S",
>   "fieldName": "unit_sales"
> }
>   ],
>   "limitSpec": {
> "type": "default",
> "limit": 30,
> "columns": [
>   {
> "dimension": "S",
> "direction": "ascending"
>   }
> ]
>   }
> }
> {code}





[jira] [Updated] (CALCITE-1775) Druid adapter: "GROUP BY ()" on empty relation should return 1 row

2018-04-18 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-1775:

Fix Version/s: 1.16.0

> Druid adapter: "GROUP BY ()" on empty relation should return 1 row
> --
>
> Key: CALCITE-1775
> URL: https://issues.apache.org/jira/browse/CALCITE-1775
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>Priority: Major
> Fix For: 1.16.0
>
>
> A "GROUP BY ()" query on an empty relation should return 1 row, but currently 
> returns 0 rows. 
> Test case in {{DruidAdapterIT}}:
> {code}
>   @Test public void testSelectCountEmpty() {
> sql("select count(*) as c from \"foodmart\" where \"product_id\" < 0")
> .returnsUnordered("C=0");
>   }
> {code}
> The query should return one row, but returns 0 rows.





[jira] [Commented] (CALCITE-1775) Druid adapter: "GROUP BY ()" on empty relation should return 1 row

2018-04-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443026#comment-16443026
 ] 

slim bouguerra commented on CALCITE-1775:
-

I think we can close this since it was fixed already.

> Druid adapter: "GROUP BY ()" on empty relation should return 1 row
> --
>
> Key: CALCITE-1775
> URL: https://issues.apache.org/jira/browse/CALCITE-1775
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>Priority: Major
> Fix For: 1.16.0
>
>
> A "GROUP BY ()" query on an empty relation should return 1 row, but currently 
> returns 0 rows. 
> Test case in {{DruidAdapterIT}}:
> {code}
>   @Test public void testSelectCountEmpty() {
> sql("select count(*) as c from \"foodmart\" where \"product_id\" < 0")
> .returnsUnordered("C=0");
>   }
> {code}
> The query should return one row, but returns 0 rows.





[jira] [Commented] (CALCITE-1718) Incorrect data type for context in Druid adapter

2018-04-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443021#comment-16443021
 ] 

slim bouguerra commented on CALCITE-1718:
-

[~michaelmior] shall we mark this as fixed? 

> Incorrect data type for context in Druid adapter
> 
>
> Key: CALCITE-1718
> URL: https://issues.apache.org/jira/browse/CALCITE-1718
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.11.0
>Reporter: Michael Mior
>Assignee: Michael Mior
>Priority: Major
>
> The context {{skipEmptyBuckets}} added in 
> [CALCITE-1589|https://issues.apache.org/jira/browse/CALCITE-1589] was given 
> the boolean value {{true}}. The [Druid 
> documentation|http://druid.io/docs/latest/querying/timeseriesquery.html] 
> shows that this should be the string {{"true"}}. I get a test failure on 
> Druid 0.9.2 because of this discrepancy.





[jira] [Commented] (CALCITE-1206) Push HAVING, ORDER BY and LIMIT into Druid

2018-04-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443018#comment-16443018
 ] 

slim bouguerra commented on CALCITE-1206:
-

[~julianhyde] I think we push most of this down to Druid after 1.16.0. Shall 
we mark this as fixed, or do you have more ideas?

> Push HAVING, ORDER BY and LIMIT into Druid
> --
>
> Key: CALCITE-1206
> URL: https://issues.apache.org/jira/browse/CALCITE-1206
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>Priority: Major
>
> Push HAVING, ORDER BY and LIMIT into Druid. This extends the basic Druid 
> adapter added in CALCITE-1121.





[jira] [Updated] (CALCITE-2119) Druid Filter validation Logic broken for filters like column_A = column_B

2018-04-18 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2119:

Fix Version/s: 1.16.0

> Druid Filter validation Logic broken for filters like column_A = column_B
> -
>
> Key: CALCITE-2119
> URL: https://issues.apache.org/jira/browse/CALCITE-2119
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.15.0
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently, the logic for filter tree validation and filter translation to 
> Druid native JSON lives in two different functions.
> Ideally, to avoid this kind of runtime exception, we can blend both paths: 
> the +filter push-down validation function+ 
> org.apache.calcite.adapter.druid.DruidQuery#isValidFilter(org.apache.calcite.rex.RexNode)
> and the +translation function+ 
> org.apache.calcite.adapter.druid.DruidQuery.Translator#translateFilter.
> IMO, an easy implementation would be to try generating the Druid native 
> filter and treat an exception or null instance as meaning it cannot be 
> pushed down. This will make the code more readable and reduce duplicated 
> logic, leading to fewer runtime exceptions.
> The following test 
> {code}
>  @Test
>   public void testFilterColumnAEqColumnB() {
> final String sql = "SELECT count(*) from \"foodmart\" where 
> \"product_id\" = \"city\"";
> sql(sql, FOODMART).runs();
>   }
>  {code}
> returns 
> {code} 
> java.lang.AssertionError: it is not a valid comparison: =($1, $29)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery$Translator.translateFilter(DruidQuery.java:1234)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery$Translator.access$000(DruidQuery.java:1114)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.getQuery(DruidQuery.java:525)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.deriveQuerySpec(DruidQuery.java:495)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.getQuerySpec(DruidQuery.java:434)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.deriveRowType(DruidQuery.java:324)
>   at 
> org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:224)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:857)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:883)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
>   at 
> org.apache.calcite.adapter.druid.DruidRules$DruidFilterRule.onMatch(DruidRules.java:283)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
>   at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
>   at 
> org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
>   at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:188)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:319)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:230)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:781)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:640)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:610)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:221)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:603)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
>   at 
> org.apache.calcite.test.DruidAdapterIT.testFilterColumnAEqColumnB(DruidAdapterIT.java:3494)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.
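The blending described above can be sketched generically: make translation the single source of truth, and derive push-down validity from whether translation succeeds. (Hypothetical types below, not Calcite's actual DruidQuery API.)

```java
import java.util.Locale;

public class FilterPushDown {
  /** Minimal stand-in for a filter expression node. */
  record Filter(String op, String left, String right) {}

  /**
   * Single translation path: returns the Druid native JSON for the filter,
   * or null when the filter cannot be expressed (e.g. column = column).
   */
  static String translate(Filter f) {
    // Only column-vs-numeric-literal equality is expressible as a selector
    // in this toy model.
    boolean literalRight = f.right().chars().allMatch(Character::isDigit);
    if (!f.op().equals("=") || !literalRight) {
      return null; // not pushable; no separate validation pass needed
    }
    return String.format(Locale.ROOT,
        "{\"type\":\"selector\",\"dimension\":\"%s\",\"value\":\"%s\"}",
        f.left(), f.right());
  }

  /** Validity is derived from translation, so the two can never disagree. */
  static boolean canPushDown(Filter f) {
    return translate(f) != null;
  }

  public static void main(String[] args) {
    System.out.println(canPushDown(new Filter("=", "product_id", "42")));   // true
    System.out.println(canPushDown(new Filter("=", "product_id", "city"))); // false
  }
}
```

Because validation is just "translation returned non-null", a case like {code}"product_id" = "city"{code} is simply not pushed down instead of failing an assertion later.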

[jira] [Closed] (CALCITE-2119) Druid Filter validation Logic broken for filters like column_A = column_B

2018-04-18 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra closed CALCITE-2119.
---
Resolution: Fixed
  Assignee: slim bouguerra  (was: Julian Hyde)

> Druid Filter validation Logic broken for filters like column_A = column_B
> -
>
> Key: CALCITE-2119
> URL: https://issues.apache.org/jira/browse/CALCITE-2119
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.15.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently, the logic for filter tree validation and filter translation to 
> Druid native JSON lives in two different functions.
> Ideally, to avoid this kind of runtime exception, we can blend both paths: 
> the +filter push-down validation function+ 
> org.apache.calcite.adapter.druid.DruidQuery#isValidFilter(org.apache.calcite.rex.RexNode)
> and the +translation function+ 
> org.apache.calcite.adapter.druid.DruidQuery.Translator#translateFilter.
> IMO, an easy implementation would be to try generating the Druid native 
> filter and treat an exception or null instance as meaning it cannot be 
> pushed down. This will make the code more readable and reduce duplicated 
> logic, leading to fewer runtime exceptions.
> The following test 
> {code}
>  @Test
>   public void testFilterColumnAEqColumnB() {
> final String sql = "SELECT count(*) from \"foodmart\" where 
> \"product_id\" = \"city\"";
> sql(sql, FOODMART).runs();
>   }
>  {code}
> returns 
> {code} 
> java.lang.AssertionError: it is not a valid comparison: =($1, $29)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery$Translator.translateFilter(DruidQuery.java:1234)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery$Translator.access$000(DruidQuery.java:1114)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.getQuery(DruidQuery.java:525)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.deriveQuerySpec(DruidQuery.java:495)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.getQuerySpec(DruidQuery.java:434)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.deriveRowType(DruidQuery.java:324)
>   at 
> org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:224)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:857)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:883)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
>   at 
> org.apache.calcite.adapter.druid.DruidRules$DruidFilterRule.onMatch(DruidRules.java:283)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
>   at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
>   at 
> org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
>   at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:188)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:319)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:230)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:781)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:640)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:610)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:221)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:603)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
>   at 
> org.apache.calcite.test.DruidAdapterIT.testFilterColumnAEqColumnB(DruidAdapterIT.java:3494)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcc

[jira] [Commented] (CALCITE-2262) In Druid adapter, allow count(*) to be pushed when other aggregate functions are present

2018-04-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16443012#comment-16443012
 ] 

slim bouguerra commented on CALCITE-2262:
-

[~julianhyde] Not sure; currently that is what JIRA shows first when I select 
a fix version. Please feel free to set the right version.

> In Druid adapter, allow count(*) to be pushed when other aggregate functions 
> are present
> 
>
> Key: CALCITE-2262
> URL: https://issues.apache.org/jira/browse/CALCITE-2262
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>  Labels: improvement
> Fix For: 1.16.1
>
>
> Currently only {code}select count(*) from druid_table {code} is pushed as 
> Timeseries.
> The goal of this patch is to allow the push of more complicated queries like 
> {code} select count(*), sum(metric) from table {code}
>  





[jira] [Updated] (CALCITE-2262) Allow count(*) to be pushed with other aggregators to Druid Storage Handler.

2018-04-17 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2262:

Fix Version/s: 1.16.1

> Allow count(*) to be pushed with other aggregators to Druid Storage Handler.
> 
>
> Key: CALCITE-2262
> URL: https://issues.apache.org/jira/browse/CALCITE-2262
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>  Labels: improvement
> Fix For: 1.16.1
>
>
> Currently only {code}select count(*) from druid_table {code} is pushed as 
> Timeseries.
> The goal of this patch is to allow the push of more complicated queries like 
> {code} select count(*), sum(metric) from table {code}
>  





[jira] [Updated] (CALCITE-2262) Allow count(*) to be pushed with other aggregators to Druid Storage Handler.

2018-04-17 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2262:

Component/s: druid

> Allow count(*) to be pushed with other aggregators to Druid Storage Handler.
> 
>
> Key: CALCITE-2262
> URL: https://issues.apache.org/jira/browse/CALCITE-2262
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>  Labels: improvement
>
> Currently only {code}select count(*) from druid_table {code} is pushed as 
> Timeseries.
> The goal of this patch is to allow the push of more complicated queries like 
> {code} select count(*), sum(metric) from table {code}
>  





[jira] [Updated] (CALCITE-2262) Allow count(*) to be pushed with other aggregators to Druid Storage Handler.

2018-04-17 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2262:

Labels: improvement  (was: )

> Allow count(*) to be pushed with other aggregators to Druid Storage Handler.
> 
>
> Key: CALCITE-2262
> URL: https://issues.apache.org/jira/browse/CALCITE-2262
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>  Labels: improvement
>
> Currently only {code}select count(*) from druid_table {code} is pushed as 
> Timeseries.
> The goal of this patch is to allow the push of more complicated queries like 
> {code} select count(*), sum(metric) from table {code}
>  





[jira] [Updated] (CALCITE-2262) Allow count(*) to be pushed with other aggregators to Druid Storage Handler.

2018-04-17 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2262:

Description: 
Currently only {code}select count(*) from druid_table {code} is pushed as 
Timeseries.

The goal of this patch is to allow the push of more complicated queries like 

{code} select count(*), sum(metric) from table {code}

 

  was:
Currently only \{code}select count(*) from druid_table \{code} is pushed as 
Timeseries.

The goal of this patch is to allow the push of more complicated queries like 

{code} select count(*), sum(metric) from table \{code}

 


> Allow count(*) to be pushed with other aggregators to Druid Storage Handler.
> 
>
> Key: CALCITE-2262
> URL: https://issues.apache.org/jira/browse/CALCITE-2262
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Currently only {code}select count(*) from druid_table {code} is pushed as 
> Timeseries.
> The goal of this patch is to allow the push of more complicated queries like 
> {code} select count(*), sum(metric) from table {code}
>  





[jira] [Created] (CALCITE-2262) Allow count(*) to be pushed with other aggregators to Druid Storage Handler.

2018-04-17 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2262:
---

 Summary: Allow count(*) to be pushed with other aggregators to 
Druid Storage Handler.
 Key: CALCITE-2262
 URL: https://issues.apache.org/jira/browse/CALCITE-2262
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: slim bouguerra


Currently only \{code}select count(*) from druid_table \{code} is pushed as 
Timeseries.

The goal of this patch is to allow the push of more complicated queries like 

{code} select count(*), sum(metric) from table \{code}

 





[jira] [Commented] (CALCITE-2236) Avoid duplications of fields names during Druid query planning

2018-04-02 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423371#comment-16423371
 ] 

slim bouguerra commented on CALCITE-2236:
-

Original issue

> Avoid duplication of field names during Druid query planning
> -
>
> Key: CALCITE-2236
> URL: https://issues.apache.org/jira/browse/CALCITE-2236
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.16.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.16.1
>
>
> This issue occurs if two project layers use the same field name, which leads 
> to a Druid query with duplicated field names.
> I can not reproduce this in Calcite but it is reproducible in 
> [Hive|https://issues.apache.org/jira/browse/HIVE-19044] (it relates to how 
> different project layers derive names).
> Here is an example of a faulty query where "$f4" is used twice.
> {code}
> {"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
>  * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
> \"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
>  - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
> \"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2236) Avoid duplication of field names during Druid query planning

2018-04-02 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2236:
---

 Summary: Avoid duplication of field names during Druid query 
planning
 Key: CALCITE-2236
 URL: https://issues.apache.org/jira/browse/CALCITE-2236
 Project: Calcite
  Issue Type: Bug
  Components: druid
Affects Versions: 1.16.0
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.1


This issue occurs if two project layers use the same field name, which leads 
to a Druid query with duplicated field names.
I can not reproduce this in Calcite but it is reproducible in 
[Hive|https://issues.apache.org/jira/browse/HIVE-19044] (it relates to how 
different project layers derive names).
Here is an example of a faulty query where "$f4" is used twice.
{code}
{"queryType":"groupBy","dataSource":"druid_tableau.calcs","granularity":"all","dimensions":[{"type":"default","dimension":"key","outputName":"key","outputType":"STRING"}],"limitSpec":{"type":"default"},"aggregations":[{"type":"doubleSum","name":"$f1","fieldName":"num0"},{"type":"filtered","filter":{"type":"not","field":{"type":"selector","dimension":"num0","value":null}},"aggregator":{"type":"count","name":"$f2","fieldName":"num0"}},{"type":"doubleSum","name":"$f3","expression":"(\"num0\"
 * \"num0\")"},{"type":"doubleSum","name":"$f4","expression":"(\"num0\" * 
\"num0\")"}],"postAggregations":[{"type":"expression","name":"$f4","expression":"pow(((\"$f4\"
 - ((\"$f1\" * \"$f1\") / \"$f2\")) / 
\"$f2\"),0.5)"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"]}
{code}
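One conceivable way to avoid such name collisions, sketched below in plain Java with no Calcite types (this illustrates the general de-duplication idea, not the actual fix): keep a set of names already emitted, and suffix a counter whenever a candidate repeats.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: make each generated output name unique by
// appending a counter suffix when the plain name is already taken.
public class UniqueNames {
  // Returns `name` if unused, otherwise name_0, name_1, ...; records the
  // chosen name in `used` so later calls cannot collide with it.
  static String uniquify(String name, Set<String> used) {
    String candidate = name;
    int i = 0;
    while (!used.add(candidate)) {
      candidate = name + "_" + i++;
    }
    return candidate;
  }

  public static void main(String[] args) {
    Set<String> used = new HashSet<>();
    System.out.println(uniquify("$f4", used)); // $f4
    System.out.println(uniquify("$f4", used)); // $f4_0
  }
}
```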



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2226) Substring operator converter doesn't handle non-constant literals

2018-03-28 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417804#comment-16417804
 ] 

slim bouguerra commented on CALCITE-2226:
-

PR sent here https://github.com/apache/calcite/pull/653


> Substring operator converter doesn't handle non-constant literals 
> --
>
> Key: CALCITE-2226
> URL: https://issues.apache.org/jira/browse/CALCITE-2226
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.16.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.17.0
>
>
> A query like the following 
> {code}
> SELECT substring(namespace, CAST(deleted AS INT), 4)
> FROM druid_table;
> {code}
> will fail with 
> {code}
> java.lang.AssertionError: not a literal: $13
>   at org.apache.calcite.rex.RexLiteral.findValue(RexLiteral.java:963)
>   at org.apache.calcite.rex.RexLiteral.findValue(RexLiteral.java:955)
>   at org.apache.calcite.rex.RexLiteral.intValue(RexLiteral.java:938)
>   at 
> org.apache.calcite.adapter.druid.SubstringOperatorConversion.toDruidExpression(SubstringOperatorConversion.java:46)
>   at 
> org.apache.calcite.adapter.druid.DruidExpressions.toDruidExpression(DruidExpressions.java:120)
>   at 
> org.apache.calcite.adapter.druid.DruidQuery.computeProjectAsScan(DruidQuery.java:746)
>   at 
> org.apache.calcite.adapter.druid.DruidRules$DruidProjectRule.onMatch(DruidRules.java:308)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:317)
> {code}
> The Druid substring converter assumes that the index is always a constant 
> literal, which is wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2226) Substring operator converter doesn't handle non-constant literals

2018-03-28 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2226:
---

 Summary: Substring operator converter doesn't handle non-constant 
literals 
 Key: CALCITE-2226
 URL: https://issues.apache.org/jira/browse/CALCITE-2226
 Project: Calcite
  Issue Type: Bug
  Components: druid
Affects Versions: 1.16.0
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.17.0


A query like the following 
{code}
SELECT substring(namespace, CAST(deleted AS INT), 4)
FROM druid_table;
{code}
will fail with 
{code}
java.lang.AssertionError: not a literal: $13
at org.apache.calcite.rex.RexLiteral.findValue(RexLiteral.java:963)
at org.apache.calcite.rex.RexLiteral.findValue(RexLiteral.java:955)
at org.apache.calcite.rex.RexLiteral.intValue(RexLiteral.java:938)
at 
org.apache.calcite.adapter.druid.SubstringOperatorConversion.toDruidExpression(SubstringOperatorConversion.java:46)
at 
org.apache.calcite.adapter.druid.DruidExpressions.toDruidExpression(DruidExpressions.java:120)
at 
org.apache.calcite.adapter.druid.DruidQuery.computeProjectAsScan(DruidQuery.java:746)
at 
org.apache.calcite.adapter.druid.DruidRules$DruidProjectRule.onMatch(DruidRules.java:308)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:317)
{code}

The Druid substring converter assumes that the index is always a constant 
literal, which is wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2222) Add Quarter as part of valid floor units to push down to Druid.

2018-03-28 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417764#comment-16417764
 ] 

slim bouguerra commented on CALCITE-:
-

PR sent https://github.com/apache/calcite/pull/652

> Add Quarter as part of valid floor units to push down to Druid.
> 
>
> Key: CALCITE-
> URL: https://issues.apache.org/jira/browse/CALCITE-
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Affects Versions: 1.16.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.17.0
>
>
> The current list of valid floor units that can be pushed down is missing 
> Quarter, which Druid actually supports. 
> This is a performance bug.
> For instance, the query 
> {code} 
> SELECT floor_year(`__time`), max(added), sum(variation)
> FROM druid_table_1
> {code}
> is currently planned as 
> {code} 
> {"queryType":"scan","dataSource":"wikipedia","intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"],"virtualColumns":[{"type":"expression","name":"vc","expression":"\"__time\"","outputType":"LONG"}],"columns":["vc","added","variation"],"resultFormat":"compactedList"}
> {code}
> And it can be optimized to 
> {code}
> {"queryType":"timeseries","dataSource":"wikipedia","descending":false,"granularity":"quarter","aggregations":[{"type":"doubleMax","name":"$f1","fieldName":"added"},{"type":"doubleSum","name":"$f2","fieldName":"variation"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"],"context":{"skipEmptyBuckets":true}}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2222) Add Quarter as part of valid floor units to push down to Druid.

2018-03-22 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-:
---

 Summary: Add Quarter as part of valid floor units to push down to 
Druid.
 Key: CALCITE-
 URL: https://issues.apache.org/jira/browse/CALCITE-
 Project: Calcite
  Issue Type: Bug
  Components: druid
Affects Versions: 1.16.0
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.17.0


The current list of valid floor units that can be pushed down is missing 
Quarter, which Druid actually supports. 
This is a performance bug.
For instance, the query 
{code} 
SELECT floor_year(`__time`), max(added), sum(variation)
FROM druid_table_1
{code}
is currently planned as 
{code} 
{"queryType":"scan","dataSource":"wikipedia","intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"],"virtualColumns":[{"type":"expression","name":"vc","expression":"\"__time\"","outputType":"LONG"}],"columns":["vc","added","variation"],"resultFormat":"compactedList"}
{code}

And it can be optimized to 
{code}
{"queryType":"timeseries","dataSource":"wikipedia","descending":false,"granularity":"quarter","aggregations":[{"type":"doubleMax","name":"$f1","fieldName":"added"},{"type":"doubleSum","name":"$f2","fieldName":"variation"}],"intervals":["1900-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z"],"context":{"skipEmptyBuckets":true}}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CALCITE-2191) Drop support for Guava versions earlier than 19

2018-02-23 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2191:

Description: 
Currently, Calcite 1.15.0 supports Guava versions 14 through 23.

Calcite 1.16.0-SNAPSHOT is building against version 19.0.1.

As far as I know, the only reason we support versions earlier than 19 is that 
the Hive project depends on Guava 14.0.1. This is no longer true after 
https://issues.apache.org/jira/browse/HIVE-15393.

The Druid project is still using Guava 16.0.1, but [some 
work|https://groups.google.com/forum/#!topic/druid-development/Dw2Qu1CWbuQ] is 
under review to make sure it is not using deprecated APIs.

Thus I think it is time to drop support for versions earlier than 19.

  was:
Currently, Calcite-1.15.0 version supports Guava versions from 23 to 14.

Calcite-1.16.0-Snapshot is building against version 19.0.1 

As far I know the only reason we support versions earlier to 19 is Hive project 
depending on Guava 14.0.1 This is not true anymore after 
https://issues.apache.org/jira/browse/HIVE-15393.

Druid project is still using Guava 16.0.1 but some work is done to make sure it 
is not using deprecated API.   

Thus I think it is time to Drop support for versions earlier than 19


> Drop support for Guava versions earlier than 19
> ---
>
> Key: CALCITE-2191
> URL: https://issues.apache.org/jira/browse/CALCITE-2191
> Project: Calcite
>  Issue Type: Task
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently, Calcite 1.15.0 supports Guava versions 14 through 23.
> Calcite 1.16.0-SNAPSHOT is building against version 19.0.1.
> As far as I know, the only reason we support versions earlier than 19 is that 
> the Hive project depends on Guava 14.0.1. This is no longer true after 
> https://issues.apache.org/jira/browse/HIVE-15393.
> The Druid project is still using Guava 16.0.1, but [some 
> work|https://groups.google.com/forum/#!topic/druid-development/Dw2Qu1CWbuQ] 
> is under review to make sure it is not using deprecated APIs.
> Thus I think it is time to drop support for versions earlier than 19.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2191) Drop support for Guava versions earlier than 19

2018-02-23 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2191:
---

 Summary: Drop support for Guava versions earlier than 19
 Key: CALCITE-2191
 URL: https://issues.apache.org/jira/browse/CALCITE-2191
 Project: Calcite
  Issue Type: Task
Reporter: slim bouguerra
Assignee: Julian Hyde
 Fix For: 1.16.0


Currently, Calcite 1.15.0 supports Guava versions 14 through 23.

Calcite 1.16.0-SNAPSHOT is building against version 19.0.1.

As far as I know, the only reason we support versions earlier than 19 is that 
the Hive project depends on Guava 14.0.1. This is no longer true after 
https://issues.apache.org/jira/browse/HIVE-15393.

The Druid project is still using Guava 16.0.1, but some work is underway to 
make sure it is not using deprecated APIs.

Thus I think it is time to drop support for versions earlier than 19.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (CALCITE-2187) Fix Build issue caused by CALCITE-2170

2018-02-17 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368392#comment-16368392
 ] 

slim bouguerra edited comment on CALCITE-2187 at 2/17/18 10:25 PM:
---

[~julianhyde] please check the fix; I have tested with

{code}mvn clean install -P it -Duser.timezone=Europe/Moscow -Dguava.version=14.0.1 -DskipTests{code}

I did not know that we run builds with an old Guava version.

FYI, if Hive is the reason to keep building with Guava 14, then we can drop it, 
since Hive moved to Guava 19 as per 
https://issues.apache.org/jira/browse/HIVE-15393


was (Author: bslim):
[~julianhyde] please check the fix, I have tested with {code}mvn clean 
install -P it -Duser.timezone=Europe/Moscow -Dguava.version=14.0.1 -DskipTests{code}. 
Did not know that we are running builds with Old Guava. 

FYI if Hive is the reason to keep building with guava 14 then we can drop it 
since Hive moved to Guava 19 as per 
https://issues.apache.org/jira/browse/HIVE-15393  

> Fix Build issue caused by CALCITE-2170
> --
>
> Key: CALCITE-2187
> URL: https://issues.apache.org/jira/browse/CALCITE-2187
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.16.0
>
>
> CALCITE-2170 introduced the use of a Guava function that does not exist in 
> version 14, which causes the build to fail when {code}guava.version=14.0.1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2187) Fix Build issue caused by CALCITE-2170

2018-02-17 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368392#comment-16368392
 ] 

slim bouguerra commented on CALCITE-2187:
-

[~julianhyde] please check the fix; I have tested with {code}mvn clean 
install -P it -Duser.timezone=Europe/Moscow -Dguava.version=14.0.1 -DskipTests{code}. 
I did not know that we run builds with an old Guava version.

FYI, if Hive is the reason to keep building with Guava 14, then we can drop it, 
since Hive moved to Guava 19 as per 
https://issues.apache.org/jira/browse/HIVE-15393

> Fix Build issue caused by CALCITE-2170
> --
>
> Key: CALCITE-2187
> URL: https://issues.apache.org/jira/browse/CALCITE-2187
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.16.0
>
>
> CALCITE-2170 introduced the use of a Guava function that does not exist in 
> version 14, which causes the build to fail when {code}guava.version=14.0.1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2187) Fix Build issue caused by CALCITE-2170

2018-02-17 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368387#comment-16368387
 ] 

slim bouguerra commented on CALCITE-2187:
-

https://github.com/apache/calcite/pull/631

> Fix Build issue caused by CALCITE-2170
> --
>
> Key: CALCITE-2187
> URL: https://issues.apache.org/jira/browse/CALCITE-2187
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 1.16.0
>
>
> CALCITE-2170 introduced the use of a Guava function that does not exist in 
> version 14, which causes the build to fail when {code}guava.version=14.0.1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2187) Fix Build issue caused by CALCITE-2170

2018-02-17 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2187:
---

 Summary: Fix Build issue caused by CALCITE-2170
 Key: CALCITE-2187
 URL: https://issues.apache.org/jira/browse/CALCITE-2187
 Project: Calcite
  Issue Type: Bug
  Components: druid
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.0


CALCITE-2170 introduced the use of a Guava function that does not exist in 
version 14, which causes the build to fail when {code}guava.version=14.0.1{code}
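On Java 8+, one portable way to sidestep Guava-version differences entirely is to use `java.util.function` instead of Guava's functional types, as in this small sketch (illustrative; not the patch that was actually applied):

```java
import java.util.function.Function;

// Sketch: a plain java.util.function.Function (or lambda) avoids depending
// on Guava functional APIs that differ between Guava 14 and later releases.
// Assumes the build targets Java 8 or newer.
public class GuavaFreeFunction {
  static final Function<String, Integer> LENGTH = String::length;

  public static void main(String[] args) {
    System.out.println(LENGTH.apply("calcite")); // 7
  }
}
```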



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-16 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367738#comment-16367738
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] I am sorry to hear that this is causing frustration. I agree with 
you: contributions should not be framed as "take it or leave it", and that is 
not my intent anyway. 

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has a new built-in capability called expressions that can be 
> used to push expressions such as projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be made to the 
> Druid Calcite adapter. 
> This is a link to the currently supported functions and expressions in Druid:
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the docs, an expression can be an actual tree of operators.
> Expressions can be used with filters, projects, aggregates, post-aggregates and
> having filters. For filters there will be a new filter kind called expression
> filter.
> You might ask whether we can push everything as an expression filter. The short
> answer is no, because other kinds of Druid filters perform better when they
> apply, so the expression filter is a fallback. In order to push expressions as
> projects and aggregates we will use expression-based virtual columns.
> The major change is merging the pushdown-verification logic with the
> translation of RexCall/RexNode into Druid JSON, Druid's native physical
> language. The main driver behind this redesign is that, in order to check
> whether we can push down a tree of expressions to Druid, we have to compute the
> Druid expression string anyway. Thus, instead of two different code paths, one
> for pushdown validation and one for JSON generation, we can have one function
> that does both.
> For instance, instead of one code path that checks whether a given filter can
> be pushed and a separate translation layer, we will have one function that
> either returns a valid Druid filter or null if pushdown is not possible. The
> same idea applies to how we push projects, aggregates, post-aggregates and
> sort.
> Here are the main elements/classes of the new design. First, we merge the
> logic of translating literals/input refs/RexCalls to a Druid physical
> representation.
> Translate a leaf RexNode to a valid pair of Druid column + extraction function
> if possible:
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of column name and extraction function on top of the
>  * input ref, or {@link Pair} of (null, null) when it cannot be translated to
>  * a valid Druid column
>  */
> protected static Pair<String, ExtractionFunction> toDruidColumn(RexNode rexNode,
>     RelDataType rowType, DruidQuery druidQuery)
> {code}
> On the other hand, in order to convert literals to Druid literals we will
> introduce:
> {code:java}
> /**
>  * @param rexNode rexNode to translate to its Druid literal equivalent
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> The main new functions used for node pushdown and Druid JSON generation are as
> follows. Filter pushdown verification and generation are done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort{code}
> Pushing of post-aggregates will use expression post-aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression{code}
> to generate the expression.
> For expression computation most of the work is done here:
> {code:java}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression{code}
> This static function generates Drui

[jira] [Created] (CALCITE-2175) Revisit the assumption made by druid calcite adapter that there is only one timestamp column

2018-02-12 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2175:
---

 Summary: Revisit the assumption made by druid calcite adapter that 
there is only one timestamp column 
 Key: CALCITE-2175
 URL: https://issues.apache.org/jira/browse/CALCITE-2175
 Project: Calcite
  Issue Type: Task
  Components: druid
Reporter: slim bouguerra
Assignee: Julian Hyde


Currently, the Druid Calcite adapter assumes that the row returned by Druid has 
only one timestamp-typed column. This is not true; in fact, we can have 
multiple projections of the time column with extraction functions. Thus code 
like the following needs to be revisited: 

{code}
int posTimestampField = -1;
for (int i = 0; i < fieldTypes.size(); i++) {
  if (fieldTypes.get(i) == ColumnMetaData.Rep.JAVA_SQL_TIMESTAMP) {
    posTimestampField = i;
    break;
  }
}
{code}
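A revisited version could collect every timestamp-typed position instead of stopping at the first one. The following self-contained sketch uses plain strings in place of `ColumnMetaData.Rep` (names are illustrative, not the actual patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: collect ALL timestamp column positions, because a row
// may contain multiple projections of the time column.
public class TimestampColumns {
  static List<Integer> timestampPositions(List<String> fieldTypes) {
    List<Integer> positions = new ArrayList<>();
    for (int i = 0; i < fieldTypes.size(); i++) {
      if ("TIMESTAMP".equals(fieldTypes.get(i))) {
        positions.add(i);
      }
    }
    return positions;
  }

  public static void main(String[] args) {
    // Two projections of the time column plus a metric column.
    System.out.println(timestampPositions(
        Arrays.asList("TIMESTAMP", "LONG", "TIMESTAMP"))); // [0, 2]
  }
}
```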



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-07 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16355497#comment-16355497
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] totally agree with you: staying as much as we can in the 
relational realm is the way to go. In fact, I think we are already doing that, 
since the actual DruidQuery holds an array of Calcite Rels, and we use those 
Rels to generate the Druid physical query at the end. Please let me know if 
this is not what you pointed out.  

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has a new built-in capability called expressions that can be 
> used to push expressions such as projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be made to the 
> Druid Calcite adapter. 
> This is a link to the currently supported functions and expressions in Druid:
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the docs, an expression can be an actual tree of operators.
> Expressions can be used with filters, projects, aggregates, post-aggregates and
> having filters. For filters there will be a new filter kind called expression
> filter.
> You might ask whether we can push everything as an expression filter. The short
> answer is no, because other kinds of Druid filters perform better when they
> apply, so the expression filter is a fallback. In order to push expressions as
> projects and aggregates we will use expression-based virtual columns.
> The major change is merging the pushdown-verification logic with the
> translation of RexCall/RexNode into Druid JSON, Druid's native physical
> language. The main driver behind this redesign is that, in order to check
> whether we can push down a tree of expressions to Druid, we have to compute the
> Druid expression string anyway. Thus, instead of two different code paths, one
> for pushdown validation and one for JSON generation, we can have one function
> that does both.
> For instance, instead of one code path that checks whether a given filter can
> be pushed and a separate translation layer, we will have one function that
> either returns a valid Druid filter or null if pushdown is not possible. The
> same idea applies to how we push projects, aggregates, post-aggregates and
> sort.
> Here are the main elements/classes of the new design. First, we merge the
> logic of translating literals/input refs/RexCalls to a Druid physical
> representation.
> Translate a leaf RexNode to a valid pair of Druid column + extraction function
> if possible:
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of column name and extraction function on top of the
>  * input ref, or {@link Pair} of (null, null) when it cannot be translated to
>  * a valid Druid column
>  */
> protected static Pair<String, ExtractionFunction> toDruidColumn(RexNode rexNode,
>     RelDataType rowType, DruidQuery druidQuery)
> {code}
> On the other hand, in order to convert literals to Druid literals we will
> introduce:
> {code:java}
> /**
>  * @param rexNode rexNode to translate to its Druid literal equivalent
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> The main new functions used for node pushdown and Druid JSON generation are as
> follows. Filter pushdown verification and generation are done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort{code}
> Pushing of post-aggregates will use expression post-aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression{code}
> to generate the expression.
> For Expression computati

[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354630#comment-16354630
 ] 

slim bouguerra commented on CALCITE-2170:
-

PR https://github.com/apache/calcite/pull/624

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has a new built-in capability called expressions that can be 
> used to push expressions such as projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be made to the 
> Druid Calcite adapter. 
> This is a link to the currently supported functions and expressions in Druid:
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the docs, an expression can be an actual tree of operators.
> Expressions can be used with filters, projects, aggregates, post-aggregates and
> having filters. For filters there will be a new filter kind called expression
> filter.
> You might ask whether we can push everything as an expression filter. The short
> answer is no, because other kinds of Druid filters perform better when they
> apply, so the expression filter is a fallback. In order to push expressions as
> projects and aggregates we will use expression-based virtual columns.
> The major change is merging the pushdown-verification logic with the
> translation of RexCall/RexNode into Druid JSON, Druid's native physical
> language. The main driver behind this redesign is that, in order to check
> whether we can push down a tree of expressions to Druid, we have to compute the
> Druid expression string anyway. Thus, instead of two different code paths, one
> for pushdown validation and one for JSON generation, we can have one function
> that does both.
> For instance, instead of one code path that checks whether a given filter can
> be pushed and a separate translation layer, we will have one function that
> either returns a valid Druid filter or null if pushdown is not possible. The
> same idea applies to how we push projects, aggregates, post-aggregates and
> sort.
> Here are the main elements/classes of the new design. First, we merge the
> logic of translating literals/input refs/RexCalls to a Druid physical
> representation.
> Translate a leaf RexNode to a valid pair of Druid column + extraction function
> if possible:
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of column name and extraction function on top of the
>  * input ref, or {@link Pair} of (null, null) when it cannot be translated to
>  * a valid Druid column
>  */
> protected static Pair<String, ExtractionFunction> toDruidColumn(RexNode rexNode,
>     RelDataType rowType, DruidQuery druidQuery)
> {code}
> On the other hand, in order to convert literals to Druid literals we will
> introduce:
> {code:java}
> /**
>  * @param rexNode rexNode to translate to its Druid literal equivalent
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> Main new functions used to pushdown nodes and Druid Json generation.
> Filter pushdown verification and generates is done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort\{code}
> Pushing of PostAggregates will be using Expression post Aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> to generate expression
> For Expression computation most of the work is done here
> {code:java}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> This static function generates Druid String expression out of a given RexNode 
> or
> returns null if not possible.
> {code}
> @Nullable
> public static String toDrui

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has a newly built-in capability called expressions that can be used
to push expression-like projects/aggregates/filters.

In order to leverage this new feature, some changes need to be made to the
Druid Calcite adapter.

This is a link to the currently supported functions and expressions in Druid:
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the docs, an expression can be an actual tree of operators.
Expressions can be used with filters, projects, aggregates, post-aggregates
and having filters. For filters there will be a new filter kind called the
expression filter. You might ask whether we can push everything as an
expression filter; the short answer is no, because other kinds of Druid
filters perform better when they apply, so the expression filter is a
fallback. To push expressions as projects and aggregates we will use
expression-based virtual columns.
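For illustration, an expression-based virtual column in a native Druid query has roughly the JSON shape built below. This is a hand-written sketch based on the Druid docs of that era; the field names are assumptions from those docs, not output produced by the Calcite adapter.

```java
// Sketch of the native JSON shape of an expression-based virtual column.
// Field names ("type", "name", "expression", "outputType") are taken from
// the Druid docs; treat this as an approximation, not adapter output.
public class VirtualColumnSketch {
  static String expressionVirtualColumn(String name, String expression,
      String outputType) {
    return "{\"type\":\"expression\""
        + ",\"name\":\"" + name + "\""
        + ",\"expression\":\"" + expression + "\""
        + ",\"outputType\":\"" + outputType + "\"}";
  }

  public static void main(String[] args) {
    // A project such as (added + deleted) pushed down as a virtual column.
    System.out.println(expressionVirtualColumn("vc0", "added + deleted", "LONG"));
  }
}
```

A project or aggregate over such an expression can then reference the virtual column by its `name`.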

The major change is merging the pushdown-verification logic with the
translation of RexCall/RexNode to Druid's native physical language, JSON. The
main driver behind this redesign is that, in order to check whether a tree of
expressions can be pushed down to Druid, we have to compute the Druid
expression string anyway. Thus, instead of two code paths, one for pushdown
validation and one for JSON generation, we can have one function that does
both. For instance, instead of one code path that checks whether a given
filter can be pushed and a separate translation layer, we will have one
function that either returns a valid Druid filter or null if pushdown is not
possible. The same idea applies to how we push projects, aggregates,
post-aggregates, and sorts.
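As a self-contained illustration of this translate-or-null design (toy code with a hypothetical `toDruidComparison` helper, not the adapter's actual logic, which lives in DruidJsonFilter and DruidQuery):

```java
import java.util.Locale;

// Toy sketch of the translate-or-null pattern: one function both validates
// that a comparison can be pushed down and produces its Druid-style JSON.
// A null return means "cannot push down"; there is no separate check path.
public class TranslateOrNull {
  /** Returns a Druid-like selector filter JSON, or null if not pushable. */
  static String toDruidComparison(String op, String column, String literal) {
    if (!"=".equals(op)) {
      return null; // unsupported operator: validation and translation fail together
    }
    return String.format(Locale.ROOT,
        "{\"type\":\"selector\",\"dimension\":\"%s\",\"value\":\"%s\"}",
        column, literal);
  }

  public static void main(String[] args) {
    System.out.println(toDruidComparison("=", "country", "US"));
    System.out.println(toDruidComparison("<", "age", "30")); // null: not pushable
  }
}
```

The caller decides what to do with a null result (keep the operator in Calcite) without ever running a second "can this be pushed?" pass.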

Here are the main elements/classes of the new design. First, we merge the
logic of translating literals/input refs/RexCalls into a Druid physical
representation. A leaf RexNode is translated into a valid Druid column plus
extraction function pair when possible:
{code:java}
/**
 * @param rexNode leaf input ref to a Druid column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of column name and extraction function on top of the
 * input ref, or {@code Pair.of(null, null)} when it cannot be translated to
 * a valid Druid column
 */
protected static Pair<String, ExtractionFunction> toDruidColumn(RexNode rexNode,
    RelDataType rowType, DruidQuery druidQuery)
{code}
On the other hand, to convert literals to Druid literals we will introduce:
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated with the rexNode
 * @param druidQuery druid query
 *
 * @return the literal string, or null if it cannot be translated to a valid
 * Druid equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
    DruidQuery druidQuery)
{code}
The main new functions used to push down nodes and generate Druid JSON are
listed below.

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}
For project pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}
For grouping pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet
{code}
For aggregation pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Post-aggregates are pushed as expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For expression computation, most of the work is done in
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid string expression from a given RexNode,
or returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following
interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus a user can implement a custom expression converter based on the
SqlOperator syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
  /**
   * Returns the Calcite SQL operator corresponding to the Druid operator.
   *
   * @return operator
   */
  SqlOperator calciteOperator();

  /**
   * Translates a rexNode to a valid Druid expression.
   * @param rexNode rexNode to translate to a Druid expression
   * @param rowType row type associated with the rexNode
   * @param druidQuery druid query used to figure out configs/fields related like time
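A minimal self-contained sketch of this converter pattern (toy types and hypothetical names, not the actual Calcite interface): each converter handles one operator and returns a Druid expression string, or null when translation is not possible.

```java
import java.util.List;

// Toy sketch of the DruidSqlOperatorConverter idea. Types are simplified
// stand-ins: real converters receive RexNode/RelDataType, not strings.
public class ConverterSketch {
  interface OperatorConverter {
    String operatorName();                        // which operator this handles
    String toDruidExpression(List<String> operandExpressions); // null if not translatable
  }

  /** Converts binary '+' to Druid's infix addition expression. */
  static class PlusConverter implements OperatorConverter {
    @Override public String operatorName() { return "+"; }
    @Override public String toDruidExpression(List<String> operands) {
      if (operands.size() != 2) {
        return null; // only binary plus is supported in this sketch
      }
      return "(" + operands.get(0) + " + " + operands.get(1) + ")";
    }
  }

  public static void main(String[] args) {
    OperatorConverter plus = new PlusConverter();
    System.out.println(plus.toDruidExpression(List.of("\"added\"", "1")));
  }
}
```

A registry keyed by operator then lets the adapter look up the right converter for each RexCall, returning null (and thus no pushdown) for unknown operators.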

[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354639#comment-16354639
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] you are exactly right: I am converting RexNodes to Druid
physical operators (e.g. Strings and JSON objects). A node is translated as a
direct column name, as a column name plus extraction function (which can be
considered a project), or sometimes as a String expression. I am wondering
what the advantage is of translating it first as a RexNode?

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>


[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354621#comment-16354621
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] I just updated the Jira. I will link to the PoC shortly; let's
start the discussion.

  




[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has a newly built-in capability called Expressions that can be used
to push expressions such as projects, aggregates, and filters.

In order to leverage this new feature, some changes need to be made to the
Druid Calcite adapter.

This is a link to the currently supported functions and expressions in Druid:
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the docs, an expression can be an actual tree of operators.
Expressions can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters, there will be a new filter kind called Expression
filter. You might ask: can we push everything as an Expression filter? The
short answer is no, because the other kinds of Druid filters perform better
when applicable, hence the Expression filter is a fallback. In order to push
expressions as Projects and Aggregates we will be using expression-based
virtual columns.

The major change is the merging of the pushdown-verification logic with the
translation of RexCall/RexNode to Druid JSON, Druid's native physical language.
The main driver behind this redesign is the fact that, in order to check
whether we can push down a tree of expressions to Druid, we have to compute the
Druid expression string anyway. Thus, instead of having two different code
paths, one for pushdown validation and one for JSON generation, we can have one
function that does both. For instance, instead of one code path that checks
whether a given filter can be pushed and a separate translation layer, we will
have one function that either returns a valid Druid filter or null if pushdown
is not possible. The same idea applies to how we push Projects, Aggregates,
PostAggregates and Sort.
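The translate-or-reject pattern described above can be sketched as follows. This is a minimal, self-contained illustration only: the `Expr`/`ColumnRef`/`Literal`/`Eq` node types and the `DruidFilterSketch` class are hypothetical stand-ins for Calcite's RexNode machinery, not the adapter's actual code.

```java
import java.util.Locale;

// Hypothetical miniature expression nodes standing in for RexNodes.
abstract class Expr {}
class ColumnRef extends Expr { final String name; ColumnRef(String name) { this.name = name; } }
class Literal extends Expr { final Object value; Literal(Object value) { this.value = value; } }
class Eq extends Expr { final Expr left, right; Eq(Expr l, Expr r) { left = l; right = r; } }

public class DruidFilterSketch {
  /**
   * Single code path: either returns a Druid-style selector filter (as JSON)
   * or null when the expression cannot be pushed down. There is no separate
   * "canPush" check; failure to translate IS the check.
   */
  static String toDruidFilter(Expr e) {
    if (e instanceof Eq) {
      Eq eq = (Eq) e;
      if (eq.left instanceof ColumnRef && eq.right instanceof Literal) {
        ColumnRef col = (ColumnRef) eq.left;
        Literal lit = (Literal) eq.right;
        return String.format(Locale.ROOT,
            "{\"type\":\"selector\",\"dimension\":\"%s\",\"value\":\"%s\"}",
            col.name, lit.value);
      }
    }
    return null; // not translatable => not pushable
  }

  public static void main(String[] args) {
    // prints the selector filter JSON for product_id = '16'
    System.out.println(toDruidFilter(new Eq(new ColumnRef("product_id"), new Literal("16"))));
    // column-to-column comparison is unsupported here, prints null
    System.out.println(toDruidFilter(new Eq(new ColumnRef("a"), new ColumnRef("b"))));
  }
}
```

The caller simply tests the result for null; a null anywhere in the tree means the whole subtree stays on the Calcite side.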

Here are the main elements/classes of the new design. First, we will merge the
logic for translating literals/input refs/RexCalls to a Druid physical
representation.

Translating a leaf RexNode to a valid pair of Druid column plus extraction
function, if possible, is done via
org.apache.calcite.adapter.druid.DruidQuery#toDruidColumn:
{code:java}
/**
 * @param rexNode leaf input ref to a Druid column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of column name and extraction function on top of the
 * input ref, or Pair.of(null, null) when it cannot be translated to a valid
 * Druid column
 */
protected static Pair toDruidColumn(RexNode rexNode,
    RelDataType rowType, DruidQuery druidQuery)
{code}

On the other hand, in order to convert literals to Druid literals, we will
introduce org.apache.calcite.adapter.druid.DruidQuery#toDruidLiteral:
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated with rexNode
 * @param druidQuery druid query
 *
 * @return non-null string, or null if it cannot be translated to a valid
 * Druid equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
    DruidQuery druidQuery)
{code}
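To make the nullable style of such a literal translator concrete, here is a self-contained sketch. The rendering rules (numbers verbatim, strings quoted, timestamps as epoch millis) and the `DruidLiteralSketch` class are simplified assumptions for illustration, not Calcite's actual implementation.

```java
import java.sql.Timestamp;

public class DruidLiteralSketch {
  /**
   * Returns a Druid-expression literal for the value, or null if it cannot
   * be represented. A null return tells the caller to give up on pushdown,
   * mirroring the translate-or-reject design.
   */
  static String toDruidLiteral(Object value) {
    if (value instanceof Number) {
      return value.toString();                                  // numeric literal as-is
    } else if (value instanceof String) {
      return "'" + ((String) value).replace("'", "\\'") + "'";  // quoted string
    } else if (value instanceof Timestamp) {
      return Long.toString(((Timestamp) value).getTime());      // epoch millis
    }
    return null;  // unsupported type => no pushdown
  }

  public static void main(String[] args) {
    System.out.println(toDruidLiteral(16.0));                        // 16.0
    System.out.println(toDruidLiteral("CA"));                        // 'CA'
    System.out.println(toDruidLiteral(new java.util.ArrayList<>())); // null
  }
}
```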
The main new functions used for node pushdown and Druid JSON generation:

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}
For project pushdown, added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}
This function projects via expression-based virtual columns.
For grouping pushdown, added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet
{code}
For aggregation pushdown, added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown, added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
PostAggregates will be pushed as expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For expression computation, most of the work is done in
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid string expression out of a given
RexNode, or returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, the following interface was
added:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus users can implement custom expression converters based on the SqlOperator
syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
  /**
   * Returns the Calcite SQL operator corresponding to the Druid operator.
   *
   * @return operator
   */
  SqlOperator calciteOperator();

  /**
   * Translates a rexNode to a valid Druid expression.
   *
   * @param rexNode rexNode to translate to a Druid expression
   * @param rowType row type associated with rexNode
   * @param druidQuery druid query used to figure out configs/fields
   *
   * @return valid Druid expression, or null if it cannot be translated
   */
  @Nullable
  String toDruidExpression(RexNode rexNode, RelDataType rowType,
      DruidQuery druidQuery);
}
{code}
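The per-operator converter idea can be sketched in miniature as follows. Everything here is a hypothetical stand-in for illustration (the `OpConverter` interface, the registry map, and the operator names); the real interface works on RexNode, RelDataType, and DruidQuery rather than plain strings.

```java
import java.util.HashMap;
import java.util.Map;

public class ConverterRegistrySketch {
  /** Hypothetical miniature of a per-operator converter: given the already
   *  translated operand expressions, return a Druid expression string or
   *  null when translation is impossible. */
  interface OpConverter {
    String toDruidExpression(String[] operandExprs);
  }

  // One converter registered per operator name.
  static final Map<String, OpConverter> CONVERTERS = new HashMap<>();
  static {
    // ABS maps directly onto a unary abs(...) call.
    CONVERTERS.put("ABS", ops -> ops.length == 1 ? "abs(" + ops[0] + ")" : null);
    // String concatenation maps onto concat(...) over any arity.
    CONVERTERS.put("||", ops -> "concat(" + String.join(",", ops) + ")");
  }

  /** Dispatch: unknown operator => null => the expression is not pushed. */
  static String translate(String op, String... operandExprs) {
    OpConverter c = CONVERTERS.get(op);
    return c == null ? null : c.toDruidExpression(operandExprs);
  }

  public static void main(String[] args) {
    System.out.println(translate("ABS", "\"delta\""));    // abs("delta")
    System.out.println(translate("SOUNDEX", "\"name\"")); // null
  }
}
```

The design choice this illustrates: adding support for a new operator means registering one more converter, without touching the dispatch or validation code.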

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 

 

 

  was:
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 


> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2170:
---

 Summary: Use Druid Expressions capabilities to improve the amount 
of work that can be pushed to Druid
 Key: CALCITE-2170
 URL: https://issues.apache.org/jira/browse/CALCITE-2170
 Project: Calcite
  Issue Type: New Feature
  Components: druid
Reporter: slim bouguerra
Assignee: slim bouguerra


Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 





[jira] [Commented] (CALCITE-2123) Bug in the Druid Filter Translation when Comparing String Ref to a Constant Number

2018-01-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317689#comment-16317689
 ] 

slim bouguerra commented on CALCITE-2123:
-

Okay, thanks, I will explore that.

> Bug in the Druid Filter Translation when Comparing String Ref to a Constant 
> Number
> --
>
> Key: CALCITE-2123
> URL: https://issues.apache.org/jira/browse/CALCITE-2123
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> The following query {code} SELECT COUNT(*) FROM  \"foodmart\"  WHERE 
> \"product_id\" = 16.0{code} Translates to a Druid Table Scan with a String to 
> String Selector comparison filter.
> instead we need to have a Bound filter that cast the String to number.
> This is what we should expect.
> {code} 
> {"type":"bound","dimension":"product_id","lower":"16.0","lowerStrict":false,"upper":"16.0","upperStrict":false,"ordering":"numeric"}
> {code}





[jira] [Commented] (CALCITE-2123) Bug in the Druid Filter Translation when Comparing String Ref to a Constant Number

2018-01-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317270#comment-16317270
 ] 

slim bouguerra commented on CALCITE-2123:
-

[~julianhyde], thanks for the pointer. Looking at the code, I suspect the 
issue is how we invoke {code} 
org.apache.calcite.sql.type.SqlTypeUtil#canCastFrom{code} in the function 
{code} org.apache.calcite.sql.type.SqlTypeFactoryImpl#leastRestrictiveByCast 
{code} TBH I am not sure what the boolean var {code} boolean coerce{code} means 
exactly.
However, when I change the code to the following (coerce=true instead of 
coerce=false)
{code} 
if (SqlTypeUtil.canCastFrom(type, resultType, true)) {
{code}
we get a positive result for the cast from VARCHAR to DECIMAL.
I am not sure if that is the actual fix; I am still trying to wrap my mind 
around it. Please let me know if I am on the wrong track.






[jira] [Commented] (CALCITE-2123) Bug in the Druid Filter Translation when Comparing String Ref to a Constant Number

2018-01-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316898#comment-16316898
 ] 

slim bouguerra commented on CALCITE-2123:
-

[~julianhyde] Thanks, can you point me to where the RexNode is actually 
created out of the AST by the parser?






[jira] [Commented] (CALCITE-2123) Bug in the Druid Filter Translation when Comparing String Ref to a Constant Number

2018-01-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316870#comment-16316870
 ] 

slim bouguerra commented on CALCITE-2123:
-

Currently Calcite does not add the extra CAST (per the debugger). The filter 
is {code} =($1, 16.0) {code} and the type of the input ref is VARCHAR. Agreed, 
we need the implicit cast to make this more consistent.







[jira] [Created] (CALCITE-2123) Bug in the Druid Filter Translation when Comparing String Ref to a Constant Number

2018-01-05 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2123:
---

 Summary: Bug in the Druid Filter Translation when Comparing String 
Ref to a Constant Number
 Key: CALCITE-2123
 URL: https://issues.apache.org/jira/browse/CALCITE-2123
 Project: Calcite
  Issue Type: Bug
  Components: druid
Reporter: slim bouguerra
Assignee: Julian Hyde


The following query {code} SELECT COUNT(*) FROM  \"foodmart\"  WHERE 
\"product_id\" = 16.0{code} Translates to a Druid Table Scan with a String to 
String Selector comparison filter.
Instead we need to have a Bound filter that casts the String to a number.
This is what we should expect:
{code} 
{"type":"bound","dimension":"product_id","lower":"16.0","lowerStrict":false,"upper":"16.0","upperStrict":false,"ordering":"numeric"}
{code}
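The expected bound filter shown above can be produced by a translation along these lines. This is an illustrative sketch only; the `BoundFilterSketch` class and the plain string formatting are assumptions for demonstration, not the adapter's actual JSON serialization.

```java
import java.util.Locale;

public class BoundFilterSketch {
  /**
   * Builds a Druid "bound" filter with numeric ordering expressing
   * dimension = value: lower == upper with both bounds inclusive.
   */
  static String numericEqualsBound(String dimension, String value) {
    return String.format(Locale.ROOT,
        "{\"type\":\"bound\",\"dimension\":\"%s\",\"lower\":\"%s\","
            + "\"lowerStrict\":false,\"upper\":\"%s\",\"upperStrict\":false,"
            + "\"ordering\":\"numeric\"}",
        dimension, value, value);
  }

  public static void main(String[] args) {
    // prints the bound filter for product_id = 16.0 with numeric ordering
    System.out.println(numericEqualsBound("product_id", "16.0"));
  }
}
```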






[jira] [Commented] (CALCITE-1658) DateRangeRules issues

2018-01-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313675#comment-16313675
 ] 

slim bouguerra commented on CALCITE-1658:
-

[~julianhyde] I have created a new JIRA case here: 
https://issues.apache.org/jira/browse/CALCITE-2122.
Also, I am not sure whether this is related to the actual rule.


> DateRangeRules issues
> -
>
> Key: CALCITE-1658
> URL: https://issues.apache.org/jira/browse/CALCITE-1658
> Project: Calcite
>  Issue Type: Bug
>  Components: core, druid
>Reporter: Gian Merlino
>Assignee: Nishant Bangarwa
> Fix For: 1.16.0
>
>
> Follow up to CALCITE-1601. In Druid's built in SQL module (not the Druid 
> adapter in Calcite), some unit tests fail when DateRangeRules.FILTER_INSTANCE 
> is enabled. These include the SQLs below. In all cases, the predicate was 
> incorrectly simplified to "false" and no Druid queries were made.
> Removing DateRangeRules from the planner causes the results to be correct.
> {code}
> SELECT COUNT(*) FROM druid.foo
> WHERE
>(EXTRACT(YEAR FROM __time) = 2000 AND EXTRACT(MONTH FROM __time) IN 
> (2, 3, 5))
> OR (EXTRACT(YEAR FROM __time) = 2001 AND EXTRACT(MONTH FROM __time) = 1)
> {code}
> {code}
> SELECT COUNT(*) FROM druid.foo
> WHERE
>   EXTRACT(YEAR FROM __time) IN (2000, 2001) AND (   (EXTRACT(YEAR FROM 
> __time) = 2000 AND EXTRACT(MONTH FROM __time) IN (2, 3, 5))
> OR (EXTRACT(YEAR FROM __time) = 2001 AND EXTRACT(MONTH FROM __time) = 1)
>   )
> {code}
> {code}
> SELECT COUNT(*) FROM druid.foo
> WHERE
>   EXTRACT(YEAR FROM __time) <> 2000 AND (   (EXTRACT(YEAR FROM __time) = 
> 2000 AND EXTRACT(MONTH FROM __time) IN (2, 3, 5))
> OR (EXTRACT(YEAR FROM __time) = 2001 AND EXTRACT(MONTH FROM __time) = 1)
>   )
> {code}
> {code}
> SELECT COUNT(*) FROM druid.foo
> WHERE
>   EXTRACT(MONTH FROM __time) IN (1, 2, 3, 5) AND (   (EXTRACT(YEAR FROM 
> __time) = 2000 AND EXTRACT(MONTH FROM __time) IN (2, 3, 5))
> OR (EXTRACT(YEAR FROM __time) = 2001 AND EXTRACT(MONTH FROM __time) = 1)
>   )
> {code}





[jira] [Commented] (CALCITE-2120) Request to support of Oracle database as meta data store for Druid

2018-01-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313621#comment-16313621
 ] 

slim bouguerra commented on CALCITE-2120:
-

I don't think this is the place to raise this JIRA case. Druid has a GitHub 
issue page. https://github.com/druid-io/druid/issues
FYI, there was some related work on this, but it was abandoned: 
https://github.com/druid-io/druid/pull/5053.

> Request to support of Oracle database as meta data store for Druid
> --
>
> Key: CALCITE-2120
> URL: https://issues.apache.org/jira/browse/CALCITE-2120
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: vaibhav
>Assignee: Julian Hyde
>
> Currently MySQL, Postgres, and Derby are supported as Druid metadata stores. 
> Request to support of Oracle database as meta data store for Druid.





[jira] [Updated] (CALCITE-2122) Continuum of DateRangeRules issues.

2018-01-05 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2122:

Component/s: core

> Continuum of DateRangeRules issues.
> ---
>
> Key: CALCITE-2122
> URL: https://issues.apache.org/jira/browse/CALCITE-2122
> Project: Calcite
>  Issue Type: Bug
>  Components: core, druid
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> The following test at org.apache.calcite.test.DruidAdapterIT
> {code}
> @Test
>   public void testCombinationOfValidAndNotValidAndInterval() {
> final String sql = "SELECT COUNT(*) FROM \"foodmart\" "
> + "WHERE  \"timestamp\" < CAST('1997-01-02' as TIMESTAMP) AND "
> + "EXTRACT(MONTH FROM \"timestamp\") = 01 AND EXTRACT(YEAR FROM 
> \"timestamp\") = 01 ";
> sql(sql, FOODMART)
> .runs();
>   }
> {code}
> Leads to 
> {code}
> java.lang.RuntimeException: exception while executing [SELECT COUNT(*) FROM 
> "foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
> EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 ]
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1411)
>   at 
> org.apache.calcite.test.DruidAdapterIT.testCombinationOfValidAndNotValidAndInterval(DruidAdapterIT.java:3497)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: java.lang.RuntimeException: With materializationsEnabled=false, 
> limit=0
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
>   ... 23 more
> Caused by: java.sql.SQLException: Error while executing SQL "SELECT COUNT(*) 
> FROM "foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
> EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 
> ": Error while applying rule FilterDateRangeRule, args 
> [rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
>  CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
> =(EXTRACT(FLAG(YEAR), $0), 1)))]
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
>   ... 24 more
> Caused by: java.lang.RuntimeException: Error while applying rule 
> FilterDateRangeRule, args 
> [rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
>  CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
> =(EXTRACT(FLAG(YEAR), $0), 1)))]
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(V

[jira] [Commented] (CALCITE-2121) CAST timestamp with time zone to SQL VARCHAR

2018-01-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313579#comment-16313579
 ] 

slim bouguerra commented on CALCITE-2121:
-

FYI [~jcamachorodriguez] this is a timezone-related issue. I think this is a 
valid cast, since {code} "SELECT CAST(CAST(\"timestamp\" AS TIMESTAMP) AS VARCHAR) 
FROM \"foodmart\""{code} works fine.

> CAST timestamp with time zone to SQL VARCHAR
> 
>
> Key: CALCITE-2121
> URL: https://issues.apache.org/jira/browse/CALCITE-2121
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> Running the following test  at org.apache.calcite.test.DruidAdapterIT
> {code}
> @Test
>   public void testCastTimestampWithTimeZone() {
> final String sql = "SELECT CAST(\"timestamp\" AS VARCHAR) FROM 
> \"foodmart\"";
> sql(sql, FOODMART).runs();
>   }
> {code}
> leads to 
> {code} 
> 2018-01-05 09:56:57,358 [main] ERROR - 
> org.apache.calcite.sql.validate.SqlValidatorException: Cast function cannot 
> convert value of type TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
> 2018-01-05 09:56:57,360 [main] ERROR - 
> org.apache.calcite.runtime.CalciteContextException: From line 1, column 8 to 
> line 1, column 35: Cast function cannot convert value of type 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
> java.lang.RuntimeException: exception while executing [SELECT 
> CAST("timestamp" AS VARCHAR) FROM "foodmart"]
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1411)
>   at 
> org.apache.calcite.test.DruidAdapterIT.testCastTimestampWithTimeZone(DruidAdapterIT.java:3503)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: java.lang.RuntimeException: With materializationsEnabled=false, 
> limit=0
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
>   ... 23 more
> Caused by: java.sql.SQLException: Error while executing SQL "SELECT 
> CAST("timestamp" AS VARCHAR) FROM "foodmart"": From line 1, column 8 to line 
> 1, column 35: Cast function cannot convert value of type 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
>   ... 24 more
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 35: Cast function cannot convert value of type 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
>   at sun.

[jira] [Created] (CALCITE-2122) Continuum of DateRangeRules issues.

2018-01-05 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2122:
---

 Summary: Continuum of DateRangeRules issues.
 Key: CALCITE-2122
 URL: https://issues.apache.org/jira/browse/CALCITE-2122
 Project: Calcite
  Issue Type: Bug
  Components: druid
Reporter: slim bouguerra
Assignee: Julian Hyde


The following test at org.apache.calcite.test.DruidAdapterIT
{code}
@Test
  public void testCombinationOfValidAndNotValidAndInterval() {
final String sql = "SELECT COUNT(*) FROM \"foodmart\" "
+ "WHERE  \"timestamp\" < CAST('1997-01-02' as TIMESTAMP) AND "
+ "EXTRACT(MONTH FROM \"timestamp\") = 01 AND EXTRACT(YEAR FROM 
\"timestamp\") = 01 ";
sql(sql, FOODMART)
.runs();
  }
{code}
Leads to 
{code}
java.lang.RuntimeException: exception while executing [SELECT COUNT(*) FROM 
"foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 ]

at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1411)
at 
org.apache.calcite.test.DruidAdapterIT.testCombinationOfValidAndNotValidAndInterval(DruidAdapterIT.java:3497)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.RuntimeException: With materializationsEnabled=false, 
limit=0
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
... 23 more
Caused by: java.sql.SQLException: Error while executing SQL "SELECT COUNT(*) 
FROM "foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 ": 
Error while applying rule FilterDateRangeRule, args 
[rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
 CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
=(EXTRACT(FLAG(YEAR), $0), 1)))]
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
... 24 more
Caused by: java.lang.RuntimeException: Error while applying rule 
FilterDateRangeRule, args 
[rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
 CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
=(EXTRACT(FLAG(YEAR), $0), 1)))]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:236)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
at 
org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
   

[jira] [Created] (CALCITE-2121) CAST timestamp with time zone to SQL VARCHAR

2018-01-05 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2121:
---

 Summary: CAST timestamp with time zone to SQL VARCHAR
 Key: CALCITE-2121
 URL: https://issues.apache.org/jira/browse/CALCITE-2121
 Project: Calcite
  Issue Type: Bug
  Components: core
Reporter: slim bouguerra
Assignee: Julian Hyde


Running the following test  at org.apache.calcite.test.DruidAdapterIT
{code}
@Test
  public void testCastTimestampWithTimeZone() {
final String sql = "SELECT CAST(\"timestamp\" AS VARCHAR) FROM 
\"foodmart\"";
sql(sql, FOODMART).runs();
  }
{code}
leads to 
{code} 
2018-01-05 09:56:57,358 [main] ERROR - 
org.apache.calcite.sql.validate.SqlValidatorException: Cast function cannot 
convert value of type TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
2018-01-05 09:56:57,360 [main] ERROR - 
org.apache.calcite.runtime.CalciteContextException: From line 1, column 8 to 
line 1, column 35: Cast function cannot convert value of type 
TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR

java.lang.RuntimeException: exception while executing [SELECT CAST("timestamp" 
AS VARCHAR) FROM "foodmart"]

at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1411)
at 
org.apache.calcite.test.DruidAdapterIT.testCastTimestampWithTimeZone(DruidAdapterIT.java:3503)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.RuntimeException: With materializationsEnabled=false, 
limit=0
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
... 23 more
Caused by: java.sql.SQLException: Error while executing SQL "SELECT 
CAST("timestamp" AS VARCHAR) FROM "foodmart"": From line 1, column 8 to line 1, 
column 35: Cast function cannot convert value of type 
TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
... 24 more
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 35: Cast function cannot convert value of type 
TIMESTAMP_WITH_LOCAL_TIME_ZONE(0) to type VARCHAR
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
at org.apache.calcite.sql.SqlUtil.newContextException

[jira] [Commented] (CALCITE-1658) DateRangeRules issues

2018-01-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313510#comment-16313510
 ] 

slim bouguerra commented on CALCITE-1658:
-

[~julianhyde] and [~nishantbangarwa] this simple test is failing, and I think it 
is related to this patch. Should I open a new JIRA case or reopen this one?
{code}
@Test
  public void testCombinationOfValidAndNotValidAndInterval() {
final String sql = "SELECT COUNT(*) FROM \"foodmart\" "
+ "WHERE  \"timestamp\" < CAST('1997-01-02' as TIMESTAMP) AND "
+ "EXTRACT(MONTH FROM \"timestamp\") = 01 AND EXTRACT(YEAR FROM 
\"timestamp\") = 01 ";
sql(sql, FOODMART)
.runs();
  }
{code} 
Stack 
{code}

java.lang.RuntimeException: exception while executing [SELECT COUNT(*) FROM 
"foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 ]

at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1411)
at 
org.apache.calcite.test.DruidAdapterIT.testCombinationOfValidAndNotValidAndInterval(DruidAdapterIT.java:3497)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.RuntimeException: With materializationsEnabled=false, 
limit=0
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:600)
at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
... 23 more
Caused by: java.sql.SQLException: Error while executing SQL "SELECT COUNT(*) 
FROM "foodmart" WHERE  "timestamp" < CAST('1997-01-02' as TIMESTAMP) AND 
EXTRACT(MONTH FROM "timestamp") = 01 AND EXTRACT(YEAR FROM "timestamp") = 01 ": 
Error while applying rule FilterDateRangeRule, args 
[rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
 CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
=(EXTRACT(FLAG(YEAR), $0), 1)))]
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
... 24 more
Caused by: java.lang.RuntimeException: Error while applying rule 
FilterDateRangeRule, args 
[rel#19:LogicalFilter.NONE.[](input=rel#18:Subset#0.BINDABLE.[],condition=AND(<($0,
 CAST('1997-01-02'):TIMESTAMP(0) NOT NULL), =(EXTRACT(FLAG(MONTH), $0), 1), 
=(EXTRACT(FLAG(YEAR), $0), 1)))]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:236)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
at 
org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
at org.apache.calcite.prepare.Prepare.optimiz

[jira] [Updated] (CALCITE-2119) Druid Filter validation Logic broken for filters like column_A = column_B

2018-01-04 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2119:

Description: 
Currently, the logic for Filter tree validation and Filter translation to Druid 
native JSON is in a two different functions.
Ideal to avoid this kind of runtime exceptions, we can blend both path of 
+Filter push down validation function 
+org.apache.calcite.adapter.druid.DruidQuery#isValidFilter(org.apache.calcite.rex.RexNode)
and the +Translation function 
+org.apache.calcite.adapter.druid.DruidQuery.Translator#translateFilter.
IMO, an easy implementation will be to try generating Druid native filter treat 
exceptions or null instance as it can not be pushed down. This will make code 
more readable and less duplication of logic that leads to fewer runtime 
exceptions.
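A minimal sketch of the single-code-path idea above (hypothetical names; the 
Comparison record and JSON strings are illustrative, not Calcite's actual API): 
translation doubles as validation, so a validity check can never disagree with 
the translator.

{code}
import java.util.Optional;

public class FilterTranslation {
  // Stand-in for a RexNode comparison; real code would inspect a RexCall.
  record Comparison(String leftColumn, String right, boolean rightIsLiteral) {}

  // Return the Druid native filter JSON, or empty if it cannot be pushed down.
  static Optional<String> translateFilter(Comparison c) {
    if (!c.rightIsLiteral()) {
      // e.g. "product_id" = "city": Druid has no column-to-column filter
      return Optional.empty();
    }
    return Optional.of("{\"type\":\"selector\",\"dimension\":\""
        + c.leftColumn() + "\",\"value\":\"" + c.right() + "\"}");
  }

  public static void main(String[] args) {
    // Column compared to a column: not pushable
    System.out.println(translateFilter(
        new Comparison("product_id", "city", false)).isPresent());  // false
    // Column compared to a literal: pushable
    System.out.println(translateFilter(
        new Comparison("product_id", "1558", true)).orElse("none"));
  }
}
{code}

With this shape, callers decide push-down eligibility by checking whether the 
Optional is present, instead of running a separate validation pass.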
The following test 
{code}
 @Test
  public void testFilterColumnAEqColumnB() {
final String sql = "SELECT count(*) from \"foodmart\" where \"product_id\" 
= \"city\"";
sql(sql, FOODMART).runs();
  }
 {code}
returns 
{code} 

java.lang.AssertionError: it is not a valid comparison: =($1, $29)

at 
org.apache.calcite.adapter.druid.DruidQuery$Translator.translateFilter(DruidQuery.java:1234)
at 
org.apache.calcite.adapter.druid.DruidQuery$Translator.access$000(DruidQuery.java:1114)
at 
org.apache.calcite.adapter.druid.DruidQuery.getQuery(DruidQuery.java:525)
at 
org.apache.calcite.adapter.druid.DruidQuery.deriveQuerySpec(DruidQuery.java:495)
at 
org.apache.calcite.adapter.druid.DruidQuery.getQuerySpec(DruidQuery.java:434)
at 
org.apache.calcite.adapter.druid.DruidQuery.deriveRowType(DruidQuery.java:324)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:224)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:857)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:883)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1766)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
at 
org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
at 
org.apache.calcite.adapter.druid.DruidRules$DruidFilterRule.onMatch(DruidRules.java:283)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
at 
org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:188)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:319)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:230)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:781)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:640)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:610)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:221)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:603)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
at 
org.apache.calcite.test.DruidAdapterIT.testFilterColumnAEqColumnB(DruidAdapterIT.java:3494)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRu

[jira] [Created] (CALCITE-2119) Druid Filter validation Logic broken for filters like column_A = column_B

2018-01-04 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2119:
---

 Summary: Druid Filter validation Logic broken for filters like 
column_A = column_B
 Key: CALCITE-2119
 URL: https://issues.apache.org/jira/browse/CALCITE-2119
 Project: Calcite
  Issue Type: Bug
  Components: druid
Affects Versions: 1.15.0
Reporter: slim bouguerra
Assignee: Julian Hyde


Currently, the logic for Filter tree validation and Filter translation to Druid 
native JSON lives in two different functions:
the +filter push-down validation function+ 
org.apache.calcite.adapter.druid.DruidQuery#isValidFilter(org.apache.calcite.rex.RexNode)
and the +translation function+ 
org.apache.calcite.adapter.druid.DruidQuery.Translator#translateFilter.
Ideally, to avoid this kind of runtime exception, we can merge the two code paths.
IMO, an easy implementation would be to try generating the Druid native filter 
and treat an exception or a null result as meaning the filter cannot be pushed 
down. This would make the code more readable, reduce duplicated logic, and lead 
to fewer runtime exceptions.
The following test 
{code}
 @Test
  public void testFilterColumnAEqColumnB() {
final String sql = "SELECT count(*) from \"foodmart\" where \"product_id\" 
= \"city\"";
sql(sql, FOODMART).runs();
  }
 {code}
returns 
{code} 

java.lang.AssertionError: it is not a valid comparison: =($1, $29)

at 
org.apache.calcite.adapter.druid.DruidQuery$Translator.translateFilter(DruidQuery.java:1234)
at 
org.apache.calcite.adapter.druid.DruidQuery$Translator.access$000(DruidQuery.java:1114)
at 
org.apache.calcite.adapter.druid.DruidQuery.getQuery(DruidQuery.java:525)
at 
org.apache.calcite.adapter.druid.DruidQuery.deriveQuerySpec(DruidQuery.java:495)
at 
org.apache.calcite.adapter.druid.DruidQuery.getQuerySpec(DruidQuery.java:434)
at 
org.apache.calcite.adapter.druid.DruidQuery.deriveRowType(DruidQuery.java:324)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:224)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:857)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:883)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1766)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
at 
org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
at 
org.apache.calcite.adapter.druid.DruidRules$DruidFilterRule.onMatch(DruidRules.java:283)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
at org.apache.calcite.tools.Programs$5.run(Programs.java:326)
at 
org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:387)
at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:188)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:319)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:230)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:781)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:640)
at 
org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:610)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:221)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:603)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
at 
org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:568)
at 
org.apache.calcite.test.CalciteAssert$AssertQuery.runs(CalciteAssert.java:1407)
at 
org.apache.calcite.test.DruidAdapterIT.testFilterColumnAEqColumnB(DruidAdapterIT.java:3494)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.

[jira] [Commented] (CALCITE-2113) Push column pruning to druid when Aggregate cannot be pushed

2018-01-02 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308893#comment-16308893
 ] 

slim bouguerra commented on CALCITE-2113:
-

[~nishantbangarwa] and [~julianhyde] thanks, I don't have any objections. I was 
wondering if we can pin down the case we are solving here, since I am working 
on another patch that will allow pushing aggregates on top of projects.

> Push column pruning to druid when Aggregate cannot be pushed
> 
>
> Key: CALCITE-2113
> URL: https://issues.apache.org/jira/browse/CALCITE-2113
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>
> Column pruning will not work when we have an Aggregate on top of a DruidQuery 
> and the aggregate cannot be pushed to druid. (one such case is when it is 
> count on a metric). 
> To fix this, we can introduce a new Rule to extract a Project from the 
> aggregate and push that to DruidQuery before pushing the aggregate.
> {code} 
> INFO  : Executing 
> command(queryId=hive_20171020180303_09fd3ab2-6e4a-42a1-9e85-4bca0e13460b): 
> explain SELECT COUNT(`__time`)
>   FROM tpcds_denormalized_druid_table_300M
>   WHERE `__time` >= '1999-11-01 00:00:00'
> AND `__time` <= '1999-11-10 00:00:00'
> AND `__time` < '1999-11-05 00:00:00'
> INFO  : Starting task [Stage-3:EXPLAIN] in serial mode
> INFO  : Resetting the caller context to 
> HIVE_SSN_ID:a5e1f82e-6d6c-405c-a6da-0d74f2248603
> INFO  : Completed executing 
> command(queryId=hive_20171020180303_09fd3ab2-6e4a-42a1-9e85-4bca0e13460b); 
> Time taken: 0.011 seconds
> INFO  : OK
> tpcds_real_bin_partitioned_orc_1000@tpcds_denormalized_druid_table_300m,tpcds_denormalized_druid_table_300m,Tbl:COMPLETE,Col:NONE,Output:["__time"],properties:{"druid.query.json":"{\"queryType\":\"select\",\"dataSource\":\"tpcds_real_bin_partitioned_orc_1000.tpcds_denormalized_druid_table_300M\",\"descending\":false,\"intervals\":[\"1999-11-01T00:00:00.000/1999-11-05T00:00:00.000\"],\"dimensions\":[\"i_item_id\",\"i_rec_start_date\",\"i_rec_end_date\",\"i_item_desc\",\"i_brand_id\",\"i_brand\",\"i_class_id\",\"i_class\",\"i_category_id\",\"i_category\",\"i_manufact_id\",\"i_manufact\",\"i_size\",\"i_formulation\",\"i_color\",\"i_units\",\"i_container\",\"i_manager_id\",\"i_product_name\",\"c_customer_id\",\"c_salutation\",\"c_first_name\",\"c_last_name\",\"c_preferred_cust_flag\",\"c_birth_day\",\"c_birth_month\",\"c_birth_year\",\"c_birth_country\",\"c_login\",\"c_email_address\",\"c_last_review_date\",\"ca_address_id\",\"ca_street_number\",\"ca_street_name\",\"ca_street_type\",\"ca_suite_number\",\"ca_city\",\"ca_county\",\"ca_state\",\"ca_zip\",\"ca_country\",\"ca_gmt_offset\",\"s_rec_end_date\",\"s_store_name\",\"s_hours\",\"s_manager\",\"s_market_id\",\"s_geography_class\",\"s_market_desc\",\"s_market_manager\",\"s_division_id\",\"s_division_name\",\"s_company_id\",\"s_company_name\",\"s_street_number\",\"s_street_name\",\"s_street_type\",\"s_suite_number\",\"s_city\",\"s_county\",\"s_state\",\"s_zip\",\"s_country\",\"s_gmt_offset\"],\"metrics\":[\"ss_ticket_number\",\"ss_quantity\",\"ss_wholesale_cost\",\"ss_list_price\",\"ss_sales_price\",\"ss_ext_discount_amt\",\"ss_ext_sales_price\",\"ss_ext_wholesale_cost\",\"ss_ext_list_price\",\"ss_ext_tax\",\"ss_coupon_amt\",\"ss_net_paid\",\"ss_net_paid_inc_tax\",\"ss_net_profit\",\"i_current_price\",\"i_wholesale_cost\",\"s_number_employees\",\"s_floor_space\",\"s_tax_precentage\"],\"granularity\":\"all\",\"pagingSpec\":{\"threshold\":16384,\"fromNext\":true},\"context\":{\"druid.query.fetch\":false}}","druid.query.
type":"select"}
>   |
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CALCITE-2113) Push column pruning to druid when Aggregate cannot be pushed

2017-12-30 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307013#comment-16307013
 ] 

slim bouguerra commented on CALCITE-2113:
-

[~nishantbangarwa] we currently push count(`__time`) as count(*), and we have 
also added the ability to push grouping and count over metrics columns. Thus 
the example you are pointing to will be pushed to Druid as a timeseries query. 
Could you please elaborate on which case we are solving here, with an example? 

> Push column pruning to druid when Aggregate cannot be pushed
> 
>
> Key: CALCITE-2113
> URL: https://issues.apache.org/jira/browse/CALCITE-2113
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>
> Column pruning will not work when we have an Aggregate on top of a DruidQuery 
> and the aggregate cannot be pushed to Druid (one such case is a count on a 
> metric). 
> To fix this, we can introduce a new Rule to extract a Project from the 
> aggregate and push that to DruidQuery before pushing the aggregate.
> {code} 
> INFO  : Executing 
> command(queryId=hive_20171020180303_09fd3ab2-6e4a-42a1-9e85-4bca0e13460b): 
> explain SELECT COUNT(`__time`)
>   FROM tpcds_denormalized_druid_table_300M
>   WHERE `__time` >= '1999-11-01 00:00:00'
> AND `__time` <= '1999-11-10 00:00:00'
> AND `__time` < '1999-11-05 00:00:00'
> INFO  : Starting task [Stage-3:EXPLAIN] in serial mode
> INFO  : Resetting the caller context to 
> HIVE_SSN_ID:a5e1f82e-6d6c-405c-a6da-0d74f2248603
> INFO  : Completed executing 
> command(queryId=hive_20171020180303_09fd3ab2-6e4a-42a1-9e85-4bca0e13460b); 
> Time taken: 0.011 seconds
> INFO  : OK
> tpcds_real_bin_partitioned_orc_1000@tpcds_denormalized_druid_table_300m,tpcds_denormalized_druid_table_300m,Tbl:COMPLETE,Col:NONE,Output:["__time"],properties:{"druid.query.json":"{\"queryType\":\"select\",\"dataSource\":\"tpcds_real_bin_partitioned_orc_1000.tpcds_denormalized_druid_table_300M\",\"descending\":false,\"intervals\":[\"1999-11-01T00:00:00.000/1999-11-05T00:00:00.000\"],\"dimensions\":[\"i_item_id\",\"i_rec_start_date\",\"i_rec_end_date\",\"i_item_desc\",\"i_brand_id\",\"i_brand\",\"i_class_id\",\"i_class\",\"i_category_id\",\"i_category\",\"i_manufact_id\",\"i_manufact\",\"i_size\",\"i_formulation\",\"i_color\",\"i_units\",\"i_container\",\"i_manager_id\",\"i_product_name\",\"c_customer_id\",\"c_salutation\",\"c_first_name\",\"c_last_name\",\"c_preferred_cust_flag\",\"c_birth_day\",\"c_birth_month\",\"c_birth_year\",\"c_birth_country\",\"c_login\",\"c_email_address\",\"c_last_review_date\",\"ca_address_id\",\"ca_street_number\",\"ca_street_name\",\"ca_street_type\",\"ca_suite_number\",\"ca_city\",\"ca_county\",\"ca_state\",\"ca_zip\",\"ca_country\",\"ca_gmt_offset\",\"s_rec_end_date\",\"s_store_name\",\"s_hours\",\"s_manager\",\"s_market_id\",\"s_geography_class\",\"s_market_desc\",\"s_market_manager\",\"s_division_id\",\"s_division_name\",\"s_company_id\",\"s_company_name\",\"s_street_number\",\"s_street_name\",\"s_street_type\",\"s_suite_number\",\"s_city\",\"s_county\",\"s_state\",\"s_zip\",\"s_country\",\"s_gmt_offset\"],\"metrics\":[\"ss_ticket_number\",\"ss_quantity\",\"ss_wholesale_cost\",\"ss_list_price\",\"ss_sales_price\",\"ss_ext_discount_amt\",\"ss_ext_sales_price\",\"ss_ext_wholesale_cost\",\"ss_ext_list_price\",\"ss_ext_tax\",\"ss_coupon_amt\",\"ss_net_paid\",\"ss_net_paid_inc_tax\",\"ss_net_profit\",\"i_current_price\",\"i_wholesale_cost\",\"s_number_employees\",\"s_floor_space\",\"s_tax_precentage\"],\"granularity\":\"all\",\"pagingSpec\":{\"threshold\":16384,\"fromNext\":true},\"context\":{\"druid.query.fetch\":false}}","druid.query.
type":"select"}
>   |
> {code} 
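The pruning idea above can be sketched outside Calcite: project each row down to the columns the aggregate actually references before aggregating, which is what the proposed rule would push into the DruidQuery. This is a toy Java illustration, not the planner rule itself; all class and method names are invented.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ProjectExtraction {
    // Toy stand-in for the proposed rule: narrow rows to the needed columns
    // (the extracted "Project") before the aggregate runs, so the scan reads
    // far fewer columns.
    static List<Map<String, Object>> project(List<? extends Map<String, ?>> rows,
                                             Set<String> needed) {
        return rows.stream().map(r -> {
            Map<String, Object> slim = new LinkedHashMap<>();
            for (String c : needed) {
                slim.put(c, r.get(c));
            }
            return slim;
        }).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = List.of(
            Map.of("__time", (Object) 1L, "i_color", "red", "ss_net_paid", 10.0),
            Map.of("__time", (Object) 2L, "i_color", "blue", "ss_net_paid", 20.0));
        // COUNT(__time) only needs the __time column; everything else is pruned.
        List<Map<String, Object>> pruned = project(rows, Set.of("__time"));
        System.out.println(pruned.get(0).keySet() + " rows=" + pruned.size());
    }
}
```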





[jira] [Commented] (CALCITE-2101) Push Count(column) over Druid storage handler

2017-12-20 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298623#comment-16298623
 ] 

slim bouguerra commented on CALCITE-2101:
-

https://github.com/apache/calcite/pull/586

> Push Count(column) over Druid storage handler
> -
>
> Key: CALCITE-2101
> URL: https://issues.apache.org/jira/browse/CALCITE-2101
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> We can make use of the newly added filtered aggregate (i.e. 
> org.apache.calcite.adapter.druid.DruidQuery.JsonFilteredAggregation) and push 
> down Count(column) as
> {code} Count(*) where column is not Null {code}





[jira] [Created] (CALCITE-2101) Push Count(column) over Druid storage handler

2017-12-19 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2101:
---

 Summary: Push Count(column) over Druid storage handler
 Key: CALCITE-2101
 URL: https://issues.apache.org/jira/browse/CALCITE-2101
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.0


We can make use of the newly added filtered aggregate (i.e. 
org.apache.calcite.adapter.druid.DruidQuery.JsonFilteredAggregation) and push 
down Count(column) as
{code} Count(*) where column is not Null {code}
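The equivalence this issue relies on can be checked in plain Java: COUNT(column) counts only non-null values, which is exactly COUNT(*) filtered on "column IS NOT NULL". A toy sketch with invented names, not the adapter code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class FilteredCountRewrite {
    // COUNT(column): counts only the non-null values of the column.
    static long countColumn(List<Double> column) {
        return column.stream().filter(Objects::nonNull).count();
    }

    // Rewritten form: COUNT(*) with the filter "column IS NOT NULL".
    static long countStarWhereNotNull(List<Double> column) {
        return column.stream().filter(v -> v != null).count();
    }

    public static void main(String[] args) {
        List<Double> storeCost = Arrays.asList(1.5, null, 2.0, null, 3.0);
        // Both forms agree: nulls are excluded either way.
        System.out.println(countColumn(storeCost));           // 3
        System.out.println(countStarWhereNotNull(storeCost)); // 3
    }
}
```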





[jira] [Commented] (CALCITE-2097) Push group by and filters over metrics columns to Druid Query scan.

2017-12-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295545#comment-16295545
 ] 

slim bouguerra commented on CALCITE-2097:
-

[~jcamachorodriguez] can you please take a look at this.

> Push group by and filters over metrics columns to Druid Query scan.
> ---
>
> Key: CALCITE-2097
> URL: https://issues.apache.org/jira/browse/CALCITE-2097
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> Druid 0.10.0 added the capability to group by or filter over metrics 
> columns.
> This patch will allow pushing grouping by a metric column using the same API 
> as for dimensions; the same will be done for filters. 





[jira] [Commented] (CALCITE-2097) Push group by and filters over metrics columns to Druid Query scan.

2017-12-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295541#comment-16295541
 ] 

slim bouguerra commented on CALCITE-2097:
-

PR sent https://github.com/apache/calcite/pull/585

> Push group by and filters over metrics columns to Druid Query scan.
> ---
>
> Key: CALCITE-2097
> URL: https://issues.apache.org/jira/browse/CALCITE-2097
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> Druid 0.10.0 added the capability to group by or filter over metrics 
> columns.
> This patch will allow pushing grouping by a metric column using the same API 
> as for dimensions; the same will be done for filters. 





[jira] [Commented] (CALCITE-2098) Push filters to Druid Query Scan when we have OR of AND clauses

2017-12-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295540#comment-16295540
 ] 

slim bouguerra commented on CALCITE-2098:
-

[~jcamachorodriguez] FYI

> Push filters to Druid Query Scan when we have OR of AND clauses
> ---
>
> Key: CALCITE-2098
> URL: https://issues.apache.org/jira/browse/CALCITE-2098
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> Currently the Druid filter rule doesn't push filters like {code} OR(AND(F1,F2), 
> F3){code}.
> This is due to the optimization logic in 
> {code}org.apache.calcite.adapter.druid.DruidRules.DruidFilterRule.splitFilters{code}.
> Here is a test example:
> {code}
> /**
>* @TODO Fix this case, Druid can handle this kind of expression but the way
>* org.apache.calcite.adapter.druid.DruidRules.DruidFilterRule.splitFilters
>* works doesn't accept this filter
>   */
>   @Ignore
>   @Test public void testFilterClauseWithMetric2() {
> String sql = "select sum(\"store_sales\")"
> + "from \"foodmart\" where \"product_id\" > 1555 or \"store_cost\" > 
> 5 or extract(year "
> + "from \"timestamp\") = 1997 "
> + "group by floor(\"timestamp\" to DAY),\"product_id\"";
> sql(sql)
> .queryContains(druidChecker("\"queryType\":\"groupBy\"", 
> "{\"type\":\"bound\","
> + 
> "\"dimension\":\"store_cost\",\"lower\":\"5\",\"lowerStrict\":true,"
> + "\"ordering\":\"numeric\"}"))
> .returnsUnordered("to be computed");
>   }
> {code}
> FYI in this example {code} extract(year from \"timestamp\") = 1997{code} will 
> be transformed to {code}(year >= 1996) AND(year <= 1997){code}





[jira] [Created] (CALCITE-2098) Push filters to Druid Query Scan when we have OR of AND clauses

2017-12-18 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2098:
---

 Summary: Push filters to Druid Query Scan when we have OR of AND 
clauses
 Key: CALCITE-2098
 URL: https://issues.apache.org/jira/browse/CALCITE-2098
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.0


Currently the Druid filter rule doesn't push filters like {code} OR(AND(F1,F2), 
F3){code}.
This is due to the optimization logic in 
{code}org.apache.calcite.adapter.druid.DruidRules.DruidFilterRule.splitFilters{code}.
Here is a test example:
{code}
/**
   * @TODO Fix this case, Druid can handle this kind of expression but the way
   * org.apache.calcite.adapter.druid.DruidRules.DruidFilterRule.splitFilters
   * works doesn't accept this filter
  */
  @Ignore
  @Test public void testFilterClauseWithMetric2() {
String sql = "select sum(\"store_sales\")"
+ "from \"foodmart\" where \"product_id\" > 1555 or \"store_cost\" > 5 
or extract(year "
+ "from \"timestamp\") = 1997 "
+ "group by floor(\"timestamp\" to DAY),\"product_id\"";
sql(sql)
.queryContains(druidChecker("\"queryType\":\"groupBy\"", 
"{\"type\":\"bound\","
+ 
"\"dimension\":\"store_cost\",\"lower\":\"5\",\"lowerStrict\":true,"
+ "\"ordering\":\"numeric\"}"))
.returnsUnordered("to be computed");
  }
{code}
FYI in this example {code} extract(year from \"timestamp\") = 1997{code} will 
be transformed to {code}(year >= 1996) AND(year <= 1997){code}
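Why the whole filter has to be pushed (or kept) as one unit can be seen with a toy evaluator: OR(AND(F1,F2), F3) accepts rows that no single conjunct accepts on its own, so splitting it into independently pushable conjuncts changes the result. Plain Java with an invented row shape, not the splitFilters code:

```java
import java.util.List;
import java.util.function.Predicate;

public class OrOfAndFilter {
    record Row(int productId, double storeCost, int year) {}

    static long keptCount(List<Row> rows) {
        Predicate<Row> f1 = r -> r.productId() > 1555;
        Predicate<Row> f2 = r -> r.storeCost() > 5;
        Predicate<Row> f3 = r -> r.year() == 1997;
        // OR(AND(f1, f2), f3) is a single unit: no conjunct of it can be
        // pushed on its own without changing which rows survive.
        Predicate<Row> filter = f1.and(f2).or(f3);
        return rows.stream().filter(filter).count();
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row(2000, 6.0, 1995),  // matches AND(f1, f2)
            new Row(100, 1.0, 1997),   // matches f3 only
            new Row(100, 9.0, 1995));  // matches f2 only -> filtered out
        System.out.println(keptCount(rows)); // 2
    }
}
```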





[jira] [Updated] (CALCITE-2097) Push group by and filters over metrics columns to Druid Query scan.

2017-12-18 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2097:

Summary: Push group by and filters over metrics columns to Druid Query 
scan.  (was: Push aggregation and filters over metrics to Druid Query scan.)

> Push group by and filters over metrics columns to Druid Query scan.
> ---
>
> Key: CALCITE-2097
> URL: https://issues.apache.org/jira/browse/CALCITE-2097
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> Druid 0.10.0 added the capability to group by or filter over metrics 
> columns.
> This patch will allow pushing grouping by a metric column using the same API 
> as for dimensions; the same will be done for filters. 





[jira] [Assigned] (CALCITE-2097) Push aggregation and filters over metrics to Druid Query scan.

2017-12-18 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned CALCITE-2097:
---

Assignee: slim bouguerra  (was: Julian Hyde)

> Push aggregation and filters over metrics to Druid Query scan.
> --
>
> Key: CALCITE-2097
> URL: https://issues.apache.org/jira/browse/CALCITE-2097
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 1.16.0
>
>
> Druid 0.10.0 added the capability to group by or filter over metrics 
> columns.
> This patch will allow pushing grouping by a metric column using the same API 
> as for dimensions; the same will be done for filters. 





[jira] [Created] (CALCITE-2097) Push aggregation and filters over metrics to Druid Query scan.

2017-12-18 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2097:
---

 Summary: Push aggregation and filters over metrics to Druid Query 
scan.
 Key: CALCITE-2097
 URL: https://issues.apache.org/jira/browse/CALCITE-2097
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: Julian Hyde
 Fix For: 1.16.0


Druid 0.10.0 added the capability to group by or filter over metrics 
columns.
This patch will allow pushing grouping by a metric column using the same API 
as for dimensions; the same will be done for filters. 





[jira] [Commented] (CALCITE-2094) Druid Count(*) return null instead of 0 when filter does not match rows.

2017-12-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295176#comment-16295176
 ] 

slim bouguerra commented on CALCITE-2094:
-

[~jcamachorodriguez] please take a look when you have time thanks.

> Druid Count(*) return null instead of 0 when filter does not match rows. 
> -
>
> Key: CALCITE-2094
> URL: https://issues.apache.org/jira/browse/CALCITE-2094
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>
> Druid adapter returns nothing for {code} Select count(*) from table where 
> condition_is_false {code}
> According to the SQL standard the result needs to be zero.
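The expected SQL semantics can be stated in a few lines of plain Java (names invented for illustration): a COUNT over zero qualifying rows is 0, never null or an empty result.

```java
import java.util.List;
import java.util.function.Predicate;

public class CountOnEmptyFilter {
    // SQL semantics: COUNT(*) over zero qualifying rows is 0, never NULL.
    static long countWhere(List<Integer> rows, Predicate<Integer> condition) {
        return rows.stream().filter(condition).count();
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3);
        // A condition that is always false matches no rows: the answer is 0.
        System.out.println(countWhere(rows, r -> false)); // 0
    }
}
```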





[jira] [Commented] (CALCITE-2094) Druid Count(*) return null instead of 0 when filter does not match rows.

2017-12-18 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295174#comment-16295174
 ] 

slim bouguerra commented on CALCITE-2094:
-

Pull request https://github.com/apache/calcite/pull/584/

> Druid Count(*) return null instead of 0 when filter does not match rows. 
> -
>
> Key: CALCITE-2094
> URL: https://issues.apache.org/jira/browse/CALCITE-2094
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>
> Druid adapter returns nothing for {code} Select count(*) from table where 
> condition_is_false {code}
> According to the SQL standard the result needs to be zero.





[jira] [Created] (CALCITE-2096) Remove extra dummy_aggregator for druid adapter

2017-12-18 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2096:
---

 Summary: Remove extra dummy_aggregator for druid adapter
 Key: CALCITE-2096
 URL: https://issues.apache.org/jira/browse/CALCITE-2096
 Project: Calcite
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.0


Druid 0.10.0 removed the restriction of requiring at least one aggregator even 
when none is needed.






[jira] [Created] (CALCITE-2095) Push always_true and always_false to Druid as Expression Filter.

2017-12-18 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2095:
---

 Summary: Push always_true and always_false to Druid as Expression 
Filter.
 Key: CALCITE-2095
 URL: https://issues.apache.org/jira/browse/CALCITE-2095
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: slim bouguerra
 Fix For: 1.16.0


Druid 0.11.0 adds a new kind of filter called an expression filter. It can be 
used to express filters such as always-true {code} {type:expression, 
expression: 1 == 1}{code} or always-false {code} {type:expression, expression: 
1 == 2} {code}.
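A minimal sketch of what such a filter could serialize to; the helper below is invented for illustration and is not the adapter's actual serializer.

```java
public class ExpressionFilter {
    // Hypothetical helper sketching Druid's expression-filter JSON shape.
    static String expressionFilter(String expression) {
        return "{\"type\":\"expression\",\"expression\":\"" + expression + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(expressionFilter("1 == 1")); // always-true filter
        System.out.println(expressionFilter("1 == 2")); // always-false filter
    }
}
```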






[jira] [Created] (CALCITE-2094) Druid Count(*) return null instead of 0 when filter does not match rows.

2017-12-18 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2094:
---

 Summary: Druid Count(*) return null instead of 0 when filter does 
not match rows. 
 Key: CALCITE-2094
 URL: https://issues.apache.org/jira/browse/CALCITE-2094
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: slim bouguerra


Druid adapter returns nothing for {code} Select count(*) from table where 
condition_is_false {code}
According to the SQL standard the result needs to be zero.






[jira] [Commented] (CALCITE-2041) Adding the ability to turn off nullability matching for ReduceExpressionsRule

2017-11-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244311#comment-16244311
 ] 

slim bouguerra commented on CALCITE-2041:
-

Trying to add some tests for {code} 
org.apache.calcite.rel.rules.ReduceExpressionsRule#FILTER_INSTANCE {code}, part 
of {code} 
org.apache.calcite.prepare.CalcitePrepareImpl#CONSTANT_REDUCTION_RULES {code},
but looking at the code base I can see that the only use of this rule is 
excluded via the following block at 
org/apache/calcite/prepare/CalcitePrepareImpl.java:571 (git hash 
221739354b56e34e9f1d41b42a0e6881a8f5ddee):
{code} 
// Change the below to enable constant-reduction.
if (false) {
  for (RelOptRule rule : CONSTANT_REDUCTION_RULES) {
planner.addRule(rule);
  }
}
{code}
While the comment says it enables constant reduction, I cannot see how that is 
done, since the block is never executed. I am not sure what the idea is of 
keeping such code behind {code} if (false) {code}.
I am also not sure what the best way is to test this rule, since it is excluded 
from the planner anyway.



> Adding the ability to turn off nullability matching for ReduceExpressionsRule
> -
>
> Key: CALCITE-2041
> URL: https://issues.apache.org/jira/browse/CALCITE-2041
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> In some cases, the user needs to select whether or not to add casts that 
> match nullability.
> One of the motivations behind this is to avoid unnecessary casts like the 
> following example.
> original filter 
> {code}
> OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
> UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
> 00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
> {code}
> the optimized expression with matching nullability
> {code}
> OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
> 00:00:00)):BOOLEAN))
> {code}
> As you can see, this extra cast gets in the way of subsequent plan 
> optimization steps.
> The desired expression can be obtained by turning off the nullability 
> matching.
> {code}
> OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)))
> {code}
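The effect of the proposed switch can be mimicked with a toy model (this is not the ReduceExpressionsRule API, and the flag name is invented): with nullability matching on, the reduced expression gets wrapped back in a cast so its type, including nullability, matches the original; with it off, the cleaner reduced form is kept.

```java
public class NullabilityMatching {
    // Toy model of the proposed switch: after constant reduction the
    // comparison is known non-nullable; matching nullability wraps it back
    // in a cast to the original (nullable) type, otherwise the reduced
    // expression is kept as-is. "matchNullability" is an invented name.
    static String reduce(String comparison, boolean matchNullability) {
        return matchNullability ? "CAST(" + comparison + "):BOOLEAN" : comparison;
    }

    public static void main(String[] args) {
        String cmp = ">=($0, 2010-01-01 00:00:00)";
        System.out.println(reduce(cmp, true));  // extra cast blocks later rules
        System.out.println(reduce(cmp, false)); // desired reduced form
    }
}
```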





[jira] [Commented] (CALCITE-2019) Push count over druid's time column as count (*)

2017-11-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244247#comment-16244247
 ] 

slim bouguerra commented on CALCITE-2019:
-

[~michaelmior] Thanks. 

> Push count over druid's time column as count (*)
> 
>
> Key: CALCITE-2019
> URL: https://issues.apache.org/jira/browse/CALCITE-2019
> Project: Calcite
>  Issue Type: Improvement
>  Components: druid
>Reporter: slim bouguerra
>Assignee: Jesus Camacho Rodriguez
>
> Druid's time column is not null by default, so we can transform {code} select 
> count(__time) from table {code} to {code} select count(*) from table{code}





[jira] [Commented] (CALCITE-2041) Adding the ability to turn off nullability matching for ReduceExpressionsRule

2017-11-08 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244016#comment-16244016
 ] 

slim bouguerra commented on CALCITE-2041:
-

[~julianhyde] why do the discussions always have to be you reminding people how 
bad they are and how great you are? The fact that my previous PR failed 
checkstyle has nothing to do with this contribution, and I haven't written 
tests yet since I am not even sure this is the way to go. 

> Adding the ability to turn off nullability matching for ReduceExpressionsRule
> -
>
> Key: CALCITE-2041
> URL: https://issues.apache.org/jira/browse/CALCITE-2041
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> In some cases, the user needs to select whether or not to add casts that 
> match nullability.
> One of the motivations behind this is to avoid unnecessary casts like the 
> following example.
> original filter 
> {code}
> OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
> UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
> 00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
> {code}
> the optimized expression with matching nullability
> {code}
> OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
> 00:00:00)):BOOLEAN))
> {code}
> As you can see, this extra cast gets in the way of subsequent plan 
> optimization steps.
> The desired expression can be obtained by turning off the nullability 
> matching.
> {code}
> OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)))
> {code}





[jira] [Updated] (CALCITE-2041) Adding the ability to turn off nullability matching for ReduceExpressionsRule

2017-11-07 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2041:

Description: 
In some cases, the user needs to select whether or not to add casts that match 
nullability.
One of the motivations behind this is to avoid unnecessary casts like the 
following example.
original filter 
{code}
OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
{code}
the optimized expression with matching nullability
{code}
OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
00:00:00)):BOOLEAN))
{code}
As you can see, this extra cast gets in the way of subsequent plan optimization 
steps.
The desired expression can be obtained by turning off the nullability matching.
{code}
OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)))
{code}


  was:
In some cases, the user needs to select whether or not to add casts that match 
nullability.
One of the motivations behind this is to avoid unnecessary casts like the 
following example.
original filter 
{code}
OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
{code}
the optimized expression with matching nullability
{code}
OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
00:00:00)):BOOLEAN))
{code}
As you can see, this extra cast gets in the way of subsequent plan optimization 
steps.
The desired expression can be obtained by turning off the nullability matching.
{code}
OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)),)
{code}



> Adding the ability to turn off nullability matching for ReduceExpressionsRule
> -
>
> Key: CALCITE-2041
> URL: https://issues.apache.org/jira/browse/CALCITE-2041
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> In some cases, the user needs to select whether or not to add casts that 
> match nullability.
> One of the motivations behind this is to avoid unnecessary casts like the 
> following example.
> original filter 
> {code}
> OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
> UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
> 00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
> {code}
> the optimized expression with matching nullability
> {code}
> OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
> 00:00:00)):BOOLEAN))
> {code}
> As you can see, this extra cast gets in the way of subsequent plan 
> optimization steps.
> The desired expression can be obtained by turning off the nullability 
> matching.
> {code}
> OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)))
> {code}





[jira] [Commented] (CALCITE-2041) Adding the ability to turn off nullability matching for ReduceExpressionsRule

2017-11-07 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243157#comment-16243157
 ] 

slim bouguerra commented on CALCITE-2041:
-

https://github.com/apache/calcite/pull/563

> Adding the ability to turn off nullability matching for ReduceExpressionsRule
> -
>
> Key: CALCITE-2041
> URL: https://issues.apache.org/jira/browse/CALCITE-2041
> Project: Calcite
>  Issue Type: Bug
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> In some cases, the user needs to select whether or not to add casts that 
> match nullability.
> One of the motivations behind this is to avoid unnecessary casts like the 
> following example.
> original filter 
> {code}
> OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
> UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
> 00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
> {code}
> the optimized expression with matching nullability
> {code}
> OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
> 00:00:00)):BOOLEAN))
> {code}
> As you can see, this extra cast gets in the way of subsequent plan 
> optimization steps.
> The desired expression can be obtained by turning off the nullability 
> matching.
> {code}
> OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)),)
> {code}





[jira] [Created] (CALCITE-2041) Adding the ability to turn off nullability matching for ReduceExpressionsRule

2017-11-07 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2041:
---

 Summary: Adding the ability to turn off nullability matching for 
ReduceExpressionsRule
 Key: CALCITE-2041
 URL: https://issues.apache.org/jira/browse/CALCITE-2041
 Project: Calcite
  Issue Type: Bug
Reporter: slim bouguerra
Assignee: Julian Hyde


In some cases, the user needs to select whether or not to add casts that match 
nullability.
One of the motivations behind this is to avoid unnecessary casts like the 
following example.
original filter 
{code}
OR(AND(>=($0, CAST(_UTF-16LE'2010-01-01 00:00:00 
UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15)), <=($0, CAST(_UTF-16LE'2012-03-01 
00:00:00 UTC'):TIMESTAMP_WITH_LOCAL_TIME_ZONE(15
{code}
the optimized expression with matching nullability
{code}
OR(AND(CAST(>=($0, 2010-01-01 00:00:00)):BOOLEAN, CAST(<=($0, 2012-03-01 
00:00:00)):BOOLEAN))
{code}
As you can see, this extra cast gets in the way of subsequent plan optimization 
steps.
The desired expression can be obtained by turning off the nullability matching.
{code}
OR(AND(>=($0, 2010-01-01 00:00:00), <=($0, 2012-03-01 00:00:00)),)
{code}






[jira] [Commented] (CALCITE-2035) Add APPROX_COUNT_DISTINCT aggregate function

2017-11-07 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16242518#comment-16242518
 ] 

slim bouguerra commented on CALCITE-2035:
-

[~julianhyde] with respect to ??Note "may" not "must", above: the planner may 
choose a plan that returns exact results.??
Is there a cost function that dictates which plan to choose? I couldn't see how 
that is done.
Thanks. 

> Add APPROX_COUNT_DISTINCT aggregate function
> 
>
> Key: CALCITE-2035
> URL: https://issues.apache.org/jira/browse/CALCITE-2035
> Project: Calcite
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>
> Add {{APPROX_COUNT_DISTINCT}} aggregate function. The effect of 
> {{APPROX_COUNT_DISTINCT(args)}} is the same as {{COUNT(DISTINCT args)}} but 
> the planner may generate approximate results (e.g. by using HyperLogLog).
> Note "may" not "must", above: the planner may choose a plan that returns 
> exact results.
> This is a step towards CALCITE-1588, which would allow an {{APPROXIMATE}} 
> clause and specify in more detail the degree of approximation allowed.
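One way a planner could honor the "may be approximate" contract is with a sketch such as HyperLogLog. As a rough illustration of the idea only (this is not Calcite's implementation, and all names are invented), here is a tiny K-minimum-values distinct-count estimator in plain Java:

```java
import java.util.TreeSet;

public class KmvApproxCountDistinct {
    static final int K = 256;

    // SplitMix64-style mixer used as a deterministic 64-bit hash.
    static long mix(long z) {
        z += 0x9e3779b97f4a7c15L;
        z = (z ^ (z >>> 30)) * 0xbf58476d1ce4e5b9L;
        z = (z ^ (z >>> 27)) * 0x94d049bb133111ebL;
        return z ^ (z >>> 31);
    }

    // K-minimum-values estimator: keep the K smallest normalized hashes.
    // Duplicates hash identically, so they cannot inflate the estimate.
    static double estimate(long[] values) {
        TreeSet<Double> smallest = new TreeSet<>();
        for (long v : values) {
            double h = (mix(v) >>> 11) / (double) (1L << 53); // uniform in [0, 1)
            smallest.add(h);
            if (smallest.size() > K) {
                smallest.pollLast(); // drop the current largest
            }
        }
        if (smallest.size() < K) {
            return smallest.size(); // small input: the count is exact
        }
        return (K - 1) / smallest.last();
    }

    // Demo input: 1000 distinct values, each repeated three times.
    static double demoEstimate() {
        long[] data = new long[3000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i % 1000;
        }
        return estimate(data);
    }

    public static void main(String[] args) {
        System.out.println(demoEstimate()); // roughly 1000
    }
}
```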





[jira] [Commented] (CALCITE-2019) Push count over druid's time column as count (*)

2017-10-20 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213283#comment-16213283
 ] 

slim bouguerra commented on CALCITE-2019:
-

https://github.com/apache/calcite/pull/551

> Push count over druid's time column as count (*)
> 
>
> Key: CALCITE-2019
> URL: https://issues.apache.org/jira/browse/CALCITE-2019
> Project: Calcite
>  Issue Type: Improvement
>  Components: druid
>Reporter: slim bouguerra
>Assignee: Jesus Camacho Rodriguez
>
> Druid's time column is not null by default, so we can transform {code} select 
> count(__time) from table {code} to {code} select count(*) from table{code}





[jira] [Assigned] (CALCITE-2019) Push count over druid's time column as count (*)

2017-10-20 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned CALCITE-2019:
---

Assignee: Jesus Camacho Rodriguez  (was: Julian Hyde)

> Push count over druid's time column as count (*)
> 
>
> Key: CALCITE-2019
> URL: https://issues.apache.org/jira/browse/CALCITE-2019
> Project: Calcite
>  Issue Type: Improvement
>  Components: druid
>Reporter: slim bouguerra
>Assignee: Jesus Camacho Rodriguez
>
> Druid's time column is not null by default, so we can transform {code} select 
> count(__time) from table {code} to {code} select count(*) from table{code}





[jira] [Updated] (CALCITE-2019) Push count over druid's time column as count (*)

2017-10-20 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2019:

Summary: Push count over druid's time column as count (*)  (was: Push count 
over druid column as count *)

> Push count over druid's time column as count (*)
> 
>
> Key: CALCITE-2019
> URL: https://issues.apache.org/jira/browse/CALCITE-2019
> Project: Calcite
>  Issue Type: Improvement
>  Components: druid
>Reporter: slim bouguerra
>Assignee: Julian Hyde
>
> Druid Time column is not null by default, thus we can transform {code} select 
> count(__time) from table {code} to {code} select count(*) from table{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CALCITE-2019) Push count over druid column as count *

2017-10-20 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2019:
---

 Summary: Push count over druid column as count *
 Key: CALCITE-2019
 URL: https://issues.apache.org/jira/browse/CALCITE-2019
 Project: Calcite
  Issue Type: Improvement
  Components: druid
Reporter: slim bouguerra
Assignee: Julian Hyde


Druid Time column is not null by default, thus we can transform {code} select 
count(__time) from table {code} to {code} select count(*) from table{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CALCITE-1787) thetaSketch Support for Druid Adapter

2017-06-09 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16044422#comment-16044422
 ] 

slim bouguerra commented on CALCITE-1787:
-

+1 for the idea of an abstract metric, or what we call in Druid a complex metric.
Are we saying that for this to work the Druid user has to follow this naming 
convention for columns?
Does this still work if we have multiple sketches per user? (It is a pretty 
common use case where a user is tracked via multiple streams, hence multiple 
sketches.)
How will Calcite know whether a given sketch can be used as a histogram or a 
count?
Keep in mind that hyperUnique, Theta-Sketches, and Quantile-Histogram are all 
UDFs, so we can have different UDFs that do the same thing in the same table, 
where each UDF has its own API and capabilities.
As an example, both Theta-Sketches (Yahoo sketches) and Druid HLL can be used to 
compute a unique-user estimate, but a Theta-Sketch can do 
intersection/subtraction/union while HLL can only do union.
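To make the mergeability point concrete, here is a minimal KMV ("k minimum values") distinct-count sketch, the family Theta sketches belong to. This is purely illustrative (names and parameters invented, not the DataSketches/Druid implementation); it shows why such sketches built over separate streams can be unioned and still yield one distinct-count estimate:

```python
# Minimal KMV distinct-count sketch: keep the k smallest hash values seen.
# Illustrative only -- not the Apache DataSketches / Druid implementation.
import hashlib

K = 128  # sketch size; relative error is roughly 1/sqrt(K)

def h(item):
    """Hash an item to a uniform float in [0, 1)."""
    digest = hashlib.md5(str(item).encode()).digest()
    return int.from_bytes(digest, "big") / 2**128

def sketch(items, k=K):
    """Build a sketch of a stream: the k smallest distinct hash values."""
    return sorted({h(x) for x in items})[:k]

def union(s1, s2, k=K):
    """Sketches merge losslessly under union -- the basis of cross-stream rollup."""
    return sorted(set(s1) | set(s2))[:k]

def estimate(s, k=K):
    """Estimate the distinct count from the k-th smallest hash value."""
    if len(s) < k:
        return len(s)        # fewer than k distinct values seen: count is exact
    return (k - 1) / s[k - 1]

stream_a = range(0, 700)       # 700 distinct users
stream_b = range(400, 1000)    # 600 distinct users, overlapping stream_a
est = estimate(union(sketch(stream_a), sketch(stream_b)))
# est should land near the true cross-stream distinct count of 1000
```

Set intersection (which Theta sketches support and HLL does not) requires tracking the sampling threshold per sketch, which is exactly the extra machinery the Theta family adds on top of this idea.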

> thetaSketch Support for Druid Adapter
> -
>
> Key: CALCITE-1787
> URL: https://issues.apache.org/jira/browse/CALCITE-1787
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Affects Versions: 1.12.0
>Reporter: Zain Humayun
>Assignee: Zain Humayun
>Priority: Minor
>
> Currently, the Druid adapter does not support the 
> [thetaSketch|http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html]
>  aggregate type, which is used to measure the cardinality of a column 
> quickly. Many Druid instances support theta sketches, so I think it would be 
> a nice feature to have.
> I've been looking at the Druid adapter, and propose we add a new DruidType 
> called {{thetaSketch}} and then add logic in the {{getJsonAggregation}} 
> method in class {{DruidQuery}} to generate the {{thetaSketch}} aggregate. 
> This will require accessing information about the columns (what data type 
> they are) so that the thetaSketch aggregate is only produced if the column's 
> type is {{thetaSketch}}. 
> Also, I've noticed that a {{hyperUnique}} DruidType is currently defined, but 
> a {{hyperUnique}} aggregate is never produced. Since both are approximate 
> aggregators, I could also couple in the logic for {{hyperUnique}}.
> I'd love to hear your thoughts on my approach, and any suggestions you have 
> for this feature.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CALCITE-1787) thetaSketch Support for Druid Adapter

2017-06-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039593#comment-16039593
 ] 

slim bouguerra commented on CALCITE-1787:
-

My 2 cents: I think renaming adds complexity, and the outcome is similar-ish to 
leaving it as is.
I think going the UDF route is better. As you can see in the sketch-hive docs 
(https://datasketches.github.io/docs/Theta/ThetaHiveUDFs.html) it is treated as 
a UDF; I am assuming SQL/Hive users at Yahoo are already using those functions, 
so making Calcite align with this syntax makes perfect sense to me.


> thetaSketch Support for Druid Adapter
> -
>
> Key: CALCITE-1787
> URL: https://issues.apache.org/jira/browse/CALCITE-1787
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Affects Versions: 1.12.0
>Reporter: Zain Humayun
>Assignee: Zain Humayun
>Priority: Minor
>
> Currently, the Druid adapter does not support the 
> [thetaSketch|http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html]
>  aggregate type, which is used to measure the cardinality of a column 
> quickly. Many Druid instances support theta sketches, so I think it would be 
> a nice feature to have.
> I've been looking at the Druid adapter, and propose we add a new DruidType 
> called {{thetaSketch}} and then add logic in the {{getJsonAggregation}} 
> method in class {{DruidQuery}} to generate the {{thetaSketch}} aggregate. 
> This will require accessing information about the columns (what data type 
> they are) so that the thetaSketch aggregate is only produced if the column's 
> type is {{thetaSketch}}. 
> Also, I've noticed that a {{hyperUnique}} DruidType is currently defined, but 
> a {{hyperUnique}} aggregate is never produced. Since both are approximate 
> aggregators, I could also couple in the logic for {{hyperUnique}}.
> I'd love to hear your thoughts on my approach, and any suggestions you have 
> for this feature.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CALCITE-1787) thetaSketch Support for Druid Adapter

2017-06-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037679#comment-16037679
 ] 

slim bouguerra commented on CALCITE-1787:
-

I am wondering what the renaming would buy us?


> thetaSketch Support for Druid Adapter
> -
>
> Key: CALCITE-1787
> URL: https://issues.apache.org/jira/browse/CALCITE-1787
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Affects Versions: 1.12.0
>Reporter: Zain Humayun
>Assignee: Zain Humayun
>Priority: Minor
>
> Currently, the Druid adapter does not support the 
> [thetaSketch|http://druid.io/docs/latest/development/extensions-core/datasketches-aggregators.html]
>  aggregate type, which is used to measure the cardinality of a column 
> quickly. Many Druid instances support theta sketches, so I think it would be 
> a nice feature to have.
> I've been looking at the Druid adapter, and propose we add a new DruidType 
> called {{thetaSketch}} and then add logic in the {{getJsonAggregation}} 
> method in class {{DruidQuery}} to generate the {{thetaSketch}} aggregate. 
> This will require accessing information about the columns (what data type 
> they are) so that the thetaSketch aggregate is only produced if the column's 
> type is {{thetaSketch}}. 
> Also, I've noticed that a {{hyperUnique}} DruidType is currently defined, but 
> a {{hyperUnique}} aggregate is never produced. Since both are approximate 
> aggregators, I could also couple in the logic for {{hyperUnique}}.
> I'd love to hear your thoughts on my approach, and any suggestions you have 
> for this feature.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CALCITE-1822) Push Aggregate that follows Aggregate down to Druid

2017-06-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037027#comment-16037027
 ] 

slim bouguerra commented on CALCITE-1822:
-

Another example: {code}select count(distinct dim2) from foo{code} translates to
{code} 
{
  "queryType": "groupBy",
  "dataSource": {
"type": "query",
"query": {
  "queryType": "groupBy",
  "dataSource": {
"type": "table",
"name": "foo"
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "dimensions": [
{
  "type": "default",
  "dimension": "dim2",
  "outputName": "d0",
  "outputType": "STRING"
}
  ]
}
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "aggregations": [
{
  "type": "count",
  "name": "a0"
}
  ]
}
{code}
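The shape of that rewrite can be sketched programmatically. This is a simplified illustration (helper name invented, intervals omitted, not the adapter's actual code): `COUNT(DISTINCT dim)` becomes an inner groupBy on the dimension, wrapped in an outer groupBy over a query datasource that simply counts the grouped rows:

```python
# Hypothetical helper showing how COUNT(DISTINCT dim) maps onto a nested
# Druid groupBy, as in the JSON above. Intervals omitted for brevity.
def count_distinct_query(table, dim):
    inner = {
        "queryType": "groupBy",
        "dataSource": {"type": "table", "name": table},
        "granularity": {"type": "all"},
        # the inner query emits one row per distinct value of `dim`
        "dimensions": [{"type": "default", "dimension": dim,
                        "outputName": "d0", "outputType": "STRING"}],
    }
    return {
        "queryType": "groupBy",
        # the outer query reads the inner query's rows as its datasource ...
        "dataSource": {"type": "query", "query": inner},
        "granularity": {"type": "all"},
        # ... and counting those rows gives the distinct count
        "aggregations": [{"type": "count", "name": "a0"}],
    }

q = count_distinct_query("foo", "dim2")
```

This is why pushing an Aggregate-over-Aggregate plan down to Druid works: the inner Aggregate becomes the query datasource of the outer one.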

> Push Aggregate that follows Aggregate down to Druid
> ---
>
> Key: CALCITE-1822
> URL: https://issues.apache.org/jira/browse/CALCITE-1822
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>
> Push Aggregate that follows Aggregate down to Druid. This can occur if the 
> SQL has an aggregate function applied to an aggregate function, or with a 
> sub-query in the FROM clause.
> {code}
> SELECT MAX(COUNT(*))
> FROM Emp
> GROUP BY deptno
> SELECT MAX(c) FROM (
>   SELECT deptno, COUNT(*) AS c
>   FROM Emp
>   GROUP BY deptno)
> {code}
> And there are other possibilities where there is a Project and/or a Filter 
> after the first Aggregate and before the second Aggregate.
> [~bslim], you wrote:
> {quote}
> For instance in druid we can do select count distinct as an inner group by 
> that group on the key and the outer one does then count. more complex cases 
> is count distinct from unions of multiple queries
> {quote}
> Can you please write a SQL statement for each of those cases?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CALCITE-1822) Push Aggregate that follows Aggregate down to Druid

2017-06-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037004#comment-16037004
 ] 

slim bouguerra edited comment on CALCITE-1822 at 6/5/17 2:09 PM:
-


{code}
SELECT MAX(COUNT(*))
FROM Emp
GROUP BY deptno

SELECT MAX(c) FROM (
  SELECT deptno, COUNT(*) AS c
  FROM Emp
  GROUP BY deptno)  
{code}
translates to 
{code} 
{
  "queryType": "groupBy",
  "dataSource": {
"type": "query",
"query": {
  "queryType": "groupBy",
  "dataSource": {
"type": "table",
"name": "Emp"
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "dimensions": [
{
  "type": "default",
  "dimension": "deptno",
  "outputName": "d0",
  "outputType": "STRING"
}
  ],
  "aggregations": [
{
  "type": "count",
  "name": "a0"
}
  ]
}
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "aggregations": [
{
  "type": "longMax",
  "name": "a0",
  "fieldName": "a0"
}
  ]
}
{code}


was (Author: bslim):
{code}  Push Aggregate that follows Aggregate down to Druid. This can 
occur if the SQL has an aggregate function applied to an aggregate function, or 
with a sub-query in the FROM clause.

{code}
SELECT MAX(COUNT(*))
FROM Emp
GROUP BY deptno

SELECT MAX(c) FROM (
  SELECT deptno, COUNT(*) AS c
  FROM Emp
  GROUP BY deptno)  {code}
{code} 
{
  "queryType": "groupBy",
  "dataSource": {
"type": "query",
"query": {
  "queryType": "groupBy",
  "dataSource": {
"type": "table",
"name": "Emp"
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "dimensions": [
{
  "type": "default",
  "dimension": "deptno",
  "outputName": "d0",
  "outputType": "STRING"
}
  ],
  "aggregations": [
{
  "type": "count",
  "name": "a0"
}
  ]
}
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "aggregations": [
{
  "type": "longMax",
  "name": "a0",
  "fieldName": "a0"
}
  ]
}
{code}

> Push Aggregate that follows Aggregate down to Druid
> ---
>
> Key: CALCITE-1822
> URL: https://issues.apache.org/jira/browse/CALCITE-1822
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>
> Push Aggregate that follows Aggregate down to Druid. This can occur if the 
> SQL has an aggregate function applied to an aggregate function, or with a 
> sub-query in the FROM clause.
> {code}
> SELECT MAX(COUNT(*))
> FROM Emp
> GROUP BY deptno
> SELECT MAX(c) FROM (
>   SELECT deptno, COUNT(*) AS c
>   FROM Emp
>   GROUP BY deptno)
> {code}
> And there are other possibilities where there is a Project and/or a Filter 
> after the first Aggregate and before the second Aggregate.
> [~bslim], you wrote:
> {quote}
> For instance in druid we can do select count distinct as an inner group by 
> that group on the key and the outer one does then count. more complex cases 
> is count distinct from unions of multiple queries
> {quote}
> Can you please write a SQL statement for each of those cases?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CALCITE-1822) Push Aggregate that follows Aggregate down to Druid

2017-06-05 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037004#comment-16037004
 ] 

slim bouguerra commented on CALCITE-1822:
-

Push Aggregate that follows Aggregate down to Druid. This can occur if the SQL 
has an aggregate function applied to an aggregate function, or with a sub-query 
in the FROM clause.

{code}
SELECT MAX(COUNT(*))
FROM Emp
GROUP BY deptno

SELECT MAX(c) FROM (
  SELECT deptno, COUNT(*) AS c
  FROM Emp
  GROUP BY deptno)
{code}
translates to
{code}
{
  "queryType": "groupBy",
  "dataSource": {
"type": "query",
"query": {
  "queryType": "groupBy",
  "dataSource": {
"type": "table",
"name": "Emp"
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "dimensions": [
{
  "type": "default",
  "dimension": "deptno",
  "outputName": "d0",
  "outputType": "STRING"
}
  ],
  "aggregations": [
{
  "type": "count",
  "name": "a0"
}
  ]
}
  },
  "intervals": {
"type": "intervals",
"intervals": [
  "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
]
  },
  "granularity": {
"type": "all"
  },
  "aggregations": [
{
  "type": "longMax",
  "name": "a0",
  "fieldName": "a0"
}
  ]
}
{code}

> Push Aggregate that follows Aggregate down to Druid
> ---
>
> Key: CALCITE-1822
> URL: https://issues.apache.org/jira/browse/CALCITE-1822
> Project: Calcite
>  Issue Type: Bug
>  Components: druid
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>
> Push Aggregate that follows Aggregate down to Druid. This can occur if the 
> SQL has an aggregate function applied to an aggregate function, or with a 
> sub-query in the FROM clause.
> {code}
> SELECT MAX(COUNT(*))
> FROM Emp
> GROUP BY deptno
> SELECT MAX(c) FROM (
>   SELECT deptno, COUNT(*) AS c
>   FROM Emp
>   GROUP BY deptno)
> {code}
> And there are other possibilities where there is a Project and/or a Filter 
> after the first Aggregate and before the second Aggregate.
> [~bslim], you wrote:
> {quote}
> For instance in druid we can do select count distinct as an inner group by 
> that group on the key and the outer one does then count. more complex cases 
> is count distinct from unions of multiple queries
> {quote}
> Can you please write a SQL statement for each of those cases?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CALCITE-1828) Push the FILTER clause into Druid as a Filtered Aggregator

2017-06-02 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-1828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035305#comment-16035305
 ] 

slim bouguerra commented on CALCITE-1828:
-

I agree on 1 if it is doable. I would also like to see (if possible) an OR of 
the filter conditions added upfront as a first filter. That would act as a 
pruning filter selecting the rows needed for the aggregations, and then the 
rest would be done as filtered aggregators. Taking a simple case where cond1 is 
country=US and cond2 is country=CA, the first filter would be country = US OR 
country = CA. Again, with arbitrary filter expressions it might be hard to 
construct such a filter, but maybe Calcite can do that.

2 -> Yes, I thought we already did this.

3 -> Not sure; maybe [~julianhyde] or [~jcamachorodriguez] can answer.
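The pruning idea sketched above can be illustrated as follows. The helper and the use of simple selector filters are invented for the example (not a Calcite rule): each condition becomes a filtered aggregator, and the OR of all conditions becomes one upfront query filter that skips rows no aggregator needs:

```python
# Sketch of the upfront-OR pruning idea for Druid filtered aggregators.
# Helper name and shapes are invented for illustration; conditions are
# simple Druid selector filters, e.g. {"dimension": "country", "value": "US"}.
def push_filtered_aggs(filtered_aggs):
    """filtered_aggs: list of (output_name, field_name, condition) tuples."""
    # One upfront filter: a row failing every condition contributes to no
    # aggregator, so it can be pruned before aggregation.
    prune = {"type": "or",
             "fields": [{"type": "selector", **cond}
                        for _, _, cond in filtered_aggs]}
    # Each aggregation keeps its own condition as a filtered aggregator.
    aggs = [{"type": "filtered",
             "filter": {"type": "selector", **cond},
             "aggregator": {"type": "doubleSum", "name": name,
                            "fieldName": field}}
            for name, field, cond in filtered_aggs]
    return prune, aggs

prune, aggs = push_filtered_aggs([
    ("a0", "col1", {"dimension": "country", "value": "US"}),
    ("a1", "col2", {"dimension": "country", "value": "CA"}),
])
# prune is country = US OR country = CA, applied before either aggregator runs
```

As the comment notes, with arbitrary (non-selector) filter expressions building the OR may be harder, but the structure is the same.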

> Push the FILTER clause into Druid as a Filtered Aggregator 
> ---
>
> Key: CALCITE-1828
> URL: https://issues.apache.org/jira/browse/CALCITE-1828
> Project: Calcite
>  Issue Type: Improvement
>  Components: druid
>Affects Versions: 1.12.0
>Reporter: Zain Humayun
>Assignee: Zain Humayun
>
> Druid has support for a special aggregator it calls the [Filtered 
> Aggregator|http://druid.io/docs/latest/querying/aggregations.html] that 
> allows aggregations to occur with filters independent to other filters in the 
> Druid query.
> An example where the filtered aggregator is useful:
> {code:sql}
> SELECT 
> sum("col1") FILTER (WHERE ),
> sum("col2") FILTER (WHERE )
> FROM "table"; 
> {code} 
> Currently, calcite will scan Druid, then do the filtering and aggregation 
> itself. With filtered aggregators, both the filter and aggregation and be 
> pushed into Druid. 
> *A few comments/questions:*
> 1) If all conditions in the filter clause are the same, then instead of 
> pushing filtered aggregators individually, it would make more sense to push 1 
> single filter into the Druid query. I.e the filters can be factored out into 
> 1 filter. I don't see calcite currently do this, does it have such a rule in 
> place already?
> 2) The filters can/should only be pushed if they are filtering on dimension 
> columns
> 3) Currently, the above query would create the following relation: 
> DruidQuery -> Project -> Aggregate. There is already a rule called 
> {{DruidAggregateProjectRule}} which matches the previous relation. Is it 
> better to add logic to that rule, or to create a new rule that also matches 
> that relation?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

