[jira] [Closed] (CALCITE-2171) ExampleFunctionTest is not reading model.json

2018-02-06 Thread Shuyi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen closed CALCITE-2171.
---
Resolution: Won't Fix

> ExampleFunctionTest is not reading model.json 
> --
>
> Key: CALCITE-2171
> URL: https://issues.apache.org/jira/browse/CALCITE-2171
> Project: Calcite
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> ExampleFunctionTest is not reading model.json at all. I think we can either 
> remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2171) ExampleFunctionTest is not reading model.json

2018-02-06 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354865#comment-16354865
 ] 

Julian Hyde commented on CALCITE-2171:
--

The unit-testing philosophy is that a test should depend on only a small part 
of the stack. That way the test tends to be fast, and tends to keep working 
when you are making changes elsewhere. If ModelHandler.addFunctions breaks, there 
are plenty of other tests that will detect it, just not this one.

> ExampleFunctionTest is not reading model.json 
> --
>
> Key: CALCITE-2171
> URL: https://issues.apache.org/jira/browse/CALCITE-2171
> Project: Calcite
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> ExampleFunctionTest is not reading model.json at all. I think we can either 
> remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2171) ExampleFunctionTest is not reading model.json

2018-02-06 Thread Shuyi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354858#comment-16354858
 ] 

Shuyi Chen commented on CALCITE-2171:
-

I see; do we leave it there just for demonstration purposes? It was confusing, 
though, when I was reading the code. I would suggest cloning the test so that it 
exercises both the SPI and a model.json file. Otherwise, if 
ModelHandler.addFunctions breaks, we won't be able to catch it. But maybe there 
are other tests covering that code path that I am unaware of. What do you think?
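
For example, a cloned test that goes through the model file could look roughly like 
the sketch below; the model path, the "adhoc" schema and the MY_PLUS function are 
assumptions for illustration, not the actual test contents.
{code:java}
// Sketch only: connecting with "model=" makes Calcite parse the JSON model,
// which exercises ModelHandler.addFunctions for the functions declared in it.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ModelFileFunctionTest {
  public static void main(String[] args) throws Exception {
    try (Connection connection = DriverManager.getConnection(
            "jdbc:calcite:model=src/test/resources/model.json");   // assumed path
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery(
             "select \"adhoc\".\"MY_PLUS\"(1, 2) as c from (values (1))")) {
      if (resultSet.next() && resultSet.getInt(1) != 3) {
        throw new AssertionError("MY_PLUS(1, 2) should be 3");
      }
    }
  }
}
{code}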

> ExampleFunctionTest is not reading model.json 
> --
>
> Key: CALCITE-2171
> URL: https://issues.apache.org/jira/browse/CALCITE-2171
> Project: Calcite
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> ExampleFunctionTest is not reading model.json at all. I think we can either 
> remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2171) ExampleFunctionTest is not reading model.json

2018-02-06 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354845#comment-16354845
 ] 

Julian Hyde commented on CALCITE-2171:
--

In my opinion, it's not a problem that it doesn't read model.json. Calcite has 
a schema SPI, and a JSON model is just one way to create a model. 
{{ExampleFunctionTest}} creates a model with two functions by calling 
{{SchemaPlus.add(String, Function)}}, and that's fine.
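
For reference, a minimal sketch of that kind of SPI-based registration (the 
MyPlusFunction class and the MY_PLUS name are illustrative, not the test's actual code):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.ScalarFunctionImpl;

public class SchemaSpiFunctionExample {
  /** Hypothetical UDF; Calcite invokes the public "eval" method. */
  public static class MyPlusFunction {
    public int eval(int a, int b) {
      return a + b;
    }
  }

  public static void main(String[] args) throws Exception {
    try (Connection connection = DriverManager.getConnection("jdbc:calcite:")) {
      CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
      SchemaPlus rootSchema = calciteConnection.getRootSchema();
      // Register the function directly through the schema SPI; no model.json involved.
      rootSchema.add("MY_PLUS",
          ScalarFunctionImpl.create(MyPlusFunction.class, "eval"));
    }
  }
}
{code}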

> ExampleFunctionTest is not reading model.json 
> --
>
> Key: CALCITE-2171
> URL: https://issues.apache.org/jira/browse/CALCITE-2171
> Project: Calcite
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> ExampleFunctionTest is not reading model.json at all. I think we can either 
> remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2171) [Cleanup] ExampleFunctionTest is not reading model.json

2018-02-06 Thread Shuyi Chen (JIRA)
Shuyi Chen created CALCITE-2171:
---

 Summary: [Cleanup] ExampleFunctionTest is not reading model.json 
 Key: CALCITE-2171
 URL: https://issues.apache.org/jira/browse/CALCITE-2171
 Project: Calcite
  Issue Type: Bug
Reporter: Shuyi Chen
Assignee: Shuyi Chen


ExampleFunctionTest is not reading model.json at all. I think we can either 
remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CALCITE-2171) ExampleFunctionTest is not reading model.json

2018-02-06 Thread Shuyi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen updated CALCITE-2171:

Summary: ExampleFunctionTest is not reading model.json   (was: [Cleanup] 
ExampleFunctionTest is not reading model.json )

> ExampleFunctionTest is not reading model.json 
> --
>
> Key: CALCITE-2171
> URL: https://issues.apache.org/jira/browse/CALCITE-2171
> Project: Calcite
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> ExampleFunctionTest is not reading model.json at all. I think we can either 
> remove it, or modify the code to read model.json. Please let me know.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354733#comment-16354733
 ] 

Julian Hyde commented on CALCITE-2170:
--

If you stay within the algebra you can still do algebraic operations, e.g. 
simplification and costing, and you know the types of the expressions. Plus, we can 
share a framework for pushing down expressions to Druid and to other engines such as 
Jethro.

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
> This is a link to the current supported functions and expressions in Druid
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the Docs an expression can be an actual tree of operators,
> Expression can be used with Filters, Projects, Aggregates, PostAggregates and
> Having filters. For Filters will have new Filter kind called Filter 
> expression.
> FYI, you might ask can we push everything as Expression Filter the short 
> answer
> is no because, other kinds of Druid filters perform better when used, Hence
> Expression filter is a plan B sort of thing. In order to push expression as
> Projects and Aggregates we will be using Expression based Virtual Columns.
> The major change is the merging of the logic of pushdown verification code and
> the Translation of RexCall/RexNode to Druid Json, native physical language. 
> The
> main drive behind this redesign is the fact that in order to check if we can
> push down a tree of expressions to Druid we have to compute the Druid 
> Expression
> String anyway. Thus instead of having 2 different code paths, one for pushdown
> validation and one for Json generation we can have one function that does 
> both.
> For instance instead of having one code path to test and check if a given 
> filter
> can be pushed or not and then having a translation layer code, will have
> one function that either returns a valid Druid Filter or null if it is not
> possible to pushdown. The same idea will be applied to how we push Projects 
> and
> Aggregates, Post Aggregates and Sort.
> Here are the main elements/Classes of the new design. First will be merging 
> the logic of
> Translation of Literals/InputRex/RexCall to a Druid physical representation.
> Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
> possible
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of Column name and Extraction Function on the top of 
> the input ref or
>  * {@link Pair of(null, null)} when can not translate to valid Druid column
>  */
>  protected static Pair toDruidColumn(RexNode 
> rexNode,
>  RelDataType rowType, DruidQuery druidQuery
>  )
> {code}
> In the other hand, in order to Convert Literals to Druid Literals will 
> introduce
> {code:java}
> /**
>  * @param rexNode rexNode to translate to Druid literal equivalante
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> Main new functions used to pushdown nodes and Druid Json generation.
> Filter pushdown verification and generates is done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort\{code}
> Pushing of PostAggregates will be using Expression post Aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> to generate expression
> For Expression computation most of the work is done here
> {code:java}
> 

[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354639#comment-16354639
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] you are exactly right, I am converting RexNodes to Druid physical 
operators (e.g. Strings and JSON objects). This is done as a direct column name, as a 
column name plus an extraction function (which can be considered a Project), and 
sometimes as a String expression. I am wondering, what is the advantage of 
translating it to a RexNode first?

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
> This is a link to the current supported functions and expressions in Druid
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the Docs an expression can be an actual tree of operators,
> Expression can be used with Filters, Projects, Aggregates, PostAggregates and
> Having filters. For Filters will have new Filter kind called Filter 
> expression.
> FYI, you might ask can we push everything as Expression Filter the short 
> answer
> is no because, other kinds of Druid filters perform better when used, Hence
> Expression filter is a plan B sort of thing. In order to push expression as
> Projects and Aggregates we will be using Expression based Virtual Columns.
> The major change is the merging of the logic of pushdown verification code and
> the Translation of RexCall/RexNode to Druid Json, native physical language. 
> The
> main drive behind this redesign is the fact that in order to check if we can
> push down a tree of expressions to Druid we have to compute the Druid 
> Expression
> String anyway. Thus instead of having 2 different code paths, one for pushdown
> validation and one for Json generation we can have one function that does 
> both.
> For instance instead of having one code path to test and check if a given 
> filter
> can be pushed or not and then having a translation layer code, will have
> one function that either returns a valid Druid Filter or null if it is not
> possible to pushdown. The same idea will be applied to how we push Projects 
> and
> Aggregates, Post Aggregates and Sort.
> Here are the main elements/Classes of the new design. First will be merging 
> the logic of
> Translation of Literals/InputRex/RexCall to a Druid physical representation.
> Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
> possible
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of Column name and Extraction Function on the top of 
> the input ref or
>  * {@link Pair of(null, null)} when can not translate to valid Druid column
>  */
>  protected static Pair toDruidColumn(RexNode 
> rexNode,
>  RelDataType rowType, DruidQuery druidQuery
>  )
> {code}
> In the other hand, in order to Convert Literals to Druid Literals will 
> introduce
> {code:java}
> /**
>  * @param rexNode rexNode to translate to Druid literal equivalante
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> Main new functions used to pushdown nodes and Druid Json generation.
> Filter pushdown verification and generates is done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort\{code}
> Pushing of PostAggregates will be using Expression post Aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> to generate expression
> For 

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has a newly built-in capability called Expressions that can be used 
to push down expressions such as projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be made to the 
Druid Calcite adapter. 

This is a link to the currently supported functions and expressions in Druid:
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the docs, an expression can be an actual tree of operators.
Expressions can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters there will be a new filter kind called an expression filter.
You might ask whether we can push everything as an expression filter; the short answer
is no, because the other kinds of Druid filters perform better when they apply, hence the
expression filter is a plan-B sort of thing. In order to push expressions as
Projects and Aggregates we will be using Expression-based Virtual Columns.

The major change is merging the logic of the pushdown-verification code with
the translation of RexCall/RexNode to Druid JSON, Druid's native physical language. The
main driver behind this redesign is the fact that, in order to check whether we can
push down a tree of expressions to Druid, we have to compute the Druid Expression
String anyway. Thus, instead of having two different code paths, one for pushdown
validation and one for JSON generation, we can have one function that does both.
For instance, instead of having one code path to check whether a given filter
can be pushed and then a separate translation layer, we will have
one function that either returns a valid Druid Filter or null if pushdown is not
possible. The same idea will be applied to how we push Projects,
Aggregates, Post Aggregates and Sort.
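
For illustration, a minimal sketch of that single-function pattern could look like the 
snippet below; DruidJsonFilter.and and translateLeafFilter are assumed helpers here, 
not the actual adapter code.
{code:java}
// Sketch of "validate and translate in one pass": either return a complete
// Druid JSON filter, or null when the expression cannot be pushed down.
@Nullable
static DruidJsonFilter toDruidFilter(RexNode condition, RelDataType rowType,
    DruidQuery query) {
  if (condition.isA(SqlKind.AND)) {
    final List<DruidJsonFilter> children = new ArrayList<>();
    for (RexNode operand : ((RexCall) condition).getOperands()) {
      final DruidJsonFilter child = toDruidFilter(operand, rowType, query);
      if (child == null) {
        return null; // one untranslatable conjunct means the whole AND cannot be pushed
      }
      children.add(child);
    }
    return DruidJsonFilter.and(children); // assumed factory method for an AND filter
  }
  // Leaf case: the assumed helper returns a valid Druid filter or null, never an exception.
  return translateLeafFilter(condition, rowType, query);
}
{code}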

Here are the main elements/classes of the new design. The first is merging the 
logic of translating Literals/InputRefs/RexCalls to a Druid physical representation.
To translate a leaf RexNode to a valid pair of Druid column + extraction function, if 
possible:
{code:java}
/**
 * @param rexNode leaf input ref to a Druid column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of column name and extraction function on top of the input ref,
 * or {@link Pair} of (null, null) when it cannot be translated to a valid Druid column
 */
protected static Pair toDruidColumn(RexNode rexNode,
    RelDataType rowType, DruidQuery druidQuery)
{code}
On the other hand, in order to convert literals to Druid literals, we will introduce
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated with the rexNode
 * @param druidQuery druid query
 *
 * @return non-null string, or null if it cannot be translated to a valid Druid equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
    DruidQuery druidQuery)
{code}
The main new functions used for node pushdown and Druid JSON generation are listed below.

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}
For project pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}
For grouping pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet
{code}
For aggregation pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown we added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Pushing of PostAggregates will use Expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For Expression computation most of the work is done here:
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid String expression out of a given RexNode, or
returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus a user can implement a custom expression converter based on the SqlOperator 
syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
  /**
   * Returns the Calcite SQL operator corresponding to the Druid operator.
   *
   * @return operator
   */
  SqlOperator calciteOperator();

  /**
   * Translates a rexNode to a valid Druid expression.
   * @param rexNode rexNode to translate to a Druid expression
   * @param rowType row type associated with the rexNode
   * @param druidQuery druid query used to figure out 

[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354621#comment-16354621
 ] 

slim bouguerra commented on CALCITE-2170:
-

[~julianhyde] I just updated the Jira. I will link to a PoC shortly; let's start 
the discussion.

  

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
> This is a link to the current supported functions and expressions in Druid
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the Docs an expression can be an actual tree of operators,
> Expression can be used with Filters, Projects, Aggregates, PostAggregates and
> Having filters. For Filters will have new Filter kind called Filter 
> expression.
> FYI, you might ask can we push everything as Expression Filter the short 
> answer
> is no because, other kinds of Druid filters perform better when used, Hence
> Expression filter is a plan B sort of thing. In order to push expression as
> Projects and Aggregates we will be using Expression based Virtual Columns.
> The major change is the merging of the logic of pushdown verification code and
> the Translation of RexCall/RexNode to Druid Json, native physical language. 
> The
> main drive behind this redesign is the fact that in order to check if we can
> push down a tree of expressions to Druid we have to compute the Druid 
> Expression
> String anyway. Thus instead of having 2 different code paths, one for pushdown
> validation and one for Json generation we can have one function that does 
> both.
> For instance instead of having one code path to test and check if a given 
> filter
> can be pushed or not and then having a translation layer code, will have
> one function that either returns a valid Druid Filter or null if it is not
> possible to pushdown. The same idea will be applied to how we push Projects 
> and
> Aggregates, Post Aggregates and Sort.
> Here are the main elements/Classes of the new design. First will be merging 
> the logic of
> Translation of Literals/InputRex/RexCall to a Druid physical representation.
> Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
> possible
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of Column name and Extraction Function on the top of 
> the input ref or
>  * {@link Pair of(null, null)} when can not translate to valid Druid column
>  */
>  protected static Pair toDruidColumn(RexNode 
> rexNode,
>  RelDataType rowType, DruidQuery druidQuery
>  )
> {code}
> In the other hand, in order to Convert Literals to Druid Literals will 
> introduce
> {code:java}
> /**
>  * @param rexNode rexNode to translate to Druid literal equivalante
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> Main new functions used to pushdown nodes and Druid Json generation.
> Filter pushdown verification and generates is done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code}
> For project pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
> {code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort\{code}
> Pushing of PostAggregates will be using Expression post Aggregates and use
> {code}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> to generate expression
> For Expression computation most of the work is done here
> {code:java}
> org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression\{code}
> This static function generates Druid String expression out of a given RexNode 
> or
> returns null 

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

This is a link to the current supported functions and expressions in Druid
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the Docs an expression can be an actual tree of operators,
Expression can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters will have new Filter kind called Filter expression.
FYI, you might ask can we push everything as Expression Filter the short answer
is no because, other kinds of Druid filters perform better when used, Hence
Expression filter is a plan B sort of thing. In order to push expression as
Projects and Aggregates we will be using Expression based Virtual Columns.

The major change is the merging of the logic of pushdown verification code and
the Translation of RexCall/RexNode to Druid Json, native physical language. The
main drive behind this redesign is the fact that in order to check if we can
push down a tree of expressions to Druid we have to compute the Druid Expression
String anyway. Thus instead of having 2 different code paths, one for pushdown
validation and one for Json generation we can have one function that does both.
For instance instead of having one code path to test and check if a given filter
can be pushed or not and then having a translation layer code, will have
one function that either returns a valid Druid Filter or null if it is not
possible to pushdown. The same idea will be applied to how we push Projects and
Aggregates, Post Aggregates and Sort.

Here are the main elements/Classes of the new design. First will be merging the 
logic of
Translation of Literals/InputRex/RexCall to a Druid physical representation.
Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
possible

{code:java}
/**
 * @param rexNode leaf Input Ref to Druid Column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of Column name and Extraction Function on the top of 
the input ref or
 * {@link Pair of(null, null)} when can not translate to valid Druid column
 */
 protected static Pair toDruidColumn(RexNode 
rexNode,
 RelDataType rowType, DruidQuery druidQuery
 )
{code}

On the other hand, in order to convert literals to Druid literals, we will introduce
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated to rexNode
 * @param druidQuery druid Query
 *
 * @return non null string or null if it can not translate to valid Druid 
equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
 DruidQuery druidQuery
)
{code}

Main new functions used to pushdown nodes and Druid Json generation.

Filter pushdown verification and generates is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}

For project pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
{code}

For Grouping pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
{code}

For Aggregation pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Pushing of PostAggregates will use Expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For Expression computation most of the work is done here:
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid String expression out of a given RexNode, or
returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus user can implement custom expression converter based on the SqlOperator 
syntax and signature.

{code:java}
public interface DruidSqlOperatorConverter {
 /**
 * Returns the calcite SQL operator corresponding to Druid operator.
 *
 * @return operator
 */
 SqlOperator calciteOperator();
 /**
 * Translate rexNode to valid Druid expression.
 * @param rexNode rexNode to translate to Druid expression
 * @param rowType row type associated with rexNode
 * @param druidQuery druid query used to figure 

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: (was: Druid 0.11 has newly built in capabilities called 
Expressions that can be used to push expression like 
projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

This is a link to the current supported functions and expressions in Druid
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the Docs an expression can be an actual tree of operators,
Expression can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters will have new Filter kind called Filter expression.
FYI, you might ask can we push everything as Expression Filter the short answer
is no because, other kinds of Druid filters perform better when used, Hence
Expression filter is a plan B sort of thing. In order to push expression as
Projects and Aggregates we will be using Expression based Virtual Columns.

The major change is the merging of the logic of pushdown verification code and
the Translation of RexCall/RexNode to Druid Json, native physical language. The
main drive behind this redesign is the fact that in order to check if we can
push down a tree of expressions to Druid we have to compute the Druid Expression
String anyway. Thus instead of having 2 different code paths, one for pushdown
validation and one for Json generation we can have one function that does both.
For instance instead of having one code path to test and check if a given filter
can be pushed or not and then having a translation layer code, will have
one function that either returns a valid Druid Filter or null if it is not
possible to pushdown. The same idea will be applied to how we push Projects and
Aggregates, Post Aggregates and Sort.

Here are the main elements/Classes of the new design. First will be merging the 
logic of
Translation of Literals/InputRex/RexCall to a Druid physical representation.
Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
possible
{code:java}
/**
 * @param rexNode leaf Input Ref to Druid Column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of Column name and Extraction Function on the top of 
the input ref or
 * {@link Pair of(null, null)} when can not translate to valid Druid column
 */
 protected static Pair toDruidColumn(RexNode 
rexNode,
 RelDataType rowType, DruidQuery druidQuery
 )
{code}

On the other hand, in order to convert literals to Druid literals, we will introduce
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated to rexNode
 * @param druidQuery druid Query
 *
 * @return non null string or null if it can not translate to valid Druid 
equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
 DruidQuery druidQuery
)
{code}
Main new functions used to pushdown nodes and Druid Json generation.

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}

For project pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}

For Grouping pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
{code}
For Aggregation pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Pushing of PostAggregates will use Expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For Expression computation most of the work is done here:
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid String expression out of a given RexNode, or
returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus user can implement custom expression converter based on the SqlOperator 
syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
 /**
 * Returns the calcite SQL operator corresponding to Druid operator.
 *
 * @return operator
 */
 SqlOperator calciteOperator();
 /**
 * Translate rexNode to valid Druid expression.
 * @param rexNode rexNode to translate to Druid expression
 * @param rowType row type associated with rexNode
 * @param druidQuery druid 

[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354613#comment-16354613
 ] 

Julian Hyde commented on CALCITE-2170:
--

It seems that you are proposing to translate Calcite RexNodes to strings. How 
about instead translating Calcite RexNodes to RexNodes that use Druid 
operators? (I feel sure that you can represent a Druid expression as a RexNode 
tree because anything can be represented as operators applied to leaf or 
non-leaf nodes.) Then as a separate step convert the Druid RexNode tree to a 
string.
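
A minimal sketch of what such a RexNode-to-RexNode rewrite could look like; the 
DruidOperatorShuttle class and the druidConcat operator are assumptions for 
illustration, not existing adapter code.
{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.rex.RexShuttle;
import org.apache.calcite.sql.SqlOperator;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;

/** Rewrites Calcite calls into calls that use Druid-specific operators;
 * rendering the result as a string would be a separate, later step. */
public class DruidOperatorShuttle extends RexShuttle {
  private final RexBuilder rexBuilder;
  // Hypothetical operator standing in for Druid's "concat" expression function.
  private final SqlOperator druidConcat;

  public DruidOperatorShuttle(RexBuilder rexBuilder, SqlOperator druidConcat) {
    this.rexBuilder = rexBuilder;
    this.druidConcat = druidConcat;
  }

  @Override public RexNode visitCall(RexCall call) {
    // Rewrite the operands first, then map the Calcite operator to its Druid counterpart.
    final List<RexNode> operands = new ArrayList<>();
    for (RexNode operand : call.getOperands()) {
      operands.add(operand.accept(this));
    }
    if (call.getOperator() == SqlStdOperatorTable.CONCAT) {
      return rexBuilder.makeCall(druidConcat, operands);
    }
    return call.clone(call.getType(), operands);
  }
}
{code}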

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
> This is a link to the current supported functions and expressions in Druid
>  [http://druid.io/docs/latest/misc/math-expr.html]
> As you can see from the Docs an expression can be an actual tree of operators,
> Expression can be used with Filters, Projects, Aggregates, PostAggregates and
> Having filters. For Filters will have new Filter kind called Filter 
> expression.
> FYI, you might ask can we push everything as Expression Filter the short 
> answer
> is no because, other kinds of Druid filters perform better when used, Hence
> Expression filter is a plan B sort of thing. In order to push expression as
> Projects and Aggregates we will be using Expression based Virtual Columns.
> The major change is the merging of the logic of pushdown verification code and
> the Translation of RexCall/RexNode to Druid Json, native physical language. 
> The
> main drive behind this redesign is the fact that in order to check if we can
> push down a tree of expressions to Druid we have to compute the Druid 
> Expression
> String anyway. Thus instead of having 2 different code paths, one for pushdown
> validation and one for Json generation we can have one function that does 
> both.
> For instance instead of having one code path to test and check if a given 
> filter
> can be pushed or not and then having a translation layer code, will have
> one function that either returns a valid Druid Filter or null if it is not
> possible to pushdown. The same idea will be applied to how we push Projects 
> and
> Aggregates, Post Aggregates and Sort.
> Here are the main elements/Classes of the new design. First will be merging 
> the logic of
> Translation of Literals/InputRex/RexCall to a Druid physical representation.
> Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
> possible
> org.apache.calcite.adapter.druid.DruidQuery#toDruidColumn
> {code:java}
> /**
>  * @param rexNode leaf Input Ref to Druid Column
>  * @param rowType row type
>  * @param druidQuery druid query
>  *
>  * @return {@link Pair} of Column name and Extraction Function on the top of 
> the input ref or
>  * {@link Pair of(null, null)} when can not translate to valid Druid column
>  */
>  protected static Pair toDruidColumn(RexNode 
> rexNode,
>  RelDataType rowType, DruidQuery druidQuery
>  )
> {code:java}
> In the other hand, in order to Convert Literals to Druid Literals will 
> introduce
> org.apache.calcite.adapter.druid.DruidQuery#toDruidLiteral
> {code:java}
> /**
>  * @param rexNode rexNode to translate to Druid literal equivalante
>  * @param rowType rowType associated to rexNode
>  * @param druidQuery druid Query
>  *
>  * @return non null string or null if it can not translate to valid Druid 
> equivalent
>  */
> @Nullable
> private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
>  DruidQuery druidQuery
> )
> {code}
> Main new functions used to pushdown nodes and Druid Json generation.
> Filter pushdown verification and generates is done via
> {code:java}
> org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
> {code:java}
> For project pushdown added
> {code}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan.
> {code:java}
> For Grouping pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
> {code}
> For Aggregation pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg\{code}
> For sort pushdown added
> {code:java}
> org.apache.calcite.adapter.druid.DruidQuery#computeSort\{code}
> Pushing of PostAggregates 

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

This is a link to the current supported functions and expressions in Druid
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the Docs an expression can be an actual tree of operators,
Expression can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters will have new Filter kind called Filter expression.
FYI, you might ask can we push everything as Expression Filter the short answer
is no because, other kinds of Druid filters perform better when used, Hence
Expression filter is a plan B sort of thing. In order to push expression as
Projects and Aggregates we will be using Expression based Virtual Columns.

The major change is the merging of the logic of pushdown verification code and
the Translation of RexCall/RexNode to Druid Json, native physical language. The
main drive behind this redesign is the fact that in order to check if we can
push down a tree of expressions to Druid we have to compute the Druid Expression
String anyway. Thus instead of having 2 different code paths, one for pushdown
validation and one for Json generation we can have one function that does both.
For instance instead of having one code path to test and check if a given filter
can be pushed or not and then having a translation layer code, will have
one function that either returns a valid Druid Filter or null if it is not
possible to pushdown. The same idea will be applied to how we push Projects and
Aggregates, Post Aggregates and Sort.

Here are the main elements/Classes of the new design. First will be merging the 
logic of
Translation of Literals/InputRex/RexCall to a Druid physical representation.
Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
possible
{code:java}
/**
 * @param rexNode leaf Input Ref to Druid Column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of Column name and Extraction Function on the top of 
the input ref or
 * {@link Pair of(null, null)} when can not translate to valid Druid column
 */
 protected static Pair toDruidColumn(RexNode 
rexNode,
 RelDataType rowType, DruidQuery druidQuery
 )
{code}

On the other hand, in order to convert literals to Druid literals, we will introduce
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated to rexNode
 * @param druidQuery druid Query
 *
 * @return non null string or null if it can not translate to valid Druid 
equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
 DruidQuery druidQuery
)
{code}
Main new functions used to pushdown nodes and Druid Json generation.

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}

For project pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}

For Grouping pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet.
{code}
For Aggregation pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Pushing of PostAggregates will use Expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For Expression computation most of the work is done here:
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid String expression out of a given RexNode, or
returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus user can implement custom expression converter based on the SqlOperator 
syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
 /**
 * Returns the calcite SQL operator corresponding to Druid operator.
 *
 * @return operator
 */
 SqlOperator calciteOperator();
 /**
 * Translate rexNode to valid Druid expression.
 * @param rexNode rexNode to translate to Druid expression
 * @param rowType row type associated with rexNode
 * @param druidQuery druid query used 

[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

This is a link to the current supported functions and expressions in Druid
 [http://druid.io/docs/latest/misc/math-expr.html]
As you can see from the Docs an expression can be an actual tree of operators,
Expression can be used with Filters, Projects, Aggregates, PostAggregates and
Having filters. For Filters will have new Filter kind called Filter expression.
FYI, you might ask can we push everything as Expression Filter the short answer
is no because, other kinds of Druid filters perform better when used, Hence
Expression filter is a plan B sort of thing. In order to push expression as
Projects and Aggregates we will be using Expression based Virtual Columns.

The major change is the merging of the logic of pushdown verification code and
the Translation of RexCall/RexNode to Druid Json, native physical language. The
main drive behind this redesign is the fact that in order to check if we can
push down a tree of expressions to Druid we have to compute the Druid Expression
String anyway. Thus instead of having 2 different code paths, one for pushdown
validation and one for Json generation we can have one function that does both.
For instance instead of having one code path to test and check if a given filter
can be pushed or not and then having a translation layer code, will have
one function that either returns a valid Druid Filter or null if it is not
possible to pushdown. The same idea will be applied to how we push Projects and
Aggregates, Post Aggregates and Sort.
Here are the main elements/Classes of the new design. First will be merging the 
logic of
Translation of Literals/InputRex/RexCall to a Druid physical representation.

Translate leaf RexNode to Valid pair Druid Column + Extraction functions if 
possible
org.apache.calcite.adapter.druid.DruidQuery#toDruidColumn
{code:java}
/**
 * @param rexNode leaf Input Ref to Druid Column
 * @param rowType row type
 * @param druidQuery druid query
 *
 * @return {@link Pair} of column name and extraction function on top of the input ref,
 * or {@link Pair} of (null, null) when it cannot be translated to a valid Druid column
 */
 protected static Pair toDruidColumn(RexNode 
rexNode,
 RelDataType rowType, DruidQuery druidQuery
 )
{code}
On the other hand, in order to convert literals to Druid literals, we will introduce
org.apache.calcite.adapter.druid.DruidQuery#toDruidLiteral
{code:java}
/**
 * @param rexNode rexNode to translate to its Druid literal equivalent
 * @param rowType rowType associated to rexNode
 * @param druidQuery druid Query
 *
 * @return non null string or null if it can not translate to valid Druid 
equivalent
 */
@Nullable
private static String toDruidLiteral(RexNode rexNode, RelDataType rowType,
 DruidQuery druidQuery
)
{code}
Main new functions used to pushdown nodes and Druid Json generation.

Filter pushdown verification and generation is done via
{code:java}
org.apache.calcite.adapter.druid.DruidJsonFilter#toDruidFilters
{code}

For project pushdown added the function
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectAsScan
{code}
This function will use expression-based virtual columns to represent the project.
For Grouping pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeProjectGroupSet
{code}
For Aggregation pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeDruidJsonAgg
{code}
For sort pushdown added
{code:java}
org.apache.calcite.adapter.druid.DruidQuery#computeSort
{code}
Pushing of PostAggregates will use Expression post-aggregates, using
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
to generate the expression.

For Expression computation most of the work is done here:
{code:java}
org.apache.calcite.adapter.druid.DruidExpressions#toDruidExpression
{code}
This static function generates a Druid String expression out of a given RexNode, or
returns null if that is not possible.
{code:java}
@Nullable
public static String toDruidExpression(
    final RexNode rexNode,
    final RelDataType inputRowType,
    final DruidQuery druidRel)
{code}
In order to support various kinds of expressions, we added the following interface:
{code:java}
org.apache.calcite.adapter.druid.DruidSqlOperatorConverter
{code}
Thus user can implement custom expression converter based on the SqlOperator 
syntax and signature.
{code:java}
public interface DruidSqlOperatorConverter {
 /**
 * Returns the calcite SQL operator corresponding to Druid operator.
 *
 * @return operator
 */
 SqlOperator calciteOperator();
 /**
 * Translate rexNode to 

[jira] [Commented] (CALCITE-508) Reading from ResultSet before calling next() should throw SQLException not NoSuchElementException

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354592#comment-16354592
 ] 

ASF GitHub Bot commented on CALCITE-508:


Github user asolimando commented on the issue:

https://github.com/apache/calcite-avatica/pull/23
  
Hi @vlsi,
I have updated the PR according to your suggestions (modulo the more precise 
error codes, as discussed above); can you please have a look?

I haven't managed to rebase my two commits for CALCITE-508 due to the 
intermixed commit for CALCITE-2083; is that OK? If not, how do you usually 
handle such situations?


> Reading from ResultSet before calling next() should throw SQLException not 
> NoSuchElementException
> -
>
> Key: CALCITE-508
> URL: https://issues.apache.org/jira/browse/CALCITE-508
> Project: Calcite
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Julian Hyde
>Priority: Major
>  Labels: newbie
>
> Reading from ResultSet before calling next() should throw SQLException not 
> NoSuchElementException.
> Each of the Cursor.Accessor.getXxx methods should convert runtime exceptions 
> to SQLException.
> JdbcTest.testExtract currently demonstrates this problem; it passes if there 
> is a NoSuchElementException, but should look for a SQLException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354449#comment-16354449
 ] 

Julian Hyde commented on CALCITE-2170:
--

For some time we have needed a general facility that can look at an expression 
(generally in a project or filter, but not necessarily) and split off the 
sub-expressions that can be pushed down. We are encountering the same issue 
with Jethro. It would be great if this task could contribute to that general 
facility. The beginnings are in the CalcRelSplitter class.
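
A minimal sketch of such a splitting facility (illustrative only; the canPushToEngine 
predicate is a placeholder, and the real machinery would live in something like 
CalcRelSplitter):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.util.Pair;

public class PushDownSplitter {
  /** Returns (pushable conjuncts, remaining conjuncts) of a filter condition. */
  public static Pair<List<RexNode>, List<RexNode>> split(
      RexNode condition, Predicate<RexNode> canPushToEngine) {
    final List<RexNode> pushed = new ArrayList<>();
    final List<RexNode> retained = new ArrayList<>();
    // Decompose "a AND b AND c" into its top-level conjuncts.
    for (RexNode conjunct : RelOptUtil.conjunctions(condition)) {
      if (canPushToEngine.test(conjunct)) {
        pushed.add(conjunct);
      } else {
        retained.add(conjunct);
      }
    }
    return Pair.of(pushed, retained);
  }
}
{code}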

> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CALCITE-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated CALCITE-2170:

Description: 
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 

 

 

  was:
Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 


> Use Druid Expressions capabilities to improve the amount of work that can be 
> pushed to Druid
> 
>
> Key: CALCITE-2170
> URL: https://issues.apache.org/jira/browse/CALCITE-2170
> Project: Calcite
>  Issue Type: New Feature
>  Components: druid
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Druid 0.11 has newly built in capabilities called Expressions that can be 
> used to push expression like projects/aggregates/filters. 
> In order to leverage this new feature, some changes need to be done to the 
> Druid Calcite adapter. 
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2170) Use Druid Expressions capabilities to improve the amount of work that can be pushed to Druid

2018-02-06 Thread slim bouguerra (JIRA)
slim bouguerra created CALCITE-2170:
---

 Summary: Use Druid Expressions capabilities to improve the amount 
of work that can be pushed to Druid
 Key: CALCITE-2170
 URL: https://issues.apache.org/jira/browse/CALCITE-2170
 Project: Calcite
  Issue Type: New Feature
  Components: druid
Reporter: slim bouguerra
Assignee: slim bouguerra


Druid 0.11 has newly built in capabilities called Expressions that can be used 
to push expression like projects/aggregates/filters. 

In order to leverage this new feature, some changes need to be done to the 
Druid Calcite adapter. 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CALCITE-2168) Implement a General Purpose Benchmark for Calcite

2018-02-06 Thread Alessandro Solimando (JIRA)

[ 
https://issues.apache.org/jira/browse/CALCITE-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353588#comment-16353588
 ] 

Alessandro Solimando commented on CALCITE-2168:
---

Thanks Edmon, I am strongly in favor of creating a representative benchmark; it 
can also be used internally to evaluate alternative strategies and/or 
improvements to the query planner, to measure the "gap" between Volcano and 
HEP, etc.

Concerning existing benchmarks that are relevant for Calcite, I know exactly 
the ones you already mentioned, but I will have a look to see whether there is 
anything else promising around.



> Implement a General Purpose Benchmark for Calcite 
> --
>
> Key: CALCITE-2168
> URL: https://issues.apache.org/jira/browse/CALCITE-2168
> Project: Calcite
>  Issue Type: Wish
>  Components: core
>Reporter: Edmon Begoli
>Assignee: Edmon Begoli
>Priority: Minor
>  Labels: performance
>   Original Estimate: 2,688h
>  Remaining Estimate: 2,688h
>
> Develop a benchmark that can be used for general-purpose benchmarking of 
> Calcite against other frameworks and databases, and for study, research, and 
> profiling of the framework.
> Use popular benchmarks such as TPC-DS (or TPC-H) or the Star Schema Benchmark (SSB) 
> and measure the performance of optimized vs. unoptimized Calcite queries, and 
> the overhead of going through Calcite adapters vs. natively accessing the 
> target DB.
> Look into the existing approaches and do perhaps something similar:
> * https://www.slideshare.net/julianhyde/w-435phyde-3
> * 
> https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_hive-performance-tuning/content/ch_cost-based-optimizer.html
> * (How much of this is still relevant (Hive 0.14)? Can we use 
> queries/benchmarks?)
> https://hortonworks.com/blog/hive-0-14-cost-based-optimizer-cbo-technical-overview/
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)