[jira] [Resolved] (FLINK-4385) Union on Timestamp fields does not work

2016-08-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4385.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 83c4b9707dd24425391bd5759f12878ad2f19175.

> Union on Timestamp fields does not work
> ---
>
> Key: FLINK-4385
> URL: https://issues.apache.org/jira/browse/FLINK-4385
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
> Fix For: 1.2.0
>
>
> The following does not work:
> {code}
> public static class SDF {
>   public Timestamp t = Timestamp.valueOf("1990-10-10 12:10:10");
> }
> ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
> DataSet<SDF> dataSet1 = env.fromElements(new SDF());
> DataSet<SDF> dataSet2 = env.fromElements(new SDF());
> BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> tableEnv.registerDataSet( "table0", dataSet1 );
> tableEnv.registerDataSet( "table1", dataSet2 );
> Table table = tableEnv.sql( "select t from table0 union select t from table1" );
> DataSet<Row> d = tableEnv.toDataSet(table, Row.class);
> d.print();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-3097) Add support for custom functions in Table API

2016-08-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-3097.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 90fdae452b6a03fafd4ec7827030d78ae87dbcd3.

> Add support for custom functions in Table API
> -
>
> Key: FLINK-3097
> URL: https://issues.apache.org/jira/browse/FLINK-3097
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> Currently, the Table API has a very limited set of built-in functions. 
> Support for custom functions can solve this problem. Adding a custom row 
> function could look like this:
> {code}
> TableEnvironment tableEnv = new TableEnvironment();
> RowFunction<String> rf = new RowFunction<String>() {
>     @Override
>     public String call(Object[] args) {
>         return ((String) args[0]).trim();
>     }
> };
> tableEnv.getConfig().registerRowFunction("TRIM", rf,
>     BasicTypeInfo.STRING_TYPE_INFO);
> DataSource<Tuple1<String>> input = env.fromElements(
>     new Tuple1<>(" 1 "));
> Table table = tableEnv.fromDataSet(input);
> Table result = table.select("TRIM(f0)");
> {code}
> This feature is also necessary as part of FLINK-2099.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4281) Wrap all Calcite Exceptions in Flink Exceptions

2016-08-17 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4281.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 7ce42c2e7e332b0fafd19a3f9ed49e4554958fdd.

> Wrap all Calcite Exceptions in Flink Exceptions
> ---
>
> Key: FLINK-4281
> URL: https://issues.apache.org/jira/browse/FLINK-4281
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>Assignee: Jark Wu
> Fix For: 1.2.0
>
>
> Some exceptions are already wrapped in Flink exceptions, but there are still 
> exceptions thrown by Calcite. I would propose that all exceptions thrown by 
> the Table API are Flink exceptions, especially the FlinkPlannerImpl 
> exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4189) Introduce symbols for internal use

2016-08-18 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4189.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in cc7450d1d1ee3b5189d92ba8642c7c6cffdf056a.

> Introduce symbols for internal use
> --
>
> Key: FLINK-4189
> URL: https://issues.apache.org/jira/browse/FLINK-4189
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> Currently we are using integer values to pass Calcite SQL symbols down to 
> code generation. This causes problems like in FLINK-4068. We should support 
> symbols internally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3580) Reintroduce Date/Time and implement scalar functions for it

2016-08-23 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433130#comment-15433130
 ] 

Timo Walther commented on FLINK-3580:
-

[~fhueske] and others: how should we implement current time? Calcite has 
"CURRENT_TIMESTAMP", which "Returns the current date and time in the session 
time zone, in a value of datatype TIMESTAMP WITH TIME ZONE". In Calcite this 
timestamp is equal across the entire job, so 
{code}CURRENT_TIMESTAMP=CURRENT_TIMESTAMP{code} is always true. But for 
streaming we need a real current timestamp. Either we implement this scalar 
function differently for batch and streaming jobs, or we implement an 
additional function, e.g. "INSTANT()". What do you think?
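
To illustrate the difference (a minimal sketch; the class and method names are 
made up for this comment, not proposed API):

{code}
public class CurrentTimeSemantics {

    // Calcite/batch semantics: the timestamp is captured once when the
    // query starts, so every access returns the same value and
    // CURRENT_TIMESTAMP = CURRENT_TIMESTAMP always holds.
    private final long queryStartMillis = System.currentTimeMillis();

    public long currentTimestampBatch() {
        return queryStartMillis;
    }

    // Streaming needs a real "instant" that is re-evaluated per record.
    public long instantStreaming() {
        return System.currentTimeMillis();
    }
}
{code}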

> Reintroduce Date/Time and implement scalar functions for it
> ---
>
> Key: FLINK-3580
> URL: https://issues.apache.org/jira/browse/FLINK-3580
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> This task includes:
> {code}
> DATETIME_PLUS
> EXTRACT_DATE
> FLOOR
> CEIL
> CURRENT_TIME
> CURRENT_TIMESTAMP
> LOCALTIME
> LOCALTIMESTAMP
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4469) Add support for user defined table function in Table API & SQL

2016-08-29 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15445747#comment-15445747
 ] 

Timo Walther commented on FLINK-4469:
-

Instead of using an `Iterable` as the return type, I would define something 
like a `Collector` parameter. Basically, user-defined table functions are very 
similar to Flink's `FlatMapFunction`. The framework should maintain the data 
structure where rows are stored (and maybe immediately process them further). 
The user should not have to create objects just for caching output.
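
A sketch of how this could look (not an agreed API, just to illustrate; it 
reuses the {{Word}} and {{UDTF}} names from the proposal below):

{code}
// eval() emits rows through a Collector instead of materializing them in
// a list, analogous to FlatMapFunction.flatMap()
public class SplitStringUDTF extends UDTF<Word> {
    public void eval(String str, Collector<Word> out) {
        if (str != null) {
            for (String s : str.split(",")) {
                out.collect(new Word(s, s.length())); // no intermediate list
            }
        }
    }
}
{code}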

> Add support for user defined table function in Table API & SQL
> --
>
> Key: FLINK-4469
> URL: https://issues.apache.org/jira/browse/FLINK-4469
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Normal user-defined functions, such as concat(), take in a single input row 
> and output a single output row. In contrast, table-generating functions 
> transform a single input row into multiple output rows. This is very useful 
> in some cases, such as looking up rows in HBase by rowkey and returning one 
> or more rows.
> Adding a user-defined table function should:
> 1. inherit from the UDTF class with a specific generic type T
> 2. define one or more eval methods.
> NOTE: 
> 1. the eval method must be public and non-static.
> 2. eval should always return java.lang.Iterable or scala.collection.Iterable 
> with the generic type T.
> 3. the generic type T is the row type returned by the table function. Because 
> of Java type erasure, we can’t extract T from the Iterable.
> 4. the eval method can be overloaded. Blink will choose the best matching 
> eval method to call according to parameter types and count.
> {code}
> public class Word {
>   public String word;
>   public Integer length;
>
>   // constructor used by eval() below
>   public Word(String word, Integer length) {
>     this.word = word;
>     this.length = length;
>   }
> }
> public class SplitStringUDTF extends UDTF<Word> {
>   public Iterable<Word> eval(String str) {
>     if (str == null) {
>       return new ArrayList<>();
>     } else {
>       List<Word> list = new ArrayList<>();
>       for (String s : str.split(",")) {
>         Word word = new Word(s, s.length());
>         list.add(word);
>       }
>       return list;
>     }
>   }
> }
> // in SQL
> tableEnv.registerFunction("split", new SplitStringUDTF())
> tableEnv.sql("SELECT a, b, t.* FROM MyTable CROSS APPLY split(c) AS t(w,l)")
> // in Java Table API
> tableEnv.registerFunction("split", new SplitStringUDTF())
> // rename split table columns to “w” and “l”
> table.crossApply("split(c)", "w, l")  
>  .select("a, b, w, l")
> // without renaming, we will use the original field names in the POJO/case/...
> table.crossApply("split(c)")
>  .select("a, b, word, length")
> // in Scala Table API
> val split = new SplitStringUDTF()
> table.crossApply(split('c), 'w, 'l)
>  .select('a, 'b, 'w, 'l)
> // outerApply for outer join to a UDTF
> table.outerApply(split('c))
>  .select('a, 'b, 'word, 'length)
> {code}
> Here we introduce CROSS/OUTER APPLY keywords to join table functions, as is 
> done in SQL Server. We can discuss the API in the comments. 
> Maybe the {{UDTF}} class should be replaced by {{TableFunction}} or something 
> else, because we have introduced {{ScalarFunction}} for custom functions and 
> we need to stay consistent. Still, I prefer {{UDTF}} over {{TableFunction}}, 
> as the former is more SQL-like and the latter may be confused with DataStream 
> functions. 
> **This issue is blocked by CALCITE-1309, so we need to wait for Calcite to 
> fix this and release.**
> See [1] for more information about UDTF design.
> [1] 
> https://docs.google.com/document/d/15iVc1781dxYWm3loVQlESYvMAxEzbbuVFPZWBYuY1Ek/edit#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4420) Introduce star(*) to select all of the columns in the table

2016-08-29 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4420.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 1f17886198eb67c8fdb53b96098ebb812029b3ba.

> Introduce star(*) to select all of the columns in the table
> ---
>
> Key: FLINK-4420
> URL: https://issues.apache.org/jira/browse/FLINK-4420
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.2.0
>
>
> When we have many columns in a table, it's easy to select all of them if we 
> support {{select *}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4469) Add support for user defined table function in Table API & SQL

2016-08-30 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448532#comment-15448532
 ] 

Timo Walther commented on FLINK-4469:
-

Yes, this looks very good. Looking forward to reviewing a PR.

> Add support for user defined table function in Table API & SQL
> --
>
> Key: FLINK-4469
> URL: https://issues.apache.org/jira/browse/FLINK-4469
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Normal user-defined functions, such as concat(), take in a single input row 
> and output a single output row. In contrast, table-generating functions 
> transform a single input row into multiple output rows. This is very useful 
> in some cases, such as looking up rows in HBase by rowkey and returning one 
> or more rows.
> Adding a user-defined table function should:
> 1. inherit from the UDTF class with a specific generic type T
> 2. define one or more eval methods.
> NOTE: 
> 1. the eval method must be public and non-static.
> 2. eval should always return java.lang.Iterable or scala.collection.Iterable 
> with the generic type T.
> 3. the generic type T is the row type returned by the table function. Because 
> of Java type erasure, we can’t extract T from the Iterable.
> 4. the eval method can be overloaded. Blink will choose the best matching 
> eval method to call according to parameter types and count.
> {code}
> public class Word {
>   public String word;
>   public Integer length;
>
>   // constructor used by eval() below
>   public Word(String word, Integer length) {
>     this.word = word;
>     this.length = length;
>   }
> }
> public class SplitStringUDTF extends UDTF<Word> {
>   public Iterable<Word> eval(String str) {
>     if (str == null) {
>       return new ArrayList<>();
>     } else {
>       List<Word> list = new ArrayList<>();
>       for (String s : str.split(",")) {
>         Word word = new Word(s, s.length());
>         list.add(word);
>       }
>       return list;
>     }
>   }
> }
> // in SQL
> tableEnv.registerFunction("split", new SplitStringUDTF())
> tableEnv.sql("SELECT a, b, t.* FROM MyTable CROSS APPLY split(c) AS t(w,l)")
> // in Java Table API
> tableEnv.registerFunction("split", new SplitStringUDTF())
> // rename split table columns to “w” and “l”
> table.crossApply("split(c)", "w, l")  
>  .select("a, b, w, l")
> // without renaming, we will use the original field names in the POJO/case/...
> table.crossApply("split(c)")
>  .select("a, b, word, length")
> // in Scala Table API
> val split = new SplitStringUDTF()
> table.crossApply(split('c), 'w, 'l)
>  .select('a, 'b, 'w, 'l)
> // outerApply for outer join to a UDTF
> table.outerApply(split('c))
>  .select('a, 'b, 'word, 'length)
> {code}
> Here we introduce CROSS/OUTER APPLY keywords to join table functions, as is 
> done in SQL Server. We can discuss the API in the comments. 
> Maybe the {{UDTF}} class should be replaced by {{TableFunction}} or something 
> else, because we have introduced {{ScalarFunction}} for custom functions and 
> we need to stay consistent. Still, I prefer {{UDTF}} over {{TableFunction}}, 
> as the former is more SQL-like and the latter may be confused with DataStream 
> functions. 
> **This issue is blocked by CALCITE-1309, so we need to wait for Calcite to 
> fix this and release.**
> See [1] for more information about UDTF design.
> [1] 
> https://docs.google.com/document/d/15iVc1781dxYWm3loVQlESYvMAxEzbbuVFPZWBYuY1Ek/edit#



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4541) Support for SQL NOT IN operator

2016-08-31 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4541:
---

 Summary: Support for SQL NOT IN operator
 Key: FLINK-4541
 URL: https://issues.apache.org/jira/browse/FLINK-4541
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


This should work:

{code}
  // assumes a case class like: case class WC(word: String, frequency: Long)
  def main(args: Array[String]): Unit = {

    // set up execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))

    // register the DataSet as table "WordCount"
    tEnv.registerDataSet("WordCount", input, 'word, 'frequency)

    // register a second table that excludes the word "hello"
    tEnv.registerTable("WordCount2",
      tEnv.fromDataSet(input, 'word, 'frequency).select('word).filter('word !== "hello"))

    // run a SQL query on the Table and retrieve the result as a new Table
    val table = tEnv.sql(
      "SELECT word, SUM(frequency) FROM WordCount " +
        "WHERE word NOT IN (SELECT word FROM WordCount2) GROUP BY word")

    table.toDataSet[WC].print()
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4542) Add support for MULTISET type and operations

2016-08-31 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4542:
---

 Summary: Add support for MULTISET type and operations
 Key: FLINK-4542
 URL: https://issues.apache.org/jira/browse/FLINK-4542
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther
Priority: Minor


Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT
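
Once supported, queries could look roughly like this (standard SQL syntax, 
sketched and untested; assumes a table {{t}} with MULTISET columns {{a}}, 
{{b}} and {{names}}):

{code}
tableEnv.sql("SELECT CARDINALITY(names) FROM t");      // number of elements
tableEnv.sql("SELECT a MULTISET UNION ALL b FROM t");  // bag union
tableEnv.sql("SELECT 'x' MEMBER OF names FROM t");     // membership test
{code}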



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2980) Add CUBE/ROLLUP/GROUPING SETS operator in Table API.

2016-08-31 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15452344#comment-15452344
 ] 

Timo Walther commented on FLINK-2980:
-

This task also includes implementing the functions {{GROUPING_ID}}, 
{{GROUP_ID}}.

> Add CUBE/ROLLUP/GROUPING SETS operator in Table API.
> 
>
> Key: FLINK-2980
> URL: https://issues.apache.org/jira/browse/FLINK-2980
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation, Table API & SQL
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Attachments: Cube-Rollup-GroupSet design doc in Flink.pdf
>
>
> Computing aggregates over a cube/rollup/grouping sets of several dimensions 
> is a common operation in data warehousing. It would be nice to have them in 
> Table API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2980) Add CUBE/ROLLUP/GROUPING SETS operator in Table API.

2016-08-31 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-2980:

Assignee: (was: Chengxiang Li)

> Add CUBE/ROLLUP/GROUPING SETS operator in Table API.
> 
>
> Key: FLINK-2980
> URL: https://issues.apache.org/jira/browse/FLINK-2980
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation, Table API & SQL
>Reporter: Chengxiang Li
> Attachments: Cube-Rollup-GroupSet design doc in Flink.pdf
>
>
> Computing aggregates over a cube/rollup/grouping sets of several dimensions 
> is a common operation in data warehousing. It would be nice to have them in 
> Table API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2980) Add CUBE/ROLLUP/GROUPING SETS operator in Table API.

2016-08-31 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15452344#comment-15452344
 ] 

Timo Walther edited comment on FLINK-2980 at 8/31/16 2:14 PM:
--

This task also includes implementing the functions {{GROUPING}}, 
{{GROUPING_ID}}, {{GROUP_ID}}.


was (Author: twalthr):
This task also includes implementing the functions {{GROUPING_ID}}, 
{{GROUP_ID}}.

> Add CUBE/ROLLUP/GROUPING SETS operator in Table API.
> 
>
> Key: FLINK-2980
> URL: https://issues.apache.org/jira/browse/FLINK-2980
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation, Table API & SQL
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Attachments: Cube-Rollup-GroupSet design doc in Flink.pdf
>
>
> Computing aggregates over a cube/rollup/grouping sets of several dimensions 
> is a common operation in data warehousing. It would be nice to have them in 
> Table API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4542) Add support for MULTISET type and operations

2016-08-31 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4542:

Description: 
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF

  was:
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT


> Add support for MULTISET type and operations
> 
>
> Key: FLINK-4542
> URL: https://issues.apache.org/jira/browse/FLINK-4542
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Minor
>
> Add the MULTISET type and add operations like:
> MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
> MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4542) Add support for MULTISET type and operations

2016-08-31 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4542:

Description: 
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF

  was:
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF


> Add support for MULTISET type and operations
> 
>
> Key: FLINK-4542
> URL: https://issues.apache.org/jira/browse/FLINK-4542
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Minor
>
> Add the MULTISET type and add operations like:
> MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
> MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
> SUBMULTISET OF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4542) Add support for MULTISET type and operations

2016-08-31 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4542:

Description: 
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF, IS A SET

  was:
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF


> Add support for MULTISET type and operations
> 
>
> Key: FLINK-4542
> URL: https://issues.apache.org/jira/browse/FLINK-4542
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Minor
>
> Add the MULTISET type and add operations like:
> MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
> MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
> SUBMULTISET OF, IS A SET



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4549) Test and document implicitly supported SQL functions

2016-09-01 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4549:

Issue Type: Improvement  (was: Bug)

> Test and document implicitly supported SQL functions
> 
>
> Key: FLINK-4549
> URL: https://issues.apache.org/jira/browse/FLINK-4549
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> Calcite supports many SQL functions by translating them into {{RexNode}}s. 
> However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
> documented, although supported.
> These functions should be tested and added to the documentation. We could 
> adopt parts from the Calcite documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4549) Test and document implicitly supported SQL functions

2016-09-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4549:
---

 Summary: Test and document implicitly supported SQL functions
 Key: FLINK-4549
 URL: https://issues.apache.org/jira/browse/FLINK-4549
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


Calcite supports many SQL functions by translating them into {{RexNode}}s. 
However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
documented, although supported.

These functions should be tested and added to the documentation. We could adopt 
parts from the Calcite documentation.
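
For example, queries like the following already run through the SQL API today 
(sketch, untested; assumes registered tables with matching columns):

{code}
// implicitly supported: NULLIF and OVERLAPS are translated by Calcite
Table t1 = tableEnv.sql("SELECT word, NULLIF(frequency, 0) FROM WordCount");
Table t2 = tableEnv.sql(
    "SELECT (start1, end1) OVERLAPS (start2, end2) FROM MyTable");
{code}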



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4549) Test and document implicitly supported SQL functions

2016-09-01 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4549:

Description: 
Calcite supports many SQL functions by translating them into {{RexNode}} s. 
However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
documented, although supported.

These functions should be tested and added to the documentation. We could adopt 
parts from the Calcite documentation.

  was:
Calcite supports many SQL functions by translating them into {{RexNode}}s. 
However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
documented, although supported.

These functions should be tested and added to the documentation. We could adopt 
parts from the Calcite documentation.


> Test and document implicitly supported SQL functions
> 
>
> Key: FLINK-4549
> URL: https://issues.apache.org/jira/browse/FLINK-4549
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> Calcite supports many SQL functions by translating them into {{RexNode}} s. 
> However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
> documented, although supported.
> These functions should be tested and added to the documentation. We could 
> adopt parts from the Calcite documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4550) Clearly define SQL operator table

2016-09-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4550:
---

 Summary: Clearly define SQL operator table
 Key: FLINK-4550
 URL: https://issues.apache.org/jira/browse/FLINK-4550
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


Currently, we use {{SqlStdOperatorTable.instance()}} for setting all supported 
operations. However, not all of them are actually supported. 
{{FunctionCatalog}} should only return those operators that are tested and 
documented.
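
A possible direction (sketch only; the operator selection is just an example): 
build the catalog on Calcite's {{ListSqlOperatorTable}} instead of exposing 
{{SqlStdOperatorTable.instance()}} wholesale.

{code}
// Sketch: register only operators that are tested and documented.
SqlOperatorTable supported = new ListSqlOperatorTable(Arrays.asList(
    SqlStdOperatorTable.AND,
    SqlStdOperatorTable.OR,
    SqlStdOperatorTable.PLUS,
    SqlStdOperatorTable.NULLIF));
{code}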



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4542) Add support for MULTISET type and operations

2016-09-01 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4542:

Description: 
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF, IS A SET, COLLECT

  was:
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF, IS A SET


> Add support for MULTISET type and operations
> 
>
> Key: FLINK-4542
> URL: https://issues.apache.org/jira/browse/FLINK-4542
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Minor
>
> Add the MULTISET type and add operations like:
> MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
> MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
> SUBMULTISET OF, IS A SET, COLLECT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4542) Add support for MULTISET type and operations

2016-09-01 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4542:

Description: 
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF, IS A SET, COLLECT, FUSION

  was:
Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
SUBMULTISET OF, IS A SET, COLLECT


> Add support for MULTISET type and operations
> 
>
> Key: FLINK-4542
> URL: https://issues.apache.org/jira/browse/FLINK-4542
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Priority: Minor
>
> Add the MULTISET type and add operations like:
> MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
> MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT, MEMBER OF, 
> SUBMULTISET OF, IS A SET, COLLECT, FUSION



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4553) Add support for table sampling

2016-09-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4553:
---

 Summary: Add support for table sampling
 Key: FLINK-4553
 URL: https://issues.apache.org/jira/browse/FLINK-4553
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Calcite SQL defines 3 sampling functions. What we want to implement and how is 
up for discussion:

{code}
SELECT * FROM myTable TABLESAMPLE SUBSTITUTE('medium')
SELECT * FROM myTable TABLESAMPLE BERNOULLI(percentage)
SELECT * FROM myTable TABLESAMPLE SYSTEM(percentage)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4554) Add support for array types

2016-09-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4554:
---

 Summary: Add support for array types
 Key: FLINK-4554
 URL: https://issues.apache.org/jira/browse/FLINK-4554
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Support creating arrays:
{code}ARRAY[1, 2, 3]{code}

Access array values:
{code}myArray[3]{code}

And operations like:
{{UNNEST, UNNEST WITH ORDINALITY, CARDINALITY}}
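
For reference, such queries could look as follows (sketch, untested; assumes a 
registered table {{t}} with an array column {{myArray}}):

{code}
tableEnv.sql("SELECT ARRAY[1, 2, 3] FROM t");
tableEnv.sql("SELECT myArray[3], CARDINALITY(myArray) FROM t");
tableEnv.sql("SELECT elem FROM t, UNNEST(t.myArray) AS u(elem)");
{code}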



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4557) Table API Stream Aggregations

2016-09-01 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4557:
---

 Summary: Table API Stream Aggregations
 Key: FLINK-4557
 URL: https://issues.apache.org/jira/browse/FLINK-4557
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


The Table API is a declarative API to define queries on static and streaming 
tables. So far, only projection, selection, and union are supported operations 
on streaming tables.

This issue and the corresponding FLIP proposes to add support for different 
types of aggregations on top of streaming tables. In particular, we seek to 
support:

*Group-window aggregates*, i.e., aggregates which are computed for a group of 
elements. A (time or row-count) window is required to bound the infinite input 
stream into a finite group.

*Row-window aggregates*, i.e., aggregates which are computed for each row, 
based on a window (range) of preceding and succeeding rows.

Each type of aggregate shall be supported on keyed/grouped or non-keyed/grouped 
data streams for streaming tables as well as batch tables.

Since time-windowed aggregates will be the first operation that require the 
definition of time, we also need to discuss how the Table API handles time 
characteristics, timestamps, and watermarks.

The FLIP can be found here: 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4549) Test and document implicitly supported SQL functions

2016-09-01 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4549:
---

Assignee: Timo Walther

> Test and document implicitly supported SQL functions
> 
>
> Key: FLINK-4549
> URL: https://issues.apache.org/jira/browse/FLINK-4549
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Calcite supports many SQL functions by translating them into {{RexNode}} s. 
> However, SQL functions like {{NULLIF}}, {{OVERLAPS}} are neither tested nor 
> documented, although supported.
> These functions should be tested and added to the documentation. We could 
> adopt parts from the Calcite documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4260) Allow SQL's LIKE ESCAPE

2016-09-02 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457995#comment-15457995
 ] 

Timo Walther commented on FLINK-4260:
-

[~miaoever] Do you still plan to implement this issue? Otherwise I would give 
someone else the chance to solve this issue...

> Allow SQL's LIKE ESCAPE
> ---
>
> Key: FLINK-4260
> URL: https://issues.apache.org/jira/browse/FLINK-4260
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: miaoever
>Priority: Minor
>
> Currently, the SQL API does not support specifying an ESCAPE character in a 
> LIKE expression. The SIMILAR TO should also support that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4565) Support for SQL IN operator

2016-09-02 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4565:
---

 Summary: Support for SQL IN operator
 Key: FLINK-4565
 URL: https://issues.apache.org/jira/browse/FLINK-4565
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


It seems that Flink SQL supports the uncorrelated sub-query IN operator. But it 
should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-06 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15467059#comment-15467059
 ] 

Timo Walther commented on FLINK-4565:
-

Hi [~chobeat], yes I think this issue is good for a beginner. Flink SQL already 
supports things like "SELECT x FROM myTable WHERE x IN (SELECT y FROM 
myOtherTable)". Nonetheless, we should add tests for it. The question is if we 
could make this functionality also available to the Table API. The Table API 
has Expressions (expressionDsl.scala for Scala and ExpressionParser for Java) 
and LogicalNodes (operators.scala). This type would combine both. 
{{expr.in(table)}}

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-4565) Support for SQL IN operator

2016-09-06 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15467059#comment-15467059
 ] 

Timo Walther edited comment on FLINK-4565 at 9/6/16 10:22 AM:
--

Hi [~chobeat], yes I think this issue is good for a beginner. Flink SQL already 
supports things like "SELECT x FROM myTable WHERE x IN (SELECT y FROM 
myOtherTable)". Nonetheless, we should add tests for it. The question is if we 
could make this functionality also available to the Table API. The Table API 
has Expressions (expressionDsl.scala for Scala and ExpressionParser for Java) 
and LogicalNodes (operators.scala). This issue would combine both. 
{{expr.in(table)}}


was (Author: twalthr):
Hi [~chobeat], yes I think this issue is good for a beginner. Flink SQL already 
supports things like "SELECT x FROM myTable WHERE x IN (SELECT y FROM 
myOtherTable)". Nonetheless, we should add tests for it. The question is if we 
could make this functionality also available to the Table API. The Table API 
has Expressions (expressionDsl.scala for Scala and ExpressionParser for Java) 
and LogicalNodes (operators.scala). This type would combine both. 
{{expr.in(table)}}

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4581) Table API throws "No suitable driver found for jdbc:calcite"

2016-09-06 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4581:
---

 Summary: Table API throws "No suitable driver found for 
jdbc:calcite"
 Key: FLINK-4581
 URL: https://issues.apache.org/jira/browse/FLINK-4581
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


It seems that in certain cases the internal Calcite JDBC driver cannot be 
found. We should either try to get rid of the entire JDBC invocation or fix 
this bug.

From ML: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Stream-sql-query-in-Flink-tp8928.html

{code}
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:524)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:331)
    at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:777)
    at org.apache.flink.client.CliFrontend.run(CliFrontend.java:253)
    at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1005)
    at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1048)
Caused by: java.lang.RuntimeException: java.sql.SQLException: No suitable driver found for jdbc:calcite:
    at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:151)
    at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
    at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:127)
    at org.apache.flink.api.table.FlinkRelBuilder$.create(FlinkRelBuilder.scala:56)
    at org.apache.flink.api.table.TableEnvironment.<init>(TableEnvironment.scala:73)
    at org.apache.flink.api.table.StreamTableEnvironment.<init>(StreamTableEnvironment.scala:58)
    at org.apache.flink.api.java.table.StreamTableEnvironment.<init>(StreamTableEnvironment.scala:45)
    at org.apache.flink.api.table.TableEnvironment$.getTableEnvironment(TableEnvironment.scala:376)
    at org.apache.flink.api.table.TableEnvironment.getTableEnvironment(TableEnvironment.scala)
    at org.myorg.quickstart.ReadingFromKafka2.main(ReadingFromKafka2.java:48)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
    ... 6 more
Caused by: java.sql.SQLException: No suitable driver found for jdbc:calcite:
    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:144)
    ... 20 more
{code}
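
A possible workaround until this is fixed (untested sketch): force-load 
Calcite's JDBC driver class so that it registers itself with the DriverManager 
before the TableEnvironment is created.

{code}
// Untested workaround sketch: explicitly load Calcite's JDBC driver so it
// registers with java.sql.DriverManager before creating the environment.
Class.forName("org.apache.calcite.jdbc.Driver");
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
{code}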



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-06 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15467383#comment-15467383
 ] 

Timo Walther commented on FLINK-4565:
-

tableEnv.sql(...) returns a Table containing a LogicalRelNode (which is a 
LogicalNode and just wraps RelNodes; Calcite already did the validation). The 
Table API returns Tables containing specific LogicalNodes depending on the 
operation (the Table API does the validation). All LogicalNodes have a 
"construct" method translating the Flink logical operators to Calcite's 
RelNode representation for optimization. The magic happens in 
Batch/StreamTableEnvironment.translate(), where the RelNodes are optimized 
using a specific set of rules and converted to "DataSet/DataStreamRel"s, also 
by using specific rules.
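
In code, the flow roughly looks like this (a sketch; only the class names are 
taken from the codebase, the comments summarize the steps above):

{code}
BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(
    ExecutionEnvironment.getExecutionEnvironment());
// sql() wraps the validated Calcite plan in a LogicalRelNode
Table t = tableEnv.sql("SELECT word FROM WordCount");
// Table API calls build specific LogicalNodes and validate themselves
Table u = t.select("word");
// translate(): LogicalNode.construct() -> Calcite RelNode -> rule-based
// optimization -> DataSetRel/DataStreamRel -> DataSet/DataStream program
DataSet<Row> result = tableEnv.toDataSet(u, Row.class);
{code}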

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4554) Add support for array types

2016-09-06 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15468154#comment-15468154
 ] 

Timo Walther commented on FLINK-4554:
-

Apache Calcite does the parsing for us and they strictly follow the SQL 
standard. I don't know if they support this syntax.

> Add support for array types
> ---
>
> Key: FLINK-4554
> URL: https://issues.apache.org/jira/browse/FLINK-4554
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> Support creating arrays:
> {code}ARRAY[1, 2, 3]{code}
> Access array values:
> {code}myArray[3]{code}
> And operations like:
> {{UNNEST, UNNEST WITH ORDINALITY, CARDINALITY}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-07 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15469873#comment-15469873
 ] 

Timo Walther commented on FLINK-4565:
-

Calcite translates the IN operator in 
{{org.apache.calcite.sql2rel.SqlToRelConverter#convertExpression}}, rewriting 
it into an Aggregate and a Join. After fixing some issues in 
"DataSetAggregate" we can execute: {{"SELECT WordCount.word FROM WordCount 
WHERE WordCount.word IN (SELECT WordCount1.word AS w FROM WordCount1)"}}. The 
plan looks like:

{code}
== Physical Execution Plan ==
Stage 4 : Data Source
  content : collect elements with CollectionInputFormat
  Partitioning : RANDOM_PARTITIONED

Stage 3 : Map
  content : from: (word, frequency)
  ship_strategy : Forward
  exchange_mode : BATCH
  driver_strategy : Map
  Partitioning : RANDOM_PARTITIONED

Stage 8 : Map
  content : from: (word, frequency)
  ship_strategy : Forward
  exchange_mode : BATCH
  driver_strategy : Map
  Partitioning : RANDOM_PARTITIONED

Stage 7 : Map
  content : prepare select: (word)
  ship_strategy : Forward
  exchange_mode : PIPELINED
  driver_strategy : Map
  Partitioning : RANDOM_PARTITIONED

Stage 6 : GroupCombine
  content : groupBy: (word), select:(word)
  ship_strategy : Forward
  exchange_mode : PIPELINED
  driver_strategy : Sorted Combine
  Partitioning : RANDOM_PARTITIONED

Stage 5 : GroupReduce
  content : groupBy: (word), select:(word)
  ship_strategy : Hash Partition on [0]
  exchange_mode : PIPELINED
  driver_strategy : Sorted Group Reduce
  Partitioning : RANDOM_PARTITIONED

Stage 2 : Join
  content : where: (=(word, w)), join: (word, frequency, w)
  ship_strategy : Hash Partition on [0]
  exchange_mode : PIPELINED
  driver_strategy : Hybrid Hash (build: from: (word, frequency) (id: 3))
  Partitioning : RANDOM_PARTITIONED

Stage 1 : FlatMap
  content : select: (word)
  ship_strategy : Forward
  exchange_mode : PIPELINED
  driver_strategy : FlatMap
  Partitioning : RANDOM_PARTITIONED

Stage 0 : Data Sink
  content : org.apache.flink.api.java.io.DiscardingOutputFormat
  ship_strategy : Forward
  exchange_mode : PIPELINED
  Partitioning : RANDOM_PARTITIONED
{code}

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-07 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15469879#comment-15469879
 ] 

Timo Walther commented on FLINK-4565:
-

It seems my answer was too late. Yes, I would also go for the second approach.

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4591) Select star does not work with grouping

2016-09-07 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4591:
---

 Summary: Select star does not work with grouping
 Key: FLINK-4591
 URL: https://issues.apache.org/jira/browse/FLINK-4591
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


It would be consistent if this would also work:

{{table.groupBy('*).select('*)}}

Currently, the star only works in a plain select without grouping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4592) Fix flaky test ScalarFunctionsTest.testCurrentTimePoint

2016-09-07 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4592:
---

 Summary: Fix flaky test ScalarFunctionsTest.testCurrentTimePoint
 Key: FLINK-4592
 URL: https://issues.apache.org/jira/browse/FLINK-4592
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


It seems that the test is still non-deterministic.

{code}
org.apache.flink.api.table.expressions.ScalarFunctionsTest
testCurrentTimePoint(org.apache.flink.api.table.expressions.ScalarFunctionsTest)
  Time elapsed: 0.083 sec  <<< FAILURE!
org.junit.ComparisonFailure: Wrong result for: 
AS(>=(CHAR_LENGTH(CAST(CURRENT_TIMESTAMP):VARCHAR(1) CHARACTER SET "ISO-8859-1" 
COLLATE "ISO-8859-1$en_US$primary" NOT NULL), 22), '_c0') expected:<[tru]e> but 
was:<[fals]e>
at org.junit.Assert.assertEquals(Assert.java:115)
at 
org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:126)
at 
org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:123)
at 
scala.collection.mutable.LinkedHashSet.foreach(LinkedHashSet.scala:87)
at 
org.apache.flink.api.table.expressions.utils.ExpressionTestBase.evaluateExprs(ExpressionTestBase.scala:123)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4597) Improve Scalar Function section in Table API documentation

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473128#comment-15473128
 ] 

Timo Walther commented on FLINK-4597:
-

I thought the same. I'm currently reworking the SQL documentation as part of 
FLINK-4549, so that all functions (including the implicit ones) are 
documented. I will use the syntax Calcite is using, e.g. {{boolean1 OR 
boolean2}} and {{value1 <> value2}}. What do you think?

> Improve Scalar Function section in Table API documentation
> --
>
> Key: FLINK-4597
> URL: https://issues.apache.org/jira/browse/FLINK-4597
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Minor
>
> The function signature in the Scalar Function section is a little confusing, 
> because it's hard to distinguish keywords and parameters. For example, in 
> {{EXTRACT(TIMEINTERVALUNIT FROM TEMPORAL)}} a user may not know TEMPORAL is a 
> parameter at first glance. I propose to use {{<>}} around parameters, i.e. 
> {{EXTRACT(<TIMEINTERVALUNIT> FROM <TEMPORAL>)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4598) Support NULLIF in Table API

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473139#comment-15473139
 ] 

Timo Walther commented on FLINK-4598:
-

I wonder if we have to support all functions in the Table API that are 
supported in SQL. SQL has a very large number of functions, while the Table 
API currently has only the very basic ones. Is {{NULLIF}} a function that is 
needed very often? We can support it in the Table API; I'm just worried about 
the large number of implicit functions.

> Support NULLIF  in Table API
> 
>
> Key: FLINK-4598
> URL: https://issues.apache.org/jira/browse/FLINK-4598
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> This could be a subtask of [FLINK-4549]. As Flink SQL has supported 
> {{NULLIF}} implicitly. We should support it in Table API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4599) Add 'explain()' also to StreamTableEnvironment

2016-09-08 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4599:
---

 Summary: Add 'explain()' also to StreamTableEnvironment
 Key: FLINK-4599
 URL: https://issues.apache.org/jira/browse/FLINK-4599
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


Currently, only the BatchTableEnvironment supports the {{explain}} command for 
tables. We should also support it for the StreamTableEnvironment.
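
For reference, this is what works today on the batch side (sketch):

{code}
// works today: print the translated plan of a batch table program
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
Table table = tEnv.sql("SELECT word FROM WordCount");
System.out.println(tEnv.explain(table));

// desired: the same explain(table) call on a StreamTableEnvironment
{code}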



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4600) Support min/max aggregations for Date/Time/Timestamp/Intervals

2016-09-08 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4600:
---

 Summary: Support min/max aggregations for 
Date/Time/Timestamp/Intervals
 Key: FLINK-4600
 URL: https://issues.apache.org/jira/browse/FLINK-4600
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


Currently no aggregation supports temporal types. At least min/max should be 
added for Date/Time/Timestamp/Intervals.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4600) Support min/max aggregations for Date/Time/Timestamp/Intervals

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473447#comment-15473447
 ] 

Timo Walther commented on FLINK-4600:
-

Go for it :) 
This issue requires some changes in 
"org.apache.flink.api.table.runtime.aggregate.AggregateUtil#transformToAggregateFunctions",
 "org.apache.flink.api.table.runtime.aggregate.MaxAggregate.scala" and tests. 
The intermediate results might work on primitive types to reduce object 
creation. 
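
For illustration, computing the max of timestamps on the underlying primitive 
could look like this (sketch only, not Flink's actual Aggregate interface; 
{{values}} stands in for the incoming rows):

{code}
// keep the intermediate result as a primitive long instead of creating
// a Timestamp object per comparison
long maxMillis = Long.MIN_VALUE;
for (java.sql.Timestamp t : values) {
    maxMillis = Math.max(maxMillis, t.getTime());
}
java.sql.Timestamp max = new java.sql.Timestamp(maxMillis); // final result
{code}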

> Support min/max aggregations for Date/Time/Timestamp/Intervals
> --
>
> Key: FLINK-4600
> URL: https://issues.apache.org/jira/browse/FLINK-4600
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Leo Deng
>
> Currently no aggregation supports temporal types. At least min/max should be 
> added for Date/Time/Timestamp/Intervals.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473833#comment-15473833
 ] 

Timo Walther commented on FLINK-4565:
-

I think it is ok if you just parse the table name and introduce an 
`UnresolvedTable` expression which can later be resolved/looked up in the 
table registry.
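
In the Java API this could look as follows (sketch; {{OtherTable}} is just an 
illustrative registered table name):

{code}
// the String expression references the sub-query table only by name; the
// parser would create an UnresolvedTable expression that is resolved
// against the table registry during validation
Table result = table
    .filter("word.in(OtherTable)")
    .select("word");
{code}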

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473966#comment-15473966
 ] 

Timo Walther commented on FLINK-4565:
-

Yes, this would fail then. But I think there is no other good solution if you 
specify expressions as Strings, as we do in the Java API.

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4599) Add 'explain()' also to StreamTableEnvironment

2016-09-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4599:

Labels: starter  (was: )

> Add 'explain()' also to StreamTableEnvironment
> --
>
> Key: FLINK-4599
> URL: https://issues.apache.org/jira/browse/FLINK-4599
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>  Labels: starter
>
> Currently, only the BatchTableEnvironment supports the {{explain}} command 
> for tables. We should also support it for the StreamTableEnvironment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15474327#comment-15474327
 ] 

Timo Walther commented on FLINK-4565:
-

I'm very sorry, maybe I underestimated this issue. After thinking about it, it 
is the first combination of expression and table, which makes this issue 
tricky. Yes, the validation is very complicated, especially because we have 
two validations: one for SQL and one for the Table API (with heavy Scala 
magic). If you still want to do that, I can also help you in a private chat. 
Otherwise, e.g. FLINK-4599 would be easier. I will add the "starter" label to 
easier tasks.

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4592) Fix flaky test ScalarFunctionsTest.testCurrentTimePoint

2016-09-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4592:

Labels: starter  (was: )

> Fix flaky test ScalarFunctionsTest.testCurrentTimePoint
> ---
>
> Key: FLINK-4592
> URL: https://issues.apache.org/jira/browse/FLINK-4592
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>  Labels: starter
>
> It seems that the test is still non-deterministic.
> {code}
> org.apache.flink.api.table.expressions.ScalarFunctionsTest
> testCurrentTimePoint(org.apache.flink.api.table.expressions.ScalarFunctionsTest)
>   Time elapsed: 0.083 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Wrong result for: 
> AS(>=(CHAR_LENGTH(CAST(CURRENT_TIMESTAMP):VARCHAR(1) CHARACTER SET 
> "ISO-8859-1" COLLATE "ISO-8859-1$en_US$primary" NOT NULL), 22), '_c0') 
> expected:<[tru]e> but was:<[fals]e>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:126)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:123)
>   at 
> scala.collection.mutable.LinkedHashSet.foreach(LinkedHashSet.scala:87)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase.evaluateExprs(ExpressionTestBase.scala:123)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-3656) Rework Table API tests

2016-09-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-3656:

Labels: starter  (was: )

> Rework Table API tests
> --
>
> Key: FLINK-3656
> URL: https://issues.apache.org/jira/browse/FLINK-3656
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Vasia Kalavri
>  Labels: starter
>
> The Table API tests are very inefficient. At the moment they are mostly 
> end-to-end integration tests, often testing the same functionality several 
> times (Java/Scala, DataSet/DataStream).
> We should look into how we can rework the Table API tests such that:
> - long-running integration tests are converted into faster unit tests
> - common parts of DataSet and DataStream are only tested once
> - common parts of Java and Scala Table APIs are only tested once
> - duplicate tests are completely removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4550) Clearly define SQL operator table

2016-09-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4550:

Labels: starter  (was: )

> Clearly define SQL operator table
> -
>
> Key: FLINK-4550
> URL: https://issues.apache.org/jira/browse/FLINK-4550
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>  Labels: starter
>
> Currently, we use {{SqlStdOperatorTable.instance()}} for setting all 
> supported operations. However, not all of them are actually supported. 
> {{FunctionCatalog}} should only return those operators that are tested and 
> documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4591) Select star does not work with grouping

2016-09-08 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15474335#comment-15474335
 ] 

Timo Walther commented on FLINK-4591:
-

Just because it is not allowed in SQL does not mean that we shouldn't allow it 
in the Table API. But at least the star should be supported in a {{select}} 
after a {{groupBy}}.

> Select star does not work with grouping
> ---
>
> Key: FLINK-4591
> URL: https://issues.apache.org/jira/browse/FLINK-4591
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> It would be consistent if this would also work:
> {{table.groupBy('*).select('*)}}
> Currently, the star only works in a plain select without grouping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4591) Select star does not work with grouping

2016-09-09 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15476318#comment-15476318
 ] 

Timo Walther commented on FLINK-4591:
-

IMHO we should ensure consistent behavior: for the user, a select is always a 
select, no matter whether it comes after a groupBy or not.

> Select star does not work with grouping
> ---
>
> Key: FLINK-4591
> URL: https://issues.apache.org/jira/browse/FLINK-4591
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> It would be consistent if this would also work:
> {{table.groupBy('*).select('*)}}
> Currently, the star only works in a plain select without grouping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4601) Check for empty string properly

2016-09-09 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4601.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in e92f91aeba3c9e8da5a5ff9efda342cb19da928c.

> Check for empty string properly
> ---
>
> Key: FLINK-4601
> URL: https://issues.apache.org/jira/browse/FLINK-4601
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Alexander Pivovarov
>Priority: Trivial
> Fix For: 1.2.0
>
>
> UdfAnalyzerExamplesTest.java and UdfAnalyzerTest.java use == to check for 
> empty string. We should use isEmpty() instead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4592) Fix flaky test ScalarFunctionsTest.testCurrentTimePoint

2016-09-09 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4592:
---

Assignee: Timo Walther

> Fix flaky test ScalarFunctionsTest.testCurrentTimePoint
> ---
>
> Key: FLINK-4592
> URL: https://issues.apache.org/jira/browse/FLINK-4592
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>  Labels: starter
>
> It seems that the test is still non-deterministic.
> {code}
> org.apache.flink.api.table.expressions.ScalarFunctionsTest
> testCurrentTimePoint(org.apache.flink.api.table.expressions.ScalarFunctionsTest)
>   Time elapsed: 0.083 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Wrong result for: 
> AS(>=(CHAR_LENGTH(CAST(CURRENT_TIMESTAMP):VARCHAR(1) CHARACTER SET 
> "ISO-8859-1" COLLATE "ISO-8859-1$en_US$primary" NOT NULL), 22), '_c0') 
> expected:<[tru]e> but was:<[fals]e>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:126)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:123)
>   at 
> scala.collection.mutable.LinkedHashSet.foreach(LinkedHashSet.scala:87)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase.evaluateExprs(ExpressionTestBase.scala:123)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4592) Fix flaky test ScalarFunctionsTest.testCurrentTimePoint

2016-09-09 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4592.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 8c0d62433a3f57d0753edb00e5c2bbc1adc467df.

> Fix flaky test ScalarFunctionsTest.testCurrentTimePoint
> ---
>
> Key: FLINK-4592
> URL: https://issues.apache.org/jira/browse/FLINK-4592
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>  Labels: starter
> Fix For: 1.2.0
>
>
> It seems that the test is still non-deterministic.
> {code}
> org.apache.flink.api.table.expressions.ScalarFunctionsTest
> testCurrentTimePoint(org.apache.flink.api.table.expressions.ScalarFunctionsTest)
>   Time elapsed: 0.083 sec  <<< FAILURE!
> org.junit.ComparisonFailure: Wrong result for: 
> AS(>=(CHAR_LENGTH(CAST(CURRENT_TIMESTAMP):VARCHAR(1) CHARACTER SET 
> "ISO-8859-1" COLLATE "ISO-8859-1$en_US$primary" NOT NULL), 22), '_c0') 
> expected:<[tru]e> but was:<[fals]e>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:126)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase$$anonfun$evaluateExprs$1.apply(ExpressionTestBase.scala:123)
>   at 
> scala.collection.mutable.LinkedHashSet.foreach(LinkedHashSet.scala:87)
>   at 
> org.apache.flink.api.table.expressions.utils.ExpressionTestBase.evaluateExprs(ExpressionTestBase.scala:123)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4604) Add support for standard deviation/variance

2016-09-09 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4604:
---

 Summary: Add support for standard deviation/variance
 Key: FLINK-4604
 URL: https://issues.apache.org/jira/browse/FLINK-4604
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Calcite's {{AggregateReduceFunctionsRule}} can convert SQL {{AVG, STDDEV_POP, 
STDDEV_SAMP, VAR_POP, VAR_SAMP}} to sum/count functions. We should add, test 
and document this rule. 

Whether we also want to add these aggregates to the Table API is up for 
discussion.
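
For reference, the decomposition the rule performs is standard algebra; shown 
here as a plain Scala check rather than the actual RelNode rewrite:

{code}
// AVG(x)      = SUM(x) / COUNT(x)
// VAR_POP(x)  = (SUM(x*x) - SUM(x)*SUM(x)/COUNT(x)) / COUNT(x)
// VAR_SAMP(x) = (SUM(x*x) - SUM(x)*SUM(x)/COUNT(x)) / (COUNT(x) - 1)
// STDDEV_POP/STDDEV_SAMP = SQRT of the corresponding variance
def varPop(xs: Seq[Double]): Double = {
  val n = xs.size.toDouble
  val sum = xs.sum
  val sumSq = xs.map(x => x * x).sum
  (sumSq - sum * sum / n) / n
}

def stddevPop(xs: Seq[Double]): Double = math.sqrt(varPop(xs))
{code}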



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4591) Select star does not work with grouping

2016-09-09 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15477240#comment-15477240
 ] 

Timo Walther commented on FLINK-4591:
-

OK, Calcite also does not support {{SELECT * FROM WordCount GROUP BY word}}. 
I'm fine with closing this issue.

> Select star does not work with grouping
> ---
>
> Key: FLINK-4591
> URL: https://issues.apache.org/jira/browse/FLINK-4591
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> It would be consistent if this would also work:
> {{table.groupBy('*).select('*)}}
> Currently, the star only works in a plain select without grouping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4605) Add an expression that returns the return type of an expression

2016-09-09 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4605:
---

 Summary: Add an expression that returns the return type of an 
expression
 Key: FLINK-4605
 URL: https://issues.apache.org/jira/browse/FLINK-4605
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Especially for Java users of the Table API it is hard to obtain the return type 
of an expression. I propose to implement an expression that returns the type of 
an input expression as a string.

{{myLong.getType()}} could call the toString method of TypeInformation.

This could also be useful to distinguish between different subtypes of POJOs 
etc.
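
As a sketch of the expected behavior, assuming the expression simply delegates 
to {{TypeInformation#toString}}:

{code}
import org.apache.flink.api.common.typeinfo.BasicTypeInfo

// What `myLong.getType()` could evaluate to if it just returns the toString
// of the field's TypeInformation (illustration, not the actual implementation):
object GetTypeSketch {
  def main(args: Array[String]): Unit = {
    println(BasicTypeInfo.LONG_TYPE_INFO.toString) // e.g. "Long"
  }
}
{code}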



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1135) Blog post with topic "Accessing Data Stored in Hive with Flink"

2016-09-15 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492738#comment-15492738
 ] 

Timo Walther commented on FLINK-1135:
-

No this won't happen. I will close the issue.

> Blog post with topic "Accessing Data Stored in Hive with Flink"
> ---
>
> Key: FLINK-1135
> URL: https://issues.apache.org/jira/browse/FLINK-1135
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Timo Walther
>Assignee: Robert Metzger
>Priority: Minor
> Attachments: 2014-09-29-querying-hive.md
>
>
> Recently, I implemented a Flink job that accessed Hive. Maybe someone else is 
> going to try this. I created a blog post for the website to share my 
> experience.
> You'll find the blog post file attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-1135) Blog post with topic "Accessing Data Stored in Hive with Flink"

2016-09-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-1135.
---
Resolution: Won't Fix

> Blog post with topic "Accessing Data Stored in Hive with Flink"
> ---
>
> Key: FLINK-1135
> URL: https://issues.apache.org/jira/browse/FLINK-1135
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Timo Walther
>Assignee: Robert Metzger
>Priority: Minor
> Attachments: 2014-09-29-querying-hive.md
>
>
> Recently, I implemented a Flink job that accessed Hive. Maybe someone else is 
> going to try this. I created a blog post for the website to share my 
> experience.
> You'll find the blog post file attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4621) Improve decimal literals of SQL API

2016-09-15 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4621:
---

 Summary: Improve decimal literals of SQL API
 Key: FLINK-4621
 URL: https://issues.apache.org/jira/browse/FLINK-4621
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


Currently, all SQL {{DECIMAL}} types are converted to BigDecimals internally. 
By default, the SQL parser creates {{DECIMAL}} literals for any number, e.g. 
{{SELECT 1.0, 12, -0.5 FROM x}}. I think it would be better if these simple 
numbers were represented as Java primitives instead of objects.
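
A hedged sketch of one possible criterion (illustrative only, not Calcite's or 
Flink's actual logic): represent a literal as a primitive only if the 
conversion loses no precision.

{code}
import java.math.BigDecimal

// Illustrative check: does this DECIMAL literal round-trip through a
// primitive double without losing precision?
def fitsInDouble(literal: BigDecimal): Boolean =
  new BigDecimal(literal.doubleValue()).compareTo(literal) == 0

// fitsInDouble(new BigDecimal("1.0")) == true   -> could use a primitive
// fitsInDouble(new BigDecimal("0.1")) == false  -> keep the BigDecimal
{code}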



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4550) Clearly define SQL operator table

2016-09-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4550:
---

Assignee: Timo Walther

> Clearly define SQL operator table
> -
>
> Key: FLINK-4550
> URL: https://issues.apache.org/jira/browse/FLINK-4550
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>  Labels: starter
>
> Currently, we use {{SqlStdOperatorTable.instance()}} for setting all 
> supported operations. However, not all of them are actually supported. 
> {{FunctionCatalog}} should only return those operators that are tested and 
> documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4599) Add 'explain()' also to StreamTableEnvironment

2016-09-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4599.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 545b72bee9b2297c9d1d2f5d59d6d839378fde92. I will create an issue 
regarding the Physical Stream Execution Plan.

> Add 'explain()' also to StreamTableEnvironment
> --
>
> Key: FLINK-4599
> URL: https://issues.apache.org/jira/browse/FLINK-4599
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>  Labels: starter
> Fix For: 1.2.0
>
>
> Currently, only the BatchTableEnvironment supports the {{explain}} command 
> for tables. We should also support it for the StreamTableEnvironment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4623) Create Physical Execution Plan of a DataStream

2016-09-15 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4623:
---

 Summary: Create Physical Execution Plan of a DataStream
 Key: FLINK-4623
 URL: https://issues.apache.org/jira/browse/FLINK-4623
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


The {{StreamTableEnvironment#explain(Table)}} command for tables of a 
{{StreamTableEnvironment}} only outputs the abstract syntax tree. It would be 
helpful if the {{explain}} method could also generate a string from the 
{{DataStream}} containing a physical execution plan.
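
One possible building block (assumption: the JSON stream graph is an 
acceptable source for such a plan): 
{{StreamExecutionEnvironment#getExecutionPlan}} already returns the physical 
stream graph as JSON, which {{explain}} could render as text.

{code}
import org.apache.flink.streaming.api.scala._

// Sketch: print the JSON representation of the stream graph from which a
// textual physical plan could be derived.
object StreamPlanSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.fromElements(1, 2, 3).map(_ * 2).print()
    println(env.getExecutionPlan) // JSON stream graph
  }
}
{code}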



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4623) Create Physical Execution Plan of a DataStream

2016-09-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-4623:

Labels: starter  (was: )

> Create Physical Execution Plan of a DataStream
> --
>
> Key: FLINK-4623
> URL: https://issues.apache.org/jira/browse/FLINK-4623
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>  Labels: starter
>
> The {{StreamTableEnvironment#explain(Table)}} command for tables of a 
> {{StreamTableEnvironment}} only outputs the abstract syntax tree. It would be 
> helpful if the {{explain}} method could also generate a string from the 
> {{DataStream}} containing a physical execution plan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4590) Some Table API tests are failing when debug lvl is set to DEBUG

2016-09-15 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4590:
---

Assignee: Timo Walther

> Some Table API tests are failing when debug lvl is set to DEBUG
> ---
>
> Key: FLINK-4590
> URL: https://issues.apache.org/jira/browse/FLINK-4590
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Timo Walther
>
> For debugging another issue, I've set the log level on travis to DEBUG.
> After that, the Table API tests started failing
> {code}
> Failed tests: 
>   SetOperatorsITCase.testMinusAll:156 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusAll:156 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusAll:156 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusAll:156 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusDifferentFieldNames:204 Internal error: Error 
> occurred while applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusDifferentFieldNames:204 Internal error: Error 
> occurred while applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusDifferentFieldNames:204 Internal error: Error 
> occurred while applying rule DataSetScanRule
>   SetOperatorsITCase.testMinusDifferentFieldNames:204 Internal error: Error 
> occurred while applying rule DataSetScanRule
>   SetOperatorsITCase.testMinus:175 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinus:175 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinus:175 Internal error: Error occurred while 
> applying rule DataSetScanRule
>   SetOperatorsITCase.testMinus:175 Internal error: Error occurred while 
> applying rule DataSetScanRule
> {code}
> Probably Calcite is executing additional assertions depending on the debug 
> level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4581) Table API throws "No suitable driver found for jdbc:calcite"

2016-09-16 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4581:
---

Assignee: Timo Walther

> Table API throws "No suitable driver found for jdbc:calcite"
> 
>
> Key: FLINK-4581
> URL: https://issues.apache.org/jira/browse/FLINK-4581
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> It seems that in certain cases the internal Calcite JDBC driver cannot be 
> found. We should either try to get rid of the entire JDBC invocation or fix 
> this bug.
> From ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Stream-sql-query-in-Flink-tp8928.html
> {code}
> org.apache.flink.client.program.ProgramInvocationException: The main method
> caused an error.
>   at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:524)
>   at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
>   at
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:331)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:777)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:253)
>   at
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1005)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1048)
> Caused by: java.lang.RuntimeException: java.sql.SQLException: No suitable
> driver found for jdbc:calcite:
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:151)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:127)
>   at
> org.apache.flink.api.table.FlinkRelBuilder$.create(FlinkRelBuilder.scala:56)
>   at
> org.apache.flink.api.table.TableEnvironment.(TableEnvironment.scala:73)
>   at
> org.apache.flink.api.table.StreamTableEnvironment.(StreamTableEnvironment.scala:58)
>   at
> org.apache.flink.api.java.table.StreamTableEnvironment.(StreamTableEnvironment.scala:45)
>   at
> org.apache.flink.api.table.TableEnvironment$.getTableEnvironment(TableEnvironment.scala:376)
>   at
> org.apache.flink.api.table.TableEnvironment.getTableEnvironment(TableEnvironment.scala)
>   at 
> org.myorg.quickstart.ReadingFromKafka2.main(ReadingFromKafka2.java:48)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
>   ... 6 more
> Caused by: java.sql.SQLException: No suitable driver found for jdbc:calcite:
>   at java.sql.DriverManager.getConnection(DriverManager.java:689)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:144)
>   ... 20 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4252) Table program cannot be compiled

2016-09-16 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4252:
---

Assignee: Timo Walther

> Table program cannot be compiled
> 
>
> Key: FLINK-4252
> URL: https://issues.apache.org/jira/browse/FLINK-4252
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.1.0
> Environment: OS X EI Captain
> scala 2.11.7
> jdk 8
>Reporter: Renkai Ge
>Assignee: Timo Walther
> Attachments: TestMain.scala
>
>
> I'm trying the Table APIs.
> I got some errors like this.
> My code is in the attachments.
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:413)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:92)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:389)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:376)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:61)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:896)
>   at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
>   at org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
>   at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1672)
>   at TestMain$.main(TestMain.scala:31)
>   at TestMain.main(TestMain.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:331)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:777)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:253)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1005)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1048)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:853)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:799)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:799)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.Exception: The user defined 'open(Configuration)' method 
> in class org.apache.flink.api.table.runtime.FlatMapRunner caused an 
> exception: Table program cannot be compiled. This is a bug. Please file an 
> issue.
>   at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1337)
>   at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.openTask(ChainedFlatMapDriver.java:47)
>   at 
> org.apache.flink.runtime.operators.BatchTask.openChainedTasks(BatchTask.java:1377)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:471)
>   at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>

[jira] [Closed] (FLINK-4393) Failed to serialize accumulators for task

2016-09-16 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-4393.
---
Resolution: Won't Fix

This issue does not describe a bug.
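
For context (visible in the stack trace above): {{DataSet#collect()}} 
transports the entire result back to the client through an accumulator 
snapshot, so a large result exhausts the heap during serialization. A bounded 
alternative, as a sketch:

{code}
import org.apache.flink.api.scala._

object CollectAlternativeSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val ds = env.fromCollection(1 to 1000000)
    // Inspect a bounded sample instead of collecting everything:
    ds.first(100).print()
    // Or write the full result to files and inspect those:
    // ds.writeAsText("/tmp/result"); env.execute()
  }
}
{code}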

> Failed to serialize accumulators for task
> -
>
> Key: FLINK-4393
> URL: https://issues.apache.org/jira/browse/FLINK-4393
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.1.0
> Environment: Redhat 6
>Reporter: Sajeev Ramakrishnan
>
> Dear Team,
>   I am getting the below exception while trying to use the Table API by 
> looping through the DataSet using the collect() method.
> {code}
> 2016-08-15 07:18:52,503 WARN  
> org.apache.flink.runtime.accumulators.AccumulatorRegistry - Failed to 
> serialize accumulators for task.
> java.lang.OutOfMemoryError
> at 
> java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
> at 
> java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
> at 
> java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
> at 
> java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:301)
> at 
> org.apache.flink.util.SerializedValue.(SerializedValue.java:52)
> at 
> org.apache.flink.runtime.accumulators.AccumulatorSnapshot.(AccumulatorSnapshot.java:58)
> at 
> org.apache.flink.runtime.accumulators.AccumulatorRegistry.getSnapshot(AccumulatorRegistry.java:75)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.unregisterTaskAndNotifyFinalState(TaskManager.scala:1248)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:446)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:292)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:124)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
> at akka.dispatch.Mailbox.run(Mailbox.scala:221)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Suppressed: java.lang.OutOfMemoryError
> at 
> java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
> at 
> java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
> at 
> java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at 
> java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
> at 
> java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutpu

[jira] [Assigned] (FLINK-4288) Make it possible to unregister tables

2016-09-16 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4288:
---

Assignee: Timo Walther

> Make it possible to unregister tables
> -
>
> Key: FLINK-4288
> URL: https://issues.apache.org/jira/browse/FLINK-4288
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Table names cannot be changed yet. After registration you cannot modify the 
> table behind a table name. Maybe this behavior is too restrictive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2016-09-19 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15503481#comment-15503481
 ] 

Timo Walther commented on FLINK-4565:
-

I looked into the code for this issue. It would be very tricky to support IN 
for the Table API as we currently separate expressions ({{RexNodes}}) from 
operators ({{RelNodes}}).

In the end we would need to call something similar to 
{{org.apache.calcite.sql2rel.SqlToRelConverter.Blackboard#convertExpression}}

{code}
final RexSubQuery in = RexSubQuery.in(root.rel, builder.build());
return op.isNotIn()
    ? rexBuilder.makeCall(SqlStdOperatorTable.NOT, in)
    : in;
{code}

{{RexSubQuery}} is a rex node, however, it needs access to 
{{Table}}/{{LogicalNode}} to get the {{RelNode}}.

The following steps need to be implemented:
- For Java API: Create an {{UnresolvedTableReference}} expression that takes 
the name of the table.
- For Java API: Resolve the name of the table in 
{{org.apache.flink.api.table.plan.logical.LogicalNode#resolveExpressions}} to 
{{TableReference}} using the table environment that is available in this 
method. {{TableReference}} then has a {{Table}} field.
- Create an expression {{In}} that takes a {{TableReference}} and performs the 
above code snippet in {{toRexNode}}.
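
A hedged sketch of the last step (the shapes of {{Expression}} and 
{{TableReference}} are simplified stand-ins, not Flink's actual classes):

{code}
import com.google.common.collect.ImmutableList
import org.apache.calcite.rex.{RexNode, RexSubQuery}
import org.apache.calcite.tools.RelBuilder

// Simplified stand-in for the Table API's expression hierarchy:
trait Expression { def toRexNode(relBuilder: RelBuilder): RexNode }

// After resolution, the reference carries enough to build the sub-query RelNode:
case class TableReference(name: String, tableBuilder: RelBuilder)

case class In(needle: Expression, tableRef: TableReference) extends Expression {
  override def toRexNode(relBuilder: RelBuilder): RexNode =
    // mirrors the SqlToRelConverter snippet quoted above
    RexSubQuery.in(tableRef.tableBuilder.build(),
      ImmutableList.of(needle.toRexNode(relBuilder)))
}
{code}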

[~chobeat] I hope this helps. Otherwise it is also fine if you let someone else 
implement this.

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Simone Robutti
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4081) FieldParsers should support empty strings

2016-09-19 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4081.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 4b1a9c72e99125680035e5dadc148b187d9d4124.

> FieldParsers should support empty strings
> -
>
> Key: FLINK-4081
> URL: https://issues.apache.org/jira/browse/FLINK-4081
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Reporter: Flavio Pompermaier
>Assignee: Timo Walther
>  Labels: csvparser, table-api
> Fix For: 1.2.0
>
>
> In order to parse CSV files using the new Table API that converts rows to Row 
> objects (which support null values), FieldParser implementations should 
> support empty strings by setting the parser state to 
> ParseErrorState.EMPTY_STRING (for example, FloatParser and DoubleParser don't 
> respect this constraint).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4639) Make Calcite features more pluggable

2016-09-20 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4639:
---

 Summary: Make Calcite features more pluggable
 Key: FLINK-4639
 URL: https://issues.apache.org/jira/browse/FLINK-4639
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Some users might want to extend the feature set of the Table API by adding or 
replacing Calcite optimizer rules, modifying the parser etc. It would be good 
to have means to hook into the Table API and change Calcite behavior. We should 
implement something like a {{CalciteConfigBuilder}}.
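
A hypothetical shape for such a builder (method names are illustrative and 
only meant to convey the proposal):

{code}
import org.apache.calcite.plan.RelOptRule

// Sketch: collect user-supplied optimizer rules that the TableEnvironment
// would add to (or substitute for) its defaults.
class CalciteConfigBuilder {
  private var rules: List[RelOptRule] = Nil
  private var replaceDefaults: Boolean = false

  def addOptRule(rule: RelOptRule): CalciteConfigBuilder = {
    rules = rule :: rules
    this
  }

  def replaceOptRules(): CalciteConfigBuilder = {
    replaceDefaults = true
    this
  }

  def build(): (Seq[RelOptRule], Boolean) = (rules.reverse, replaceDefaults)
}
{code}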



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4557) Table API Stream Aggregations

2016-09-20 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506053#comment-15506053
 ] 

Timo Walther commented on FLINK-4557:
-

[~shijinkui] I will create issues for the subtasks once FLIP-11 is no longer in 
"discuss" state.

> Table API Stream Aggregations
> -
>
> Key: FLINK-4557
> URL: https://issues.apache.org/jira/browse/FLINK-4557
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> The Table API is a declarative API to define queries on static and streaming 
> tables. So far, only projection, selection, and union are supported 
> operations on streaming tables.
> This issue and the corresponding FLIP proposes to add support for different 
> types of aggregations on top of streaming tables. In particular, we seek to 
> support:
> *Group-window aggregates*, i.e., aggregates which are computed for a group of 
> elements. A (time or row-count) window is required to bound the infinite 
> input stream into a finite group.
> *Row-window aggregates*, i.e., aggregates which are computed for each row, 
> based on a window (range) of preceding and succeeding rows.
> Each type of aggregate shall be supported on keyed/grouped or 
> non-keyed/grouped data streams for streaming tables as well as batch tables.
> Since time-windowed aggregates will be the first operation that require the 
> definition of time, we also need to discuss how the Table API handles time 
> characteristics, timestamps, and watermarks.
> The FLIP can be found here: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4639) Make Calcite features more pluggable

2016-09-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4639:
---

Assignee: Timo Walther

> Make Calcite features more pluggable
> 
>
> Key: FLINK-4639
> URL: https://issues.apache.org/jira/browse/FLINK-4639
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Some users might want to extend the feature set of the Table API by adding or 
> replacing Calcite optimizer rules, modifying the parser etc. It would be good 
> to have means to hook into the Table API and change Calcite behavior. We 
> should implement something like a {{CalciteConfigBuilder}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4624) Gelly's summarization algorithm cannot deal with null vertex group values

2016-09-20 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506211#comment-15506211
 ] 

Timo Walther commented on FLINK-4624:
-

You can get rid of the {{ClassCastException}} by using the correct constructor 
call of {{TupleTypeInfo}}. There are two constructors: one that creates plain 
tuples and one that allows subclasses of tuples, in your case 
{{VertexGroupItem}}.

{code}
new TupleTypeInfo<>(VertexGroupItem.class, keyType, keyType, eitherType, 
BasicTypeInfo.LONG_TYPE_INFO)
{code}

should work.

> Gelly's summarization algorithm cannot deal with null vertex group values
> -
>
> Key: FLINK-4624
> URL: https://issues.apache.org/jira/browse/FLINK-4624
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Till Rohrmann
>Assignee: Martin Junghanns
> Fix For: 1.2.0
>
>
> Gelly's {{Summarization}} algorithm cannot handle null values in the 
> `VertexGroupItem.f2`. This behaviour is hidden by using Strings as a vertex 
> value in the {{SummarizationITCase}}, because the {{StringSerializer}} can 
> handle null values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4247) CsvTableSource.getDataSet() expects Java ExecutionEnvironment

2016-09-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4247.
-
   Resolution: Fixed
 Assignee: Timo Walther
Fix Version/s: 1.2.0

Fixed in 0975d9f11dc09f8b1ea420d660175874d423cac3.

> CsvTableSource.getDataSet() expects Java ExecutionEnvironment
> -
>
> Key: FLINK-4247
> URL: https://issues.apache.org/jira/browse/FLINK-4247
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>Priority: Minor
> Fix For: 1.2.0
>
>
> The Table API offers the {{CsvTableSource}} which can be used with the Java 
> and Scala API. However, if used with the Scala API, where one has obtained a 
> {{scala.api.ExecutionEnvironment}}, there is a problem with the 
> {{CsvTableSource.getDataSet}} method. The method expects a 
> {{java.api.ExecutionEnvironment}} to extract the underlying {{DataSet}}. 
> Additionally it returns a {{java.api.DataSet}} instead of a 
> {{scala.api.DataSet}}. I think we should also offer a Scala API specific 
> CsvTableSource which works with the respective Scala counterparts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4268) Add a parsers for BigDecimal/BigInteger

2016-09-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4268.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 5c02988b05c56f524fc8c65b15e16b0c24278a5e.

> Add a parsers for BigDecimal/BigInteger
> ---
>
> Key: FLINK-4268
> URL: https://issues.apache.org/jira/browse/FLINK-4268
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> Since BigDecimal and BigInteger are basic types now, it would be great if we 
> also parsed them.
> FLINK-628 did this a long time ago. This feature should be reintroduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4624) Gelly's summarization algorithm cannot deal with null vertex group values

2016-09-21 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15509376#comment-15509376
 ] 

Timo Walther commented on FLINK-4624:
-

The two {{VertexGroupItem}} parameterizations are compatible; they both have 
the same class. Only the generics cause problems, as Java classes carry no 
generics at runtime (for legacy reasons). Just do a hard cast. If you look into 
the type extractor, you will see that we had to do this several times.

{code}
@SuppressWarnings({"rawtype", "unchecked"})
TypeInformation<VertexGroupItem<K, VV>> t =
    (TypeInformation<VertexGroupItem<K, VV>>) new TupleTypeInfo(
        VertexGroupItem.class, keyType, keyType, eitherType,
        BasicTypeInfo.LONG_TYPE_INFO);
{code}

> Gelly's summarization algorithm cannot deal with null vertex group values
> -
>
> Key: FLINK-4624
> URL: https://issues.apache.org/jira/browse/FLINK-4624
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Till Rohrmann
>Assignee: Martin Junghanns
> Fix For: 1.2.0
>
>
> Gelly's {{Summarization}} algorithm cannot handle null values in the 
> `VertexGroupItem.f2`. This behaviour is hidden by using Strings as a vertex 
> value in the {{SummarizationITCase}}, because the {{StringSerializer}} can 
> handle null values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4241) Cryptic expression parser exceptions

2016-09-21 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4241:
---

Assignee: Timo Walther

> Cryptic expression parser exceptions
> 
>
> Key: FLINK-4241
> URL: https://issues.apache.org/jira/browse/FLINK-4241
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
>
> The exceptions thrown when giving wrong SQL syntax to Flink's SQL parser is 
> very cryptic and should be improved. For example, the following code snippet:
> {code}
> inputTable.filter("a == 0");
> {code}
> gives the following exception:
> {code}
> Exception in thread "main" 
> org.apache.flink.api.table.ExpressionParserException: Could not parse 
> expression: [1.4] failure: `-' expected but `=' found
> a == 0
>^
>   at 
> org.apache.flink.api.table.expressions.ExpressionParser$.parseExpression(ExpressionParser.scala:355)
>   at org.apache.flink.api.table.Table.filter(table.scala:161)
>   at 
> com.dataartisans.streaming.SimpleTableAPIJob.main(SimpleTableAPIJob.java:32)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> {code}
> From this description it is very hard to understand that {{==}} is not a 
> valid operator.
> Another example is:
> {code}
> inputTable.select("*");
> {code}
> which gives
> {code}
> Exception in thread "main" 
> org.apache.flink.api.table.ExpressionParserException: Could not parse 
> expression: Base Failure
>   at 
> org.apache.flink.api.table.expressions.ExpressionParser$.parseExpressionList(ExpressionParser.scala:342)
>   at org.apache.flink.api.table.Table.select(table.scala:103)
>   at 
> com.dataartisans.streaming.SimpleTableAPIJob.main(SimpleTableAPIJob.java:33)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
> {code}
> I think it would considerably improve user experience if we print more 
> helpful parsing exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-3042) Define a way to let types create their own TypeInformation

2016-09-21 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-3042.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 4cc38fd36f3190f9c0066e9cf94580669b2410cf.

> Define a way to let types create their own TypeInformation
> --
>
> Key: FLINK-3042
> URL: https://issues.apache.org/jira/browse/FLINK-3042
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Timo Walther
> Fix For: 1.2.0, 1.0.0
>
>
> Currently, introducing new Types that should have specific TypeInformation 
> requires
>   - Either integration with the TypeExtractor
>   - Or manually constructing the TypeInformation (potentially at every place) 
> and using type hints everywhere.
> I propose to add a way to allow classes to create their own TypeInformation 
> (like a static method "createTypeInfo()").
> To support generic nested types (like Optional / Either), the type extractor 
> would provide a Map of what generic variables map to what types (deduced from 
> the input). The class can use that to create the correct nested 
> TypeInformation (possibly by calling the TypeExtractor again, passing the Map 
> of generic bindings).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-3060) Add possibility to integrate custom types into the TypeExtractor

2016-09-21 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-3060.
-
   Resolution: Fixed
 Assignee: Timo Walther
Fix Version/s: 1.2.0

This has been fixed as part of FLINK-3042.

> Add possibility to integrate custom types into the TypeExtractor
> 
>
> Key: FLINK-3060
> URL: https://issues.apache.org/jira/browse/FLINK-3060
> Project: Flink
>  Issue Type: New Feature
>  Components: Type Serialization System
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Minor
> Fix For: 1.2.0
>
>
> As discussed in [FLINK-3002]. It would be nice if we could make custom type 
> integration easier by defining an interface/static method that classes can 
> implement to create their own type information. That gives users an easy 
> extension point.
> Custom integrated types need to be checked in `getForObject`, `getForClass` 
> and `validateInput`. If we also want to support custom integrated types with 
> generics, `createTypeInfoWithTypeHierarchy` needs modifications, too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4655) Add tests for validation of Expressions

2016-09-21 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4655:
---

 Summary: Add tests for validation of Expressions
 Key: FLINK-4655
 URL: https://issues.apache.org/jira/browse/FLINK-4655
 Project: Flink
  Issue Type: Test
  Components: Table API & SQL
Reporter: Timo Walther


Currently, Table API expressions are only tested with correct input. The 
validation method of expressions is not tested. The {{ExpressionTestBase}} should 
be extended to provide means to also test invalid expressions.
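
A hedged sketch of what such a test could look like; the 
{{testExpectedValidationException}} helper is hypothetical and would have to be 
added to {{ExpressionTestBase}}:

{code}
@Test
public void testInvalidExpression() {
  // hypothetical helper: runs the expression against the test schema and
  // asserts that validation (not code generation) rejects it
  testExpectedValidationException(
      "f0.substring(true)",        // wrong argument type
      ValidationException.class);
}
{code}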



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4248) CsvTableSource does not support reading SqlTimeTypeInfo types

2016-09-22 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4248.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 3507d59f969485dd735487e6bf3eb893b2e3d8ed.

> CsvTableSource does not support reading SqlTimeTypeInfo types
> -
>
> Key: FLINK-4248
> URL: https://issues.apache.org/jira/browse/FLINK-4248
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> The Table API's {{CsvTableSource}} does not support reading all data types 
> that the Table API supports. For example, it is not possible to read 
> {{SqlTimeTypeInfo}} types via the {{CsvTableSource}}.
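
A hedged sketch of the usage this enables, assuming the pre-1.3 
{{CsvTableSource}} constructor that takes field names and types (file path and 
schema are made up):

{code}
CsvTableSource csvSource = new CsvTableSource(
    "/tmp/events.csv",
    new String[] { "name", "ts" },
    new TypeInformation<?>[] {
        BasicTypeInfo.STRING_TYPE_INFO,
        SqlTimeTypeInfo.TIMESTAMP });  // java.sql.Timestamp column

tableEnv.registerTableSource("events", csvSource);
Table events = tableEnv.sql("SELECT name, ts FROM events");
{code}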



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4662) Bump Calcite version up to 1.9

2016-09-22 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4662:
---

 Summary: Bump Calcite version up to 1.9
 Key: FLINK-4662
 URL: https://issues.apache.org/jira/browse/FLINK-4662
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


Calcite just released version 1.9. We should adopt it in the Table API as well, 
especially for FLINK-4294.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4263) SQL's VALUES does not work properly

2016-09-22 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513302#comment-15513302
 ] 

Timo Walther commented on FLINK-4263:
-

I also had a look at it. Replacing the {{Seq[Row]}} field with {{Seq[List]}} only 
solves the current problem. What happens if we have a row of rows or a row of 
POJOs? I think we should rather code-generate the values input format. Otherwise 
we also have to make sure that the contents of the values are always 
serializable, no matter which data types may be added in the future. [~jark] do 
you still want to fix this issue? I could also assign it to myself.

> SQL's VALUES does not work properly
> ---
>
> Key: FLINK-4263
> URL: https://issues.apache.org/jira/browse/FLINK-4263
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Executing the following SQL leads to very strange output:
> {code}
> SELECT  *
> FROM(
> VALUES
> (1, 2),
> (3, 4)
> ) AS q (col1, col2)"
> {code}
> {code}
> org.apache.flink.optimizer.CompilerException: Error translating node 'Data 
> Source "at translateToPlan(DataSetValues.scala:88) 
> (org.apache.flink.api.table.runtime.ValuesInputFormat)" : NONE [[ 
> GlobalProperties [partitioning=RANDOM_PARTITIONED] ]] [[ LocalProperties 
> [ordering=null, grouped=null, unique=null] ]]': Could not write the user code 
> wrapper class 
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper : 
> java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:381)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:106)
>   at 
> org.apache.flink.optimizer.plan.SourcePlanNode.accept(SourcePlanNode.java:86)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:128)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:192)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:170)
>   at 
> org.apache.flink.test.util.TestEnvironment.execute(TestEnvironment.java:76)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:896)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:637)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
>   at 
> org.apache.flink.api.scala.batch.sql.SortITCase.testOrderByMultipleFieldsWithSql(SortITCase.scala:56)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> Caused by: 
> org.apache.flink.runtime.operators.util.CorruptConfigurationException: Could 
> not write the user code wrapper class 
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper : 
> java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at 
> org.apache.flink.runtime.operators.util.TaskConfig.setStubWrapper(TaskConfig.java:279)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.createDataSourceVertex(JobGraphGenerator.java:888)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:281)
>   ... 51 more
> Caused by: java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
>   at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>   at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>   at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
>   at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>   at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>   at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>

[jira] [Commented] (FLINK-4661) Failure to find org.apache.flink:flink-runtime_2.10:jar:tests:1.2-SNAPSHOT

2016-09-22 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513311#comment-15513311
 ] 

Timo Walther commented on FLINK-4661:
-

Today [~fhueske] and I also had problems compiling {{flink-table}}. Maybe it is 
just a Maven Central issue. Or have we changed Maven dependencies recently?

> Failure to find org.apache.flink:flink-runtime_2.10:jar:tests:1.2-SNAPSHOT
> --
>
> Key: FLINK-4661
> URL: https://issues.apache.org/jira/browse/FLINK-4661
> Project: Flink
>  Issue Type: Bug
>Reporter: shijinkui
>
> [ERROR] Failed to execute goal on project flink-streaming-java_2.10: Could 
> not resolve dependencies for project 
> org.apache.flink:flink-streaming-java_2.10:jar:1.2-SNAPSHOT: Failure to find 
> org.apache.flink:flink-runtime_2.10:jar:tests:1.2-SNAPSHOT in 
> http://localhost:/repository/maven-public/ was cached in the local 
> repository, resolution will not be reattempted until the update interval of 
> nexus-releases has elapsed or updates are forced -> [Help 1]
> Failure to find org.apache.flink:flink-runtime_2.10:jar:tests
> I can't find where this tests jar is generated.
> By the way, I started using Flink about half a month ago. Not once have I 
> been able to compile the Flink project with the default settings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4554) Add support for array types

2016-09-22 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-4554:
---

Assignee: Timo Walther

> Add support for array types
> ---
>
> Key: FLINK-4554
> URL: https://issues.apache.org/jira/browse/FLINK-4554
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Support creating arrays:
> {code}ARRAY[1, 2, 3]{code}
> Access array values:
> {code}myArray[3]{code}
> And operations like:
> {{UNNEST, UNNEST WITH ORDINALITY, CARDINALITY}}
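
For illustration, a hedged sketch of the target syntax with standard SQL 
semantics (the table {{t}} and its array column {{myArray}} are made up):

{code}
// construct an array and access an element (SQL arrays are 1-based)
Table a = tableEnv.sql("SELECT ARRAY[1, 2, 3][2] FROM t");        // 2
// number of elements
Table c = tableEnv.sql("SELECT CARDINALITY(myArray) FROM t");
// expand the array column into rows
Table u = tableEnv.sql("SELECT s FROM t, UNNEST(t.myArray) AS A (s)");
{code}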



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4263) SQL's VALUES does not work properly

2016-09-22 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513450#comment-15513450
 ] 

Timo Walther commented on FLINK-4263:
-

I think they don't need to be basic types, even though the Javadoc says so. At 
least there is logic for {{ROW}} and {{MULTISET}}. If you don't use code 
generation, you have to make sure that you convert the data types correctly 
(time, timestamp, date), which means we would have duplicate code.

> SQL's VALUES does not work properly
> ---
>
> Key: FLINK-4263
> URL: https://issues.apache.org/jira/browse/FLINK-4263
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Executing the following SQL leads to very strange output:
> {code}
> SELECT  *
> FROM(
> VALUES
> (1, 2),
> (3, 4)
> ) AS q (col1, col2)"
> {code}
> {code}
> org.apache.flink.optimizer.CompilerException: Error translating node 'Data 
> Source "at translateToPlan(DataSetValues.scala:88) 
> (org.apache.flink.api.table.runtime.ValuesInputFormat)" : NONE [[ 
> GlobalProperties [partitioning=RANDOM_PARTITIONED] ]] [[ LocalProperties 
> [ordering=null, grouped=null, unique=null] ]]': Could not write the user code 
> wrapper class 
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper : 
> java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:381)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:106)
>   at 
> org.apache.flink.optimizer.plan.SourcePlanNode.accept(SourcePlanNode.java:86)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:128)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:192)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:170)
>   at 
> org.apache.flink.test.util.TestEnvironment.execute(TestEnvironment.java:76)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:896)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:637)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
>   at 
> org.apache.flink.api.scala.batch.sql.SortITCase.testOrderByMultipleFieldsWithSql(SortITCase.scala:56)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> Caused by: 
> org.apache.flink.runtime.operators.util.CorruptConfigurationException: Could 
> not write the user code wrapper class 
> org.apache.flink.api.common.operators.util.UserCodeObjectWrapper : 
> java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at 
> org.apache.flink.runtime.operators.util.TaskConfig.setStubWrapper(TaskConfig.java:279)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.createDataSourceVertex(JobGraphGenerator.java:888)
>   at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:281)
>   ... 51 more
> Caused by: java.io.NotSerializableException: org.apache.flink.api.table.Row
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
>   at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
>   at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>   at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>   at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
>   at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>   at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>   at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
>   at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)

[jira] [Resolved] (FLINK-4550) Clearly define SQL operator table

2016-09-23 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4550.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in ecbccd940d2df462215b7a79e895114b3d2df3cf.

> Clearly define SQL operator table
> -
>
> Key: FLINK-4550
> URL: https://issues.apache.org/jira/browse/FLINK-4550
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>  Labels: starter
> Fix For: 1.2.0
>
>
> Currently, we use {{SqlStdOperatorTable.instance()}} for setting all 
> supported operations. However, not all of them are actually supported. 
> {{FunctionCatalog}} should only return those operators that are tested and 
> documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-4549) Test and document implicitly supported SQL functions

2016-09-23 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4549.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

Fixed in 9a1bc021aed0a3eec8c6eabb843d15b8c2b0b43f.

> Test and document implicitly supported SQL functions
> 
>
> Key: FLINK-4549
> URL: https://issues.apache.org/jira/browse/FLINK-4549
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.2.0
>
>
> Calcite supports many SQL functions by translating them into {{RexNode}}s. 
> However, SQL functions like {{NULLIF}} and {{OVERLAPS}} are neither tested nor 
> documented although they are supported.
> These functions should be tested and added to the documentation. We could 
> adopt parts from the Calcite documentation.
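
For illustration, hedged examples of the two functions mentioned above, 
following standard SQL semantics (the table {{t}} is made up):

{code}
// NULLIF(a, b): NULL if a equals b, otherwise a
Table t1 = tableEnv.sql("SELECT NULLIF(5, 5) FROM t");           // NULL
// OVERLAPS: whether two time intervals intersect
Table t2 = tableEnv.sql(
    "SELECT (TIME '09:00:00', TIME '10:00:00') OVERLAPS " +
    "(TIME '09:30:00', TIME '11:00:00') FROM t");                // true
{code}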



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4671) Table API can not be built

2016-09-23 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4671:
---

 Summary: Table API can not be built
 Key: FLINK-4671
 URL: https://issues.apache.org/jira/browse/FLINK-4671
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


Running {{mvn clean verify}} in {{flink-table}} results in a build failure.

{code}
[ERROR] Failed to execute goal on project flink-table_2.10: Could not resolve 
dependencies for project org.apache.flink:flink-table_2.10:jar:1.2-SNAPSHOT: 
Failure to find org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
https://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced -> [Help 1]
{code}

However, the master can be built successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-5881) ScalarFunction(UDF) should support variable types and variable arguments

2017-03-13 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5881.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: 9b179beaea2b623ad3637e417f6d8014b696d038

> ScalarFunction(UDF) should support variable types and variable arguments  
> -
>
> Key: FLINK-5881
> URL: https://issues.apache.org/jira/browse/FLINK-5881
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
> Fix For: 1.3.0
>
>
> As a sub-task of FLINK-5826. We would like to support the ScalarFunction 
> first and make the review a little bit easier.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-5882) TableFunction (UDTF) should support variable types and variable arguments

2017-03-13 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5882.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: 04aee61d86f9ba30715c133380560739282feb81

> TableFunction (UDTF) should support variable types and variable arguments
> -
>
> Key: FLINK-5882
> URL: https://issues.apache.org/jira/browse/FLINK-5882
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
> Fix For: 1.3.0
>
>
> It is the second part of FLINK-5826.
> We would like to make Flink's table functions (UDTFs) support variable 
> arguments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-5826) UDF/UDTF should support variable types and variable arguments

2017-03-13 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5826.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Both subtasks have been implemented.

> UDF/UDTF should support variable types and variable arguments
> -
>
> Key: FLINK-5826
> URL: https://issues.apache.org/jira/browse/FLINK-5826
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
> Fix For: 1.3.0
>
>
> In some cases, UDFs/UDTFs should support variable types and variable 
> arguments. Many UDF/UDTF developers wish to make the number and types of 
> arguments flexible for their users.
> Thus, we should support the following styles of UDF/UDTFs.
> for example 1, in Java
> {code:java}
> public class SimpleUDF extends ScalarFunction {
>   public int eval(Object... args) {
>   // do something
>   }
> }
> {code}
> for example 2, in Scala
> {code}
> class SimpleUDF extends ScalarFunction {
>   def eval(args: Any*): Int = {
> // do something
>   }
> }
> {code}
> If we modify the code in UserDefinedFunctionUtils.getSignature() to make both 
> signatures pass, the first example will work normally. However, the second 
> example will raise an exception.
> {noformat}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
> No applicable constructor/method found for actual parameters 
> "java.lang.String"; candidates are: "public java.lang.Object 
> test.SimpleUDF.eval(scala.collection.Seq)"
>   at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
> ~[janino-3.0.6.jar:?]
>   at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
>  ~[janino-3.0.6.jar:?]
>   at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
> ~[janino-3.0.6.jar:?]
> {noformat} 
> The reason is that Scala desugars the signature of the method: the method 
> {code} def eval(args: Any*){code} becomes 
> {code}def eval(args: scala.collection.Seq){code} in the class file. 
> The code generation is done in Java. If we use the Java-style 
> {code}eval(Object... args){code} to call the Scala method, it will raise the 
> above exception.
> However, we can't restrict users to writing UDFs/UDTFs in Java. Any ideas on 
> how to support variable types and variable arguments in Scala UDFs/UDTFs 
> without running into this compilation failure?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-5441) Directly allow SQL queries on a Table

2017-03-14 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-5441.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: f333723bcdaaa1cdc99dce16b00151d1c5365869

> Directly allow SQL queries on a Table
> -
>
> Key: FLINK-5441
> URL: https://issues.apache.org/jira/browse/FLINK-5441
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
> Fix For: 1.3.0
>
>
> Right now a user has to register a table before it can be used in SQL 
> queries. In order to allow more fluent programming we propose calling SQL 
> directly on a table. An underscore can be used to reference the current table:
> {code}
> myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6114) Type checking fails with generics, even when concrete type of field is not needed

2017-03-22 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936388#comment-15936388
 ] 

Timo Walther commented on FLINK-6114:
-

[~greghogan] I already added this issue to my to-do list. I also think that it 
might have to do with the lambdas, but I need to take a closer look.

> Type checking fails with generics, even when concrete type of field is not 
> needed
> -
>
> Key: FLINK-6114
> URL: https://issues.apache.org/jira/browse/FLINK-6114
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Luke Hutchison
>
> The Flink type checker does not allow generic types to be used in any field 
> of a tuple when a join is being executed, even if the generic is not in a 
> field that is involved in the join.
> I have a type Tuple3, which contains a generic type 
> parameter K. I am joining using .where(0).equalTo(0). The type of field 0 is 
> well-defined as String. However, this gives me the following error:
> {noformat}
> Exception in thread "main" 
> org.apache.flink.api.common.functions.InvalidTypesException: Type of 
> TypeVariable 'K' in 'public static org.apache.flink.api.java.DataSet 
> mypkg.MyClass.method(params)' could not be determined. This is most likely a 
> type erasure problem. The type extraction currently supports types with 
> generic variables only in cases where all variables in the return type can be 
> deduced from the input type(s).
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createSubTypesInfo(TypeExtractor.java:989)
> {noformat}
> The code compiles fine, however -- the static type system is able to 
> correctly resolve the types in the surrounding code.
> Really only the fields that are affected by joins (or groupBy, aggregation 
> etc.) should be checked for concrete types in this way.
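
A hedged sketch of the reported pattern, together with the usual workaround of 
an explicit type hint (all names are invented; the hint works here because the 
lambda's result type contains no generic variable):

{code}
public static <K> DataSet<Tuple2<String, Double>> sumJoined(
    DataSet<Tuple3<String, K, Double>> left,
    DataSet<Tuple3<String, K, Double>> right) {
  return left.join(right)
      .where(0).equalTo(0)  // join key is field 0, a concrete String
      .with((l, r) -> Tuple2.of(l.f0, l.f2 + r.f2))
      // workaround: state the lambda's result type explicitly so the
      // extractor never has to resolve K
      .returns(new TypeHint<Tuple2<String, Double>>() {});
}
{code}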



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6168) Make flink-core independent of Avro

2017-03-22 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6168:
---

 Summary: Make flink-core independent of Avro
 Key: FLINK-6168
 URL: https://issues.apache.org/jira/browse/FLINK-6168
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Timo Walther


Right now, flink-core has Avro dependencies. We should move AvroTypeInfo to 
flink-avro and make the TypeExtractor independent of Avro (e.g. reflection-based, 
similar to Hadoop Writables, or with another approach).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5829) Bump Calcite version to 1.12 once available

2017-03-27 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942857#comment-15942857
 ] 

Timo Walther commented on FLINK-5829:
-

[~wheat9] if it is not much effort to extend the class in order to access the 
map, that would be great. I think unregistering a table is a basic operation 
that should be supported.

> Bump Calcite version to 1.12 once available
> ---
>
> Key: FLINK-5829
> URL: https://issues.apache.org/jira/browse/FLINK-5829
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Haohui Mai
>
> Once Calcite 1.12 is released, we should update to remove some copied classes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5829) Bump Calcite version to 1.12 once available

2017-03-28 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944729#comment-15944729
 ] 

Timo Walther commented on FLINK-5829:
-

If it is too complicated to implement, we can drop support for unregistering 
tables. It was requested by a user, but I don't know for which use case.

> Bump Calcite version to 1.12 once available
> ---
>
> Key: FLINK-5829
> URL: https://issues.apache.org/jira/browse/FLINK-5829
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Haohui Mai
>
> Once Calcite 1.12 is released, we should update to remove some copied classes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6214) WindowAssigners do not allow negative offsets

2017-03-29 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6214:
---

 Summary: WindowAssigners do not allow negative offsets
 Key: FLINK-6214
 URL: https://issues.apache.org/jira/browse/FLINK-6214
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Timo Walther


Both the website and the JavaDoc promote 
{{.window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))}} ("For 
example, in China you would have to specify an offset of Time.hours(-8)"). But 
both the sliding and tumbling event time assigners do not allow the offset to 
be negative.
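
The promoted (but rejected) call, as a hedged sketch (the stream, key selector, 
and summed field are made up):

{code}
stream
    .keyBy(event -> event.userId)
    // fails at the time of this issue: the assigners' parameter check
    // rejects a negative offset
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
    .sum(1);
{code}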



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

