[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-08 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Description: 
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2
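
A minimal sketch of how this could be used from Flink's Table API once wired in
(assumes TIMESTAMPDIFF is exposed through the Calcite-based SQL parser; the
registered table "tab" is hypothetical):
{code}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
// "tab" must have been registered beforehand, e.g. via tEnv.registerTable(...)
Table result = tEnv.sql(
    "SELECT TIMESTAMPDIFF(YEAR, TIMESTAMP '2015-12-31 23:59:59', " +
    "TIMESTAMP '2017-01-01 00:00:00') FROM tab");
{code}
The example result 2 reflects boundary-crossing semantics: two year boundaries
(into 2016 and into 2017) are crossed. Note from the outputs below that Calcite
and MSSQL can disagree for some dateparts, e.g. 52 vs. 53 for WEEK.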

CALCITE:
{code}
SELECT
  timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(QUARTER, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MONTH, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(WEEK, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(DAY, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(HOUR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MINUTE, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(SECOND, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11')
FROM depts;
| 1 | 4 | 12 | 52 | 366 | 8784 | 527040 | 31622400
{code}
MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
|1  |4  |12 |53 |366|8784   |527040 |31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



  was:
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

CALCITE:
{code}
SELECT
  timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(QUARTER, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MONTH, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(WEEK, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(DAY, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(HOUR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MINUTE, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(SECOND, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11')
FROM depts;
| 1 | 4 | 12 | 52 | 366 | 8784 | 527040 | 31622400
{code}
MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
|1  |4  |12 |53 |366|8784   |527040 |31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]




> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> TIMESTAMPDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary crossed.
> -startdate
> Is an expression that can be resolved to a time or date.
> -enddate
> Same as startdate.
> * Example
> SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2
> CALCITE:
> {code}
>  SELECT timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp 
> '2020-06-01 

[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-08 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Description: 
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

CALCITE:
{code}
SELECT
  timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(QUARTER, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MONTH, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(WEEK, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(DAY, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(HOUR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MINUTE, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(SECOND, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11')
FROM depts;
| 1 | 4 | 12 | 52 | 366 | 8784 | 527040 | 31622400
{code}
MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
|1  |4  |12 |53 |366|8784   |527040 |31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



  was:
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

CALCITE:
{code}
SELECT
  timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(QUARTER, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MONTH, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(WEEK, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(DAY, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(HOUR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MINUTE, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(SECOND, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11')
FROM depts;
+--------+--------+--------+--------+--------+--------+
| EXPR$0 | EXPR$1 | EXPR$2 | EXPR$3 | EXPR$4 | EXPR$5 |
+--------+--------+--------+--------+--------+--------+
| 1      | 4      | 12     | 52     | 366    | 8784   |
+--------+--------+--------+--------+--------+--------+
{code}
MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
1   4   12  53  366   8784   527040   31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]




> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> TIMESTAMPDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary crossed.
> -startdate
> Is an expression that can 

[jira] [Commented] (FLINK-6772) Incorrect ordering of matched state events in Flink CEP

2017-06-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043935#comment-16043935
 ] 

Ted Yu commented on FLINK-6772:
---

{code}
+   for (String key: path.keySet()) {
+   List events = path.get(key);
{code}
Instead of calling keySet(), entrySet() should be used. This would avoid the 
path.get() call.
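
A minimal sketch of the suggested iteration (the element type is assumed to be
String lists here purely for illustration, and java.util.Map/List are assumed
imported; the actual generics in the CEP code may differ):
{code}
for (Map.Entry<String, List<String>> entry : path.entrySet()) {
    String key = entry.getKey();
    // entry.getValue() avoids the extra path.get(key) lookup
    List<String> events = entry.getValue();
    // ... use key and events as before ...
}
{code}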

> Incorrect ordering of matched state events in Flink CEP
> ---
>
> Key: FLINK-6772
> URL: https://issues.apache.org/jira/browse/FLINK-6772
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Kostas Kloudas
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> I've stumbled across an unexpected ordering of the matched state events. 
> Pattern:
> {code}
> Pattern<String, ?> pattern = Pattern
> .begin("start")
> .where(new IterativeCondition<String>() {
> @Override
> public boolean filter(String s, Context<String> context) throws 
> Exception {
> return s.startsWith("a-");
> }
> }).times(4).allowCombinations()
> .followedByAny("end")
> .where(new IterativeCondition<String>() {
> @Override
> public boolean filter(String s, Context<String> context) throws 
> Exception {
> return s.startsWith("b-");
> }
> }).times(3).consecutive();
> {code}
> Input event sequence:
> a-1, a-2, a-3, a-4, b-1, b-2, b-3
> On b-3 a matched pattern would be triggered.
> Now, in the {{Map}} passed via {{select}} in 
> {{PatternSelectFunction}}, the list for the "end" state is:
> b-3, b-1, b-2.
> Based on the timestamp of the events (simply using processing time), the 
> correct order should be b-1, b-2, b-3.
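
For reference, a minimal sketch of the consuming side where this ordering is
observed ({{input}} is a hypothetical {{DataStream<String>}}; java.util.Map and
java.util.List assumed imported):
{code}
PatternStream<String> patternStream = CEP.pattern(input, pattern);
patternStream.select(new PatternSelectFunction<String, String>() {
    @Override
    public String select(Map<String, List<String>> match) {
        // the list for "end" should come out in chronological order: b-1, b-2, b-3
        return String.join(",", match.get("end"));
    }
});
{code}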





[jira] [Commented] (FLINK-6748) Table API / SQL Docs: Table API Page

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043932#comment-16043932
 ] 

ASF GitHub Bot commented on FLINK-6748:
---

GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/4093

[FLINK-6748] [table] [docs] Reworked Table API Page

This is the first part of the reworked Table API page. I will create a 
second PR for the remaining TODOs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-6748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 39afe14fc974afe954ba5bbcef4b886757b5c312
Author: twalthr 
Date:   2017-06-09T04:54:22Z

[FLINK-6748] [table] [docs] Reworked Table API Page




> Table API / SQL Docs: Table API Page
> 
>
> Key: FLINK-6748
> URL: https://issues.apache.org/jira/browse/FLINK-6748
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>
> Update and refine {{./docs/dev/table/tableApi.md}} in feature branch 
> https://github.com/apache/flink/tree/tableDocs





[GitHub] flink pull request #4093: [FLINK-6748] [table] [docs] Reworked Table API Pag...

2017-06-08 Thread twalthr
GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/4093

[FLINK-6748] [table] [docs] Reworked Table API Page

This is the first part of the reworked Table API page. I will create a 
second PR for the remaining TODOs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-6748

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4093.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4093


commit 39afe14fc974afe954ba5bbcef4b886757b5c312
Author: twalthr 
Date:   2017-06-09T04:54:22Z

[FLINK-6748] [table] [docs] Reworked Table API Page






[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-08 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Description: 
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

CALCITE:
{code}
SELECT
  timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(QUARTER, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MONTH, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(WEEK, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(DAY, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(HOUR, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(MINUTE, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11'),
  timestampdiff(SECOND, timestamp '2019-06-01 07:01:11', timestamp '2020-06-01 07:01:11')
FROM depts;
+--------+--------+--------+--------+--------+--------+
| EXPR$0 | EXPR$1 | EXPR$2 | EXPR$3 | EXPR$4 | EXPR$5 |
+--------+--------+--------+--------+--------+--------+
| 1      | 4      | 12     | 52     | 366    | 8784   |
+--------+--------+--------+--------+--------+--------+
{code}
MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
1   4   12  53  366   8784   527040   31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



  was:
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
1   4   12  53  366   8784   527040   31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]




> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> TIMESTAMPDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary crossed.
> -startdate
> Is an expression that can be resolved to a time or date.
> -enddate
> Same as startdate.
> * Example
> SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2
> CALCITE:
> {code}
>  SELECT timestampdiff(YEAR, timestamp '2019-06-01 07:01:11', timestamp 
> '2020-06-01 07:01:11'),timestampdiff(QUARTER, timestamp '2019-06-01 
> 07:01:11', timestamp '2020-06-01 07:01:11'),timestampdiff(MONTH, timestamp 
> '2019-06-01 07:01:11',timestamp '2020-06-01 07:01:11'),timestampdiff(WEEK, 
> timestamp '2019-06-01 07:01:11',timestamp '2020-06-01 
> 07:01:11'),timestampdiff(DAY, timestamp '2019-06-01 07:01:11',timestamp 
> '2020-06-01 07:01:11'),timestampdiff(HOUR, timestamp '2019-06-01 
> 07:01:11',timestamp '2020-06-01 07:01:11'),timestampdiff(MINUTE, timestamp 
> '2019-06-01 07:01:11',timestamp '2020-06-01 

[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-08 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Description: 
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

MSSQL:
{code}
SELECT
  datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
  datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
  datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
FROM stu;
1   4   12  53  366   8784   527040   31622400
{code}

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



  was:
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary crossed.
-startdate
Is an expression that can be resolved to a time or date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]


> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> TIMESTAMPDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary crossed.
> -startdate
> Is an expression that can be resolved to a time or date.
> -enddate
> Same as startdate.
> * Example
> SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000') from tab; --> 2
> MSSQL:
> {code}
> SELECT
>   datediff(YEAR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
>   datediff(QUARTER, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
>   datediff(MONTH, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
>   datediff(WEEK, '2019-06-01 07:01:11', '2020-06-01 07:01:11'),
>   datediff(DAY, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
>   datediff(HOUR, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
>   datediff(MINUTE, '2019-06-01 07:01:11','2020-06-01 07:01:11'),
>   datediff(SECOND,  '2019-06-01 07:01:11', '2020-06-01 07:01:11')
> FROM stu;
> 1   4   12  53  366   8784   527040   31622400
> {code}
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]





[GitHub] flink pull request #4092: [FLINK-6876] [streaming] Correct the comments of D...

2017-06-08 Thread dianfu
GitHub user dianfu opened a pull request:

https://github.com/apache/flink/pull/4092

[FLINK-6876] [streaming] Correct the comments of 
DataStream#assignTimestampsAndWatermarks

This PR corrects the comments of DataStream#assignTimestampsAndWatermarks.


Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dianfu/flink FLINK-6876

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4092.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4092


commit 40d3c2ef81d3e1fcfc35fd2302835fa5aa6aec5a
Author: 付典 
Date:   2017-06-09T03:58:39Z

Correct the comments of DataStream#assignTimestampsAndWatermarks






[jira] [Commented] (FLINK-6876) The comment of DataStream#assignTimestampsAndWatermarks is incorrect

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043888#comment-16043888
 ] 

ASF GitHub Bot commented on FLINK-6876:
---

GitHub user dianfu opened a pull request:

https://github.com/apache/flink/pull/4092

[FLINK-6876] [streaming] Correct the comments of 
DataStream#assignTimestampsAndWatermarks

This PR corrects the comments of DataStream#assignTimestampsAndWatermarks.


Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dianfu/flink FLINK-6876

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4092.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4092


commit 40d3c2ef81d3e1fcfc35fd2302835fa5aa6aec5a
Author: 付典 
Date:   2017-06-09T03:58:39Z

Correct the comments of DataStream#assignTimestampsAndWatermarks




> The comment of DataStream#assignTimestampsAndWatermarks is incorrect
> 
>
> Key: FLINK-6876
> URL: https://issues.apache.org/jira/browse/FLINK-6876
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Reporter: Dian Fu
>Priority: Minor
>
> The comment of DataStream#assignTimestampsAndWatermarks is incorrect; we 
> should correct it.





[jira] [Created] (FLINK-6876) The comment of DataStream#assignTimestampsAndWatermarks is incorrect

2017-06-08 Thread Dian Fu (JIRA)
Dian Fu created FLINK-6876:
--

 Summary: The comment of DataStream#assignTimestampsAndWatermarks 
is incorrect
 Key: FLINK-6876
 URL: https://issues.apache.org/jira/browse/FLINK-6876
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
Reporter: Dian Fu
Priority: Minor


The comment of DataStream#assignTimestampsAndWatermarks is incorrect; we should 
correct it.





[jira] [Commented] (FLINK-6749) Table API / SQL Docs: SQL Page

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042989#comment-16042989
 ] 

ASF GitHub Bot commented on FLINK-6749:
---

Github user haohui closed the pull request at:

https://github.com/apache/flink/pull/4046


> Table API / SQL Docs: SQL Page
> --
>
> Key: FLINK-6749
> URL: https://issues.apache.org/jira/browse/FLINK-6749
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: Haohui Mai
>
> Update and refine {{./docs/dev/table/sql.md}} in feature branch 
> https://github.com/apache/flink/tree/tableDocs





[GitHub] flink pull request #4046: [FLINK-6749] [table] Table API / SQL Docs: SQL Pag...

2017-06-08 Thread haohui
Github user haohui closed the pull request at:

https://github.com/apache/flink/pull/4046




[jira] [Updated] (FLINK-6873) Limit the number of open writers in file system connector

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6873:
--
Component/s: Streaming Connectors
 Local Runtime
 filesystem-connector

> Limit the number of open writers in file system connector
> -
>
> Key: FLINK-6873
> URL: https://issues.apache.org/jira/browse/FLINK-6873
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector, Local Runtime, Streaming Connectors
>Reporter: Mu Kong
>
> Mailing list discussion:
> https://mail.google.com/mail/u/1/#label/MailList%2Fflink-dev/15c869b2a5b20d43
> The following exception occurs when Flink writes to too many files:
> {code}
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:2170)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1685)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
> at 
> org.apache.flink.streaming.connectors.fs.StreamWriterBase.open(StreamWriterBase.java:120)
> at 
> org.apache.flink.streaming.connectors.fs.StringWriter.open(StringWriter.java:62)
> at 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.openNewPartFile(BucketingSink.java:545)
> at 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.invoke(BucketingSink.java:440)
> at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:41)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:891)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:869)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:103)
> at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecord(AbstractFetcher.java:230)
> at 
> org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread.run(SimpleConsumerThread.java:379)
> {code}
> Letting developers configure the maximum number of open writers would be 
> great.





[jira] [Updated] (FLINK-6738) HBaseConnectorITCase is flaky

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6738:
--
Component/s: Streaming Connectors

> HBaseConnectorITCase is flaky
> -
>
> Key: FLINK-6738
> URL: https://issues.apache.org/jira/browse/FLINK-6738
> Project: Flink
>  Issue Type: Test
>  Components: Streaming Connectors
>Reporter: Ted Yu
>Priority: Minor
>
> I ran integration tests for flink 1.3 RC2 and got the following failure:
> {code}
> Failed tests:
>   
> HBaseConnectorITCase>HBaseTestingClusterAutostarter.tearDown:240->HBaseTestingClusterAutostarter.deleteTables:127
>  Exception found deleting the table expected null, but 
> was: java.util.concurrent.TimeoutException: The procedure 5 is still running>
> {code}





[jira] [Updated] (FLINK-6848) Extend the managed state docs with a Scala example

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6848:
--
Component/s: State Backends, Checkpointing
 Documentation

> Extend the managed state docs with a Scala example
> --
>
> Key: FLINK-6848
> URL: https://issues.apache.org/jira/browse/FLINK-6848
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, State Backends, Checkpointing
>Reporter: Fokko Driesprong
>
> Hi all,
> It would be nice to add a Scala example code snippet in the Managed state 
> docs. This makes it a bit easier to start using managed state in Scala. The 
> code is tested and works.
> Kind regards,
> Fokko





[jira] [Updated] (FLINK-6730) Activate strict checkstyle for flink-optimizer

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6730:
--
Component/s: Build System

> Activate strict checkstyle for flink-optimizer
> --
>
> Key: FLINK-6730
> URL: https://issues.apache.org/jira/browse/FLINK-6730
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> Long term issue for incrementally introducing the strict checkstyle to 
> flink-optimizer.





[jira] [Updated] (FLINK-6615) tmp directory not cleaned up on shutdown

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6615:
--
Component/s: Local Runtime

> tmp directory not cleaned up on shutdown
> 
>
> Key: FLINK-6615
> URL: https://issues.apache.org/jira/browse/FLINK-6615
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.2.0
>Reporter: Andrey
>
> Steps to reproduce:
> 1) Stop task manager gracefully (kill -6 <pid>)
> 2) In the logs:
> {code}
> 2017-05-17 13:35:50,147 INFO  org.apache.zookeeper.ClientCnxn 
>   - EventThread shut down [main-EventThread]
> 2017-05-17 13:35:50,200 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /mnt/flink/tmp/flink-io-66f1d0ec-8976-41bf-9575-f80b181b0e47 
> [flink-akka.actor.default-dispatcher-2]
> java.nio.file.DirectoryNotEmptyException: 
> /mnt/flink/tmp/flink-io-66f1d0ec-8976-41bf-9575-f80b181b0e47
>   at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at org.apache.flink.util.FileUtils.deleteDirectory(FileUtils.java:154)
>   at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
>   at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.postStop(TaskManager.scala:241)
>   at akka.actor.Actor$class.aroundPostStop(Actor.scala:477)
> {code}
> Expected:
> * on shutdown, delete the non-empty directory anyway. 
> Notes:
> * after the process terminated, I checked the 
> "/mnt/flink/tmp/flink-io-66f1d0ec-8976-41bf-9575-f80b181b0e47" directory and 
> didn't find anything there. So it looks like a timing issue.





[jira] [Updated] (FLINK-6839) Improve SQL OVER alias When only one OVER window agg in selection.

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6839:
--
Component/s: Table API & SQL

> Improve SQL OVER alias When only one OVER window agg in selection.
> --
>
> Key: FLINK-6839
> URL: https://issues.apache.org/jira/browse/FLINK-6839
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: sunjincheng
>
> For OVER SQL:
> {code}
> SELECT a, COUNT(c) OVER (ORDER BY proctime RANGE BETWEEN INTERVAL '10' SECOND 
> PRECEDING AND CURRENT ROW) as cnt1 FROM MyTable
> {code}
> We expect the plan {{DataStreamCalc(select=[a, w0$o0 AS cnt1])}}, but we get 
> {{DataStreamCalc(select=[a, w0$o0 AS $1])}}. This improvement only affects the 
> plan check; the functionality works well in nested queries, e.g.: 
> {code}
> SELECT cnt1 FROM (SELECT a, COUNT(c) OVER (ORDER BY proctime RANGE BETWEEN 
> INTERVAL '10' SECOND PRECEDING AND CURRENT ROW) as cnt1 FROM MyTable) 
> {code}
> The SQL above works well, as mentioned in 
> [FLINK-6760|https://issues.apache.org/jira/browse/FLINK-6760].





[jira] [Updated] (FLINK-6698) Activate strict checkstyle

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6698:
--
Component/s: Build System

> Activate strict checkstyle
> --
>
> Key: FLINK-6698
> URL: https://issues.apache.org/jira/browse/FLINK-6698
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Chesnay Schepler
>
> Umbrella issue for introducing the strict checkstyle, to keep track of which 
> modules are already covered.





[jira] [Updated] (FLINK-6835) Document the checkstyle requirements

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6835:
--
Component/s: Documentation

> Document the checkstyle requirements
> 
>
> Key: FLINK-6835
> URL: https://issues.apache.org/jira/browse/FLINK-6835
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> We should document the checkstyle requirements somewhere.





[jira] [Updated] (FLINK-6729) Activate strict checkstyle for flink-runtime

2017-06-08 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-6729:
--
Component/s: Build System

> Activate strict checkstyle for flink-runtime
> 
>
> Key: FLINK-6729
> URL: https://issues.apache.org/jira/browse/FLINK-6729
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> Long term issue for incrementally introducing the strict checkstyle to 
> flink-runtime.
> As proposed in https://github.com/apache/flink/pull/4032 we will introduce 
> the checkstyle incrementally by package.
> The following is a list of all packages under {{org/apache/flink/runtime}} 
> and their respective checkstyle violations:
> {code}
> akka 25
> blob 140
> broadcast 94
> checkpoint 381
> client 83
> clusterframework 281
> concurrent 33
> deployment 27
> event 17
> execution 74
> executiongraph 881
> filecache 33
> fs 62
> heartbeat 30
> highavailability 94
> instance 370
> io 1592
> iterative 316
> jobgraph 283
> jobmanager 717
> jobmaster 84
> leaderelection 54
> leaderretrieval 11
> memory 249
> messages 135
> minicluster 53
> net 46
> operators 7953
> plugable 27
> process 1
> query 106
> registration 43
> resourcemanager 114
> rpc 127
> security 58
> state 463
> taskexecutor 153
> taskmanager 343
> testutils 204
> util 536
> {code}
> {{metrics}}, {{history}} and {{webmonitor}} are excluded from this list, as 
> I'm not aware of any large-scale issue/feature branch for any of them.
> There are a number of low-hanging fruits in there for which we could apply 
> the checkstyle regardless of current efforts, like process or 
> leaderretrieval.
> I will reach out to committers that are active in the runtime components to 
> see which of these we could modify without causing too many problems.





[jira] [Comment Edited] (FLINK-6866) ClosureCleaner.clean fails for scala's JavaConverters wrapper classes

2017-06-08 Thread SmedbergM (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042807#comment-16042807
 ] 

SmedbergM edited comment on FLINK-6866 at 6/8/17 4:20 PM:
--

In Scala 2.11 and before, inner class `MapWrapper` in trait 
`scala.collection.convert.Wrappers` does not extend `Serializable`; this was 
added in 2.12


was (Author: smedbergm):
In Scala 2.11 and before, inner class `scala.collection.convert.MapWrapper` in 
trait `Wrappers` does not extend `Serializable`; this was added in 2.12

> ClosureCleaner.clean fails for scala's JavaConverters wrapper classes
> -
>
> Key: FLINK-6866
> URL: https://issues.apache.org/jira/browse/FLINK-6866
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Scala API
>Affects Versions: 1.2.0, 1.3.0
> Environment: Scala 2.10.6, Scala 2.11.11
> Does not appear using Scala 2.12
>Reporter: SmedbergM
>
> MWE: https://github.com/SmedbergM/ClosureCleanerBug
> MWE console output: 
> https://gist.github.com/SmedbergM/ce969e6e8540da5b59c7dd921a496dc5





[jira] [Commented] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-06-08 Thread Nico Kruber (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042899#comment-16042899
 ] 

Nico Kruber commented on FLINK-6692:


I tried shading netty in runtime, but other modules use netty-router, which in 
turn needs netty.
Long story short: let's solve this together with the reworked shading model in 
FLINK-6529.

> The flink-dist jar contains unshaded netty jar
> --
>
> Key: FLINK-6692
> URL: https://issues.apache.org/jira/browse/FLINK-6692
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.4.0
>
>
> The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:
> {noformat}
> io/netty/handler/codec/http/router/
> io/netty/handler/codec/http/router/BadClientSilencer.class
> io/netty/handler/codec/http/router/MethodRouted.class
> io/netty/handler/codec/http/router/Handler.class
> io/netty/handler/codec/http/router/Router.class
> io/netty/handler/codec/http/router/DualMethodRouter.class
> io/netty/handler/codec/http/router/Routed.class
> io/netty/handler/codec/http/router/AbstractHandler.class
> io/netty/handler/codec/http/router/KeepAliveWrite.class
> io/netty/handler/codec/http/router/DualAbstractHandler.class
> io/netty/handler/codec/http/router/MethodRouter.class
> {noformat}
> {noformat}
> org/jboss/netty/util/internal/jzlib/InfBlocks.class
> org/jboss/netty/util/internal/jzlib/InfCodes.class
> org/jboss/netty/util/internal/jzlib/InfTree.class
> org/jboss/netty/util/internal/jzlib/Inflate$1.class
> org/jboss/netty/util/internal/jzlib/Inflate.class
> org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
> org/jboss/netty/util/internal/jzlib/JZlib.class
> org/jboss/netty/util/internal/jzlib/StaticTree.class
> org/jboss/netty/util/internal/jzlib/Tree.class
> org/jboss/netty/util/internal/jzlib/ZStream$1.class
> org/jboss/netty/util/internal/jzlib/ZStream.class
> {noformat}
> Is it an expected behavior?





[jira] [Comment Edited] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042864#comment-16042864
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6857 at 6/8/17 3:38 PM:


[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the "hard default" (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.

As an extra remark, the "serializer resolution" for Kryo happens in three steps:
1. Is there a directly registered serializer for the class?
2. Is there a default serializer registered that is applicable for the class?
3. If all of the above fail, use the hard default (out of the box, the 
{{FieldSerializer}}).

The third step is what this JIRA targets to make configurable. Currently we 
can only configure 1. and 2.
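
A minimal sketch of the three corresponding calls on the plain Kryo API (not
Flink's API; {{MyType}}, {{MyTypeSerializer}} and {{SomeInterface}} are
hypothetical placeholders):
{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.serializers.TaggedFieldSerializer;

Kryo kryo = new Kryo();
// 1. directly registered serializer for a specific class
kryo.register(MyType.class, new MyTypeSerializer());
// 2. default serializer tied to a class (what addDefaultKryoSerializer maps to)
kryo.addDefaultSerializer(SomeInterface.class, MyTypeSerializer.class);
// 3. the hard-default fallback this JIRA wants to make configurable
kryo.setDefaultSerializer(TaggedFieldSerializer.class);
{code}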


was (Author: tzulitai):
[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the "hard default" (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.

As an extra remark, the "serializer resolution" for Kryo happens in three steps:
1. Is there a directly registered serializer for the class?
2. Is there a default serializer registered that is applicable for the class?
3. If all of the above fail, use the hard default (out of the box, the 
{{FieldSerializer}}).

The third step is what this JIRA targets to make configurable.

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.





[jira] [Comment Edited] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042864#comment-16042864
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6857 at 6/8/17 3:38 PM:


[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the "hard default" (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.

As an extra remark, the "serializer resolution" for Kryo happens in three steps:
1. Is there a directly registered serializer for the class?
2. Is there a default serializer registered that is applicable for the class?
3. If all of the above fail, use the hard default (out of the box, the 
{{FieldSerializer}}).

The third step is what this JIRA targets to make configurable.


was (Author: tzulitai):
[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the "hard default" (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.





[jira] [Comment Edited] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042864#comment-16042864
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6857 at 6/8/17 3:36 PM:


[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the "hard default" (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.


was (Author: tzulitai):
[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the hard default (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.





[jira] [Commented] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042866#comment-16042866
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6857:


The JIRA would therefore also entail adding the configuration to the 
{{ExecutionConfig}}

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.





[jira] [Commented] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042864#comment-16042864
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6857:


[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the hard default (i.e., when no registered defaults for a 
class can be found), which in Kryo is the {{FieldSerializer}}. This would 
correspond to Kryo's {{setDefaultSerializer(Serializer)}} API.

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.





[jira] [Comment Edited] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042864#comment-16042864
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6857 at 6/8/17 3:34 PM:


[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the hard default (i.e., when no registered default 
serializers for a class can be found), which in Kryo is the 
{{FieldSerializer}}. This would correspond to Kryo's 
{{setDefaultSerializer(Serializer)}} API.


was (Author: tzulitai):
[~StephanEwen] no, the current {{addDefaultKryoSerializer}} methods are 
defaults tied to specific classes, and correspond to the 
{{addDefaultSerializer(Class, Serializer)}} methods in Kryo.

This JIRA is meant to add a {{setDefaultSerializer}} configuration, which 
allows overriding the hard default (i.e., when no registered defaults for a 
class can be found), which in Kryo is the {{FieldSerializer}}. This would 
correspond to Kryo's {{setDefaultSerializer(Serializer)}} API.

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042855#comment-16042855
 ] 

Stephan Ewen commented on FLINK-6857:
-

This exists in the {{ExecutionConfig}}, which is per 
{{StreamExecutionEnvironment}}.
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java#L686

Does that cover this case?

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6875) Remote DataSet API job submission timing out

2017-06-08 Thread Francisco Rosa (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francisco Rosa closed FLINK-6875.
-
Resolution: Invalid

I believe this was a versioning issue: the version on the server did not match 
the client version. Closing.

> Remote DataSet API job submission timing out
> 
>
> Key: FLINK-6875
> URL: https://issues.apache.org/jira/browse/FLINK-6875
> Project: Flink
>  Issue Type: Bug
>  Components: DataSet API
>Affects Versions: 1.3.0
>Reporter: Francisco Rosa
> Fix For: 1.3.1
>
>
> When trying to submit a DataSet API job from a remote environment, Flink 
> times out. This works well in 1.2.1 and seems to be broken in 1.3.0.
> The following program reproduces the issue:
> {code:title=Example|borderStyle=solid}
> package com.test;
> import org.apache.flink.api.java.DataSet;
> import org.apache.flink.api.java.ExecutionEnvironment;
> import java.util.Date;
> public class FlinkRemoteIssue {
> public static void main(String[] args) throws Exception {
> String host = "192.168.1.235";
> int port = 6123;
> String[] jars = {
> "c:\\tmp\\FlinkRemoteIssue-all-1.0-SNAPSHOT.jar"
> };
> ExecutionEnvironment env = 
> ExecutionEnvironment.createRemoteEnvironment(host, port, jars);
> DataSet<String> pipe = env.fromElements("1");
> pipe.map( (oneString) -> {
> System.err.println("Map executing: " + new Date());
> return "Map result: " + new Date();
> }).writeAsText("/tmp/lixo-" + System.currentTimeMillis());
> env.execute("Flink Remote Issue");
> }
> }
> {code}
> Result from running program (running inside IntelliJ):
> {code}
> Submitting job with JobID: 9f96638f014a87783cecd54b61c55d9a. Waiting for job 
> completion.
> Connected to JobManager at 
> Actor[akka.tcp://flink@10.97.120.139:6123/user/jobmanager#1432447220] with 
> leader session id 00000000-0000-0000-0000-000000000000.
> Exception in thread "main" 
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Couldn't retrieve the JobExecutionResult from the 
> JobManager.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
>   at 
> org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
>   at 
> org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
>   at 
> org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
>   at com.test.FlinkRemoteIssue.main(FlinkRemoteIssue.java:25)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Couldn't 
> retrieve the JobExecutionResult from the JobManager.
>   at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:309)
>   at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
>   ... 13 more
> Caused by: 
> org.apache.flink.runtime.client.JobClientActorSubmissionTimeoutException: Job 
> submission to the JobManager timed out. You may increase 
> 'akka.client.timeout' in case the JobManager needs more time to configure and 
> confirm the job submission.
>   at 
> org.apache.flink.runtime.client.JobSubmissionClientActor.handleCustomMessage(JobSubmissionClientActor.java:119)
>   at 
> org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:251)
>   at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:89)
>   at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at 

[GitHub] flink pull request #3877: [backport] [FLINK-6514] [build] Create a proper se...

2017-06-08 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/3877


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6514) Cannot start Flink Cluster in standalone mode

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042832#comment-16042832
 ] 

ASF GitHub Bot commented on FLINK-6514:
---

Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/3877


> Cannot start Flink Cluster in standalone mode
> -
>
> Key: FLINK-6514
> URL: https://issues.apache.org/jira/browse/FLINK-6514
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cluster Management
>Reporter: Aljoscha Krettek
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> The changes introduced for FLINK-5998 change what goes into the 
> {{flink-dist}} fat jar. As it is, this means that trying to start a cluster 
> results in a {{ClassNotFoundException}} of {{LogFactory}} in 
> {{commons-logging}}.
> One solution is to now make the shaded Hadoop jar a proper fat-jar, so that 
> we again have all the dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6875) Remote DataSet API job submission timing out

2017-06-08 Thread Francisco Rosa (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francisco Rosa updated FLINK-6875:
--
Description: 
When trying to submit a DataSet API job from a remote environment, Flink times 
out. This works well in 1.2.1 and seems to be broken in 1.3.0.

The following program reproduces the issue:
{code:title=Example|borderStyle=solid}
package com.test;

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

import java.util.Date;

public class FlinkRemoteIssue {

public static void main(String[] args) throws Exception {

String host = "192.168.1.235";
int port = 6123;
String[] jars = {
"c:\\tmp\\FlinkRemoteIssue-all-1.0-SNAPSHOT.jar"
};
ExecutionEnvironment env = 
ExecutionEnvironment.createRemoteEnvironment(host, port, jars);

DataSet<String> pipe = env.fromElements("1");
pipe.map( (oneString) -> {
System.err.println("Map executing: " + new Date());
return "Map result: " + new Date();
}).writeAsText("/tmp/lixo-" + System.currentTimeMillis());

env.execute("Flink Remote Issue");
}
}
{code}

Result from running program (running inside IntelliJ):

{code}
Submitting job with JobID: 9f96638f014a87783cecd54b61c55d9a. Waiting for job 
completion.
Connected to JobManager at 
Actor[akka.tcp://flink@10.97.120.139:6123/user/jobmanager#1432447220] with 
leader session id 00000000-0000-0000-0000-000000000000.
Exception in thread "main" 
org.apache.flink.client.program.ProgramInvocationException: The program 
execution failed: Couldn't retrieve the JobExecutionResult from the JobManager.
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
at 
org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
at 
org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
at 
org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
at 
org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
at com.test.FlinkRemoteIssue.main(FlinkRemoteIssue.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Couldn't 
retrieve the JobExecutionResult from the JobManager.
at 
org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:309)
at 
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
... 13 more
Caused by: 
org.apache.flink.runtime.client.JobClientActorSubmissionTimeoutException: Job 
submission to the JobManager timed out. You may increase 'akka.client.timeout' 
in case the JobManager needs more time to configure and confirm the job 
submission.
at 
org.apache.flink.runtime.client.JobSubmissionClientActor.handleCustomMessage(JobSubmissionClientActor.java:119)
at 
org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:251)
at 
org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:89)
at 
org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
at 
akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


[jira] [Created] (FLINK-6875) Remote DataSet API job submission timing out

2017-06-08 Thread Francisco Rosa (JIRA)
Francisco Rosa created FLINK-6875:
-

 Summary: Remote DataSet API job submission timing out
 Key: FLINK-6875
 URL: https://issues.apache.org/jira/browse/FLINK-6875
 Project: Flink
  Issue Type: Bug
  Components: DataSet API
Affects Versions: 1.3.0
Reporter: Francisco Rosa
 Fix For: 1.3.1


When trying to submit a DataSet API job from a remote environment, Flink times 
out. This works well in 1.2.1 and seems to be broken in 1.3.0.

The following program reproduces the issue:

Result from running program (running inside IntelliJ):

Submitting job with JobID: 9f96638f014a87783cecd54b61c55d9a. Waiting for job 
completion.
Connected to JobManager at 
Actor[akka.tcp://flink@10.97.120.139:6123/user/jobmanager#1432447220] with 
leader session id 00000000-0000-0000-0000-000000000000.
Exception in thread "main" 
org.apache.flink.client.program.ProgramInvocationException: The program 
execution failed: Couldn't retrieve the JobExecutionResult from the JobManager.
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
at 
org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:429)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
at 
org.apache.flink.client.RemoteExecutor.executePlanWithJars(RemoteExecutor.java:211)
at 
org.apache.flink.client.RemoteExecutor.executePlan(RemoteExecutor.java:188)
at 
org.apache.flink.api.java.RemoteEnvironment.execute(RemoteEnvironment.java:172)
at com.test.FlinkRemoteIssue.main(FlinkRemoteIssue.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Couldn't 
retrieve the JobExecutionResult from the JobManager.
at 
org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:309)
at 
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
... 13 more
Caused by: 
org.apache.flink.runtime.client.JobClientActorSubmissionTimeoutException: Job 
submission to the JobManager timed out. You may increase 'akka.client.timeout' 
in case the JobManager needs more time to configure and confirm the job 
submission.
at 
org.apache.flink.runtime.client.JobSubmissionClientActor.handleCustomMessage(JobSubmissionClientActor.java:119)
at 
org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:251)
at 
org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:89)
at 
org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
at 
akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Process finished with exit code 1

Message in JobManager log:

2017-06-08 10:57:03,310 WARN  org.apache.flink.runtime.jobmanager.JobManager
- Discard message 
LeaderSessionMessage(00000000-0000-0000-0000-000000000000,SubmitJob(JobGraph(jobId:
 4d414efd050a871863f3319a8c56781c),EXECUTION_RESULT_AND_STATE_CHANGES)) because 
the expected leader session ID None did not equal the received leader session 
ID Some(00000000-0000-0000-0000-000000000000).
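
The exception text suggests raising {{akka.client.timeout}}. A minimal sketch of 
doing that from the client side follows (assuming the {{Configuration}} overload 
of {{createRemoteEnvironment}}; the 600 s value is arbitrary, and note the actual 
root cause here turned out to be a client/server version mismatch rather than a 
slow JobManager):

{code}
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class RemoteWithTimeout {
    public static void main(String[] args) throws Exception {
        Configuration clientConfig = new Configuration();
        // Give the client more time to wait for the JobManager's
        // acknowledgement of the job submission.
        clientConfig.setString("akka.client.timeout", "600 s");

        // Same host, port and jar as the reproducer above.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "192.168.1.235", 6123, clientConfig,
                "c:\\tmp\\FlinkRemoteIssue-all-1.0-SNAPSHOT.jar");

        env.fromElements("1").print();
    }
}
{code}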





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6866) ClosureCleaner.clean fails for scala's JavaConverters wrapper classes

2017-06-08 Thread SmedbergM (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042807#comment-16042807
 ] 

SmedbergM commented on FLINK-6866:
--

In Scala 2.11 and before, the inner class `MapWrapper` in trait 
`scala.collection.convert.Wrappers` does not extend `Serializable`; this was 
only added in 2.12.
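
A possible user-side workaround (my assumption, not something from this ticket): 
copy the converted view into a plain {{java.util.HashMap}}, which is 
serializable, before handing it to Flink:

{code}
import java.util.HashMap;
import java.util.Map;

public class WrapperWorkaround {
    public static void main(String[] args) {
        // Stand-in for the map returned by JavaConverters' asJava, whose
        // runtime type Wrappers$MapWrapper is not Serializable on 2.10/2.11.
        Map<String, String> wrappedView = new HashMap<>();
        wrappedView.put("key1", "value1");

        // Copying into a plain HashMap yields a Serializable map that
        // ClosureCleaner.clean(...) accepts.
        Map<String, String> safeCopy = new HashMap<>(wrappedView);
        System.out.println(safeCopy);
    }
}
{code}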

> ClosureCleaner.clean fails for scala's JavaConverters wrapper classes
> -
>
> Key: FLINK-6866
> URL: https://issues.apache.org/jira/browse/FLINK-6866
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Scala API
>Affects Versions: 1.2.0, 1.3.0
> Environment: Scala 2.10.6, Scala 2.11.11
> Does not appear using Scala 2.12
>Reporter: SmedbergM
>
> MWE: https://github.com/SmedbergM/ClosureCleanerBug
> MWE console output: 
> https://gist.github.com/SmedbergM/ce969e6e8540da5b59c7dd921a496dc5



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4091: [FLINK-6874] [docs] Static and transient fields ig...

2017-06-08 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/4091

[FLINK-6874] [docs] Static and transient fields ignored for POJOs

Note that static and transient fields are ignored when TypeExtractor 
validates a POJO.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
6874_static_and_transient_fields_ignored_for_pojos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4091.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4091


commit 64c4669c6c937a4611076f5bfcc2f334f70ca46e
Author: Greg Hogan 
Date:   2017-06-08T14:51:49Z

[FLINK-6874] [docs] Static and transient fields ignored for POJOs

Note that static and transient fields are ignored when TypeExtractor
validates a POJO.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6874) Static and transient fields ignored for POJOs

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042803#comment-16042803
 ] 

ASF GitHub Bot commented on FLINK-6874:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/4091

[FLINK-6874] [docs] Static and transient fields ignored for POJOs

Note that static and transient fields are ignored when TypeExtractor 
validates a POJO.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
6874_static_and_transient_fields_ignored_for_pojos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4091.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4091


commit 64c4669c6c937a4611076f5bfcc2f334f70ca46e
Author: Greg Hogan 
Date:   2017-06-08T14:51:49Z

[FLINK-6874] [docs] Static and transient fields ignored for POJOs

Note that static and transient fields are ignored when TypeExtractor
validates a POJO.




> Static and transient fields ignored for POJOs
> -
>
> Key: FLINK-6874
> URL: https://issues.apache.org/jira/browse/FLINK-6874
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
>
> Update {{dev/types_serialization.html}} to note that static and transient 
> fields are ignored when validating a POJO ({{TypeExtractor#analyzePojo}} 
> calls {{#getAllDeclaredFields}} which ignores transient and static fields).
> "All fields in the class (and all superclasses) are either public (and 
> non-final) or have a public getter- and a setter- method that follows the 
> Java beans naming conventions for getters and setters."



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6772) Incorrect ordering of matched state events in Flink CEP

2017-06-08 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-6772.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

Merged at 5d3506e.

> Incorrect ordering of matched state events in Flink CEP
> ---
>
> Key: FLINK-6772
> URL: https://issues.apache.org/jira/browse/FLINK-6772
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Kostas Kloudas
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> I've stumbled across an unexpected ordering of the matched state events. 
> Pattern:
> {code}
> Pattern<String, String> pattern = Pattern
> .<String>begin("start")
> .where(new IterativeCondition<String>() {
> @Override
> public boolean filter(String s, Context<String> context) throws 
> Exception {
> return s.startsWith("a-");
> }
> }).times(4).allowCombinations()
> .followedByAny("end")
> .where(new IterativeCondition<String>() {
> @Override
> public boolean filter(String s, Context<String> context) throws 
> Exception {
> return s.startsWith("b-");
> }
> }).times(3).consecutive();
> {code}
> Input event sequence:
> a-1, a-2, a-3, a-4, b-1, b-2, b-3
> On b-3 a matched pattern would be triggered.
> Now, in the {{Map<String, List<String>>}} map passed via {{select}} in 
> {{PatternSelectFunction}}, the list for the "end" state is:
> b-3, b-1, b-2.
> Based on the timestamp of the events (simply using processing time), the 
> correct order should be b-1, b-2, b-3.
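
For illustration, a select function over the match (assuming the 1.3 
{{PatternSelectFunction}} signature with {{Map<String, List<IN>>}}); with 
correct ordering it yields "b-1,b-2,b-3" for the sequence above:

{code}
import java.util.List;
import java.util.Map;

import org.apache.flink.cep.PatternSelectFunction;

public class EndStateSelect implements PatternSelectFunction<String, String> {
    @Override
    public String select(Map<String, List<String>> pattern) {
        // Joins the matched "end" events in the order the CEP operator
        // reports them; the bug makes this "b-3,b-1,b-2" instead.
        return String.join(",", pattern.get("end"));
    }
}
{code}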



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6874) Static and transient fields ignored for POJOs

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6874:
--
Description: 
Update {{dev/types_serialization.html}} to note that static and transient 
fields are ignored when validating a POJO ({{TypeExtractor#analyzePojo}} calls 
{{#getAllDeclaredFields}} which ignores transient and static fields).

"All fields in the class (and all superclasses) are either public (and 
non-final) or have a public getter- and a setter- method that follows the Java 
beans naming conventions for getters and setters."

  was:
Update {{dev/types_serialization.html}} to note that static and transient 
fields are ignored when validating a POJO ({{TypeExtractor#analyzePojo}} calls 
{{#getAllDeclaredFields}} which ignores transient and static fields) and that 
{{is}} methods are allowed in place of {{get}} (see {{#isValidPojoField}}).

"All fields in the class (and all superclasses) are either public (and 
non-final) or have a public getter- and a setter- method that follows the Java 
beans naming conventions for getters and setters."


> Static and transient fields ignored for POJOs
> -
>
> Key: FLINK-6874
> URL: https://issues.apache.org/jira/browse/FLINK-6874
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
>
> Update {{dev/types_serialization.html}} to note that static and transient 
> fields are ignored when validating a POJO ({{TypeExtractor#analyzePojo}} 
> calls {{#getAllDeclaredFields}} which ignores transient and static fields).
> "All fields in the class (and all superclasses) are either public (and 
> non-final) or have a public getter- and a setter- method that follows the 
> Java beans naming conventions for getters and setters."



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6866) ClosureCleaner.clean fails for scala's JavaConverters wrapper classes

2017-06-08 Thread SmedbergM (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SmedbergM updated FLINK-6866:
-
Affects Version/s: 1.3.0
  Environment: 
Scala 2.10.6, Scala 2.11.11
Does not appear using Scala 2.12
  Description: 
MWE: https://github.com/SmedbergM/ClosureCleanerBug
MWE console output: 
https://gist.github.com/SmedbergM/ce969e6e8540da5b59c7dd921a496dc5

  was:
MWE:

```
import scala.collection.JavaConverters._
import org.apache.flink.api.java.ClosureCleaner

object SerializationFailureMWE extends App {
  val m4j: java.util.Map[String,String] = new java.util.HashMap
  m4j.put("key1", "value1")

  val m: java.util.Map[String,String] = Map(
"key1" -> "value1"
  ).asJava

  println("Cleaning native Java map")
  ClosureCleaner.clean(m4j, true)

  println("Cleaning map converted by JavaConverters")
  ClosureCleaner.clean(m, true)
}
```

Program output:
```
Cleaning native Java map
Cleaning map converted by JavaConverters
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: 
{key1=value1} is not serializable. The object probably contains or references 
non serializable fields.
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:100)
at 
SerializationFailureMWE$delayedInit$body.apply(SerializationFailureMWE.scala:17)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at 
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at 
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at SerializationFailureMWE$.main(SerializationFailureMWE.scala:5)
at SerializationFailureMWE.main(SerializationFailureMWE.scala)
Caused by: java.io.NotSerializableException: 
scala.collection.convert.Wrappers$MapWrapper
...
```


> ClosureCleaner.clean fails for scala's JavaConverters wrapper classes
> -
>
> Key: FLINK-6866
> URL: https://issues.apache.org/jira/browse/FLINK-6866
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Scala API
>Affects Versions: 1.2.0, 1.3.0
> Environment: Scala 2.10.6, Scala 2.11.11
> Does not appear using Scala 2.12
>Reporter: SmedbergM
>
> MWE: https://github.com/SmedbergM/ClosureCleanerBug
> MWE console output: 
> https://gist.github.com/SmedbergM/ce969e6e8540da5b59c7dd921a496dc5



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042796#comment-16042796
 ] 

Greg Hogan commented on FLINK-6683:
---

Ah, so it was accidentally merged from master and then reverted.

> building with Scala 2.11 no longer uses change-scala-version.sh
> ---
>
> Key: FLINK-6683
> URL: https://issues.apache.org/jira/browse/FLINK-6683
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Documentation
>Affects Versions: 1.3.0
>Reporter: David Anderson
> Fix For: 1.3.0
>
>
> FLINK-6414 eliminated change-scala-version.sh. The documentation 
> (setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6414:
--
Fix Version/s: (was: 1.3.0)

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.4.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-6414.
-
Resolution: Fixed

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.4.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan reopened FLINK-6414:
---

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.4.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6874) Static and transient fields ignored for POJOs

2017-06-08 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-6874:
-

 Summary: Static and transient fields ignored for POJOs
 Key: FLINK-6874
 URL: https://issues.apache.org/jira/browse/FLINK-6874
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.4.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Trivial


Update {{dev/types_serialization.html}} to note that static and transient 
fields are ignored when validating a POJO ({{TypeExtractor#analyzePojo}} calls 
{{#getAllDeclaredFields}} which ignores transient and static fields) and that 
{{is}} methods are allowed in place of {{get}} (see {{#isValidPojoField}}).

"All fields in the class (and all superclasses) are either public (and 
non-final) or have a public getter- and a setter- method that follows the Java 
beans naming conventions for getters and setters."
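
For illustration, a minimal POJO sketch (my example, not from the docs) that 
passes validation despite static and transient fields and that uses an "is" 
getter:

{code}
public class SensorEvent {
    // Skipped by TypeExtractor#getAllDeclaredFields, so neither field
    // affects POJO validation.
    public static int instanceCount;
    public transient Object scratchBuffer;

    private long id;
    private boolean valid;

    // POJOs also need a public no-argument constructor.
    public SensorEvent() {}

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    // An "is" getter is accepted in place of "get" for boolean fields.
    public boolean isValid() { return valid; }
    public void setValid(boolean valid) { this.valid = valid; }
}
{code}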



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6749) Table API / SQL Docs: SQL Page

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042789#comment-16042789
 ] 

ASF GitHub Bot commented on FLINK-6749:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4046
  
Hi @haohui, I merged the PR to the `tableDocs` branch.
Can you close it? Thanks!


> Table API / SQL Docs: SQL Page
> --
>
> Key: FLINK-6749
> URL: https://issues.apache.org/jira/browse/FLINK-6749
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: Haohui Mai
>
> Update and refine {{./docs/dev/table/sql.md}} in feature branch 
> https://github.com/apache/flink/tree/tableDocs



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4046: [FLINK-6749] [table] Table API / SQL Docs: SQL Page

2017-06-08 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4046
  
Hi @haohui, I merged the PR to the `tableDocs` branch.
Can you close it? Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042786#comment-16042786
 ] 

Stephan Ewen commented on FLINK-6683:
-

Okay, I am confused now. When I browse the code in the {{release-1.3}} branch, 
I see Scala versions that are not variables and I see the 
{{change-scala-version.sh}} script still in place:

  - Non-variable Scala versions: 
https://github.com/apache/flink/blob/release-1.3/flink-runtime/pom.xml#L32

  - {{change-scala-version.sh}} script: 
https://github.com/apache/flink/blob/release-1.3/tools/change-scala-version.sh

> building with Scala 2.11 no longer uses change-scala-version.sh
> ---
>
> Key: FLINK-6683
> URL: https://issues.apache.org/jira/browse/FLINK-6683
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Documentation
>Affects Versions: 1.3.0
>Reporter: David Anderson
> Fix For: 1.3.0
>
>
> FLINK-6414 eliminated change-scala-version.sh. The documentation 
> (setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-6871) Obsolete instruction for changing scala version for build

2017-06-08 Thread William Saar (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Saar resolved FLINK-6871.
-
Resolution: Duplicate

Correct, this refers to master and duplicates that issue, sorry for missing it. 
Closing as a duplicate.

> Obsolete instruction for changing scala version for build
> -
>
> Key: FLINK-6871
> URL: https://issues.apache.org/jira/browse/FLINK-6871
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: William Saar
>Priority: Minor
>
> The documentation at
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html
> says you should change the Scala version during the build with the script
> tools/change-scala-version.sh 2.11
> The script does not exist. How do you change the Scala version for the build?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042766#comment-16042766
 ] 

Greg Hogan commented on FLINK-6683:
---

FLINK-6414 was committed to both 1.3 
([3990d75aaedc8e03ef2facf5732c4a0fe52a7cdc|https://github.com/apache/flink/commit/3990d75aaedc8e03ef2facf5732c4a0fe52a7cdc])
 and 1.4/master 
([35c087129e2a27c2db47c5ed5ce3fb3523a7c719|https://github.com/apache/flink/commit/35c087129e2a27c2db47c5ed5ce3fb3523a7c719]).

> building with Scala 2.11 no longer uses change-scala-version.sh
> ---
>
> Key: FLINK-6683
> URL: https://issues.apache.org/jira/browse/FLINK-6683
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Documentation
>Affects Versions: 1.3.0
>Reporter: David Anderson
> Fix For: 1.3.0
>
>
> FLINK-6414 eliminated change-scala-version.sh. The documentation 
> (setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-6414.
-
Resolution: Fixed

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.4.0, 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6414:
--
Fix Version/s: 1.4.0

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0, 1.4.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan reopened FLINK-6414:
---

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6414:
--
Fix Version/s: (was: 1.4.0)

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan reopened FLINK-6414:
---

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6414:
--
Fix Version/s: 1.3.0

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan reopened FLINK-6414:
---

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-6414.
-
Resolution: Fixed

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.4.0, 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6414) Use scala.binary.version in place of change-scala-version.sh

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-6414.
-
Resolution: Fixed

> Use scala.binary.version in place of change-scala-version.sh
> 
>
> Key: FLINK-6414
> URL: https://issues.apache.org/jira/browse/FLINK-6414
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.3.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.3.0
>
>
> Recent commits have failed to modify {{change-scala-version.sh}} resulting in 
> broken builds for {{scala-2.11}}. It looks like we can remove the need for 
> this script by replacing hard-coded references to the Scala version with 
> Flink's maven variable {{scala.binary.version}}.
> I had initially realized that the change script is [only used for 
> building|https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#scala-versions]
>  and not for switching the IDE environment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6873) Limit the number of open writers in file system connector

2017-06-08 Thread Mu Kong (JIRA)
Mu Kong created FLINK-6873:
--

 Summary: Limit the number of open writers in file system connector
 Key: FLINK-6873
 URL: https://issues.apache.org/jira/browse/FLINK-6873
 Project: Flink
  Issue Type: Improvement
Reporter: Mu Kong


Mailing list discussion:
https://mail.google.com/mail/u/1/#label/MailList%2Fflink-dev/15c869b2a5b20d43

The following exception occurs when Flink writes to too many files:

{code}
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at org.apache.hadoop.hdfs.DFSOutputStream.start(DFSOutputStream.java:2170)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1685)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at 
org.apache.flink.streaming.connectors.fs.StreamWriterBase.open(StreamWriterBase.java:120)
at 
org.apache.flink.streaming.connectors.fs.StringWriter.open(StringWriter.java:62)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.openNewPartFile(BucketingSink.java:545)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.invoke(BucketingSink.java:440)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:41)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:891)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:869)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:103)
at 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecord(AbstractFetcher.java:230)
at 
org.apache.flink.streaming.connectors.kafka.internals.SimpleConsumerThread.run(SimpleConsumerThread.java:379)
{code}

Letting developers configure the maximum number of concurrently open files 
would be great.
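
One possible shape for such a limit (purely a sketch; the class and its names 
are hypothetical, not BucketingSink code): an LRU cache that closes the 
least-recently-used writer once the configured cap is exceeded.

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedWriterCache<K, W extends Closeable> {
    private final Map<K, W> writers;

    public BoundedWriterCache(final int maxOpenWriters) {
        // Access-ordered LinkedHashMap: iteration order is least-recently-used
        // first, so the eldest entry is the best eviction candidate.
        this.writers = new LinkedHashMap<K, W>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, W> eldest) {
                if (size() > maxOpenWriters) {
                    try {
                        // Release the stream (and its native thread in HDFS).
                        eldest.getValue().close();
                    } catch (IOException ignored) {
                        // Best effort on eviction.
                    }
                    return true;
                }
                return false;
            }
        };
    }

    public W get(K bucket) { return writers.get(bucket); }

    public void put(K bucket, W writer) { writers.put(bucket, writer); }
}
{code}

A real sink would additionally have to track evicted part files for 
checkpointing; this only shows the capping mechanism.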



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042752#comment-16042752
 ] 

Stephan Ewen commented on FLINK-6683:
-

The update for the Scala version actually affects master (1.4), not the 1.3 
release, if I see that correctly...

> building with Scala 2.11 no longer uses change-scala-version.sh
> ---
>
> Key: FLINK-6683
> URL: https://issues.apache.org/jira/browse/FLINK-6683
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Documentation
>Affects Versions: 1.3.0
>Reporter: David Anderson
> Fix For: 1.3.0
>
>
> FLINK-6414 eliminated change-scala-version.sh. The documentation 
> (setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6871) Obsolete instruction for changing scala version for build

2017-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042753#comment-16042753
 ] 

Stephan Ewen commented on FLINK-6871:
-

This refers to the {{master}} branch, not the {{1.3}} release?

> Obsolete instruction for changing scala version for build
> -
>
> Key: FLINK-6871
> URL: https://issues.apache.org/jira/browse/FLINK-6871
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: William Saar
>Priority: Minor
>
> The documentation at
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html
> says you should change the Scala version during the build with the script
> tools/change-scala-version.sh 2.11
> The script does not exist. How do you change the Scala version for the build?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6683) building with Scala 2.11 no longer uses change-scala-version.sh

2017-06-08 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042748#comment-16042748
 ] 

Stephan Ewen commented on FLINK-6683:
-

+1 to say Scala 2.12 does not work (only 2.10 and 2.11 work at the moment)

> building with Scala 2.11 no longer uses change-scala-version.sh
> ---
>
> Key: FLINK-6683
> URL: https://issues.apache.org/jira/browse/FLINK-6683
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Documentation
>Affects Versions: 1.3.0
>Reporter: David Anderson
> Fix For: 1.3.0
>
>
> FLINK-6414 eliminated change-scala-version.sh. The documentation 
> (setup/building.html) needs to be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6872) Add MissingOverride to checkstyle

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-6872.
-
Resolution: Won't Fix

Hmm, you're right, that isn't what I wanted.

> Add MissingOverride to checkstyle
> -
>
> Key: FLINK-6872
> URL: https://issues.apache.org/jira/browse/FLINK-6872
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> [Verifies|http://checkstyle.sourceforge.net/config_annotation.html#MissingOverride]
>  that the java.lang.Override annotation is present when the @inheritDoc 
> javadoc tag is present.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6869) Scala serializers do not have the serialVersionUID specified

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042744#comment-16042744
 ] 

ASF GitHub Bot commented on FLINK-6869:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4090
  
Overall, I think this is ok as a best effort until we have some eager 
registration that helps with the remaining problems in the heap backend.


> Scala serializers do not have the serialVersionUID specified
> 
>
> Key: FLINK-6869
> URL: https://issues.apache.org/jira/browse/FLINK-6869
> Project: Flink
>  Issue Type: Bug
>  Components: Scala API, Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> Currently, all Scala serializers, e.g. {{OptionSerializer}}, 
> {{CaseClassSerializer}}, {{TrySerializer}} etc. do not have the 
> serialVersionUID specified.
> In 1.3, the Scala serializer (all serializers in general) implementations had 
> to be changed since implementation of the compatibility methods 
> {{snapshotConfiguration}}, {{ensureCompatibility}} had to be implemented, 
> resulting in a new serialVersionUID.
> This means that when restoring from a snapshot pre-1.3 that contains Scala 
> types as state, the previous serializer in the snapshot cannot be 
> deserialized (due to UID mismatch).
> To fix this, we should specify the serialVersionUIDs of the Scala serializers 
> to be what they originally were pre-1.3. This would then allow users with 
> Scala types as state to restore from older versions.
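
As background on the failure mode: when no UID is declared, Java serialization 
derives one from the class's members, so adding the 1.3 compatibility methods 
changed the implicit UID. A minimal, self-contained Scala sketch of the 
mechanism (class name and UID value are illustrative, not Flink's):

{code}
import java.io._

// Without @SerialVersionUID the JVM derives a UID from the class's shape, so
// adding methods (as the 1.3 compatibility work did) changes the implicit UID
// and readObject fails with an InvalidClassException on old bytes.
@SerialVersionUID(1L) // pinning the UID keeps old bytes readable across changes
class Counter(val n: Int) extends Serializable

object UidDemo extends App {
  val buf = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(buf)
  out.writeObject(new Counter(42))
  out.flush()
  val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
  println(in.readObject().asInstanceOf[Counter].n) // 42
}
{code}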



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4090: [FLINK-6869] [scala] Specify serialVersionUID for all Sca...

2017-06-08 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4090
  
Overall, I think this is ok as a best effort until we have some eager 
registration that helps with the remaining problems in the heap backend.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6872) Add MissingOverride to checkstyle

2017-06-08 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042743#comment-16042743
 ] 

Chesnay Schepler commented on FLINK-6872:
-

The {{@inheritDoc}} tag seems to be rarely used in Flink; would it make sense 
to remove the remaining usages instead?

> Add MissingOverride to checkstyle
> -
>
> Key: FLINK-6872
> URL: https://issues.apache.org/jira/browse/FLINK-6872
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> [Verifies|http://checkstyle.sourceforge.net/config_annotation.html#MissingOverride]
>  that the java.lang.Override annotation is present when the @inheritDoc 
> javadoc tag is present.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6871) Obsolete instruction for changing scala version for build

2017-06-08 Thread David Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042736#comment-16042736
 ] 

David Anderson commented on FLINK-6871:
---

This issue duplicates FLINK-6683

> Obsolete instruction for changing scala version for build
> -
>
> Key: FLINK-6871
> URL: https://issues.apache.org/jira/browse/FLINK-6871
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: William Saar
>Priority: Minor
>
> The documentation at
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html
> says you should change the Scala version during the build with the script
> tools/change-scala-version.sh 2.11
> The script does not exist. How do you change the Scala version for the build?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6872) Add MissingOverride to checkstyle

2017-06-08 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-6872:
-

 Summary: Add MissingOverride to checkstyle
 Key: FLINK-6872
 URL: https://issues.apache.org/jira/browse/FLINK-6872
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.4.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Minor


[Verifies|http://checkstyle.sourceforge.net/config_annotation.html#MissingOverride]
 that the java.lang.Override annotation is present when the @inheritDoc javadoc 
tag is present.
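
For reference, enabling the check would be a one-line addition under the 
{{TreeWalker}} module of the checkstyle configuration (a sketch; the file name 
and surrounding modules depend on the project's setup):

{code}
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flags methods documented with {@inheritDoc} but missing @Override. -->
    <module name="MissingOverride"/>
  </module>
</module>
{code}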



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6869) Scala serializers do not have the serialVersionUID specified

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042720#comment-16042720
 ] 

ASF GitHub Bot commented on FLINK-6869:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4090
  
One caveat that this PR does not yet fully fix:
the deserialization of the anonymous class serializers 
(`CaseClassSerializer` and `TraversableSerializer`) can still fail even with 
the `serialVersionUID` specified, because there is no guarantee about the 
generated class name of anonymous classes (it depends on the order in which 
the anonymous classes were instantiated, and the naming format also seems to 
vary across compiler versions).

At this moment, I've hit a bit of a wall trying to resolve this. The 
problem existed pre-1.3 as well: if users simply change the order of their 
Scala type serializer generation (e.g. the call order of 
`createTypeInformation` for their Scala types), the class names change and 
they can no longer restore state.
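
A minimal Scala sketch of the naming behavior described above (the `$anon$N` 
values in the comments are what scalac typically produces, not a guarantee):

{code}
object AnonNameDemo extends App {
  trait Cond { def filter(s: String): Boolean }

  // scalac numbers anonymous classes in source order, so swapping these two
  // declarations swaps the generated class names -- and a serialized instance
  // of the old "$anon$1" can then no longer be resolved on restore.
  val startCond = new Cond { def filter(s: String) = s.startsWith("a-") }
  val endCond   = new Cond { def filter(s: String) = s.startsWith("b-") }

  println(startCond.getClass.getName) // e.g. AnonNameDemo$$anon$1
  println(endCond.getClass.getName)   // e.g. AnonNameDemo$$anon$2
}
{code}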


> Scala serializers do not have the serialVersionUID specified
> 
>
> Key: FLINK-6869
> URL: https://issues.apache.org/jira/browse/FLINK-6869
> Project: Flink
>  Issue Type: Bug
>  Components: Scala API, Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> Currently, all Scala serializers, e.g. {{OptionSerializer}}, 
> {{CaseClassSerializer}}, {{TrySerializer}} etc. do not have the 
> serialVersionUID specified.
> In 1.3, the Scala serializer (all serializers in general) implementations had 
> to be changed since implementation of the compatibility methods 
> {{snapshotConfiguration}}, {{ensureCompatibility}} had to be implemented, 
> resulting in a new serialVersionUID.
> This means that when restoring from a snapshot pre-1.3 that contains Scala 
> types as state, the previous serializer in the snapshot cannot be 
> deserialized (due to UID mismatch).
> To fix this, we should specify the serialVersionUIDs of the Scala serializers 
> to be what they originally were pre-1.3. This would then allow users with 
> Scala types as state to restore from older versions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4090: [FLINK-6869] [scala] Specify serialVersionUID for all Sca...

2017-06-08 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4090
  
One caveat that this PR does not yet fully fix:
the deserialization of the anonymous class serializers 
(`CaseClassSerializer` and `TraversableSerializer`) can still fail even with 
the `serialVersionUID` specified, because there is no guarantee about the 
generated class name of anonymous classes (it depends on the order in which 
the anonymous classes were instantiated, and the naming format also seems to 
vary across compiler versions).

At this moment, I've hit a bit of a wall trying to resolve this. The 
problem existed pre-1.3 as well: if users simply change the order of their 
Scala type serializer generation (e.g. the call order of 
`createTypeInformation` for their Scala types), the class names change and 
they can no longer restore state.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6871) Obsolete instruction for changing scala version for build

2017-06-08 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042719#comment-16042719
 ] 

Greg Hogan commented on FLINK-6871:
---

This should be corrected. The change-scala-version script is no longer needed; 
the Scala version can be changed in the parent {{pom.xml}} or by running Maven 
with {{-Dscala-2.11}}, as sketched below.
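
On current master the equivalent of the removed script is a single build 
property, so a full 2.11 build is roughly (goals illustrative):

{code}
mvn clean install -DskipTests -Dscala-2.11
{code}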

> Obsolete instruction for changing scala version for build
> -
>
> Key: FLINK-6871
> URL: https://issues.apache.org/jira/browse/FLINK-6871
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: William Saar
>Priority: Minor
>
> The documentation at
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html
> says you should change the Scala version during the build with the script
> tools/change-scala-version.sh 2.11
> The script does not exist. How do you change the Scala version for the build?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6772) Incorrect ordering of matched state events in Flink CEP

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042718#comment-16042718
 ] 

ASF GitHub Bot commented on FLINK-6772:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4084


> Incorrect ordering of matched state events in Flink CEP
> ---
>
> Key: FLINK-6772
> URL: https://issues.apache.org/jira/browse/FLINK-6772
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Kostas Kloudas
>  Labels: flink-rel-1.3.1-blockers
>
> I've stumbled across an unexepected ordering of the matched state events. 
> Pattern:
> {code}
> Pattern<String, ?> pattern = Pattern
>     .<String>begin("start")
>     .where(new IterativeCondition<String>() {
>         @Override
>         public boolean filter(String s, Context<String> context) throws Exception {
>             return s.startsWith("a-");
>         }
>     }).times(4).allowCombinations()
>     .followedByAny("end")
>     .where(new IterativeCondition<String>() {
>         @Override
>         public boolean filter(String s, Context<String> context) throws Exception {
>             return s.startsWith("b-");
>         }
>     }).times(3).consecutive();
> {code}
> Input event sequence:
> a-1, a-2, a-3, a-4, b-1, b-2, b-3
> On b-3 a matched pattern would be triggered.
> Now, in the {{Map<String, List<String>>}} passed via {{select}} in 
> {{PatternSelectFunction}}, the list for the "end" state is:
> b-3, b-1, b-2.
> Based on the timestamp of the events (simply using processing time), the 
> correct order should be b-1, b-2, b-3.
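
A tiny Scala illustration of the expected, timestamp-ordered result (the 
timestamps are hypothetical):

{code}
object OrderDemo extends App {
  // Matched events for the "end" state, paired with hypothetical timestamps.
  val matched = List(("b-3", 3000L), ("b-1", 1000L), ("b-2", 2000L))

  // Sorting by timestamp yields the order the reporter expects.
  val ordered = matched.sortBy(_._2).map(_._1)
  println(ordered) // List(b-1, b-2, b-3)
}
{code}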



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4084: [FLINK-6772] [cep] Fix ordering (by timestamp) of ...

2017-06-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4084


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6869) Scala serializers do not have the serialVersionUID specified

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042717#comment-16042717
 ] 

ASF GitHub Bot commented on FLINK-6869:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4090
  
R: @StefanRRichter @aljoscha tagging you because I talked to you about the 
issue offline :) Could you have a quick look?


> Scala serializers do not have the serialVersionUID specified
> 
>
> Key: FLINK-6869
> URL: https://issues.apache.org/jira/browse/FLINK-6869
> Project: Flink
>  Issue Type: Bug
>  Components: Scala API, Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> Currently, all Scala serializers, e.g. {{OptionSerializer}}, 
> {{CaseClassSerializer}}, {{TrySerializer}} etc. do not have the 
> serialVersionUID specified.
> In 1.3, the Scala serializer (all serializers in general) implementations had 
> to be changed since implementation of the compatibility methods 
> {{snapshotConfiguration}}, {{ensureCompatibility}} had to be implemented, 
> resulting in a new serialVersionUID.
> This means that when restoring from a snapshot pre-1.3 that contains Scala 
> types as state, the previous serializer in the snapshot cannot be 
> deserialized (due to UID mismatch).
> To fix this, we should specify the serialVersionUIDs of the Scala serializers 
> to be what they originally were pre-1.3. This would then allow users with 
> Scala types as state to restore from older versions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4090: [FLINK-6869] [scala] Specify serialVersionUID for all Sca...

2017-06-08 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4090
  
R: @StefanRRichter @aljoscha tagging you because I talked to you about the 
issue offline :) Could you have a quick look?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6869) Scala serializers do not have the serialVersionUID specified

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042715#comment-16042715
 ] 

ASF GitHub Bot commented on FLINK-6869:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/4090

[FLINK-6869] [scala] Specify serialVersionUID for all Scala serializers

This PR fixes 2 issues:

1. Configuration snapshots of Scala serializers were not readable:
Prior to this PR, the configuration snapshot classes of Scala serializers 
did not have the proper default empty constructor that is used for 
deserializing the configuration snapshot.

Since some Scala serializers' config snapshots extend the Java 
`CompositeTypeSerializerConfigSnapshot`, their config snapshot classes are also 
changed to be implemented in Java since in Scala we can only call a single base 
class constructor from subclasses.

2. Scala serializers did not specify the serialVersionUID:
Previously, the Scala serializers did not specify a `serialVersionUID`, which 
prevented restoring from snapshots taken with previous Flink versions because 
the serializers' implementations changed in 1.3.

The `serialVersionUID`s added in this PR are identical to what they were 
(as generated by Java) in Flink 1.2, so that we can at least restore state that 
was written with the Scala serializers as of 1.2.
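
A minimal Scala sketch of the constructor limitation behind point 1 (stand-in 
classes, not the Flink types):

{code}
// In Java, each subclass constructor may call a *different* super constructor:
// Sub() -> super() (the nullary path deserialization needs), Sub(data) -> super(data).
class Base(val data: String) {
  def this() = this("empty") // stand-in for the empty ctor used on deserialization
}

class ScalaSub(data: String) extends Base(data) {
  // In Scala, only the primary constructor may invoke the superclass
  // constructor; this auxiliary one can only chain to the primary above.
  // That is why the config snapshot classes were moved to Java.
  def this() = this("empty")
}
{code}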


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-6869

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4090


commit 416bd3b122e79bdd8b5876e8d645b346110b67f0
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-08T06:52:04Z

[hotfix] [scala] Fix instantiation of Scala serializers' config snapshot 
classes

Prior to this commit, the configuration snapshot classes of Scala
serializers did not have the proper default empty constructor that is
used for deserializing the configuration snapshot.

Since some Scala serializers' config snapshots extend the Java
CompositeTypeSerializerConfigSnapshot, their config snapshot classes are
also changed to be implemented in Java since in Scala we can only call a
single base class constructor from subclasses.

commit 16574c6623dd64846c888e6a608deb9ae3f081bd
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-08T13:29:45Z

[FLINK-6869] [scala] Specify serialVersionUID for all Scala serializers

Previously, the Scala serializers did not specify a serialVersionUID, which
prevented restoring from snapshots taken with previous Flink versions
because the serializers' implementations changed.

The serialVersionUIDs added in this commit are identical to what they
were (as generated by Java) in Flink 1.2, so that we can at least
restore state that was written with the Scala serializers as of 1.2.




> Scala serializers do not have the serialVersionUID specified
> 
>
> Key: FLINK-6869
> URL: https://issues.apache.org/jira/browse/FLINK-6869
> Project: Flink
>  Issue Type: Bug
>  Components: Scala API, Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1
>
>
> Currently, all Scala serializers, e.g. {{OptionSerializer}}, 
> {{CaseClassSerializer}}, {{TrySerializer}} etc. do not have the 
> serialVersionUID specified.
> In 1.3, the Scala serializer (all serializers in general) implementations had 
> to be changed since implementation of the compatibility methods 
> {{snapshotConfiguration}}, {{ensureCompatibility}} had to be implemented, 
> resulting in a new serialVersionUID.
> This means that when restoring from a snapshot pre-1.3 that contains Scala 
> types as state, the previous serializer in the snapshot cannot be 
> deserialized (due to UID mismatch).
> To fix this, we should specify the serialVersionUIDs of the Scala serializers 
> to be what they originally were pre-1.3. This would then allow users with 
> Scala types as state to restore from older versions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4090: [FLINK-6869] [scala] Specify serialVersionUID for ...

2017-06-08 Thread tzulitai
GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/4090

[FLINK-6869] [scala] Specify serialVersionUID for all Scala serializers

This PR fixes 2 issues:

1. Configuration snapshots of Scala serializers were not readable:
Prior to this PR, the configuration snapshot classes of Scala serializers 
did not have the proper default empty constructor that is used for 
deserializing the configuration snapshot.

Since some Scala serializers' config snapshots extend the Java 
`CompositeTypeSerializerConfigSnapshot`, their config snapshot classes are also 
changed to be implemented in Java since in Scala we can only call a single base 
class constructor from subclasses.

2. Scala serializers did not specify the serialVersionUID:
Previously, the Scala serializers did not specify a `serialVersionUID`, which 
prevented restoring from snapshots taken with previous Flink versions because 
the serializers' implementations changed in 1.3.

The `serialVersionUID`s added in this PR are identical to what they were 
(as generated by Java) in Flink 1.2, so that we can at least restore state that 
was written with the Scala serializers as of 1.2.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-6869

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4090


commit 416bd3b122e79bdd8b5876e8d645b346110b67f0
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-08T06:52:04Z

[hotfix] [scala] Fix instantiation of Scala serializers' config snapshot 
classes

Prior to this commit, the configuration snapshot classes of Scala
serializers did not have the proper default empty constructor that is
used for deserializing the configuration snapshot.

Since some Scala serializers' config snapshots extend the Java
CompositeTypeSerializerConfigSnapshot, their config snapshot classes are
also changed to be implemented in Java since in Scala we can only call a
single base class constructor from subclasses.

commit 16574c6623dd64846c888e6a608deb9ae3f081bd
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-08T13:29:45Z

[FLINK-6869] [scala] Specify serialVersionUID for all Scala serializers

Previously, the Scala serializers did not specify a serialVersionUID, which
prevented restoring from snapshots taken with previous Flink versions
because the serializers' implementations changed.

The serialVersionUIDs added in this commit are identical to what they
were (as generated by Java) in Flink 1.2, so that we can at least
restore state that was written with the Scala serializers as of 1.2.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042691#comment-16042691
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4089
  
A hotfix is for when no FLINK ticket has been created, and typically no PR. 
This commit header should be something like `[FLINK-6783] [streaming]`.


> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of wrongly acquired output type for 
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
> 
>     DataStream<Tuple3<String, String, Integer>> window = source
>             .keyBy(new TupleKeySelector())
>             .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>             .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>                 @Override
>                 public Tuple2<String, Integer> createAccumulator() {
>                     return Tuple2.of("", 0);
>                 }
> 
>                 @Override
>                 public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>                 }
> 
>                 @Override
>                 public String getResult(Tuple2<String, Integer> accumulator) {
>                     return accumulator.f0;
>                 }
> 
>                 @Override
>                 public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                     return Tuple2.of("", 0);
>                 }
>             }, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
>                 @Override
>                 public void apply(
>                         String s,
>                         TimeWindow window,
>                         Iterable<String> input,
>                         Collector<Tuple3<String, String, Integer>> out) throws Exception {
>                     out.collect(Tuple3.of("", "", 0));
>                 }
>             });
> 
>     OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
>             (OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>>) window.getTransformation();
>     OneInputStreamOperator<Tuple2<String, Integer>, Tuple3<String, String, Integer>> operator = transform.getOperator();
>     Assert.assertTrue(operator instanceof WindowOperator);
>     WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator =
>             (WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;
>     Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
>     Assert.assertTrue(winOperator.getWindowAssigner() instanceof TumblingEventTimeWindows);
>     Assert.assertTrue(winOperator.getStateDescriptor() instanceof AggregatingStateDescriptor);
>     processElementAndEnsureOutput(
>             operator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
> }
> {code}
> The test results in 
> {code}
> org.apache.flink.api.common.functions.InvalidTypesException: Input mismatch: Tuple type expected.
>     at org.apache.flink.api.java.typeutils.TypeExtractor.validateInputType(TypeExtractor.java:1157)
>     at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:451)
>     at org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(WindowedStream.java:855)
>     at org.apache.flink.streaming.runtime.operators.windowing.WindowTranslationTest.testAggregateWithWindowFunctionDifferentResultTypes(WindowTranslationTest.java:702)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> 

[GitHub] flink issue #4089: [FLINK-6783][hotfix] Removed lamba indices for abstract c...

2017-06-08 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4089
  
A hotfix is for when no FLINK ticket has been created, and typically no PR. 
This commit header should be something like `[FLINK-6783] [streaming]`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6871) Obsolete instruction for changing scala version for build

2017-06-08 Thread William Saar (JIRA)
William Saar created FLINK-6871:
---

 Summary: Obsolete instruction for changing scala version for build
 Key: FLINK-6871
 URL: https://issues.apache.org/jira/browse/FLINK-6871
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.3.0
Reporter: William Saar
Priority: Minor


The documentation at
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html

says you should change the Scala version during the build with the script
tools/change-scala-version.sh 2.11

The script does not exist. How do you change the Scala version for the build?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6868:
--
Priority: Critical  (was: Blocker)

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Critical
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.
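
Concretely, the fix amounts to referencing the dependency with the property 
rather than a hard-coded Scala suffix — a sketch of the pom fragment (the 
version element is illustrative):

{code}
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
	<version>${project.version}</version>
</dependency>
{code}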



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6868:
--
Affects Version/s: 1.4.0  (was: 1.3.0)

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6868:
--
Priority: Blocker  (was: Critical)

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6868:
--
Fix Version/s: (was: 1.3.1)

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-6868:
--
Issue Type: Bug  (was: Improvement)

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-08 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042682#comment-16042682
 ] 

Greg Hogan commented on FLINK-6868:
---

The affected code was added by FLINK-4497 in the master branch for the 1.4 
release.

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.3.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.3.1, 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042678#comment-16042678
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

Github user dawidwys closed the pull request at:

https://github.com/apache/flink/pull/4039


> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of wrongly acquired output type for 
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
> 
>     DataStream<Tuple3<String, String, Integer>> window = source
>             .keyBy(new TupleKeySelector())
>             .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>             .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>                 @Override
>                 public Tuple2<String, Integer> createAccumulator() {
>                     return Tuple2.of("", 0);
>                 }
> 
>                 @Override
>                 public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>                 }
> 
>                 @Override
>                 public String getResult(Tuple2<String, Integer> accumulator) {
>                     return accumulator.f0;
>                 }
> 
>                 @Override
>                 public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                     return Tuple2.of("", 0);
>                 }
>             }, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
>                 @Override
>                 public void apply(
>                         String s,
>                         TimeWindow window,
>                         Iterable<String> input,
>                         Collector<Tuple3<String, String, Integer>> out) throws Exception {
>                     out.collect(Tuple3.of("", "", 0));
>                 }
>             });
> 
>     OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
>             (OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>>) window.getTransformation();
>     OneInputStreamOperator<Tuple2<String, Integer>, Tuple3<String, String, Integer>> operator = transform.getOperator();
>     Assert.assertTrue(operator instanceof WindowOperator);
>     WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator =
>             (WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;
>     Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
>     Assert.assertTrue(winOperator.getWindowAssigner() instanceof TumblingEventTimeWindows);
>     Assert.assertTrue(winOperator.getStateDescriptor() instanceof AggregatingStateDescriptor);
>     processElementAndEnsureOutput(
>             operator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
> }
> {code}
> The test results in 
> {code}
> org.apache.flink.api.common.functions.InvalidTypesException: Input mismatch: Tuple type expected.
>     at org.apache.flink.api.java.typeutils.TypeExtractor.validateInputType(TypeExtractor.java:1157)
>     at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:451)
>     at org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(WindowedStream.java:855)
>     at org.apache.flink.streaming.runtime.operators.windowing.WindowTranslationTest.testAggregateWithWindowFunctionDifferentResultTypes(WindowTranslationTest.java:702)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> 

[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042677#comment-16042677
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4039
  
I created a hotfix for the discussed issue: #4089. I will close this PR, 
then.


> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of wrongly acquired output type for 
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
> 
>     DataStream<Tuple3<String, String, Integer>> window = source
>             .keyBy(new TupleKeySelector())
>             .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>             .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>                 @Override
>                 public Tuple2<String, Integer> createAccumulator() {
>                     return Tuple2.of("", 0);
>                 }
> 
>                 @Override
>                 public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>                 }
> 
>                 @Override
>                 public String getResult(Tuple2<String, Integer> accumulator) {
>                     return accumulator.f0;
>                 }
> 
>                 @Override
>                 public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                     return Tuple2.of("", 0);
>                 }
>             }, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
>                 @Override
>                 public void apply(
>                         String s,
>                         TimeWindow window,
>                         Iterable<String> input,
>                         Collector<Tuple3<String, String, Integer>> out) throws Exception {
>                     out.collect(Tuple3.of("", "", 0));
>                 }
>             });
> 
>     OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
>             (OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>>) window.getTransformation();
>     OneInputStreamOperator<Tuple2<String, Integer>, Tuple3<String, String, Integer>> operator = transform.getOperator();
>     Assert.assertTrue(operator instanceof WindowOperator);
>     WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?> winOperator =
>             (WindowOperator<String, Tuple2<String, Integer>, ?, ?, ?>) operator;
>     Assert.assertTrue(winOperator.getTrigger() instanceof EventTimeTrigger);
>     Assert.assertTrue(winOperator.getWindowAssigner() instanceof TumblingEventTimeWindows);
>     Assert.assertTrue(winOperator.getStateDescriptor() instanceof AggregatingStateDescriptor);
>     processElementAndEnsureOutput(
>             operator, winOperator.getKeySelector(), BasicTypeInfo.STRING_TYPE_INFO, new Tuple2<>("hello", 1));
> }
> {code}
> The test results in 
> {code}
> org.apache.flink.api.common.functions.InvalidTypesException: Input mismatch: Tuple type expected.
>     at org.apache.flink.api.java.typeutils.TypeExtractor.validateInputType(TypeExtractor.java:1157)
>     at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:451)
>     at org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(WindowedStream.java:855)
>     at org.apache.flink.streaming.runtime.operators.windowing.WindowTranslationTest.testAggregateWithWindowFunctionDifferentResultTypes(WindowTranslationTest.java:702)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 

[GitHub] flink pull request #4039: [FLINK-6783] Changed passing index of type argumen...

2017-06-08 Thread dawidwys
Github user dawidwys closed the pull request at:

https://github.com/apache/flink/pull/4039


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4039: [FLINK-6783] Changed passing index of type argument while...

2017-06-08 Thread dawidwys
Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4039
  
I created a hotfix for the discussed issue: #4089. I will close this PR, 
then.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042675#comment-16042675
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/4089

[FLINK-6783][hotfix] Removed lamba indices for abstract classes

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink lambda-type-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4089.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4089


commit ed19187f2c918dd283b7e0cea263efb175d73742
Author: Dawid Wysakowicz 
Date:   2017-06-08T13:13:58Z

[FLINK-6783][hotfix] Removed lamba indices for abstract classes




> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of wrongly acquired output type for 
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
> 
>     DataStream<Tuple3<String, String, Integer>> window = source
>             .keyBy(new TupleKeySelector())
>             .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>             .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>                 @Override
>                 public Tuple2<String, Integer> createAccumulator() {
>                     return Tuple2.of("", 0);
>                 }
> 
>                 @Override
>                 public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>                 }
> 
>                 @Override
>                 public String getResult(Tuple2<String, Integer> accumulator) {
>                     return accumulator.f0;
>                 }
> 
>                 @Override
>                 public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                     return Tuple2.of("", 0);
>                 }
>             }, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
>                 @Override
>                 public void apply(
>                         String s,
>                         TimeWindow window,
>                         Iterable<String> input,
>                         Collector<Tuple3<String, String, Integer>> out) throws Exception {
>                     out.collect(Tuple3.of("", "", 0));
>                 }
>             });
>     OneInputTransformation<Tuple2<String, Integer>, Tuple3<String, String, Integer>> transform =
> 

[GitHub] flink pull request #4089: [FLINK-6783][hotfix] Removed lamba indices for abs...

2017-06-08 Thread dawidwys
GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/4089

[FLINK-6783][hotfix] Removed lamba indices for abstract classes

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink lambda-type-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4089.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4089


commit ed19187f2c918dd283b7e0cea263efb175d73742
Author: Dawid Wysakowicz 
Date:   2017-06-08T13:13:58Z

[FLINK-6783][hotfix] Removed lamba indices for abstract classes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042665#comment-16042665
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4039#discussion_r120882223
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AllWindowedStream.java
 ---
@@ -507,12 +505,41 @@ public AllWindowedStream(DataStream<T> input,
		TypeInformation<V> aggResultType = TypeExtractor.getAggregateFunctionReturnType(
			aggFunction, input.getType(), null, false);
 
-		TypeInformation<R> resultType = TypeExtractor.getUnaryOperatorReturnType(
-			windowFunction, AllWindowFunction.class, true, true, aggResultType, null, false);
+		TypeInformation<R> resultType = getAllWindowFunctionReturnType(windowFunction, aggResultType);
 
		return aggregate(aggFunction, windowFunction, accumulatorType, aggResultType, resultType);
	}
 
+	private static <IN, OUT> TypeInformation<OUT> getAllWindowFunctionReturnType(
+			AllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			AllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
+			new int[]{2, 0},
+			inType,
+			null,
+			false);
+	}
+
+	private static <IN, OUT> TypeInformation<OUT> getProcessAllWindowFunctionReturnType(
+			ProcessAllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			ProcessAllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
--- End diff --

Yes, it is not a serious problem. But we should change it to be consistent. 
@dawidwys Can you create a hot fix for it?


> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of wrongly acquired output type for 
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
> 
>     DataStream<Tuple3<String, String, Integer>> window = source
>             .keyBy(new TupleKeySelector())
>             .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>             .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>                 @Override
>                 public Tuple2<String, Integer> createAccumulator() {
>                     return Tuple2.of("", 0);
>                 }
> 
>                 @Override
>                 public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>                 }
> 
>                 @Override
>                 public String getResult(Tuple2<String, Integer> accumulator) {
>                     return accumulator.f0;
>                 }
> 
>                 @Override
>                 public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                     return Tuple2.of("", 0);
>                 }
>             }, new WindowFunction<String, Tuple3<String, String, Integer>, String, TimeWindow>() {
>                 @Override
>                 public void apply(
>                         String s,
>                         TimeWindow window,
>                         Iterable<String> input,
>                         Collector<Tuple3<String, String, Integer>> out) throws Exception {
> 

[GitHub] flink pull request #4039: [FLINK-6783] Changed passing index of type argumen...

2017-06-08 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4039#discussion_r120882223
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AllWindowedStream.java
 ---
@@ -507,12 +505,41 @@ public AllWindowedStream(DataStream<T> input,
		TypeInformation<V> aggResultType = TypeExtractor.getAggregateFunctionReturnType(
			aggFunction, input.getType(), null, false);
 
-		TypeInformation<R> resultType = TypeExtractor.getUnaryOperatorReturnType(
-			windowFunction, AllWindowFunction.class, true, true, aggResultType, null, false);
+		TypeInformation<R> resultType = getAllWindowFunctionReturnType(windowFunction, aggResultType);
 
		return aggregate(aggFunction, windowFunction, accumulatorType, aggResultType, resultType);
	}
 
+	private static <IN, OUT> TypeInformation<OUT> getAllWindowFunctionReturnType(
+			AllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			AllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
+			new int[]{2, 0},
+			inType,
+			null,
+			false);
+	}
+
+	private static <IN, OUT> TypeInformation<OUT> getProcessAllWindowFunctionReturnType(
+			ProcessAllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			ProcessAllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
--- End diff --

Yes, it is not a serious problem. But we should change it to be consistent. 
@dawidwys Can you create a hot fix for it?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-6870) Combined batch and stream TableSource can not produce same time attributes

2017-06-08 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-6870:

Affects Version/s: 1.4.0

> Combined batch and stream TableSource can not produce same time attributes
> --
>
> Key: FLINK-6870
> URL: https://issues.apache.org/jira/browse/FLINK-6870
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Timo Walther
>
> If a class implements both {{BatchTableSource}} and {{StreamTableSource}}, it 
> is not possible to declare a time attribute which is valid for both 
> environments. For batch it should be a regular field, but not for streaming. 
> The {{getReturnType}} method does not know the environment in which it is 
> called.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6870) Combined batch and stream TableSource can not produce same time attributes

2017-06-08 Thread Timo Walther (JIRA)
Timo Walther created FLINK-6870:
---

 Summary: Combined batch and stream TableSource can not produce 
same time attributes
 Key: FLINK-6870
 URL: https://issues.apache.org/jira/browse/FLINK-6870
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther


If a class implements both {{BatchTableSource}} and {{StreamTableSource}}, it 
is not possible to declare a time attribute which is valid for both 
environments. For batch it should be a regular field, but not for streaming. 
The {{getReturnType}} method does not know the environment in which it is 
called.
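
A minimal Scala sketch of the structural problem, using stand-in traits 
(trimmed, hypothetical shapes — not the real Flink interfaces):

{code}
// Both source flavours inherit the single schema accessor.
trait TableSourceShape { def getReturnType: Seq[String] }
trait BatchShape extends TableSourceShape   // wants "rowtime" as a regular field
trait StreamShape extends TableSourceShape  // wants "rowtime" as a time attribute

class HybridSource extends BatchShape with StreamShape {
  // One implementation must answer for both environments: it cannot declare
  // "rowtime" as a plain field for batch and as a time attribute for
  // streaming, because it never learns which environment is asking.
  def getReturnType: Seq[String] = Seq("user", "amount", "rowtime")
}
{code}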



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6783) Wrongly extracted TypeInformations for WindowedStream::aggregate

2017-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042628#comment-16042628
 ] 

ASF GitHub Bot commented on FLINK-6783:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4039#discussion_r120875203
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AllWindowedStream.java
 ---
@@ -507,12 +505,41 @@ public AllWindowedStream(DataStream<T> input,
		TypeInformation<V> aggResultType = TypeExtractor.getAggregateFunctionReturnType(
			aggFunction, input.getType(), null, false);
 
-		TypeInformation<R> resultType = TypeExtractor.getUnaryOperatorReturnType(
-			windowFunction, AllWindowFunction.class, true, true, aggResultType, null, false);
+		TypeInformation<R> resultType = getAllWindowFunctionReturnType(windowFunction, aggResultType);
 
		return aggregate(aggFunction, windowFunction, accumulatorType, aggResultType, resultType);
	}
 
+	private static <IN, OUT> TypeInformation<OUT> getAllWindowFunctionReturnType(
+			AllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			AllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
+			new int[]{2, 0},
+			inType,
+			null,
+			false);
+	}
+
+	private static <IN, OUT> TypeInformation<OUT> getProcessAllWindowFunctionReturnType(
+			ProcessAllWindowFunction<IN, OUT, ?> function,
+			TypeInformation<IN> inType) {
+		return TypeExtractor.getUnaryOperatorReturnType(
+			function,
+			ProcessAllWindowFunction.class,
+			0,
+			1,
+			new int[]{1, 0},
--- End diff --

Ha, it seems I might have merged too fast. What will happen if we leave it
as is? Shouldn't the analysis simply fail? We should probably just push a hot
fix for this.

Sorry for the inconvenience.
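
To make the question concrete, here is the complete call from the {{AllWindowFunction}} variant above with each argument annotated; the parameter roles are inferred from the diff itself, not quoted from {{TypeExtractor}}:

{code}
private static <IN, OUT> TypeInformation<OUT> getAllWindowFunctionReturnType(
        AllWindowFunction<IN, OUT, ?> function,
        TypeInformation<IN> inType) {
    return TypeExtractor.getUnaryOperatorReturnType(
        function,                 // user function to inspect
        AllWindowFunction.class,  // base class whose type variables get resolved
        0,                        // position of IN in AllWindowFunction<IN, OUT, W>
        1,                        // position of OUT in AllWindowFunction<IN, OUT, W>
        new int[]{1, 0},          // path to IN when the function is a lambda:
                                  // parameter 1 (Iterable<IN>), type argument 0
        new int[]{2, 0},          // path to OUT: parameter 2 (Collector<OUT>),
                                  // type argument 0 -- the kind of index in question
        inType,                   // the known input TypeInformation
        null,                     // function name used in error messages
        false);                   // a failed extraction should throw, not be tolerated
}
{code}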


> Wrongly extracted TypeInformations for WindowedStream::aggregate
> 
>
> Key: FLINK-6783
> URL: https://issues.apache.org/jira/browse/FLINK-6783
> Project: Flink
>  Issue Type: Bug
>  Components: Core, DataStream API
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: flink-rel-1.3.1-blockers
> Fix For: 1.3.1, 1.4.0
>
>
> The following test fails because of a wrongly extracted output type for the
> {{AggregateFunction}}:
> {code}
> @Test
> public void testAggregateWithWindowFunctionDifferentResultTypes() throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>     DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1), Tuple2.of("hello", 2));
>     DataStream<String> window = source
>         .keyBy(new TupleKeySelector())
>         .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
>         .aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
>             @Override
>             public Tuple2<String, Integer> createAccumulator() {
>                 return Tuple2.of("", 0);
>             }
>             @Override
>             public void add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
>             }
>             @Override
>             public String getResult(Tuple2<String, Integer> accumulator) {
>                 return accumulator.f0;
>             }
>             @Override
>             public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
>                 return Tuple2.of("", 0);
>             }
>         }, new WindowFunction<String, String, String, TimeWindow>() {
>             @Override
>             public void apply(
>                 String s,
>                 TimeWindow window,
>                 Iterable<String> input,
> 

[GitHub] flink pull request #4039: [FLINK-6783] Changed passing index of type argumen...

2017-06-08 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4039#discussion_r120875203
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AllWindowedStream.java
 ---
@@ -507,12 +505,41 @@ public AllWindowedStream(DataStream<T> input,
        TypeInformation<V> aggResultType = TypeExtractor.getAggregateFunctionReturnType(
            aggFunction, input.getType(), null, false);

-        TypeInformation<R> resultType = TypeExtractor.getUnaryOperatorReturnType(
-            windowFunction, AllWindowFunction.class, true, true, aggResultType, null, false);
+        TypeInformation<R> resultType = getAllWindowFunctionReturnType(windowFunction, aggResultType);

        return aggregate(aggFunction, windowFunction, accumulatorType, aggResultType, resultType);
    }

+    private static <IN, OUT> TypeInformation<OUT> getAllWindowFunctionReturnType(
+            AllWindowFunction<IN, OUT, ?> function,
+            TypeInformation<IN> inType) {
+        return TypeExtractor.getUnaryOperatorReturnType(
+            function,
+            AllWindowFunction.class,
+            0,
+            1,
+            new int[]{1, 0},
+            new int[]{2, 0},
+            inType,
+            null,
+            false);
+    }
+
+    private static <IN, OUT> TypeInformation<OUT> getProcessAllWindowFunctionReturnType(
+            ProcessAllWindowFunction<IN, OUT, ?> function,
+            TypeInformation<IN> inType) {
+        return TypeExtractor.getUnaryOperatorReturnType(
+            function,
+            ProcessAllWindowFunction.class,
+            0,
+            1,
+            new int[]{1, 0},
--- End diff --

Ha, it seems I might have merged too fast. What will happen if we leave it
as is? Shouldn't the analysis simply fail? We should probably just push a hot
fix for this.

Sorry for the inconvenience.



