[jira] [Commented] (FLINK-6711) Activate strict checkstyle for flink-connectors

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027628#comment-16027628
 ] 

ASF GitHub Bot commented on FLINK-6711:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3992
  
merging.


> Activate strict checkstyle for flink-connectors
> ---
>
> Key: FLINK-6711
> URL: https://issues.apache.org/jira/browse/FLINK-6711
> Project: Flink
>  Issue Type: Sub-task
>  Components: Batch Connectors and Input/Output Formats, Streaming 
> Connectors
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink issue #3992: [FLINK-6711] Activate strict checkstyle for flink-connect...

2017-05-27 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3992
  
merging.




[jira] [Commented] (FLINK-6711) Activate strict checkstyle for flink-connectors

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027591#comment-16027591
 ] 

ASF GitHub Bot commented on FLINK-6711:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3992
  
+1


> Activate strict checkstyle for flink-connectors
> ---
>
> Key: FLINK-6711
> URL: https://issues.apache.org/jira/browse/FLINK-6711
> Project: Flink
>  Issue Type: Sub-task
>  Components: Batch Connectors and Input/Output Formats, Streaming 
> Connectors
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink issue #3992: [FLINK-6711] Activate strict checkstyle for flink-connect...

2017-05-27 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3992
  
+1




[jira] [Closed] (FLINK-6720) Activate strict checkstyle for flink-java8

2017-05-27 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6720.
---
Resolution: Fixed

1.4: b7085de440e0e29c010d242846a74d3fd923cde7 & 
7355a59f48fcda834a256f7925e00c66312494ed

> Activate strict checkstyle for flink-java8
> --
>
> Key: FLINK-6720
> URL: https://issues.apache.org/jira/browse/FLINK-6720
> Project: Flink
>  Issue Type: Sub-task
>  Components: Java API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6720) Activate strict checkstyle for flink-java8

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027552#comment-16027552
 ] 

ASF GitHub Bot commented on FLINK-6720:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3999


> Activate strict checkstyle for flink-java8
> --
>
> Key: FLINK-6720
> URL: https://issues.apache.org/jira/browse/FLINK-6720
> Project: Flink
>  Issue Type: Sub-task
>  Components: Java API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #3999: [FLINK-6720] Activate strict checkstyle in flink-j...

2017-05-27 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3999




[jira] [Commented] (FLINK-6720) Activate strict checkstyle for flink-java8

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027509#comment-16027509
 ] 

ASF GitHub Bot commented on FLINK-6720:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3999
  
merging.


> Activate strict checkstyle for flink-java8
> --
>
> Key: FLINK-6720
> URL: https://issues.apache.org/jira/browse/FLINK-6720
> Project: Flink
>  Issue Type: Sub-task
>  Components: Java API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink issue #3999: [FLINK-6720] Activate strict checkstyle in flink-java8

2017-05-27 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3999
  
merging.




[jira] [Commented] (FLINK-6376) when deploy flink cluster on the yarn, it is lack of hdfs delegation token.

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027507#comment-16027507
 ] 

ASF GitHub Bot commented on FLINK-6376:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3776
  
+1


> when deploy flink cluster on the yarn, it is lack of hdfs delegation token.
> ---
>
> Key: FLINK-6376
> URL: https://issues.apache.org/jira/browse/FLINK-6376
> Project: Flink
>  Issue Type: Bug
>  Components: Security, YARN
>Reporter: zhangrucong1982
>Assignee: zhangrucong1982
>
> 1. I use Flink 1.2.0 and deploy the Flink cluster on
> YARN. The Hadoop version is 2.7.2.
> 2. I use Flink in secure mode with a keytab and principal. The key
> configuration is security.kerberos.login.keytab: /home/ketab/test.keytab
> and security.kerberos.login.principal: test.
> 3. The YARN configuration is the default, with log aggregation enabled
> ("yarn.log-aggregation-enable: true").
> 4. When the Flink cluster is deployed on YARN, the YARN NodeManager hits the
> following failure while aggregating logs to HDFS. The root cause is a missing
> HDFS delegation token.
>  java.io.IOException: Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: 
> "SZV1000258954/10.162.181.24"; destination host is: "SZV1000258954":25000;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:796)
> at org.apache.hadoop.ipc.Client.call(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1447)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy26.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:802)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:201)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
> at com.sun.proxy.$Proxy27.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1919)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1500)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1496)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1496)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.checkExists(LogAggregationService.java:271)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.access$100(LogAggregationService.java:68)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$1.run(LogAggregationService.java:299)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1769)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.createAppDir(LogAggregationService.java:284)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initAppAggregator(LogAggregationService.java:390)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.initApp(LogAggregationService.java:342)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:470)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService.handle(LogAggregationService.java:68)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:194)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:120)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: 
> 

[GitHub] flink issue #3776: [FLINK-6376]when deploy flink cluster on the yarn, it is ...

2017-05-27 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3776
  
+1




[jira] [Commented] (FLINK-6720) Activate strict checkstyle for flink-java8

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027503#comment-16027503
 ] 

ASF GitHub Bot commented on FLINK-6720:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3999
  
+1


> Activate strict checkstyle for flink-java8
> --
>
> Key: FLINK-6720
> URL: https://issues.apache.org/jira/browse/FLINK-6720
> Project: Flink
>  Issue Type: Sub-task
>  Components: Java API
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink issue #3999: [FLINK-6720] Activate strict checkstyle in flink-java8

2017-05-27 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3999
  
+1




[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027501#comment-16027501
 ] 

Chesnay Schepler commented on FLINK-6742:
-

 I would keep it, at the very least for improving the error message.

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Minor
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027499#comment-16027499
 ] 

Gyula Fora commented on FLINK-6742:
---

Thank you for the help Chesnay! Should we close this JIRA?

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Updated] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-6742:
--
Priority: Minor  (was: Critical)

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Minor
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027498#comment-16027498
 ] 

Gyula Fora commented on FLINK-6742:
---

They are incompatible due to some custom state backend code; that's really my
problem :)

I like option 2, but for now I just went with adding an extra null check to the
conversion step to avoid the NullPointerException.
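Purely as an illustration of the kind of guard meant here, a sketch (not the actual {{SavepointV2}} conversion code; {{taskStates}} and {{vertexId}} are made-up names):

{code}
// Sketch only: guard the lookup that currently fails with a NullPointerException
// during savepoint conversion. `taskStates` stands in for the 1.2 savepoint's
// task-state mapping.
def stateForTask[K, V](taskStates: Map[K, V], vertexId: K): Option[V] = {
  val state = taskStates.get(vertexId)
  if (state.isEmpty) {
    // Either skip the task (the null-check workaround above) or fail with a
    // descriptive message (the "improve the error message" route).
    println(s"No 1.2 task state found for $vertexId; skipping it during conversion.")
  }
  state
}
{code}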

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027496#comment-16027496
 ] 

Chesnay Schepler commented on FLINK-6742:
-

Why is the state incompatible? Is this tied to upgrading Flink or to a change in
the user code?

At the moment I can only think of the following workarounds:

1. Remove the operators from the topology and load the job in 1.2 while allowing
non-restored state. Take a new savepoint, add your operators, and load it in 1.3.
2. Add 2 no-op operators to the topology and assign them the UIDs of the
operators you want to drop (see the sketch below). Load the 1.2 savepoint, create a
1.3 savepoint, drop the operators, and reload.
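A minimal sketch of what option 2 could look like with the DataStream Scala API (the source and the UIDs {{old-op-1}} / {{old-op-2}} are hypothetical placeholders for the UIDs of the two dropped operators):

{code}
import org.apache.flink.streaming.api.scala._

object MigrationPlaceholderJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // No-op stand-ins that carry the UIDs of the operators to be dropped,
    // so the 1.2 savepoint can still be mapped onto the topology.
    val stream = env
      .socketTextStream("localhost", 9999)
      .map(s => s).uid("old-op-1") // hypothetical UID of the first dropped operator
      .map(s => s).uid("old-op-2") // hypothetical UID of the second dropped operator

    stream.print()
    // Run this once, take a new (1.3) savepoint, then remove the placeholders
    // and restore from that new savepoint.
    env.execute("savepoint migration placeholder job")
  }
}
{code}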

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027495#comment-16027495
 ] 

Gyula Fora commented on FLINK-6742:
---

In my case I would like to remove 2 operators while migrating because their state
is not compatible (basically just changing the UIDs for those). In
this case it actually becomes a technical hurdle :D

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027493#comment-16027493
 ] 

Chesnay Schepler commented on FLINK-6742:
-

Well, alright, technically you can add operators, as long as they don't modify
chains.

The internal structure of savepoints was changed in 1.3. A 1.2 savepoint
contains the state of tasks, as a list of states of the contained operators. In
1.3 a savepoint only contains the states of operators; the notion of tasks was
removed. In order to map an old savepoint to a new one, we have to map the state
of a task to the individual operators. For non-chained operators this is easy, but for
chains it can only be done reliably if the chains don't change, i.e.
no operator is removed or added.

The problem is that we don't know what happened to the missing task. It may
very well be that the task was removed on purpose and the state should be lost.
But it could also be that a user read that you can modify chains in 1.3 and did
so before migrating the savepoint; however, that only works after the migration.

This isn't a technical hurdle, but a safety precaution.
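To illustrate the chain aspect with a hypothetical job: the {{map}} and {{filter}} below are chained into a single task, so a 1.2 savepoint stores one state entry for that task rather than one per operator; adding or removing an operator inside that chain before the savepoint has been migrated changes the chain and makes the task-to-operator mapping ambiguous.

{code}
import org.apache.flink.streaming.api.scala._

object ChainedJobSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // map and filter are chained into one task by default.
    val lengths = env
      .socketTextStream("localhost", 9999)
      .map(line => line.length).uid("length-map")
      .filter(len => len > 0).uid("non-empty-filter")

    lengths.print()
    // Only after the savepoint has been converted to the 1.3 format is it
    // safe to add or remove operators inside this chain.
    env.execute("chained job sketch")
  }
}
{code}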

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027477#comment-16027477
 ] 

Gyula Fora commented on FLINK-6742:
---

What's the reason for not allowing the removal or addition of operators here?

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027473#comment-16027473
 ] 

Chesnay Schepler commented on FLINK-6742:
-

We can improve the error message, but this is supposed to fail. When restoring
from a 1.2 savepoint in 1.3, the topology must not change, as outlined in
https://ci.apache.org/projects/flink/flink-docs-release-1.3/ops/upgrading.html#application-topology.
Admittedly, we should make that more prominent.

Once you have taken a new 1.3 savepoint you can modify the topology at will again.

The {{allowNonRestoredState}} flag is ignored on purpose to prevent users from
violating the above requirement by accident.

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Updated] (FLINK-6742) Savepoint conversion might fail if operators change

2017-05-27 Thread Gyula Fora (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-6742:
--
Summary: Savepoint conversion might fail if operators change  (was: 
Savepoint conversion might fail if task ids change)

> Savepoint conversion might fail if operators change
> ---
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Priority: Critical
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Created] (FLINK-6742) Savepoint conversion might fail if task ids change

2017-05-27 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-6742:
-

 Summary: Savepoint conversion might fail if task ids change
 Key: FLINK-6742
 URL: https://issues.apache.org/jira/browse/FLINK-6742
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.3.0
Reporter: Gyula Fora
Priority: Critical


Caused by: java.lang.NullPointerException
at 
org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
at 
org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)





[jira] [Commented] (FLINK-5476) Fail fast if trying to submit a job to a non-existing Flink cluster

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027464#comment-16027464
 ] 

ASF GitHub Bot commented on FLINK-5476:
---

Github user DmytroShkvyra commented on the issue:

https://github.com/apache/flink/pull/3753
  
New PR is https://github.com/apache/flink/pull/4001


> Fail fast if trying to submit a job to a non-existing Flink cluster
> ---
>
> Key: FLINK-5476
> URL: https://issues.apache.org/jira/browse/FLINK-5476
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Dmytro Shkvyra
>Priority: Minor
>
> When the wrong job manager address is entered while submitting a job via
> {{flink run}}, the {{JobClientActor}} waits for {{60 s}} by default until a
> {{JobClientActorConnectionException}}, indicating that the {{JobManager}} is
> no longer reachable, is thrown. In order to fail fast on wrong
> connection information, we could change it to initially use a much
> lower timeout and only increase the timeout once it has successfully
> connected to a {{JobManager}} at least once.
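The proposed behaviour could look roughly like the following sketch (illustrative only, not the actual {{JobClientActor}} implementation; class, method names and durations are made up):

{code}
import scala.concurrent.duration._

/** Illustrative connection-timeout policy: start with a short timeout and only
  * use the long default once a JobManager has been reached at least once. */
class ConnectTimeoutPolicy(
    initialTimeout: FiniteDuration = 5.seconds,
    establishedTimeout: FiniteDuration = 60.seconds) {

  private var connectedBefore = false

  /** Timeout to apply to the next connection attempt. */
  def currentTimeout: FiniteDuration =
    if (connectedBefore) establishedTimeout else initialTimeout

  /** To be called after a successful connection to a JobManager. */
  def onConnected(): Unit = connectedBefore = true
}

object ConnectTimeoutPolicyExample {
  def main(args: Array[String]): Unit = {
    val policy = new ConnectTimeoutPolicy()
    println(policy.currentTimeout) // 5 seconds: fail fast while never connected
    policy.onConnected()
    println(policy.currentTimeout) // 60 seconds: tolerate longer outages afterwards
  }
}
{code}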





[GitHub] flink issue #3753: [FLINK-5476] Fail fast if trying to submit a job to a non...

2017-05-27 Thread DmytroShkvyra
Github user DmytroShkvyra commented on the issue:

https://github.com/apache/flink/pull/3753
  
New PR is https://github.com/apache/flink/pull/4001




[jira] [Commented] (FLINK-5476) Fail fast if trying to submit a job to a non-existing Flink cluster

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027462#comment-16027462
 ] 

ASF GitHub Bot commented on FLINK-5476:
---

GitHub user DmytroShkvyra opened a pull request:

https://github.com/apache/flink/pull/4001

[FLINK-5476] Fail fast if trying to submit a job to a non-existing Fl…

Fail fast if trying to submit a job to a non-existing Flink cluster

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-5476] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DmytroShkvyra/flink FLINK-5476-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4001.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4001


commit cd3b8e38a1dd6214ff2d6e540a2b9c8454cd38a1
Author: DmytroShkvyra 
Date:   2017-05-27T15:36:32Z

[FLINK-5476] Fail fast if trying to submit a job to a non-existing Flink 
cluster




> Fail fast if trying to submit a job to a non-existing Flink cluster
> ---
>
> Key: FLINK-5476
> URL: https://issues.apache.org/jira/browse/FLINK-5476
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Dmytro Shkvyra
>Priority: Minor
>
> When the wrong job manager address is entered while submitting a job via
> {{flink run}}, the {{JobClientActor}} waits for {{60 s}} by default until a
> {{JobClientActorConnectionException}}, indicating that the {{JobManager}} is
> no longer reachable, is thrown. In order to fail fast on wrong
> connection information, we could change it to initially use a much
> lower timeout and only increase the timeout once it has successfully
> connected to a {{JobManager}} at least once.





[GitHub] flink pull request #4001: [FLINK-5476] Fail fast if trying to submit a job t...

2017-05-27 Thread DmytroShkvyra
GitHub user DmytroShkvyra opened a pull request:

https://github.com/apache/flink/pull/4001

[FLINK-5476] Fail fast if trying to submit a job to a non-existing Fl…

Fail fast if trying to submit a job to a non-existing Flink cluster

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-5476] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DmytroShkvyra/flink FLINK-5476-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4001.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4001


commit cd3b8e38a1dd6214ff2d6e540a2b9c8454cd38a1
Author: DmytroShkvyra 
Date:   2017-05-27T15:36:32Z

[FLINK-5476] Fail fast if trying to submit a job to a non-existing Flink 
cluster






[jira] [Commented] (FLINK-6741) In yarn cluster model with high available, the HDFS file is not deleted when cluster is shot down

2017-05-27 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027429#comment-16027429
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6741:


Hi [~zhangrucong], I think this is fixed in 1.3.0 RC1. Please see FLINK-6646.

> In yarn cluster model with high available, the HDFS file is not deleted when 
> cluster is shot down
> -
>
> Key: FLINK-6741
> URL: https://issues.apache.org/jira/browse/FLINK-6741
> Project: Flink
>  Issue Type: Bug
>Reporter: zhangrucong1982
>Assignee: zhangrucong1982
>
> The Flink version is 1.3.0 RC2. I use a YARN cluster with high availability.
> 1. The main configuration is:
> high-availability.zookeeper.storageDir: hdfs:///flink/recovery
> 2. I use the command "./yarn-session.sh -n 2 -d" to start a cluster;
> 3. I use the command "./flink run ../example/streaming/WindowJoin.jar" to
> submit a job;
> 4. I use the flink cancel command to cancel the job;
> 5. I use the "./yarn-session.sh -id " command to attach to the YARN
> cluster and use the stop command to shut down the cluster.
> 6. After shutting down the cluster, I find that the file in HDFS is not deleted,
> like the following:
> /flink/recovery/application_1495781150990_0006/blob/cache/blob_9b2a6f6535819075889ebcf64490f4e6528b07





[jira] [Comment Edited] (FLINK-6741) In yarn cluster model with high available, the HDFS file is not deleted when cluster is shot down

2017-05-27 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027429#comment-16027429
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-6741 at 5/27/17 1:39 PM:
-

Hi [~zhangrucong], I think this is fixed in 1.3.0 RC3. Please see FLINK-6646.


was (Author: tzulitai):
Hi [~zhangrucong], I think this is fixed in 1.3.0 RC1. Please see FLINK-6646.

> In yarn cluster model with high available, the HDFS file is not deleted when 
> cluster is shot down
> -
>
> Key: FLINK-6741
> URL: https://issues.apache.org/jira/browse/FLINK-6741
> Project: Flink
>  Issue Type: Bug
>Reporter: zhangrucong1982
>Assignee: zhangrucong1982
>
> The Flink version is 1.3.0 RC2. I use a YARN cluster with high availability.
> 1. The main configuration is:
> high-availability.zookeeper.storageDir: hdfs:///flink/recovery
> 2. I use the command "./yarn-session.sh -n 2 -d" to start a cluster;
> 3. I use the command "./flink run ../example/streaming/WindowJoin.jar" to
> submit a job;
> 4. I use the flink cancel command to cancel the job;
> 5. I use the "./yarn-session.sh -id " command to attach to the YARN
> cluster and use the stop command to shut down the cluster.
> 6. After shutting down the cluster, I find that the file in HDFS is not deleted,
> like the following:
> /flink/recovery/application_1495781150990_0006/blob/cache/blob_9b2a6f6535819075889ebcf64490f4e6528b07





[jira] [Commented] (FLINK-6658) Use scala Collections in scala CEP API

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027409#comment-16027409
 ] 

ASF GitHub Bot commented on FLINK-6658:
---

Github user dawidwys closed the pull request at:

https://github.com/apache/flink/pull/3963


> Use scala Collections in scala CEP API
> --
>
> Key: FLINK-6658
> URL: https://issues.apache.org/jira/browse/FLINK-6658
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #3963: [FLINK-6658][cep] Use scala Collections in scala C...

2017-05-27 Thread dawidwys
Github user dawidwys closed the pull request at:

https://github.com/apache/flink/pull/3963




[jira] [Commented] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-27 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027394#comment-16027394
 ] 

sunjincheng commented on FLINK-6740:


Hi [~fhueske] [~twalthr], do you have any suggestions about this JIRA?

> Fix "parameterTypeEquals" method error.
> ---
>
> Key: FLINK-6740
> URL: https://issues.apache.org/jira/browse/FLINK-6740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we define a UDTF as follows:
> {code}
> class TableFuncPojo extends TableFunction[TPojo] {
>   def eval(age: Int, name:String): Unit = {
> collect(new TPojo(age.toLong,name))
>   }
>   def eval(age: Date, name:String): Unit = {
>   collect(new TPojo(age.getTime,name))
>   }
> }
> {code}
> TableAPI:
> {code}
>  val table = stream.toTable(tEnv,
>   'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
> 'long.rowtime)
> val windowedTable = table
>   .join(udtf('date, 'string) as 'pojo2).select('pojo2)
> {code}
> We get the following error:
> {code}
> org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
> which match the signature.
>   at 
> org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
>   at 
> org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
>   at org.apache.flink.table.api.Table.join(table.scala:539)
>   at org.apache.flink.table.api.Table.join(table.scala:328)
>   at 
> org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
> {code}
> The reason is the logic in the {{parameterTypeEquals}} method, shown below:
> {code}
> candidate == classOf[Date] && (expected == classOf[Int] || expected == 
> classOf[JInt]) 
> {code}
> TestData:
> {code}
> val data = List(
> (1L, 1, 1d, 1f, new BigDecimal("1"),
>   new Timestamp(200020200),new Date(100101010),new TPojo(1L, "XX"),"Hi"),
> (2L, 2, 2d, 2f, new BigDecimal("2"),
>   new Timestamp(200020200),new Date(100101010),new TPojo(1L, 
> "XX"),"Hallo"),
> (3L, 2, 2d, 2f, new BigDecimal("2"),
>   new Timestamp(200020200), new Date(100101010),new TPojo(1L, 
> "XX"),"Hello"),
> (4L, 5, 5d, 5f, new BigDecimal("5"),
>   new Timestamp(200020200), new Date(2334234),new TPojo(2L, 
> "YY"),"Hello"),
> (7L, 3, 3d, 3f, new BigDecimal("3"),
>   new Timestamp(200020200), new Date(66633),new TPojo(1L, 
> "XX"),"Hello"),
> (8L, 3, 3d, 3f, new BigDecimal("3"),
>   new Timestamp(200020200), new Date(100101010),new TPojo(1L, 
> "XX"),"Hello world"),
> (16L, 4, 4d, 4f, new BigDecimal("4"),
>   new Timestamp(200020200), new Date(100101010),new TPojo(1L, 
> "XX"),"Hello world"))
> {code}
> But when we define only one `eval` method, we get different results, as
> follows:
> {code}
> // for def eval(age: Int, name:String)
> Pojo{id=0, name='Hello'}
> Pojo{id=1, name='Hallo'}
> Pojo{id=1, name='Hello world'}
> Pojo{id=1, name='Hello world'}
> Pojo{id=1, name='Hello'}
> Pojo{id=1, name='Hi'}
> Pojo{id=8, name='Hello'}
> // for def eval(age: Date, name:String)
> Pojo{id=-2880, name='Hello'}
> Pojo{id=5760, name='Hallo'}
> Pojo{id=5760, name='Hello world'}
> Pojo{id=5760, name='Hello world'}
> Pojo{id=5760, name='Hello'}
> Pojo{id=5760, name='Hi'}
> Pojo{id=66240, name='Hello'}
> {code}
> So we should modify the logic of the {{parameterTypeEquals}} method.
> What do you think? Any feedback is welcome.





[jira] [Updated] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-27 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6740:
---
Description: 
When we define a UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
  collect(new TPojo(age.getTime,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We get the following error:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is the logic in the {{parameterTypeEquals}} method, shown below:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
TestData:
{code}
val data = List(
(1L, 1, 1d, 1f, new BigDecimal("1"),
  new Timestamp(200020200),new Date(100101010),new TPojo(1L, "XX"),"Hi"),
(2L, 2, 2d, 2f, new BigDecimal("2"),
  new Timestamp(200020200),new Date(100101010),new TPojo(1L, "XX"),"Hallo"),
(3L, 2, 2d, 2f, new BigDecimal("2"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, 
"XX"),"Hello"),
(4L, 5, 5d, 5f, new BigDecimal("5"),
  new Timestamp(200020200), new Date(2334234),new TPojo(2L, "YY"),"Hello"),
(7L, 3, 3d, 3f, new BigDecimal("3"),
  new Timestamp(200020200), new Date(66633),new TPojo(1L, 
"XX"),"Hello"),
(8L, 3, 3d, 3f, new BigDecimal("3"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, "XX"),"Hello 
world"),
(16L, 4, 4d, 4f, new BigDecimal("4"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, "XX"),"Hello 
world"))
{code}
But when we define only one `eval` method, we get different results, as follows:

{code}
// for def eval(age: Int, name:String)
Pojo{id=0, name='Hello'}
Pojo{id=1, name='Hallo'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello'}
Pojo{id=1, name='Hi'}
Pojo{id=8, name='Hello'}

// for def eval(age: Date, name:String)
Pojo{id=-2880, name='Hello'}
Pojo{id=5760, name='Hallo'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello'}
Pojo{id=5760, name='Hi'}
Pojo{id=66240, name='Hello'}
{code}

So we should modify the logic of the {{parameterTypeEquals}} method.
What do you think? Any feedback is welcome.



  was:
When we define UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
  collect(new TPojo(age.getTime,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We will get the error as following:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is in {{ parameterTypeEquals }} method, logical as follows:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
But when we only define one `eval` method, we got different result, as follows:

{code}
// for def eval(age: Int, name:String)
Pojo{id=0, name='Hello'}
Pojo{id=1, name='Hallo'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello'}
Pojo{id=1, name='Hi'}
Pojo{id=8, name='Hello'}

// for def eval(age: Date, name:String)
Pojo{id=-2880, name='Hello'}
Pojo{id=5760, name='Hallo'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, 

[jira] [Created] (FLINK-6741) In yarn cluster model with high available, the HDFS file is not deleted when cluster is shot down

2017-05-27 Thread zhangrucong1982 (JIRA)
zhangrucong1982 created FLINK-6741:
--

 Summary: In yarn cluster model with high available, the HDFS file 
is not deleted when cluster is shot down
 Key: FLINK-6741
 URL: https://issues.apache.org/jira/browse/FLINK-6741
 Project: Flink
  Issue Type: Bug
Reporter: zhangrucong1982
Assignee: zhangrucong1982


The Flink version is 1.3.0 RC2. I use a YARN cluster with high availability.
1. The main configuration is:
high-availability.zookeeper.storageDir: hdfs:///flink/recovery
2. I use the command "./yarn-session.sh -n 2 -d" to start a cluster;
3. I use the command "./flink run ../example/streaming/WindowJoin.jar" to submit
a job;
4. I use the flink cancel command to cancel the job;
5. I use the "./yarn-session.sh -id " command to attach to the YARN cluster
and use the stop command to shut down the cluster.
6. After shutting down the cluster, I find that the file in HDFS is not deleted,
like the following:
/flink/recovery/application_1495781150990_0006/blob/cache/blob_9b2a6f6535819075889ebcf64490f4e6528b07





[jira] [Created] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-27 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6740:
--

 Summary: Fix "parameterTypeEquals" method error.
 Key: FLINK-6740
 URL: https://issues.apache.org/jira/browse/FLINK-6740
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.3.0
Reporter: sunjincheng
Assignee: sunjincheng


When we define a UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
  collect(new TPojo(age.getTime,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We get the following error:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is the logic in the {{parameterTypeEquals}} method, shown below:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
But when we define only one `eval` method, we get different results, as follows:

{code}
// for def eval(age: Int, name:String)
Pojo{id=0, name='Hello'}
Pojo{id=1, name='Hallo'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello'}
Pojo{id=1, name='Hi'}
Pojo{id=8, name='Hello'}

// for def eval(age: Date, name:String)
Pojo{id=-2880, name='Hello'}
Pojo{id=5760, name='Hallo'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello'}
Pojo{id=5760, name='Hi'}
Pojo{id=66240, name='Hello'}
{code}

So we should modify the logic of the {{parameterTypeEquals}} method.
What do you think? Any feedback is welcome.
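To make the ambiguity concrete, a small self-contained sketch of the quoted check (simplified; not the actual {{UserDefinedFunctionUtils}} code): with the lenient rule a {{java.sql.Date}} argument "matches" both the {{Int}} and the {{Date}} overload of {{eval}}, which is why multiple candidate methods are found.

{code}
import java.sql.Date

object ParameterMatchSketch {
  type JInt = java.lang.Integer

  // Simplified version of the quoted rule: a Date candidate is additionally
  // treated as matching an expected Int / java.lang.Integer parameter.
  def lenientMatch(candidate: Class[_], expected: Class[_]): Boolean =
    candidate == expected ||
      (candidate == classOf[Date] &&
        (expected == classOf[Int] || expected == classOf[JInt]))

  // Stricter alternative: only identical parameter classes match.
  def strictMatch(candidate: Class[_], expected: Class[_]): Boolean =
    candidate == expected

  def main(args: Array[String]): Unit = {
    val candidate: Class[_] = classOf[Date] // actual argument type of the call
    // First parameter types of the two overloads: eval(Int, ...) and eval(Date, ...)
    val evalSignatures: Seq[Class[_]] = Seq(classOf[Int], classOf[Date])

    // Lenient rule: both overloads match -> "Found multiple 'eval' methods".
    println(evalSignatures.count(sig => lenientMatch(candidate, sig))) // 2
    // Strict rule: only the Date overload matches.
    println(evalSignatures.count(sig => strictMatch(candidate, sig))) // 1
  }
}
{code}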







[jira] [Commented] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027316#comment-16027316
 ] 

sunjincheng commented on FLINK-6739:


Fixed in [PR#4000|https://github.com/apache/flink/pull/4000]. Closing this
JIRA.

> Fix all string reference variable error.
> 
>
> Key: FLINK-6739
> URL: https://issues.apache.org/jira/browse/FLINK-6739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> I found a lot of string interpolation errors, such as
> {code}"in '${methodName}' methods"{code}: the string is a plain literal, so the
> ${methodName} placeholder is never substituted. We should change such strings
> from {code}"in '${methodName}' methods"{code} to the interpolated form
> {code}s"in '${methodName}' methods"{code}.





[jira] [Closed] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-6739.
--
Resolution: Fixed

> Fix all string reference variable error.
> 
>
> Key: FLINK-6739
> URL: https://issues.apache.org/jira/browse/FLINK-6739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> I found a lot of string interpolation errors, such as
> {code}"in '${methodName}' methods"{code}: the string is a plain literal, so the
> ${methodName} placeholder is never substituted. We should change such strings
> from {code}"in '${methodName}' methods"{code} to the interpolated form
> {code}s"in '${methodName}' methods"{code}.





[jira] [Updated] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6739:
---
Description: 
I find  a lot of string reference variable error such as
{code} in '${methodName}' methods {code}. 
We should change it from {code} in '${methodName}' methods {code} to {code} in 
s"${methodName}methods" {code}

  was:
I find  a lot of string reference variable error such as
 {{in '${methodName}' methods }}. 
We should change it from {{in '${methodName}' methods }} to {{in 
[${methodName}]methods }}


> Fix all string reference variable error.
> 
>
> Key: FLINK-6739
> URL: https://issues.apache.org/jira/browse/FLINK-6739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> I find  a lot of string reference variable error such as
> {code} in '${methodName}' methods {code}. 
> We should change it from {code} in '${methodName}' methods {code} to {code} 
> in s"${methodName}methods" {code}





[jira] [Updated] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6739:
---
Description: 
I find  a lot of string reference variable error such as
 {{in '${methodName}' methods }}. 
We should change it from {{in '${methodName}' methods }} to {{in 
[${methodName}]methods }}

  was:I find  a lot of string reference variable error such as {{in 
'${methodName}' methods }}. We should change it from {{in '${methodName}' 
methods }} to {{in [${methodName}]methods }}


> Fix all string reference variable error.
> 
>
> Key: FLINK-6739
> URL: https://issues.apache.org/jira/browse/FLINK-6739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> I find  a lot of string reference variable error such as
>  {{in '${methodName}' methods }}. 
> We should change it from {{in '${methodName}' methods }} to {{in 
> [${methodName}]methods }}





[jira] [Comment Edited] (FLINK-6670) remove CommonTestUtils.createTempDirectory()

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027312#comment-16027312
 ] 

Chesnay Schepler edited comment on FLINK-6670 at 5/27/17 7:13 AM:
--

We may want to extend this issue to cover {{CommonTestUtils#getTempDir()}} and 
{{CommonTestUtils#createTmpFile()}} in {{flink-test-utils}}.


was (Author: zentol):
We may want to extend this issue to cover {{CommonTestUtils#getTempDir()}} and 
{{ CommonTestUtils#createTmpFile()}} in {{flink-test-utils}}.

> remove CommonTestUtils.createTempDirectory()
> 
>
> Key: FLINK-6670
> URL: https://issues.apache.org/jira/browse/FLINK-6670
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Nico Kruber
>Assignee: Chesnay Schepler
>Priority: Minor
>
> {{CommonTestUtils.createTempDirectory()}} encourages a dangerous design 
> pattern with potential concurrency issues in the unit tests as well as the 
> need to cleanup the created directories.
> Instead, it should be solved by using the following pattern:
> {code:java}
> @Rule
> public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}
> We should therefore remove {{CommonTestUtils.createTempDirectory()}}.
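For illustration, a complete test using that rule might look like the following Scala sketch (hypothetical test class; in Scala the rule is typically exposed through a public, annotated getter so that JUnit 4.11+ picks it up):

{code}
import java.io.File

import org.junit.Assert.assertTrue
import org.junit.rules.TemporaryFolder
import org.junit.{Rule, Test}

class TempDirUsageSketch {

  private val _tempFolder = new TemporaryFolder()

  // JUnit picks the rule up from a public method annotated with @Rule.
  @Rule
  def tempFolder: TemporaryFolder = _tempFolder

  @Test
  def writesIntoManagedTempDir(): Unit = {
    // Both paths live under a per-test directory that JUnit creates before the
    // test and deletes afterwards, so no manual cleanup is needed.
    val dir: File = tempFolder.newFolder("checkpoints")
    val file: File = tempFolder.newFile("data.tmp")

    assertTrue(dir.isDirectory)
    assertTrue(file.isFile)
  }
}
{code}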





[jira] [Commented] (FLINK-6670) remove CommonTestUtils.createTempDirectory()

2017-05-27 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027312#comment-16027312
 ] 

Chesnay Schepler commented on FLINK-6670:
-

We may want to extend this issue to cover {{CommonTestUtils#getTempDir()}} and 
{{ CommonTestUtils#createTmpFile()}} in {{flink-test-utils}}.

> remove CommonTestUtils.createTempDirectory()
> 
>
> Key: FLINK-6670
> URL: https://issues.apache.org/jira/browse/FLINK-6670
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Nico Kruber
>Assignee: Chesnay Schepler
>Priority: Minor
>
> {{CommonTestUtils.createTempDirectory()}} encourages a dangerous design 
> pattern with potential concurrency issues in the unit tests as well as the 
> need to cleanup the created directories.
> Instead, it should be solved by using the following pattern:
> {code:java}
> @Rule
> public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}
> We should therefore remove {{CommonTestUtils.createTempDirectory()}}.





[jira] [Updated] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6739:
---
Description: I found a lot of string variable reference errors such as {{in
'${methodName}' methods }}. We should change it from {{in '${methodName}'
methods }} to {{in [${methodName}] methods }}  (was: I found a lot of string
variable reference errors such as {{in '${methodName}' methods }}. We should
change it from {{in '${methodName}' methods }} to {{in \'${methodName}\'
methods }})

> Fix all string reference variable error.
> 
>
> Key: FLINK-6739
> URL: https://issues.apache.org/jira/browse/FLINK-6739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> I found a lot of string variable reference errors such as {{in '${methodName}'
> methods }}. We should change it from {{in '${methodName}' methods }} to {{in
> [${methodName}] methods }}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6737) Fix over expression parse String error.

2017-05-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027303#comment-16027303
 ] 

ASF GitHub Bot commented on FLINK-6737:
---

GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4000

[FLINK-6737][table]Add over method supported string expression as inp…

   In this PR, only the `over` method is extended to support a string expression as an input 
parameter.
- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6737][table]Add over method supported string expression as input 
parameter")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6737-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4000


commit 5acb389de1597e43855d0c6f2b7369ef7f3d80cf
Author: sunjincheng121 
Date:   2017-05-27T06:03:06Z

[FLINK-6737][table]Add over method supported string expression as input 
parameter




> Fix over expression parse String error.
> ---
>
> Key: FLINK-6737
> URL: https://issues.apache.org/jira/browse/FLINK-6737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we run the Table API as follows:
> {code}
> val windowedTable = table
>   .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW 
> as 'w)
>   .select('c, "countFun(b)" over 'w as 'mycount, weightAvgFun('a, 'b) 
> over 'w as 'wAvg)
> {code}
> We get the error:
> {code}
> org.apache.flink.table.api.TableException: The over method can only using 
> with aggregation expression.
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionOperations$class.over(expressionDsl.scala:469)
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionConversions$LiteralStringExpression.over(expressionDsl.scala:756)
> {code}
> The reason is that the `over` method of `expressionDsl` does not parse the String case.
> I think we should fix this before the 1.3 release.
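
A standalone sketch of the technique the fix points at (the types below are hypothetical stand-ins, not Flink's actual classes): parse the string into an expression first and only then apply the existing aggregation check, so that {{"countFun(b)" over 'w}} behaves like its expression-style counterpart:

{code}
object OverStringSketch {

  sealed trait Expr
  final case class Aggregation(function: String, arg: String) extends Expr
  final case class OverCall(agg: Aggregation, window: Symbol) extends Expr

  /** Toy stand-in for a real expression parser: "countFun(b)" -> Aggregation. */
  def parse(s: String): Expr = {
    val call = """(\w+)\((\w+)\)""".r
    s.trim match {
      case call(fun, arg) => Aggregation(fun, arg)
      case other => throw new IllegalArgumentException(s"Cannot parse [$other]")
    }
  }

  /** The String case delegates to the parser instead of failing immediately. */
  def over(stringExpr: String, window: Symbol): Expr =
    parse(stringExpr) match {
      case agg: Aggregation => OverCall(agg, window)
      case _ => throw new IllegalArgumentException(
        "The over method can only be used with an aggregation expression.")
    }

  def main(args: Array[String]): Unit = {
    // "countFun(b)" over 'w now resolves instead of throwing:
    println(over("countFun(b)", 'w))
  }
}
{code}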



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4000: [FLINK-6737][table]Add over method supported strin...

2017-05-27 Thread sunjincheng121
GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4000

[FLINK-6737][table]Add over method supported string expression as inp…

   In this PR, only the `over` method is extended to support a string expression as an input 
parameter.
- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6737][table]Add over method supported string expression as input 
parameter")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6737-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4000


commit 5acb389de1597e43855d0c6f2b7369ef7f3d80cf
Author: sunjincheng121 
Date:   2017-05-27T06:03:06Z

[FLINK-6737][table]Add over method supported string expression as input 
parameter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-6670) remove CommonTestUtils.createTempDirectory()

2017-05-27 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6670:
---

Assignee: Chesnay Schepler

> remove CommonTestUtils.createTempDirectory()
> 
>
> Key: FLINK-6670
> URL: https://issues.apache.org/jira/browse/FLINK-6670
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Nico Kruber
>Assignee: Chesnay Schepler
>Priority: Minor
>
> {{CommonTestUtils.createTempDirectory()}} encourages a dangerous design 
> pattern with potential concurrency issues in the unit tests as well as the 
> need to clean up the created directories.
> Instead, it should be solved by using the following pattern:
> {code:java}
> @Rule
> public TemporaryFolder tempFolder = new TemporaryFolder();
> {code}
> We should therefore remove {{CommonTestUtils.createTempDirectory()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6739) Fix all string reference variable error.

2017-05-27 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6739:
--

 Summary: Fix all string reference variable error.
 Key: FLINK-6739
 URL: https://issues.apache.org/jira/browse/FLINK-6739
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.3.0
Reporter: sunjincheng
Assignee: sunjincheng


I found a lot of string variable reference errors such as {{in '${methodName}'
methods }}. We should change it from {{in '${methodName}' methods }} to {{in
\'${methodName}\' methods }}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)