[jira] [Updated] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-06-19 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-6951:
--
Affects Version/s: 1.3.0
  Component/s: Kinesis Connector

> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for the Flink Kinesis connector:
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building the Flink Kinesis connector.





[jira] [Commented] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-06-19 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055136#comment-16055136
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6951:


I've replied to the thread. The problem is that the httpcomponents dependency 
in Flink may not have been shaded properly. It should be shaded, unless the 
connector was built incorrectly by the user, or the binary distribution simply 
wasn't built correctly by us.

Also, please remember to set the "Components" / "Affects Versions" field 
appropriately. That would help when doing JIRA searches.
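
A quick way to check which jar actually provides the httpclient classes that 
user code sees at runtime (a minimal diagnostic sketch; it assumes the unshaded 
org.apache.http package is on the classpath):

{code}
// Hedged sketch: print the location of the httpclient classes visible to user
// code. A properly shaded connector keeps its own copy under a relocated
// package, so the connector's copy would not be the jar reported here.
public class HttpClientOrigin {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> clazz = Class.forName("org.apache.http.client.HttpClient");
        System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}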

> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for the Flink Kinesis connector:
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building the Flink Kinesis connector.





[jira] [Created] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-06-19 Thread Ted Yu (JIRA)
Ted Yu created FLINK-6951:
-

 Summary: Incompatible versions of httpcomponents jars for Flink 
kinesis connector
 Key: FLINK-6951
 URL: https://issues.apache.org/jira/browse/FLINK-6951
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


In the following thread, Bowen reported incompatible versions of httpcomponents 
jars for the Flink Kinesis connector:

http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector

We should find a solution such that users don't have to change dependency 
version(s) themselves when building the Flink Kinesis connector.





[jira] [Commented] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055105#comment-16055105
 ] 

ASF GitHub Bot commented on FLINK-6682:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4125
  
Thank you so much @zentol . I did learn a lot from those. Also, the code 
has been updated. Please review. Thanks again.


> Improve error message in case parallelism exceeds maxParallelism
> 
>
> Key: FLINK-6682
> URL: https://issues.apache.org/jira/browse/FLINK-6682
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> When restoring a job with a parallelism that exceeds the maxParallelism we're 
> not providing a useful error message, as all you get is an 
> IllegalArgumentException:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed
> at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
> at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException
> at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}





[GitHub] flink issue #4125: [FLINK-6682] [checkpoints] Improve error message in case ...

2017-06-19 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4125
  
Thank you so much @zentol . I did learn a lot from those. Also, the code 
has been updated. Please review. Thanks again.




[jira] [Commented] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055102#comment-16055102
 ] 

ASF GitHub Bot commented on FLINK-6682:
---

Github user zhangminglei commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122869047
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates) {
-
+   //parallelism compare preconditions----------
+
+   // if the max parallelism is lower than parallelism, we will throw an exception.
+   if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
--- End diff --

Yes, you are right. I will move this check into the method 
```checkParallelismCondition(OperatorState, ExecutionJobVertex)```. It 
is not just probably easier, it is absolutely easier. :1st_place_medal: Lol.
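
A minimal sketch of what such a precondition could look like (class, method, 
and message here are assumptions, not the merged code):

{code}
import org.apache.flink.runtime.executiongraph.ExecutionJobVertex;

// Hedged sketch: fail early with a descriptive message instead of the bare
// IllegalArgumentException thrown by Preconditions.checkArgument(...).
final class ParallelismPreconditions {
    static void checkParallelismPreconditions(ExecutionJobVertex vertex) {
        if (vertex.getMaxParallelism() < vertex.getParallelism()) {
            throw new IllegalStateException(
                "Cannot restore state for task " + vertex.getJobVertexId()
                    + ": parallelism (" + vertex.getParallelism()
                    + ") exceeds maxParallelism (" + vertex.getMaxParallelism() + ").");
        }
    }
}
{code}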


> Improve error message in case parallelism exceeds maxParallelism
> 
>
> Key: FLINK-6682
> URL: https://issues.apache.org/jira/browse/FLINK-6682
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> When restoring a job with a parallelism that exceeds the maxParallelism we're 
> not providing a useful error message, as all you get is an 
> IllegalArgumentException:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed
> at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
> at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException
> at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}





[GitHub] flink pull request #4125: [FLINK-6682] [checkpoints] Improve error message i...

2017-06-19 Thread zhangminglei
Github user zhangminglei commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122869047
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates) {
-
+   //parallelism compare preconditions----------
+
+   // if the max parallelism is lower than parallelism, we will throw an exception.
+   if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
--- End diff --

Yes, you are right. I will move this check into the method 
```checkParallelismCondition(OperatorState, ExecutionJobVertex)```. It 
is not just probably easier, it is absolutely easier. :1st_place_medal: Lol.




[jira] [Commented] (FLINK-6881) Creating a table from a POJO and defining a time attribute fails

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054990#comment-16054990
 ] 

ASF GitHub Bot commented on FLINK-6881:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4144#discussion_r122852900
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala ---
@@ -437,39 +437,64 @@ abstract class StreamTableEnvironment(
     var rowtime: Option[(Int, String)] = None
     var proctime: Option[(Int, String)] = None
 
-    exprs.zipWithIndex.foreach {
-      case (RowtimeAttribute(reference@UnresolvedFieldReference(name)), idx) =>
-        if (rowtime.isDefined) {
+    def extractRowtime(idx: Int, name: String, origName: Option[String]): Unit = {
+      if (rowtime.isDefined) {
+        throw new TableException(
+          "The rowtime attribute can only be defined once in a table schema.")
+      } else {
+        val mappedIdx = streamType match {
+          case pti: PojoTypeInfo[_] =>
+            pti.getFieldIndex(origName.getOrElse(name))
--- End diff --

When a user writes a wrong rowtime property name for a POJO, e.g.
`(recordTimeA as rowtime).rowtime` where the correct name is `recordTime`,
they will get the following exception:
```
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
at 
org.apache.flink.table.api.StreamTableEnvironment.org$apache$flink$table$api$StreamTableEnvironment$$extractRowtime$1(StreamTableEnvironment.scala:453)
at 
org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:484)
```
I suggest that:
1. Maybe we should validate the rowtime property name of the POJO as early as 
possible.
2. We should check that the resolved index is >= 0; if not, we should throw an 
exception with a clear error message.
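
A minimal sketch of the suggested early check, in Java for illustration (the 
helper class and message are assumptions):

{code}
import org.apache.flink.api.java.typeutils.PojoTypeInfo;
import org.apache.flink.table.api.TableException;

// Hedged sketch: validate the resolved POJO field index before it is used, so
// a wrong field name fails with a clear message rather than an
// ArrayIndexOutOfBoundsException: -1 later on.
final class RowtimeFieldValidation {
    static int resolveFieldIndex(PojoTypeInfo<?> pojoType, String fieldName) {
        int mappedIdx = pojoType.getFieldIndex(fieldName); // -1 if the field is absent
        if (mappedIdx < 0) {
            throw new TableException(
                "The field '" + fieldName + "' does not exist in type " + pojoType + ".");
        }
        return mappedIdx;
    }
}
{code}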


> Creating a table from a POJO and defining a time attribute fails
> 
>
> Key: FLINK-6881
> URL: https://issues.apache.org/jira/browse/FLINK-6881
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Creating a table from a DataStream of POJOs fails when the user tries to 
> define a rowtime attribute.
> There are multiple reasons in {{ExpressionParser}} as well as 
> {{StreamTableEnvironment#validateAndExtractTimeAttributes}}.
> See also: 
> https://stackoverflow.com/questions/8022/apache-flink-1-3-table-api-rowtime-strange-behavior





[GitHub] flink pull request #4144: [FLINK-6881] [FLINK-6896] [table] Creating a table...

2017-06-19 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4144#discussion_r122852900
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala ---
@@ -437,39 +437,64 @@ abstract class StreamTableEnvironment(
     var rowtime: Option[(Int, String)] = None
     var proctime: Option[(Int, String)] = None
 
-    exprs.zipWithIndex.foreach {
-      case (RowtimeAttribute(reference@UnresolvedFieldReference(name)), idx) =>
-        if (rowtime.isDefined) {
+    def extractRowtime(idx: Int, name: String, origName: Option[String]): Unit = {
+      if (rowtime.isDefined) {
+        throw new TableException(
+          "The rowtime attribute can only be defined once in a table schema.")
+      } else {
+        val mappedIdx = streamType match {
+          case pti: PojoTypeInfo[_] =>
+            pti.getFieldIndex(origName.getOrElse(name))
--- End diff --

When a user writes a wrong rowtime property name for a POJO, e.g.
`(recordTimeA as rowtime).rowtime` where the correct name is `recordTime`,
they will get the following exception:
```
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
at 
org.apache.flink.table.api.StreamTableEnvironment.org$apache$flink$table$api$StreamTableEnvironment$$extractRowtime$1(StreamTableEnvironment.scala:453)
at 
org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:484)
```
I suggest that:
1. Maybe we should validate the rowtime property name of the POJO as early as 
possible.
2. We should check that the resolved index is >= 0; if not, we should throw an 
exception with a clear error message.




[jira] [Reopened] (FLINK-6930) Selecting window start / end on row-based Tumble/Slide window causes NPE

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reopened FLINK-6930:
--

> Selecting window start / end on row-based Tumble/Slide window causes NPE
> 
>
> Key: FLINK-6930
> URL: https://issues.apache.org/jira/browse/FLINK-6930
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Fabian Hueske
>Assignee: Jark Wu
> Fix For: 1.3.1, 1.4.0
>
>
> Selecting the start and end properties of a row-based window causes a 
> NullPointerException.
> The following program:
> {code}
> val windowedTable = table
>   .window(Tumble over 2.rows on 'proctime as 'w)
>   .groupBy('w, 'string)
>   .select('string as 'n, 'int.count as 'cnt, 'w.start as 's, 'w.end as 'e)
> {code}
> causes 
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1556)
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1551)
>   at DataStreamCalcRule$40.processElement(Unknown Source)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:67)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>   at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:890)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:868)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:75)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:37)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction.process(InternalSingleValueWindowFunction.java:46)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:599)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:456)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> We should validate that the start and end window properties are not accessed 
> if the window is defined on row-counts.
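
A minimal sketch of such a validation step (all names here are assumptions, 
not the merged fix):

{code}
import org.apache.flink.table.api.ValidationException;

// Hedged sketch: during validation, reject the start/end window properties
// when the window is defined on a row count, where they have no value.
final class RowWindowPropertyValidation {
    static void validate(boolean isRowCountWindow, boolean selectsStartOrEnd) {
        if (isRowCountWindow && selectsStartOrEnd) {
            throw new ValidationException(
                "Window start and end are not available for row-count windows.");
        }
    }
}
{code}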





[jira] [Closed] (FLINK-6930) Selecting window start / end on row-based Tumble/Slide window causes NPE

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6930.

   Resolution: Fixed
Fix Version/s: 1.3.1

Fixed for 1.3.1 with 2321898943c223241794c6a4f387430633c954fb

> Selecting window start / end on row-based Tumble/Slide window causes NPE
> 
>
> Key: FLINK-6930
> URL: https://issues.apache.org/jira/browse/FLINK-6930
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Fabian Hueske
>Assignee: Jark Wu
> Fix For: 1.3.1, 1.4.0
>
>
> Selecting the start and end properties of a row-based window causes a 
> NullPointerException.
> The following program:
> {code}
> val windowedTable = table
>   .window(Tumble over 2.rows on 'proctime as 'w)
>   .groupBy('w, 'string)
>   .select('string as 'n, 'int.count as 'cnt, 'w.start as 's, 'w.end as 'e)
> {code}
> causes 
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1556)
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1551)
>   at DataStreamCalcRule$40.processElement(Unknown Source)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:67)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>   at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:890)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:868)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:75)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:37)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction.process(InternalSingleValueWindowFunction.java:46)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:599)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:456)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> We should validate that the start and end window properties are not accessed 
> if the window is defined on row-counts.





[jira] [Closed] (FLINK-6602) Table source with defined time attributes allows empty string

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6602.

   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.1

Fixed for 1.3.1 with c7f6d0245cd0d47a99d9badcd43f1ebb8758125e
Fixed for 1.4.0 with 850e4d913bf33d90409a078dab2fbc26bfa976ce

> Table source with defined time attributes allows empty string
> -
>
> Key: FLINK-6602
> URL: https://issues.apache.org/jira/browse/FLINK-6602
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Zhe Li
> Fix For: 1.3.1, 1.4.0
>
> Attachments: getRowType.png
>
>
> {{DefinedRowtimeAttribute}} and {{DefinedProctimeAttribute}} are not checked 
> for empty strings.





[jira] [Closed] (FLINK-6859) StateCleaningCountTrigger should not delete timer

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6859.

Resolution: Fixed

Fixed for 1.3.1 with 78b5092dc82fe36412e1d47c1a1fd81ef821d7c6

> StateCleaningCountTrigger should not delete timer
> -
>
> Key: FLINK-6859
> URL: https://issues.apache.org/jira/browse/FLINK-6859
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: sunjincheng
> Fix For: 1.3.1, 1.4.0
>
>
> The {{StateCleaningCountTrigger}}, which is used to clean up inactive state, 
> should not delete timers, i.e., it should not call 
> {{deleteProcessingTimeTimer()}}, as this is an expensive operation.
> We should rather let the timer fire and then check whether we need to clean 
> the state or not.
> What do you think [~sunjincheng121]?
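
A minimal sketch of the suggested behavior (the helper shape is an assumption, 
not the merged change):

{code}
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;

// Hedged sketch: let the registered clean-up timer fire and decide only then
// whether the state is still inactive, instead of deleting and re-registering
// timers on every record.
final class CleanupTimerSketch {
    static TriggerResult onCleanupTimer(boolean stateStillInactive) {
        if (stateStillInactive) {
            // No activity since the timer was set: clean the state now.
            return TriggerResult.FIRE_AND_PURGE;
        }
        // The key was active again in the meantime: keep its state.
        return TriggerResult.CONTINUE;
    }
}
{code}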





[jira] [Closed] (FLINK-6886) Fix Timestamp field can not be selected in event time case when toDataStream[T], `T` not a `Row` Type.

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6886.

   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.1

Fixed for 1.3.1 with 8b91df2b3cd0c0ef733902ad742045b318bac0fd
Fixed for 1.4.0 with d78eeca37554ac75faf1aa451d0b4107ebd96fb9

> Fix Timestamp field can not be selected in event time case when  
> toDataStream[T], `T` not a `Row` Type.
> ---
>
> Key: FLINK-6886
> URL: https://issues.apache.org/jira/browse/FLINK-6886
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.3.1, 1.4.0
>
>
> Currently, for event-time windows (group/over), when the `SELECT` clause 
> contains a `Timestamp`-typed field and `toDataStream[T]` is called with `T` 
> not a `Row` type (such as a `PojoType`), an exception is thrown. This JIRA 
> fixes that bug. For example:
> Group Window on SQL:
> {code}
> SELECT name, max(num) as myMax, TUMBLE_START(rowtime, INTERVAL '5' SECOND) as 
> winStart,TUMBLE_END(rowtime, INTERVAL '5' SECOND) as winEnd FROM T1 GROUP BY 
> name, TUMBLE(rowtime, INTERVAL '5' SECOND)
> {code}
> Throw Exception:
> {code}
> org.apache.flink.table.api.TableException: The field types of physical and 
> logical row types do not match.This is a bug and should not happen. Please 
> file an issue.
>   at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
>   at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:721)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:247)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:647)
> {code}
> In fact, solving this exception only surfaces further exceptions. The root 
> cause is a bug in the {{TableEnvironment#generateRowConverterFunction}} 
> method, which this JIRA fixes.





[jira] [Closed] (FLINK-6941) Selecting window start / end on over window causes field not resolve exception

2017-06-19 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-6941.

   Resolution: Fixed
Fix Version/s: 1.4.0
   1.3.1

Fixed for 1.3.1 with b6d14b9147c8966810a184322920dac7e8ec0ee0
Fixed for 1.4.0 with 6cf6cb8ddbedb5c5e8dcfdbf498cebef305be488

> Selecting window start / end on over window causes field not resolve exception
> --
>
> Key: FLINK-6941
> URL: https://issues.apache.org/jira/browse/FLINK-6941
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.3.1, 1.4.0
>
>
> Selecting window start / end on over window causes field not resolve 
> exception.
> The following program:
> {code}
> table
>   .window(
> Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
>   .select('c, countFun('b) over 'w, 'w.start, 'w.end)
> {code}
> causes
> {code}
> org.apache.flink.table.api.ValidationException: Cannot resolve [w] given 
> input [a, b, c, proctime].
>   at 
> org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:86)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:83)
>   at 
> org.apache.flink.table.plan.TreeNode.postOrderTransform(TreeNode.scala:72)
>   at 
> org.apache.flink.table.plan.TreeNode$$anonfun$1.apply(TreeNode.scala:46)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> {code}
> We should validate that the start and end window properties are not accessed 
> on over windows.
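
A minimal sketch of the corresponding check (names are assumptions, not the 
merged fix):

{code}
import org.apache.flink.table.api.ValidationException;

// Hedged sketch: over windows have no start or end, so selecting 'w.start or
// 'w.end should fail validation with a dedicated message instead of the
// generic "Cannot resolve [w]" resolution error shown above.
final class OverWindowPropertyValidation {
    static void validate(boolean selectsStartOrEnd) {
        if (selectsStartOrEnd) {
            throw new ValidationException(
                "Window start and end are not available for over windows.");
        }
    }
}
{code}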





[jira] [Commented] (FLINK-6881) Creating a table from a POJO and defining a time attribute fails

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054786#comment-16054786
 ] 

ASF GitHub Bot commented on FLINK-6881:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4144#discussion_r122829892
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala ---
@@ -437,39 +437,64 @@ abstract class StreamTableEnvironment(
     var rowtime: Option[(Int, String)] = None
     var proctime: Option[(Int, String)] = None
 
-    exprs.zipWithIndex.foreach {
-      case (RowtimeAttribute(reference@UnresolvedFieldReference(name)), idx) =>
-        if (rowtime.isDefined) {
+    def extractRowtime(idx: Int, name: String, origName: Option[String]): Unit = {
+      if (rowtime.isDefined) {
+        throw new TableException(
+          "The rowtime attribute can only be defined once in a table schema.")
+      } else {
+        val mappedIdx = streamType match {
+          case pti: PojoTypeInfo[_] =>
+            pti.getFieldIndex(origName.getOrElse(name))
+          case _ => idx;
+        }
+        // check type of field that is replaced
+        if (mappedIdx < fieldTypes.length &&
+          !(TypeCheckUtils.isLong(fieldTypes(mappedIdx)) ||
+            TypeCheckUtils.isTimePoint(fieldTypes(mappedIdx)))) {
           throw new TableException(
-            "The rowtime attribute can only be defined once in a table schema.")
-        } else {
-          // check type of field that is replaced
-          if (idx < fieldTypes.length &&
-            !(TypeCheckUtils.isLong(fieldTypes(idx)) ||
-              TypeCheckUtils.isTimePoint(fieldTypes(idx)))) {
-            throw new TableException(
-              "The rowtime attribute can only be replace a field with a valid time type, such as " +
-                "Timestamp or Long.")
-          }
-          rowtime = Some(idx, name)
+            s"The rowtime attribute can only be replace a field with a valid time type, " +
--- End diff --

remove "be" -> `"The rowtime attribute can only replace a field with ..."`


> Creating a table from a POJO and defining a time attribute fails
> 
>
> Key: FLINK-6881
> URL: https://issues.apache.org/jira/browse/FLINK-6881
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Creating a table from a DataStream of POJOs fails when the user tries to 
> define a rowtime attribute.
> There are multiple reasons in {{ExpressionParser}} as well as 
> {{StreamTableEnvironment#validateAndExtractTimeAttributes}}.
> See also: 
> https://stackoverflow.com/questions/8022/apache-flink-1-3-table-api-rowtime-strange-behavior





[GitHub] flink pull request #4144: [FLINK-6881] [FLINK-6896] [table] Creating a table...

2017-06-19 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4144#discussion_r122829892
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala ---
@@ -437,39 +437,64 @@ abstract class StreamTableEnvironment(
     var rowtime: Option[(Int, String)] = None
     var proctime: Option[(Int, String)] = None
 
-    exprs.zipWithIndex.foreach {
-      case (RowtimeAttribute(reference@UnresolvedFieldReference(name)), idx) =>
-        if (rowtime.isDefined) {
+    def extractRowtime(idx: Int, name: String, origName: Option[String]): Unit = {
+      if (rowtime.isDefined) {
+        throw new TableException(
+          "The rowtime attribute can only be defined once in a table schema.")
+      } else {
+        val mappedIdx = streamType match {
+          case pti: PojoTypeInfo[_] =>
+            pti.getFieldIndex(origName.getOrElse(name))
+          case _ => idx;
+        }
+        // check type of field that is replaced
+        if (mappedIdx < fieldTypes.length &&
+          !(TypeCheckUtils.isLong(fieldTypes(mappedIdx)) ||
+            TypeCheckUtils.isTimePoint(fieldTypes(mappedIdx)))) {
           throw new TableException(
-            "The rowtime attribute can only be defined once in a table schema.")
-        } else {
-          // check type of field that is replaced
-          if (idx < fieldTypes.length &&
-            !(TypeCheckUtils.isLong(fieldTypes(idx)) ||
-              TypeCheckUtils.isTimePoint(fieldTypes(idx)))) {
-            throw new TableException(
-              "The rowtime attribute can only be replace a field with a valid time type, such as " +
-                "Timestamp or Long.")
-          }
-          rowtime = Some(idx, name)
+            s"The rowtime attribute can only be replace a field with a valid time type, " +
--- End diff --

remove "be" -> `"The rowtime attribute can only replace a field with ..."`




[jira] [Commented] (FLINK-6886) Fix Timestamp field can not be selected in event time case when toDataStream[T], `T` not a `Row` Type.

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054705#comment-16054705
 ] 

ASF GitHub Bot commented on FLINK-6886:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4102


> Fix Timestamp field can not be selected in event time case when  
> toDataStream[T], `T` not a `Row` Type.
> ---
>
> Key: FLINK-6886
> URL: https://issues.apache.org/jira/browse/FLINK-6886
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, for event-time windows (group/over), when the `SELECT` clause 
> contains a `Timestamp`-typed field and `toDataStream[T]` is called with `T` 
> not a `Row` type (such as a `PojoType`), an exception is thrown. This JIRA 
> fixes that bug. For example:
> Group Window on SQL:
> {code}
> SELECT name, max(num) as myMax, TUMBLE_START(rowtime, INTERVAL '5' SECOND) as 
> winStart,TUMBLE_END(rowtime, INTERVAL '5' SECOND) as winEnd FROM T1 GROUP BY 
> name, TUMBLE(rowtime, INTERVAL '5' SECOND)
> {code}
> Throw Exception:
> {code}
> org.apache.flink.table.api.TableException: The field types of physical and 
> logical row types do not match.This is a bug and should not happen. Please 
> file an issue.
>   at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
>   at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:721)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:247)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:647)
> {code}
> In fact, solving this exception only surfaces further exceptions. The root 
> cause is a bug in the {{TableEnvironment#generateRowConverterFunction}} 
> method, which this JIRA fixes.





[GitHub] flink pull request #4137: [FLINK-6941][table]Add validate that the start and...

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4137




[GitHub] flink pull request #4135: Flink 6602 -- bug fix

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4135




[GitHub] flink pull request #4102: [FLINK-6886][table]Fix Timestamp field can not be ...

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4102




[jira] [Commented] (FLINK-6941) Selecting window start / end on over window causes field not resolve exception

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054706#comment-16054706
 ] 

ASF GitHub Bot commented on FLINK-6941:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4137


> Selecting window start / end on over window causes field not resolve exception
> --
>
> Key: FLINK-6941
> URL: https://issues.apache.org/jira/browse/FLINK-6941
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Selecting window start / end on over window causes field not resolve 
> exception.
> The following program:
> {code}
> table
>   .window(
> Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
>   .select('c, countFun('b) over 'w, 'w.start, 'w.end)
> {code}
> causes
> {code}
> org.apache.flink.table.api.ValidationException: Cannot resolve [w] given 
> input [a, b, c, proctime].
>   at 
> org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:86)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:83)
>   at 
> org.apache.flink.table.plan.TreeNode.postOrderTransform(TreeNode.scala:72)
>   at 
> org.apache.flink.table.plan.TreeNode$$anonfun$1.apply(TreeNode.scala:46)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> {code}
> We should validate that the start and end window properties are not accessed 
> on over windows.





[jira] [Comment Edited] (FLINK-6402) Consider removing annotation for REAPER_THREAD_LOCK in SafetyNetCloseableRegistry#doRegister()

2017-06-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15986965#comment-15986965
 ] 

Ted Yu edited comment on FLINK-6402 at 6/19/17 8:11 PM:


Pardon the typo.

The annotation can be dropped.


was (Author: yuzhih...@gmail.com):
Pardon the typo.
The annotation can be dropped.

> Consider removing annotation for REAPER_THREAD_LOCK in 
> SafetyNetCloseableRegistry#doRegister()
> --
>
> Key: FLINK-6402
> URL: https://issues.apache.org/jira/browse/FLINK-6402
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> PhantomDelegatingCloseableRef phantomRef = new 
> PhantomDelegatingCloseableRef(
> wrappingProxyCloseable,
> this,
> REAPER_THREAD.referenceQueue);
> {code}
> Instantiation of REAPER_THREAD can be made visible by ctor.





[jira] [Commented] (FLINK-6932) Update the inaccessible Dataflow Model paper link

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054292#comment-16054292
 ] 

ASF GitHub Bot commented on FLINK-6932:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4131
  
cc @zentol Please take a look. Thank you so much. :cake: 


> Update the inaccessible Dataflow Model paper link
> -
>
> Key: FLINK-6932
> URL: https://issues.apache.org/jira/browse/FLINK-6932
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: mingleizhang
>Assignee: mingleizhang
>  Labels: None
>
>  I tried to access the Dataflow Model paper link under 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/event_time.html],
>  but it gives a 404 error instead.





[GitHub] flink issue #4131: [FLINK-6932] [doc] Update inaccessible Dataflow Model pap...

2017-06-19 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4131
  
cc @zentol Please take a look. Thank you so much. :cake: 




[GitHub] flink issue #4074: [FLINK-6488] [scripts] Mark deprecated 'start-local.sh' a...

2017-06-19 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4074
  
@zentol It seems weird that this happened. I have updated the code 
though. It looks good now.




[jira] [Commented] (FLINK-6488) Mark deprecated for 'start-local.sh' and 'stop-local' scripts

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054285#comment-16054285
 ] 

ASF GitHub Bot commented on FLINK-6488:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4074
  
@zentol It seems weird that this happened. I have updated the code 
though. It looks good now.


> Mark deprecated for 'start-local.sh' and 'stop-local' scripts
> -
>
> Key: FLINK-6488
> URL: https://issues.apache.org/jira/browse/FLINK-6488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Startup Shell Scripts
>Reporter: Stephan Ewen
>Assignee: mingleizhang
>
> The {{start-cluster.sh}} scripts work locally now, without needing SSH setup.
> We can remove {{start-local.sh}} without any loss of functionality.





[jira] [Updated] (FLINK-6949) Add ability to ship custom resource files to YARN cluster

2017-06-19 Thread Mikhail Pryakhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pryakhin updated FLINK-6949:

Priority: Critical  (was: Major)

> Add ability to ship custom resource files to YARN cluster
> -
>
> Key: FLINK-6949
> URL: https://issues.apache.org/jira/browse/FLINK-6949
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Pryakhin
>Priority: Critical
>
> *The problem:*
> When deploying a Flink job on YARN, it is not possible to specify custom 
> resource files to be shipped to the YARN cluster.
>  
> *The use case description:*
> When running a Flink job in multiple environments, it becomes necessary to 
> pass environment-specific configuration files to the job's runtime. This can 
> be accomplished by packaging the configuration files within the job's jar, but 
> with tens of different environments one can easily end up packaging as many 
> jars as there are environments. It would be great to be able to separate 
> configuration files from the job artifacts. 
>  
> *The possible solution:*
> Add a --yarnship-files option to the Flink CLI to specify files that should be 
> shipped to the YARN cluster, as sketched below.
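
A sketch of how the proposed option might be used (the flag is the proposal's, 
not an existing feature; paths and jar name are placeholders):

{code}
# Hypothetical usage of the proposed --yarnship-files flag.
flink run -m yarn-cluster --yarnship-files /path/to/env-specific/conf ./my-flink-job.jar
{code}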





[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054220#comment-16054220
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4111
  
Of course @twalthr , I haven't merged the code and thanks for your new PR. 
I will have a look tomorrow.


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Using debug mode, it appears the 
> ArrayIndexOutOfBoundsException is caused by the input type being a POJO type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) {
> this.httpPostMessageCount = httpPostMessageCount;
> }
> public long getUplink() {
> return uplink;
> }
> public void setUplink(long uplink) {
> this.uplink = uplink;
> }
>

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4111
  
Of course @twalthr , I haven't merged the code and thanks for your new PR. 
I will have a look tomorrow.




[jira] [Commented] (FLINK-6418) Support for dynamic state changes in CEP patterns

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054212#comment-16054212
 ] 

ASF GitHub Bot commented on FLINK-6418:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/4143
  
Hi @dawidwys ! Thanks for the work. 
Could you also update the documentation? 
This will also help the review process.


> Support for dynamic state changes in CEP patterns
> -
>
> Key: FLINK-6418
> URL: https://issues.apache.org/jira/browse/FLINK-6418
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Elias Levy
>Assignee: Dawid Wysakowicz
>
> The Flink CEP library allows one to define an event pattern to match, where 
> the match condition can be determined programmatically via the {{where}} 
> method.  Flink 1.3 will introduce so-called iterative conditions, which allow 
> the predicate to look up events already matched by the pattern and thus be 
> conditional on them.
> 1.3 also introduces quantifier methods to the API, which allow one to 
> declaratively specify how many times a condition must be matched before 
> there is a state change.
> Alas, there are use cases where the quantifier must be determined dynamically 
> based on the events matched by the pattern so far.  Therefore, I propose 
> adding a new {{Pattern}} method: {{until}}.
> Like the new iterative variant of {{where}}, {{until}} would take a predicate 
> function and a context that provides access to events already matched.  But 
> whereas {{where}} determines if an event is accepted by the pattern, 
> {{until}} determines whether the pattern should move on to the next state.
> In our particular use case, we have a pattern where an event is matched a 
> number of times, but depending on the event type, the number (threshold) for 
> the pattern to match is different.  We could decompose the pattern into 
> multiple similar patterns, but that could be inefficient if we have many such 
> patterns.  If the functionality of {{until}} were available, we could make do 
> with a single pattern.
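
A sketch of how the proposed {{until}} might look, based on the description 
above ({{Event}}, {{isAlert()}}, {{getType()}} and {{thresholdFor(...)}} are 
illustrative assumptions, and the {{until}} call itself is the proposal, not 
an existing API):

{code}
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;

// Hedged sketch: a looping pattern that stops once a threshold derived from
// the already-matched events is reached.
Pattern<Event, ?> pattern = Pattern.<Event>begin("alerts")
    .where(new IterativeCondition<Event>() {
        @Override
        public boolean filter(Event event, Context<Event> ctx) {
            return event.isAlert();
        }
    })
    .oneOrMore()
    .until(new IterativeCondition<Event>() {
        @Override
        public boolean filter(Event event, Context<Event> ctx) throws Exception {
            int matched = 0;
            for (Event ignored : ctx.getEventsForPattern("alerts")) {
                matched++;
            }
            // The stop threshold depends on the type of event matched (the dynamic part).
            return matched >= thresholdFor(event.getType());
        }
    });
{code}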





[GitHub] flink issue #4143: [FLINK-6418][cep] Support for dynamic state changes in CE...

2017-06-19 Thread kl0u
Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/4143
  
Hi @dawidwys ! Thanks for the work. 
Could you also update the documentation? 
This will also help the review process.




[GitHub] flink issue #4135: Flink 6602 -- bug fix

2017-06-19 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4135
  
One more thing: it would be good to build the flink-table module locally 
before submitting or updating a PR. This PR fails because some lines exceed the 
line length limit. Thank you.




[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054167#comment-16054167
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4111
  
@sunjincheng121 I opened #4144. Feel free to review and test. I hope I have 
covered all cases.


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Using debug mode, it appears the 
> ArrayIndexOutOfBoundsException is caused by the input type being a POJO type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new 
> BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink<Row> windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) {
> this.httpPostMessageCount = httpPostMessageCount;
> }
> public long getUplink() {
> return uplink;
> }
> public void setUplink(long uplink) {
> this.uplink = uplink;
> }
>  

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4111
  
@sunjincheng121 I opened #4144. Feel free to review and test. I really hope 
that I have covered all cases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4144: [FLINK-6881] [FLINK-6896] [table] Creating a table...

2017-06-19 Thread twalthr
GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/4144

[FLINK-6881] [FLINK-6896] [table] Creating a table from a POJO and defining 
a time attribute fails

This PR fixes several issues with POJOs and time attributes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-6881_NEW

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4144


commit 0f5793159ca97d6d7e9f8a4b9fab3a3a2479fab8
Author: twalthr 
Date:   2017-06-19T15:06:44Z

[FLINK-6881] [FLINK-6896] [table] Creating a table from a POJO and defining 
a time attribute fails




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6881) Creating a table from a POJO and defining a time attribute fails

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054163#comment-16054163
 ] 

ASF GitHub Bot commented on FLINK-6881:
---

GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/4144

[FLINK-6881] [FLINK-6896] [table] Creating a table from a POJO and defining 
a time attribute fails

This PR fixes several issues with POJOs and time attributes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-6881_NEW

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4144


commit 0f5793159ca97d6d7e9f8a4b9fab3a3a2479fab8
Author: twalthr 
Date:   2017-06-19T15:06:44Z

[FLINK-6881] [FLINK-6896] [table] Creating a table from a POJO and defining 
a time attribute fails




> Creating a table from a POJO and defining a time attribute fails
> 
>
> Key: FLINK-6881
> URL: https://issues.apache.org/jira/browse/FLINK-6881
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Creating a table from a DataStream of POJOs fails when the user tries to 
> define a rowtime attribute.
> There are multiple reasons in {{ExpressionParser}} as well as 
> {{StreamTableEnvironment#validateAndExtractTimeAttributes}}.
> See also: 
> https://stackoverflow.com/questions/8022/apache-flink-1-3-table-api-rowtime-strange-behavior



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6418) Support for dynamic state changes in CEP patterns

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054161#comment-16054161
 ] 

ASF GitHub Bot commented on FLINK-6418:
---

Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4143
  
R: @kl0u 


> Support for dynamic state changes in CEP patterns
> -
>
> Key: FLINK-6418
> URL: https://issues.apache.org/jira/browse/FLINK-6418
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Elias Levy
>Assignee: Dawid Wysakowicz
>
> Flink CEP library allows one to define event patterns to match, where the match 
> condition can be determined programmatically via the {{where}} method.  Flink 
> 1.3 will introduce so-called iterative conditions, which allow the predicate 
> to look up events already matched by the pattern and thus be conditional on 
> them.
> 1.3 also introduces quantifier methods to the API, which allow one to 
> declaratively specify how many times a condition must be matched before 
> there is a state change.
> Alas, there are use cases where the quantifier must be determined dynamically 
> based on the events matched by the pattern so far.  Therefore, I propose 
> adding a new {{Pattern}} method: {{until}}.
> Like the new iterative variant of {{where}}, {{until}} would take a predicate 
> function and a context that provides access to events already matched.  But 
> whereas {{where}} determines if an event is accepted by the pattern, 
> {{until}} determines whether the pattern should move on to the next state.
> In our particular use case, we have a pattern where an event is matched a 
> number of times, but depending on the event type, the number (threshold) for 
> the pattern to match is different.  We could decompose the pattern into 
> multiple similar patterns, but that could be inefficient if we have many such 
> patterns.  If the functionality of {{until}} were available, we could make do 
> with a single pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6418) Support for dynamic state changes in CEP patterns

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054160#comment-16054160
 ] 

ASF GitHub Bot commented on FLINK-6418:
---

GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/4143

[FLINK-6418][cep] Support for dynamic state changes in CEP patterns

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink cep-loop-until

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4143


commit b1832af10410266cedc26654f7be1052ea244c47
Author: Dawid Wysakowicz 
Date:   2017-06-19T14:40:02Z

[FLINK-6418][cep] Support for dynamic state changes in CEP patterns




> Support for dynamic state changes in CEP patterns
> -
>
> Key: FLINK-6418
> URL: https://issues.apache.org/jira/browse/FLINK-6418
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Affects Versions: 1.3.0
>Reporter: Elias Levy
>Assignee: Dawid Wysakowicz
>
> Flink CEP library allows one to define event patterns to match, where the match 
> condition can be determined programmatically via the {{where}} method.  Flink 
> 1.3 will introduce so-called iterative conditions, which allow the predicate 
> to look up events already matched by the pattern and thus be conditional on 
> them.
> 1.3 also introduces quantifier methods to the API, which allow one to 
> declaratively specify how many times a condition must be matched before 
> there is a state change.
> Alas, there are use cases where the quantifier must be determined dynamically 
> based on the events matched by the pattern so far.  Therefore, I propose 
> adding a new {{Pattern}} method: {{until}}.
> Like the new iterative variant of {{where}}, {{until}} would take a predicate 
> function and a context that provides access to events already matched.  But 
> whereas {{where}} determines if an event is accepted by the pattern, 
> {{until}} determines whether the pattern should move on to the next state.
> In our particular use case, we have a pattern where an event is matched a 
> number of times, but depending on the event type, the number (threshold) for 
> the pattern to match is different.  We could decompose the pattern into 
> multiple similar patterns, but that could be inefficient if we have many such 
> patterns.  If the functionality of {{until}} were available, we could make do 
> with a single pattern.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4143: [FLINK-6418][cep] Support for dynamic state changes in CE...

2017-06-19 Thread dawidwys
Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4143
  
R: @kl0u 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4143: [FLINK-6418][cep] Support for dynamic state change...

2017-06-19 Thread dawidwys
GitHub user dawidwys opened a pull request:

https://github.com/apache/flink/pull/4143

[FLINK-6418][cep] Support for dynamic state changes in CEP patterns

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dawidwys/flink cep-loop-until

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4143


commit b1832af10410266cedc26654f7be1052ea244c47
Author: Dawid Wysakowicz 
Date:   2017-06-19T14:40:02Z

[FLINK-6418][cep] Support for dynamic state changes in CEP patterns




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6930) Selecting window start / end on row-based Tumble/Slide window causes NPE

2017-06-19 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054146#comment-16054146
 ] 

Fabian Hueske commented on FLINK-6930:
--

I'll merge it to 1.3 as well to get the fix in the next RC for 1.3.1

> Selecting window start / end on row-based Tumble/Slide window causes NPE
> 
>
> Key: FLINK-6930
> URL: https://issues.apache.org/jira/browse/FLINK-6930
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Fabian Hueske
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Selecting the start and end properties of a row-based window causes a 
> NullPointerException.
> The following program:
> {code}
> val windowedTable = table
>   .window(Tumble over 2.rows on 'proctime as 'w)
>   .groupBy('w, 'string)
>   .select('string as 'n, 'int.count as 'cnt, 'w.start as 's, 'w.end as 'e)
> {code}
> causes 
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1556)
>   at 
> org.apache.calcite.runtime.SqlFunctions.toLong(SqlFunctions.java:1551)
>   at DataStreamCalcRule$40.processElement(Unknown Source)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:67)
>   at 
> org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
>   at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:890)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:868)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:75)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateWindowFunction.apply(IncrementalAggregateWindowFunction.scala:37)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction.process(InternalSingleValueWindowFunction.java:46)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:599)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:456)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:265)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> We should validate that the start and end window properties are not accessed 
> if the window is defined on row-counts.
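
Until that validation lands, a user-side workaround is to drop the start/end 
properties from the projection, since row-count windows carry no time bounds 
anyway. A minimal sketch in the Java string-expression syntax, mirroring the 
Scala program above (the field names are taken from that example):

{code}
Table windowedTable = table
    .window(Tumble.over("2.rows").on("proctime").as("w"))
    .groupBy("w, string")
    // no w.start / w.end: undefined for count-based windows
    .select("string as n, int.count as cnt");
{code}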



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054145#comment-16054145
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
Hi @twalthr Sounds good, and makes sense to me. I can do some code 
review and check some test cases once you open the PR. :)

Best,
SunJincheng


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Debugging shows that the 
> ArrayIndexOutOfBoundsException is caused by the input type being a POJO type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new 
> BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink<Row> windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) {
> this.httpPostMessageCount = httpPostMessageCount;
> }
> public long getUplink() {
> return uplink;
> }
> public void 

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
Hi @twalthr Sounds good, and makes sense to me. I can do some code 
review and check some test cases once you open the PR. :)

Best,
SunJincheng


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054135#comment-16054135
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4111
  
@sunjincheng121 Sorry that I haven't responded earlier. I was on vacation 
last week. I found a solution that requires fewer CodeGenerator changes. I found 
several issues (also regarding Java expression parsing). I will open a PR 
shortly. We can test it with the test cases of this PR.


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Debugging shows that the 
> ArrayIndexOutOfBoundsException is caused by the input type being a POJO type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new 
> BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink<Row> windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) {
> this.httpPostMessageCount = httpPostMessageCount;

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4111
  
@sunjincheng121 Sorry that I haven't responded earlier. I was on vacation 
last week. I found a solution that requires fewer CodeGenerator changes. I found 
several issues (also regarding Java expression parsing). I will open a PR 
shortly. We can test it with the test cases of this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6786) Remove duplicate QueryScopeInfoTest

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054128#comment-16054128
 ] 

ASF GitHub Bot commented on FLINK-6786:
---

Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/4034#discussion_r122725538
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/dump/QueryScopeInfoTest.java
 ---
@@ -20,58 +20,140 @@
 
 import org.junit.Test;
 
-import static 
org.apache.flink.runtime.metrics.dump.QueryScopeInfo.INFO_CATEGORY_JM;
--- End diff --

I would leave those static imports for readability
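
For context, a tiny illustration of the readability argument; the {{info}} 
variable and the assertion shape are illustrative, not copied from the test:

{code}
import static org.apache.flink.runtime.metrics.dump.QueryScopeInfo.INFO_CATEGORY_JM;
import static org.junit.Assert.assertEquals;

// with the static import:
assertEquals(INFO_CATEGORY_JM, info.getCategory());
// without it, every assertion repeats the class name:
assertEquals(QueryScopeInfo.INFO_CATEGORY_JM, info.getCategory());
{code}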


> Remove duplicate QueryScopeInfoTest
> ---
>
> Key: FLINK-6786
> URL: https://issues.apache.org/jira/browse/FLINK-6786
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics, Tests
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> The {{QueryScopeInfoTest}} exists twice in {{runtime/metrics}}, under 
> {{groups/}} and {{dump/}}.
> These should be merged together.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4135: Flink 6602 -- bug fix

2017-06-19 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4135
  
thanks for the update @lmalds. I'll add two tests and will merge this


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6941) Selecting window start / end on over window causes field not resolve exception

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054126#comment-16054126
 ] 

ASF GitHub Bot commented on FLINK-6941:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4137
  
will merge this


> Selecting window start / end on over window causes field not resolve exception
> --
>
> Key: FLINK-6941
> URL: https://issues.apache.org/jira/browse/FLINK-6941
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Selecting window start / end on an over window causes a field resolution 
> exception.
> The following program:
> {code}
> table
>   .window(
> Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
>   .select('c, countFun('b) over 'w, 'w.start, 'w.end)
> {code}
> causes
> {code}
> org.apache.flink.table.api.ValidationException: Cannot resolve [w] given 
> input [a, b, c, proctime].
>   at 
> org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:86)
>   at 
> org.apache.flink.table.plan.logical.LogicalNode$$anonfun$validate$1.applyOrElse(LogicalNode.scala:83)
>   at 
> org.apache.flink.table.plan.TreeNode.postOrderTransform(TreeNode.scala:72)
>   at 
> org.apache.flink.table.plan.TreeNode$$anonfun$1.apply(TreeNode.scala:46)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> {code}
> We should validate that the start and end window properties are not accessed 
> on over windows.
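
As with FLINK-6930 above, the query becomes valid once the window properties are 
dropped, since an over window defines no start/end to select. A hedged sketch in 
the Java string-expression syntax against the same schema (the custom {{countFun}} 
is replaced by the built-in count aggregate):

{code}
Table result = table
    .window(Over.partitionBy("c").orderBy("proctime").preceding("unbounded_row").as("w"))
    // w.start / w.end omitted: over windows have no such properties
    .select("c, b.count over w");
{code}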



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4137: [FLINK-6941][table]Add validate that the start and end wi...

2017-06-19 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4137
  
will merge this


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4034: [FLINK-6786] [metrics] Deduplicate QueryScopeIntoT...

2017-06-19 Thread pnowojski
Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/4034#discussion_r122725538
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/dump/QueryScopeInfoTest.java
 ---
@@ -20,58 +20,140 @@
 
 import org.junit.Test;
 
-import static 
org.apache.flink.runtime.metrics.dump.QueryScopeInfo.INFO_CATEGORY_JM;
--- End diff --

I would leave those static imports for readability


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054103#comment-16054103
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
Hi @twalthr I have tested this case using the first approach of FLINK-6896, 
without the `as` clause. It works well. And with the `as` clause, it can be fixed 
by adding a case match in 
`StreamTableEnvironment#validateAndExtractTimeAttributes`. The code is as follows:
`case (Alias(child, name, _), _) => fieldNames = name :: fieldNames`. What do 
you think?


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Debugging shows that the 
> ArrayIndexOutOfBoundsException is caused by the input type being a POJO type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new 
> BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink<Row> windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) 

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
Hi @twalthr I have tested this case using the first approach of FLINK-6896, 
without the `as` clause. It works well. And with the `as` clause, it can be fixed 
by adding a case match in 
`StreamTableEnvironment#validateAndExtractTimeAttributes`. The code is as follows:
`case (Alias(child, name, _), _) => fieldNames = name :: fieldNames`. What do 
you think?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6859) StateCleaningCountTrigger should not delete timer

2017-06-19 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054078#comment-16054078
 ] 

Fabian Hueske commented on FLINK-6859:
--

I'll merge it to 1.3

> StateCleaningCountTrigger should not delete timer
> -
>
> Key: FLINK-6859
> URL: https://issues.apache.org/jira/browse/FLINK-6859
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: sunjincheng
> Fix For: 1.3.1, 1.4.0
>
>
> The {{StateCleaningCountTrigger}}, which is used to clean up inactive state, 
> should not delete timers, i.e., not call {{deleteProcessingTimeTimer()}}.
> This is an expensive operation.
> We should rather fire the timer and check if we need to clean the state or 
> not.
> What do you think [~sunjincheng121]?
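
To make the fire-and-check idea concrete, here is a hedged sketch of a 
processing-time trigger that never deletes its timers. This is not the actual 
{{StateCleaningCountTrigger}}; in particular, the single {{lastSeen}} field is 
illustrative only, and a real trigger would keep that value in partitioned state:

{code}
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

public class FireAndCheckTrigger extends Trigger<Object, GlobalWindow> {

    private final long retentionMillis;
    private long lastSeen; // illustrative; real code would use partitioned state

    public FireAndCheckTrigger(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, GlobalWindow window, TriggerContext ctx) throws Exception {
        lastSeen = ctx.getCurrentProcessingTime();
        // register a fresh cleanup timer, but never delete the old one
        ctx.registerProcessingTimeTimer(lastSeen + retentionMillis);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) throws Exception {
        // fire-and-check: purge only if nothing arrived since the timer was set;
        // a stale timer simply falls through as CONTINUE instead of being deleted early
        return time >= lastSeen + retentionMillis ? TriggerResult.PURGE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) throws Exception {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(GlobalWindow window, TriggerContext ctx) throws Exception {
    }
}
{code}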



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-5892) Recover job state at the granularity of operator

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054070#comment-16054070
 ] 

ASF GitHub Bot commented on FLINK-5892:
---

Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/3844#discussion_r122718852
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/KeyedJob.java
 ---
@@ -98,10 +98,9 @@ public static void main(String[] args) throws Exception {
.map(new StatefulStringStoringMap(mode, "first"))
.setParallelism(4);
 
-   // TODO: re-enable this when generating the actual 1.2 savepoint
-   //if (mode == ExecutionMode.MIGRATE || mode == 
ExecutionMode.RESTORE) {
-   map.uid("first");
-   //}
+   if (mode == ExecutionMode.MIGRATE || mode == 
ExecutionMode.RESTORE) {
--- End diff --

I somehow don't like that the commit message does not explain what 
has actually changed or why this change was required at all. Especially since you 
have not changed anything else in the code, it is difficult to understand. 
If nothing else has changed, why do we need this `if (...)`? If something has 
changed, shouldn't it be covered by some test?


> Recover job state at the granularity of operator
> 
>
> Key: FLINK-5892
> URL: https://issues.apache.org/jira/browse/FLINK-5892
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Reporter: Guowei Ma
>Assignee: Guowei Ma
> Fix For: 1.3.0
>
>
> JobGraph has no `Operator` info, so `ExecutionGraph` can only recover at the 
> granularity of a task.
> As a result, an operator of the job may not recover its 
> state from a savepoint even if the savepoint contains the state of that operator. 
>  
> https://docs.google.com/document/d/19suTRF0nh7pRgeMnIEIdJ2Fq-CcNVt5_hR3cAoICf7Q/edit#.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #3844: [FLINK-5892] Enable 1.2 keyed state test

2017-06-19 Thread pnowojski
Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/3844#discussion_r122718852
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/KeyedJob.java
 ---
@@ -98,10 +98,9 @@ public static void main(String[] args) throws Exception {
.map(new StatefulStringStoringMap(mode, "first"))
.setParallelism(4);
 
-   // TODO: re-enable this when generating the actual 1.2 savepoint
-   //if (mode == ExecutionMode.MIGRATE || mode == 
ExecutionMode.RESTORE) {
-   map.uid("first");
-   //}
+   if (mode == ExecutionMode.MIGRATE || mode == 
ExecutionMode.RESTORE) {
--- End diff --

I somehow don't like that the commit message does not explain what 
has actually changed or why this change was required at all. Especially since you 
have not changed anything else in the code, it is difficult to understand. 
If nothing else has changed, why do we need this `if (...)`? If something has 
changed, shouldn't it be covered by some test?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3712: [FLINK-6281] Create TableSink for JDBC.

2017-06-19 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3712
  
Thanks @zentol! 
I'll have a look at it as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6281) Create TableSink for JDBC

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054041#comment-16054041
 ] 

ASF GitHub Bot commented on FLINK-6281:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3712
  
Thanks @zentol! 
I'll have a look at it as well.


> Create TableSink for JDBC
> -
>
> Key: FLINK-6281
> URL: https://issues.apache.org/jira/browse/FLINK-6281
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> It would be nice to integrate the table APIs with the JDBC connectors so that 
> the rows in the tables can be directly pushed into JDBC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6888) Can not determine TypeInformation of ACC type of AggregateFunction when ACC is a Scala case/tuple class

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054032#comment-16054032
 ] 

ASF GitHub Bot commented on FLINK-6888:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r122712259
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/BatchTableEnvironment.scala
 ---
@@ -192,10 +193,14 @@ class BatchTableEnvironment(
   name: String,
   f: AggregateFunction[T, ACC])
   : Unit = {
-implicit val typeInfo: TypeInformation[T] = TypeExtractor
-  .createTypeInfo(f, classOf[AggregateFunction[T, ACC]], f.getClass, 0)
+implicit val typeInfo: TypeInformation[T] = UserDefinedFunctionUtils
+  .getResultTypeOfAggregateFunction(f)
   .asInstanceOf[TypeInformation[T]]
 
+implicit val accTypeInfo: TypeInformation[ACC] = 
UserDefinedFunctionUtils
--- End diff --

UDAGGs are sometimes also shared in libraries. If a UDAGG class is included 
in a JAR file and loaded into the classpath, a user can hardly tell whether it 
is implemented in Java or Scala. So, I would not prohibit Scala UDFs in Java 
environments. However, functions which are intended to be shared should be 
implemented with Java types.
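
A hedged sketch of that recommendation: the same kind of aggregate as in the 
issue below, but with a plain Java POJO accumulator (public fields plus a no-arg 
constructor), so the accumulator type can be extracted in both Java and Scala 
environments. Class and field names are illustrative:

{code}
import org.apache.flink.table.functions.AggregateFunction;

public class MyJavaAgg extends AggregateFunction<Long, MyJavaAgg.SumCountAcc> {

    // plain POJO accumulator: public fields and an implicit no-arg constructor
    public static class SumCountAcc {
        public long sum;
        public long count;
    }

    // accumulate method, found by convention
    public void accumulate(SumCountAcc acc, Long value) {
        acc.sum += value;
        acc.count += 1;
    }

    @Override
    public SumCountAcc createAccumulator() {
        return new SumCountAcc();
    }

    @Override
    public Long getValue(SumCountAcc acc) {
        return acc.sum;
    }
}
{code}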


> Can not determine TypeInformation of ACC type of AggregateFunction when ACC 
> is a Scala case/tuple class
> ---
>
> Key: FLINK-6888
> URL: https://issues.apache.org/jira/browse/FLINK-6888
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently the {{ACC}} TypeInformation of 
> {{org.apache.flink.table.functions.AggregateFunction[T, ACC]}} is extracted 
> using {{TypeInformation.of(Class)}}. When {{ACC}} is a Scala case class or 
> tuple class, the TypeInformation will fall back to {{GenericType}}, which 
> results in bad performance for state de/serialization. 
> I suggest extracting the ACC TypeInformation when 
> {{TableEnvironment.registerFunction()}} is called.
> Here is an example:
> {code}
> case class Accumulator(sum: Long, count: Long)
> class MyAgg extends AggregateFunction[Long, Accumulator] {
>   //Overloaded accumulate method
>   def accumulate(acc: Accumulator, value: Long): Unit = {
>   }
>   override def createAccumulator(): Accumulator = Accumulator(0, 0)
>   override def getValue(accumulator: Accumulator): Long = 1
> }
> {code}
> The {{Accumulator}} will be recognized as {{GenericType}}.
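
A quick way to observe the fallback described above, assuming the compiled Scala 
{{Accumulator}} class is on the classpath (the check itself is illustrative):

{code}
import org.apache.flink.api.common.typeinfo.TypeInformation;

TypeInformation<Accumulator> accType = TypeInformation.of(Accumulator.class);
// prints GenericTypeInfo: the case class was not analyzed as a composite type
System.out.println(accType.getClass().getSimpleName());
{code}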



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4105: [FLINK-6888] [table] Can not determine TypeInforma...

2017-06-19 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r122712259
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/BatchTableEnvironment.scala
 ---
@@ -192,10 +193,14 @@ class BatchTableEnvironment(
   name: String,
   f: AggregateFunction[T, ACC])
   : Unit = {
-implicit val typeInfo: TypeInformation[T] = TypeExtractor
-  .createTypeInfo(f, classOf[AggregateFunction[T, ACC]], f.getClass, 0)
+implicit val typeInfo: TypeInformation[T] = UserDefinedFunctionUtils
+  .getResultTypeOfAggregateFunction(f)
   .asInstanceOf[TypeInformation[T]]
 
+implicit val accTypeInfo: TypeInformation[ACC] = 
UserDefinedFunctionUtils
--- End diff --

UDAGGs are sometimes also shared in libraries. If a UDAGG class is included 
in a JAR file and loaded into the classpath, a user can hardly tell whether it 
is implemented in Java or Scala. So, I would not prohibit Scala UDFs in Java 
environments. However, functions which are intended to be shared should be 
implemented with Java types.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6943) Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054025#comment-16054025
 ] 

ASF GitHub Bot commented on FLINK-6943:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122711527
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int 
index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java 
Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given 
class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a 
SAM from
+* @throws InvalidTypesException if the given class does not implement 
FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+   if (!baseClass.isInterface()) {
+   throw new InvalidTypesException("FunctionalInterface 
must be an interface");
--- End diff --

My main concern is consistency with other exception messages; they all 
include the base class.

I would suggest "... is not a FunctionalInterface."; I don't think we need 
the "which implies..." bit.


> Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod
> -
>
> Key: FLINK-6943
> URL: https://issues.apache.org/jira/browse/FLINK-6943
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>
> Exception message seems to be inexact. 
> Also if there is no SAM, sam would be null upon returning from the method.
> The suggestion from a review was to change the message and add a check (for 
> null sam) prior to returning.
> Another suggestion is to check if the given class is an interface, as only 
> for an interface is it possible to pass a lambda.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6888) Can not determine TypeInformation of ACC type of AggregateFunction when ACC is a Scala case/tuple class

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054023#comment-16054023
 ] 

ASF GitHub Bot commented on FLINK-6888:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r122711347
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/TableEnvironment.scala
 ---
@@ -358,20 +358,27 @@ abstract class TableEnvironment(val config: 
TableConfig) {
 * Registers an [[AggregateFunction]] under a unique name. Replaces 
already existing
 * user-defined functions under this name.
 */
-  private[flink] def registerAggregateFunctionInternal[T: TypeInformation, 
ACC](
+  private[flink] def registerAggregateFunctionInternal[T: TypeInformation, 
ACC: TypeInformation](
   name: String, function: AggregateFunction[T, ACC]): Unit = {
 // check if class not Scala object
 checkNotSingleton(function.getClass)
 // check if class could be instantiated
 checkForInstantiation(function.getClass)
 
-val typeInfo: TypeInformation[_] = implicitly[TypeInformation[T]]
+val resultTypeInfo: TypeInformation[_] = implicitly[TypeInformation[T]]
+val accTypeInfo: TypeInformation[_] = implicitly[TypeInformation[ACC]]
 
 // register in Table API
 functionCatalog.registerFunction(name, function.getClass)
 
 // register in SQL API
-val sqlFunctions = createAggregateSqlFunction(name, function, 
typeInfo, typeFactory)
+val sqlFunctions = createAggregateSqlFunction(
--- End diff --

You are right. We use the same approach for the return type `T` of all 
UDFs. However, the return type is part of the function signature and needs to 
be known to Calcite for semantic validation. The `ACC` type is only needed for 
compilation and is an engine-specific property.  

TBH, I'm not sure which approach is better.


> Can not determine TypeInformation of ACC type of AggregateFunction when ACC 
> is a Scala case/tuple class
> ---
>
> Key: FLINK-6888
> URL: https://issues.apache.org/jira/browse/FLINK-6888
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently the {{ACC}} TypeInformation of 
> {{org.apache.flink.table.functions.AggregateFunction[T, ACC]}} is extracted 
> using {{TypeInformation.of(Class)}}. When {{ACC}} is a Scala case class or 
> tuple class, the TypeInformation will fall back to {{GenericType}}, which 
> results in bad performance for state de/serialization. 
> I suggest extracting the ACC TypeInformation when 
> {{TableEnvironment.registerFunction()}} is called.
> Here is an example:
> {code}
> case class Accumulator(sum: Long, count: Long)
> class MyAgg extends AggregateFunction[Long, Accumulator] {
>   //Overloaded accumulate method
>   def accumulate(acc: Accumulator, value: Long): Unit = {
>   }
>   override def createAccumulator(): Accumulator = Accumulator(0, 0)
>   override def getValue(accumulator: Accumulator): Long = 1
> }
> {code}
> The {{Accumulator}} will be recognized as {{GenericType}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4140: [FLINK-6943] Improve exceptions within TypeExtract...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122711527
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int 
index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java 
Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given 
class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a 
SAM from
+* @throws InvalidTypesException if the given class does not implement 
FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+   if (!baseClass.isInterface()) {
+   throw new InvalidTypesException("FunctionalInterface 
must be an interface");
--- End diff --

My main concern is consistency with other exception messages; they all 
include the base class.

I would suggest "... is not a FunctionalInterface."; I don't think we need 
the "which implies..." bit.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4105: [FLINK-6888] [table] Can not determine TypeInforma...

2017-06-19 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r122711347
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/TableEnvironment.scala
 ---
@@ -358,20 +358,27 @@ abstract class TableEnvironment(val config: TableConfig) {
 * Registers an [[AggregateFunction]] under a unique name. Replaces already existing
 * user-defined functions under this name.
 */
-  private[flink] def registerAggregateFunctionInternal[T: TypeInformation, ACC](
+  private[flink] def registerAggregateFunctionInternal[T: TypeInformation, ACC: TypeInformation](
   name: String, function: AggregateFunction[T, ACC]): Unit = {
 // check if class not Scala object
 checkNotSingleton(function.getClass)
 // check if class could be instantiated
 checkForInstantiation(function.getClass)
 
-val typeInfo: TypeInformation[_] = implicitly[TypeInformation[T]]
+val resultTypeInfo: TypeInformation[_] = implicitly[TypeInformation[T]]
+val accTypeInfo: TypeInformation[_] = implicitly[TypeInformation[ACC]]
 
 // register in Table API
 functionCatalog.registerFunction(name, function.getClass)
 
 // register in SQL API
-val sqlFunctions = createAggregateSqlFunction(name, function, typeInfo, typeFactory)
+val sqlFunctions = createAggregateSqlFunction(
--- End diff --

You are right. We use the same approach for the return type `T` of all 
UDFs. However, the return type is part of the function signature and needs to 
be known to Calcite for semantic validation. The `ACC` type is only needed for 
compilation and is an engine-specific property.  

TBH, I'm not sure which approach is better.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4140: [FLINK-6943] Improve exceptions within TypeExtract...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122707710
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a SAM from
+* @throws InvalidTypesException if the given class does not implement FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+	if (!baseClass.isInterface()) {
+		throw new InvalidTypesException("FunctionalInterface must be an interface");
--- End diff --

Shouldn't this say "Given class ... must be an interface."?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4140: [FLINK-6943] Improve exceptions within TypeExtract...

2017-06-19 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122708789
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a SAM from
+* @throws InvalidTypesException if the given class does not implement FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+	if (!baseClass.isInterface()) {
+		throw new InvalidTypesException("FunctionalInterface must be an interface");
--- End diff --

Yes and no, I would say. The given class must be a FunctionalInterface, which 
implies it has to be an interface. How about a message like: 
"Given class must be a FunctionalInterface which implies it has to be an 
interface."?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6943) Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054005#comment-16054005
 ] 

ASF GitHub Bot commented on FLINK-6943:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122708789
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a SAM from
+* @throws InvalidTypesException if the given class does not implement FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+	if (!baseClass.isInterface()) {
+		throw new InvalidTypesException("FunctionalInterface must be an interface");
--- End diff --

Yes and no, I would say. The given class must be a FunctionalInterface, which 
implies it has to be an interface. How about a message like: 
"Given class must be a FunctionalInterface which implies it has to be an 
interface."?


> Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod
> -
>
> Key: FLINK-6943
> URL: https://issues.apache.org/jira/browse/FLINK-6943
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>
> The exception message seems to be inexact. 
> Also, if there is no SAM, sam would be null upon returning from the method.
> The suggestion from a review was to change the message and add a check (for 
> a null sam) prior to returning.
> Another suggestion is to check whether the given class is an interface, as 
> only for an interface is it possible to pass a lambda.
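
To make the suggested checks concrete, here is a hedged Scala sketch (a hypothetical helper for illustration only; the real utility is the Java method discussed above, and the exact message wording was still under review):

{code}
import java.lang.reflect.{Method, Modifier}

object SamSketch {

  // Rejects non-interfaces up front (lambdas can only target interfaces) and
  // fails instead of returning null when there is not exactly one abstract method.
  def getSingleAbstractMethod(baseClass: Class[_]): Method = {
    if (!baseClass.isInterface) {
      throw new IllegalArgumentException(
        s"Given class ${baseClass.getName} is not a FunctionalInterface.")
    }
    val abstractMethods =
      baseClass.getMethods.filter(m => Modifier.isAbstract(m.getModifiers))
    if (abstractMethods.length != 1) {
      throw new IllegalArgumentException(
        s"Given class ${baseClass.getName} does not have exactly one abstract method.")
    }
    abstractMethods(0)
  }

  def main(args: Array[String]): Unit = {
    // prints: public abstract void java.lang.Runnable.run()
    println(getSingleAbstractMethod(classOf[Runnable]))
  }
}
{code}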



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6943) Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053993#comment-16053993
 ] 

ASF GitHub Bot commented on FLINK-6943:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4140#discussion_r122707710
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractionUtils.java
 ---
@@ -215,22 +215,33 @@ public static Type extractTypeArgument(Type t, int index) throws InvalidTypesExc
 * Extracts a Single Abstract Method (SAM) as defined in Java Specification (4.3.2. The Class Object,
 * 9.8 Functional Interfaces, 9.4.3 Interface Method Body) from given class.
 *
-* @param baseClass
-* @throws InvalidTypesException if the given class does not implement
-* @return
+* @param baseClass a class that is a FunctionalInterface to retrieve a SAM from
+* @throws InvalidTypesException if the given class does not implement FunctionalInterface
+* @return single abstract method of the given class
 */
public static Method getSingleAbstractMethod(Class<?> baseClass) {
+
+	if (!baseClass.isInterface()) {
+		throw new InvalidTypesException("FunctionalInterface must be an interface");
--- End diff --

Shouldn't this say "Given class ... must be an interface."?


> Improve exceptions within TypeExtractionUtils#getSingleAbstractMethod
> -
>
> Key: FLINK-6943
> URL: https://issues.apache.org/jira/browse/FLINK-6943
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>
> The exception message seems to be inexact. 
> Also, if there is no SAM, sam would be null upon returning from the method.
> The suggestion from a review was to change the message and add a check (for 
> a null sam) prior to returning.
> Another suggestion is to check whether the given class is an interface, as 
> only for an interface is it possible to pass a lambda.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6541) Jar upload directory not created

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053992#comment-16053992
 ] 

ASF GitHub Bot commented on FLINK-6541:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3894
  
merging.


> Jar upload directory not created
> 
>
> Key: FLINK-6541
> URL: https://issues.apache.org/jira/browse/FLINK-6541
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Andrey
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Steps to reproduce:
> * setup configuration property: jobmanager.web.tmpdir = /mnt/flink/web
> * this directory should not exist
> * Run flink job manager.
> * in logs: 
> {code}
> 2017-05-11 12:07:58,397 ERROR 
> org.apache.flink.runtime.webmonitor.WebMonitorUtils   - WebServer 
> could not be created [main]
> java.io.IOException: Jar upload directory 
> /mnt/flink/web/flink-web-3f2733c3-6f4c-4311-b617-1e93d9535421 cannot be 
> created or is not writable.
> {code}
> Expected:
> * create parent directories if they do not exist, i.e. use 
> "uploadDir.mkdirs()" instead of "uploadDir.mkdir()"
> Note:
> * BlobServer creates parent directories (see BlobUtils storageDir.mkdirs())
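
A minimal Scala sketch of the expected behavior (the helper name is hypothetical; only the mkdir()-to-mkdirs() switch comes from the issue):

{code}
import java.io.{File, IOException}

object UploadDirSketch {

  // mkdirs() also creates missing parent directories, mirroring what
  // BlobUtils does for the BlobServer storage directory.
  def ensureWritableDir(path: String): File = {
    val dir = new File(path)
    // mkdirs() returns false when the directory already exists, so check both.
    if (!(dir.mkdirs() || dir.isDirectory) || !dir.canWrite) {
      throw new IOException(
        s"Jar upload directory $path cannot be created or is not writable.")
    }
    dir
  }

  def main(args: Array[String]): Unit = {
    // Works even if the parent directories do not exist yet.
    println(ensureWritableDir("/tmp/flink-web-test/upload"))
  }
}
{code}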



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6461) Deprecate web-related configuration defaults in ConfigConstants

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053989#comment-16053989
 ] 

ASF GitHub Bot commented on FLINK-6461:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3951#discussion_r122706584
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -173,7 +173,7 @@ class JobManager(
* to run in the actor system of the associated job manager.
*/
   val webMonitorPort : Int = flinkConfiguration.getInteger(
-ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1)
+JobManagerOptions.WEB_PORT.key(), -1)
--- End diff --

I thought about doing that, but decided against it to preserve the existing 
behavior: if no port is configured at all, the current code uses a random port. 
Someone out there may be relying on this (although they should just explicitly 
set it to 0).


> Deprecate web-related configuration defaults in ConfigConstants
> ---
>
> Key: FLINK-6461
> URL: https://issues.apache.org/jira/browse/FLINK-6461
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3894: [FLINK-6541] Improve tmp dir setup in TM/WebMonitor

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3894
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6896) Creating a table from a POJO and use table sink to output fail

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053990#comment-16053990
 ] 

ASF GitHub Bot commented on FLINK-6896:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
@twalthr Thank you for paying attention to this PR. Feel free to leave your 
comments.


> Creating a table from a POJO and use table sink to output fail
> --
>
> Key: FLINK-6896
> URL: https://issues.apache.org/jira/browse/FLINK-6896
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Mark You
>Assignee: sunjincheng
> Attachments: debug.png
>
>
> The following example fails at the sink. Debugging suggests that the 
> ArrayIndexOutOfBoundsException is caused by the input type being a Pojo type, not Row.
> Sample:
> {code:title=TumblingWindow.java|borderStyle=solid}
> public class TumblingWindow {
> public static void main(String[] args) throws Exception {
> List<Content> data = new ArrayList<>();
> data.add(new Content(1L, "Hi"));
> data.add(new Content(2L, "Hallo"));
> data.add(new Content(3L, "Hello"));
> data.add(new Content(4L, "Hello"));
> data.add(new Content(7L, "Hello"));
> data.add(new Content(8L, "Hello world"));
> data.add(new Content(16L, "Hello world"));
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> DataStream<Content> stream = env.fromCollection(data);
> DataStream<Content> stream2 = stream.assignTimestampsAndWatermarks(
> new 
> BoundedOutOfOrdernessTimestampExtractor<Content>(Time.milliseconds(1)) {
> /**
>  * 
>  */
> private static final long serialVersionUID = 
> 410512296011057717L;
> @Override
> public long extractTimestamp(Content element) {
> return element.getRecordTime();
> }
> });
> final StreamTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env);
> Table table = tableEnv.fromDataStream(stream2, 
> "urlKey,uplink,downlink,httpGetMessageCount,httpPostMessageCount,statusCode,rowtime.rowtime");
> Table windowTable = 
> table.window(Tumble.over("1.hours").on("rowtime").as("w")).groupBy("w, 
> urlKey")
> 
> .select("w.start,urlKey,uplink.sum,downlink.sum,httpGetMessageCount.sum,httpPostMessageCount.sum
>  ");
> //table.printSchema();
> TableSink<Row> windowSink = new 
> CsvTableSink("/Users/mark/Documents/specific-website-code.csv", ",", 1,
> WriteMode.OVERWRITE);
> windowTable.writeToSink(windowSink);
> // tableEnv.toDataStream(windowTable, Row.class).print();
> env.execute();
> }
> public static class Content implements Serializable {
> /**
>  * 
>  */
> private static final long serialVersionUID = 1429246948772430441L;
> private String urlKey;
> private long recordTime;
> // private String recordTimeStr;
> private long httpGetMessageCount;
> private long httpPostMessageCount;
> private long uplink;
> private long downlink;
> private long statusCode;
> private long statusCodeCount;
> public Content() {
> super();
> }
> public Content(long recordTime, String urlKey) {
> super();
> this.recordTime = recordTime;
> this.urlKey = urlKey;
> }
> public String getUrlKey() {
> return urlKey;
> }
> public void setUrlKey(String urlKey) {
> this.urlKey = urlKey;
> }
> public long getRecordTime() {
> return recordTime;
> }
> public void setRecordTime(long recordTime) {
> this.recordTime = recordTime;
> }
> public long getHttpGetMessageCount() {
> return httpGetMessageCount;
> }
> public void setHttpGetMessageCount(long httpGetMessageCount) {
> this.httpGetMessageCount = httpGetMessageCount;
> }
> public long getHttpPostMessageCount() {
> return httpPostMessageCount;
> }
> public void setHttpPostMessageCount(long httpPostMessageCount) {
> this.httpPostMessageCount = httpPostMessageCount;
> }
> public long getUplink() {
> return uplink;
> }
> public void setUplink(long uplink) {
> this.uplink = uplink;
> }
> public 

[GitHub] flink issue #4111: [FLINK-6896][table] Fix generate PojoType input result ex...

2017-06-19 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4111
  
@twalthr Thank you for paying attention to this PR. Feel free to leave your 
comments.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3951: [FLINK-6461] Replace usages of deprecated web port...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3951#discussion_r122706584
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -173,7 +173,7 @@ class JobManager(
* to run in the actor system of the associated job manager.
*/
   val webMonitorPort : Int = flinkConfiguration.getInteger(
-ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1)
+JobManagerOptions.WEB_PORT.key(), -1)
--- End diff --

I thought about doing that, but decided against it to preserve the existing 
behavior: if no port is configured at all, the current code uses a random port. 
Someone out there may be relying on this (although they should just explicitly 
set it to 0).
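
For illustration, a small Scala sketch of the two lookup variants under discussion (assuming `JobManagerOptions.WEB_PORT` carries 8081 as its default, as noted above):

{code}
import org.apache.flink.configuration.{Configuration, JobManagerOptions}

object WebPortLookupSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration() // nothing configured

    // Key-plus-default lookup preserves the old behavior:
    // -1 signals "pick a random port" when nothing is set.
    val legacy = conf.getInteger(JobManagerOptions.WEB_PORT.key(), -1)
    println(legacy) // -1

    // ConfigOption lookup falls back to the option's own default instead.
    val viaOption = conf.getInteger(JobManagerOptions.WEB_PORT)
    println(viaOption) // 8081
  }
}
{code}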


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (FLINK-6859) StateCleaningCountTrigger should not delete timer

2017-06-19 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-6859:
-

It appears I still didn't merge it.

> StateCleaningCountTrigger should not delete timer
> -
>
> Key: FLINK-6859
> URL: https://issues.apache.org/jira/browse/FLINK-6859
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: sunjincheng
> Fix For: 1.3.1, 1.4.0
>
>
> The {{StateCleaningCountTrigger}}, which is used to clean up inactive state, 
> should not delete timers, i.e., not call {{deleteProcessingTimeTimer()}}, 
> because this is an expensive operation.
> We should rather let the timer fire and check then whether we need to clean 
> the state or not.
> What do you think [~sunjincheng121]?
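
A hedged Scala sketch of the "let the timer fire and check" idea (a hypothetical trigger, not the actual {{StateCleaningCountTrigger}}; the class and state names are made up for the example):

{code}
import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

// Arms a clean-up timer per activity but never deletes old timers; a stale
// timer is simply ignored when it fires.
class LazyCleanupTrigger(retentionTime: Long) extends Trigger[Any, GlobalWindow] {

  private val lastActivityDesc =
    new ValueStateDescriptor[java.lang.Long]("lastActivity", classOf[java.lang.Long])

  override def onElement(
      element: Any, timestamp: Long, window: GlobalWindow,
      ctx: Trigger.TriggerContext): TriggerResult = {
    val now = ctx.getCurrentProcessingTime
    ctx.getPartitionedState(lastActivityDesc).update(now)
    ctx.registerProcessingTimeTimer(now + retentionTime)
    TriggerResult.FIRE
  }

  override def onProcessingTime(
      time: Long, window: GlobalWindow, ctx: Trigger.TriggerContext): TriggerResult = {
    val lastActivity = ctx.getPartitionedState(lastActivityDesc).value()
    if (lastActivity != null && time >= lastActivity + retentionTime) {
      TriggerResult.FIRE_AND_PURGE // state really is inactive: clean it up
    } else {
      TriggerResult.CONTINUE // newer activity exists: the timer is stale, ignore it
    }
  }

  override def onEventTime(
      time: Long, window: GlobalWindow, ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  override def clear(window: GlobalWindow, ctx: Trigger.TriggerContext): Unit =
    ctx.getPartitionedState(lastActivityDesc).clear()
}
{code}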



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-5178) allow BlobCache to use a distributed file system irrespective of the HA mode

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053974#comment-16053974
 ] 

ASF GitHub Bot commented on FLINK-5178:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3085
  
I take it this PR will be subsumed in FLIP-19?


> allow BlobCache to use a distributed file system irrespective of the HA mode
> 
>
> Key: FLINK-5178
> URL: https://issues.apache.org/jira/browse/FLINK-5178
> Project: Flink
>  Issue Type: Improvement
>  Components: Network
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> After FLINK-5129, high availability (HA) mode adds the ability for the 
> BlobCache instances at the task managers to download blobs directly from the 
> distributed file system. It would be nice if this also worked in non-HA mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6008) collection of BlobServer improvements

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053975#comment-16053975
 ] 

ASF GitHub Bot commented on FLINK-6008:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3512
  
I take it this PR will be subsumed in FLIP-19?


> collection of BlobServer improvements
> -
>
> Key: FLINK-6008
> URL: https://issues.apache.org/jira/browse/FLINK-6008
> Project: Flink
>  Issue Type: Improvement
>  Components: Network
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> The following things should be improved around the BlobServer/BlobCache:
> * update config options with non-deprecated ones, e.g. 
> {{high-availability.cluster-id}} and {{high-availability.storageDir}}
> * promote {{BlobStore#deleteAll(JobID)}} to the {{BlobService}}
> * extend the {{BlobService}} to work with {{NAME_ADDRESSABLE}} blobs 
> (prepares FLINK-4399)
> * remove {{NAME_ADDRESSABLE}} blobs after job/task termination
> * do not fail the {{BlobServer}} when a delete operation fails
> * code style, like using {{Preconditions.checkArgument}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3512: [FLINK-6008] collection of BlobServer improvements

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3512
  
I take it this PR will be subsumed in FLIP-19?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3085: [FLINK-5178] allow BlobCache to use a distributed file sy...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3085
  
I take it this PR will be subsumed in FLIP-19?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053967#comment-16053967
 ] 

ASF GitHub Bot commented on FLINK-6682:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122686915
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
+	if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
+		throw new IllegalArgumentException("JobVertex: " + executionJobVertex.getJobVertex() +
--- End diff --

We don't usually refer to JobVertices in exceptions; a more user-facing 
"equivalent" is tasks.

We should also throw an IllegalStateException to be consistent with 
existing exceptions in this area.

Finally, let's reword this a bit:

"The state for task (<task name>) cannot be restored. The maximum 
parallelism (<max parallelism>) of the restored state is lower than the 
configured parallelism (<configured parallelism>). Please reduce the 
parallelism of the task to be lower or equal to the maximum parallelism."
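
A minimal Scala sketch of that precondition (a hypothetical helper, shown only to make the suggested check and message concrete):

{code}
import org.apache.flink.runtime.checkpoint.OperatorState
import org.apache.flink.runtime.executiongraph.ExecutionJobVertex

object ParallelismPreconditionSketch {

  // Compares the restored state's max parallelism with the configured
  // parallelism and fails with an IllegalStateException and a readable message.
  def checkParallelismPrecondition(
      operatorState: OperatorState,
      executionJobVertex: ExecutionJobVertex): Unit = {
    if (operatorState.getMaxParallelism < executionJobVertex.getParallelism) {
      throw new IllegalStateException(
        s"The state for task (${executionJobVertex.getJobVertex.getName}) cannot be restored. " +
          s"The maximum parallelism (${operatorState.getMaxParallelism}) of the restored state " +
          s"is lower than the configured parallelism (${executionJobVertex.getParallelism}). " +
          "Please reduce the parallelism of the task to be lower or equal to the maximum parallelism.")
    }
  }
}
{code}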


> Improve error message in case parallelism exceeds maxParallelism
> 
>
> Key: FLINK-6682
> URL: https://issues.apache.org/jira/browse/FLINK-6682
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> When restoring a job with a parallelism that exceeds the maxParallelism we're 
> not providing a useful error message, as all you get is an 
> IllegalArgumentException:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed
> at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
> at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException
> at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4125: [FLINK-6682] [checkpoints] Improve error message i...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122684131
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
--- End diff --

remove the comment.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053968#comment-16053968
 ] 

ASF GitHub Bot commented on FLINK-6682:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122685535
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
+	if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
--- End diff --

You have to compare the `maxParallelism` of the `operatorStates` with the 
one from the `executionJobVertex`. This is probably easier by moving this check 
into the `checkParallelismCondition(OperatorState, ExecutionJobVertex)` method.


> Improve error message in case parallelism exceeds maxParallelism
> 
>
> Key: FLINK-6682
> URL: https://issues.apache.org/jira/browse/FLINK-6682
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> When restoring a job with a parallelism that exceeds the maxParallelism we're 
> not providing a useful error message, as all you get is an 
> IllegalArgumentException:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed
> at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
> at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException
> at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4125: [FLINK-6682] [checkpoints] Improve error message i...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122685535
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
+	if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
--- End diff --

You have to compare the `maxParallelism` of the `operatorStates` with the 
one from the `executionJobVertex`. This is probably easier by moving this check 
into the `checkParallelismCondition(OperatorState, ExecutionJobVertex)` method.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6682) Improve error message in case parallelism exceeds maxParallelism

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053966#comment-16053966
 ] 

ASF GitHub Bot commented on FLINK-6682:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122684131
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
--- End diff --

remove the comment.


> Improve error message in case parallelism exceeds maxParallelism
> 
>
> Key: FLINK-6682
> URL: https://issues.apache.org/jira/browse/FLINK-6682
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> When restoring a job with a parallelism that exceeds the maxParallelism we're 
> not providing a useful error message, as all you get is an 
> IllegalArgumentException:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed
> at 
> org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:343)
> at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:396)
> at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:467)
> ... 22 more
> Caused by: java.lang.IllegalArgumentException
> at 
> org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.createKeyGroupPartitions(StateAssignmentOperation.java:449)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:117)
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:102)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedState(CheckpointCoordinator.java:1038)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1101)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1386)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1372)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4125: [FLINK-6682] [checkpoints] Improve error message i...

2017-06-19 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4125#discussion_r122686915
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 ---
@@ -225,7 +225,16 @@ private void assignAttemptState(ExecutionJobVertex executionJobVertex, List<OperatorState> operatorStates, ExecutionJobVertex executionJobVertex) {
-
+	// parallelism compare preconditions
+
+	// if the max parallelism is lower than parallelism, we will throw an exception.
+	if (executionJobVertex.getMaxParallelism() < executionJobVertex.getParallelism()) {
+		throw new IllegalArgumentException("JobVertex: " + executionJobVertex.getJobVertex() +
--- End diff --

We don't usually refer to JobVertices in exceptions; a more user-facing 
"equivalent" is tasks.

We should also throw an IllegalStateException to be consistent with 
existing exceptions in this area.

Finally, let's reword this a bit:

"The state for task (<task name>) cannot be restored. The maximum 
parallelism (<max parallelism>) of the restored state is lower than the 
configured parallelism (<configured parallelism>). Please reduce the 
parallelism of the task to be lower or equal to the maximum parallelism."


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4024: [FLINK-6782][docs] update snapshot documentation to refle...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4024
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6782) Update savepoint documentation

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053963#comment-16053963
 ] 

ASF GitHub Bot commented on FLINK-6782:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4024
  
merging.


> Update savepoint documentation
> --
>
> Key: FLINK-6782
> URL: https://issues.apache.org/jira/browse/FLINK-6782
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
> Fix For: 1.3.0
>
>
> The savepoint documentation is a bit outdated regarding the full data being 
> stored in the savepoint path, not just a metadata file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6461) Deprecate web-related configuration defaults in ConfigConstants

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053962#comment-16053962
 ] 

ASF GitHub Bot commented on FLINK-6461:
---

Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/3951#discussion_r122701964
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -173,7 +173,7 @@ class JobManager(
* to run in the actor system of the associated job manager.
*/
   val webMonitorPort : Int = flinkConfiguration.getInteger(
-ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1)
+JobManagerOptions.WEB_PORT.key(), -1)
--- End diff --

I am not sure, but why don't you use 
`flinkConfiguration.getInteger(JobManagerOptions.WEB_PORT)`? It would have 
different semantics (it would return `8081` instead of `-1` in case of missing 
value).


> Deprecate web-related configuration defaults in ConfigConstants
> ---
>
> Key: FLINK-6461
> URL: https://issues.apache.org/jira/browse/FLINK-6461
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #3951: [FLINK-6461] Replace usages of deprecated web port...

2017-06-19 Thread pnowojski
Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/3951#discussion_r122701964
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -173,7 +173,7 @@ class JobManager(
* to run in the actor system of the associated job manager.
*/
   val webMonitorPort : Int = flinkConfiguration.getInteger(
-ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, -1)
+JobManagerOptions.WEB_PORT.key(), -1)
--- End diff --

I am not sure, but why don't you use 
`flinkConfiguration.getInteger(JobManagerOptions.WEB_PORT)`? It would have 
different semantics (it would return `8081` instead of `-1` in case of missing 
value).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6788) Remove unused GenericFlatTypePostPass/AbstractSchema class

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053959#comment-16053959
 ] 

ASF GitHub Bot commented on FLINK-6788:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4118
  
merging.


> Remove unused GenericFlatTypePostPass/AbstractSchema class
> --
>
> Key: FLINK-6788
> URL: https://issues.apache.org/jira/browse/FLINK-6788
> Project: Flink
>  Issue Type: Improvement
>  Components: Optimizer
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Miao Wang
>Priority: Trivial
>
> The {{AbstractSchema}} and {{GenericFlatTypePostPass}} classes in 
> {{org.apache.flink.optimizer.postpass}} are unused and could maybe be removed.
> [~fhueske] your thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4118: [FLINK-6788]:Remove unused GenericFlatTypePostPass/Abstra...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4118
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6784) Add some notes about externalized checkpoints and the difference to savepoints

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053957#comment-16053957
 ] 

ASF GitHub Bot commented on FLINK-6784:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4033
  
merging.


> Add some notes about externalized checkpoints and the difference to savepoints
> --
>
> Key: FLINK-6784
> URL: https://issues.apache.org/jira/browse/FLINK-6784
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
> Fix For: 1.3.0
>
>
> While externalized checkpoints are described to some extent, there does not 
> seem to be any paragraph explaining the difference to savepoints. Also, there 
> are two checkpointing docs which could at least be linked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4033: [FLINK-6784][docs] update externalized checkpoints docume...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4033
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6798) Remove/update documentation about network buffer tuning

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053954#comment-16053954
 ] 

ASF GitHub Bot commented on FLINK-6798:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4080
  
merging.


> Remove/update documentation about network buffer tuning
> ---
>
> Key: FLINK-6798
> URL: https://issues.apache.org/jira/browse/FLINK-6798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Network
>Reporter: Robert Metzger
>Assignee: Nico Kruber
> Fix For: 1.3.0
>
>
> {quote}The number of network buffers is a parameter that can currently have 
> an effect on checkpointing at large scale. The Flink community is working on 
> eliminating that parameter in the next versions of Flink.
> {quote} 
> in 
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/large_state_tuning.html#tuning-network-buffers



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4080: [FLINK-6798][docs] update old network buffer notices

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4080
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053952#comment-16053952
 ] 

ASF GitHub Bot commented on FLINK-6868:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4087
  
merging.


> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4087: [FLINK-6868][build] Using `scala.binary.version` for `fli...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4087
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6863) Fully separate streaming examples

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053949#comment-16053949
 ] 

ASF GitHub Bot commented on FLINK-6863:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4115
  
merging.


> Fully separate streaming examples
> ---
>
> Key: FLINK-6863
> URL: https://issues.apache.org/jira/browse/FLINK-6863
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> The streaming examples module currently depends on the batch examples, 
> specifically they re-use {{WordCountData}} for {{PojoExample}}, 
> {{SideOutputExample}}, {{WordCount}} and {{WindowWordCount}}.
> I propose simply copying the example data to make the module more 
> self-contained.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4115: [FLINK-6863] Remove batchdependency from streaming-exampl...

2017-06-19 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4115
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6769) Replace usage of deprecated FileSystem#create(Path, boolean)

2017-06-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053947#comment-16053947
 ] 

ASF GitHub Bot commented on FLINK-6769:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4116
  
merging.


> Replace usage of deprecated FileSystem#create(Path, boolean)
> 
>
> Key: FLINK-6769
> URL: https://issues.apache.org/jira/browse/FLINK-6769
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>
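> For context, the deprecated boolean overload maps onto {{FileSystem.WriteMode}} 
> roughly as in this Scala sketch (assuming the 1.4 {{FileSystem}} API; the path 
> is illustrative):
> {code}
> import org.apache.flink.core.fs.{FileSystem, Path}
> 
> object CreateWriteModeSketch {
>   def main(args: Array[String]): Unit = {
>     val path = new Path("file:///tmp/flink-6769-sketch.txt")
>     val fs = path.getFileSystem
> 
>     // deprecated: fs.create(path, true)
>     val out = fs.create(path, FileSystem.WriteMode.OVERWRITE)
>     out.write("hello".getBytes("UTF-8"))
>     out.close()
>   }
> }
> {code}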




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

