[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820447#comment-15820447
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95740581
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Can you use `flinkConfig.jobServer + "jars/upload"` here? Otherwise, looks 
good to me.


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g., the DC/OS UI, relative links 
> are preferred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3110: [FLINK-2184] Cannot get last element with maxBy/mi...

2017-01-12 Thread gallenvara
GitHub user gallenvara opened a pull request:

https://github.com/apache/flink/pull/3110

[FLINK-2184] Cannot get last element with maxBy/minBy.

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [X] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [X] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

Extend minBy/maxBy with a first parameter. @fhueske sorry for the late 
update on this PR, can you help with the review?
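
A hedged usage sketch of the extended API in Scala (the exact signature and 
the `first` parameter name are assumptions based on the PR title, not 
confirmed in this thread):

```scala
import org.apache.flink.streaming.api.scala._

object MaxByExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream = env.fromElements((1, "first"), (1, "last"), (2, "only"))

    // Assumed extension: a boolean flag chooses whether the first or the
    // last element carrying the extreme value is emitted on ties.
    stream
      .keyBy(0)
      .maxBy(0, first = false) // emit the last element with the max value
      .print()

    env.execute("maxBy example")
  }
}
```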

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gallenvara/flink flink-2184

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3110


commit 8ae9bb689320a0b5b3fe199ecc61c62504ff0e7d
Author: gallenvara 
Date:   2016-05-09T11:51:24Z

Extend minBy/maxBy methods to support returning last element.

commit e04ae6e2bfad1c52460dc742be41153dd012b291
Author: gaolun.gl 
Date:   2017-01-13T04:10:08Z

update pr of [FLINK-2184] Cannot get last element with maxBy/minBy.

commit 8bff4dda475bfecaaa2b5efeab84c1afc87d0f5f
Author: gaolun.gl 
Date:   2017-01-13T04:12:19Z

Merge remote-tracking branch 'origin/flink-2184' into flink-2184

# Conflicts:
#   
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/AllWindowedStream.scala
#   
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/KeyedStream.scala
#   
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala

commit 71d2c23574d42cedb558ca991ebd87f814a16fde
Author: gaolun.gl 
Date:   2017-01-13T04:26:45Z

update pr of [FLINK-2184] Cannot get last element with maxBy/minBy.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #1975: [FLINK-2184] Cannot get last element with maxBy/mi...

2017-01-12 Thread gallenvara
Github user gallenvara closed the pull request at:

https://github.com/apache/flink/pull/1975


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820764#comment-15820764
 ] 

ASF GitHub Bot commented on FLINK-5432:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3090
  
The changes look very good! I think it would be good to add a test for 
nested reading in `ContinuousFileProcessingTest`.


> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
> Fix For: 1.2.0, 1.3.0
>
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.
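
A minimal sketch of what such a recursive scan could look like, using Flink's 
core FileSystem API; the method shape below is an illustrative assumption, not 
the actual patch:

{code}
import org.apache.flink.core.fs.{FileStatus, FileSystem, Path}

// Recursively collect eligible files; descend into directories only when
// the input format has nested file enumeration enabled.
def listEligibleFiles(fs: FileSystem, path: Path, nested: Boolean): Seq[FileStatus] = {
  fs.listStatus(path).toSeq.flatMap { status =>
    if (status.isDir) {
      if (nested) listEligibleFiles(fs, status.getPath, nested) else Seq.empty
    } else {
      Seq(status)
    }
  }
}
{code}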



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread sachingoel0101
Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95740581
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Can you use `flinkConfig.jobServer + "jars/upload"` here? Otherwise, looks 
good to me.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820582#comment-15820582
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3095
  
Thanks for looking into this issue @KurtYoung 
I agree that this is rather an issue with the tests and not with the actual 
code. However, I would fix the tests a bit differently.
The goal of the code that you removed was to validate that each partition 
is correctly sorted and that the partitions themselves are correctly sorted, 
i.e., for a descending sort, the highest values should be in partition 0 and 
the lowest in partition n.

In order to ensure parallel execution, we cannot execute the sort tests in 
a collection environment but need a cluster environment. Moreover, we should 
explicitly set a default parallelism on the ExecutionEnvironment to prevent 
the program from being executed with parallelism 1 (a parallelism of 3 should 
suffice). Once we do that, we must ensure that the started minicluster offers 
enough slots to run the program.

I'll add a few more inline comments to the tests.

Thanks, Fabian
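
A minimal sketch of the setup described above (the data, operators, and 
parallelism value are illustrative, not the actual test code):

{code}
import org.apache.flink.api.common.operators.Order
import org.apache.flink.api.scala._

// Use an explicit default parallelism so the test cannot silently
// degenerate to parallelism 1; the test base must start a minicluster
// with at least this many slots.
val env = ExecutionEnvironment.getExecutionEnvironment
env.setParallelism(3)

val results = env.fromElements((3, "c"), (1, "a"), (5, "e"), (2, "b"), (4, "d"))
  .partitionByRange(0)
  .sortPartition(0, Order.DESCENDING)
  .collect() // both the partitions and the values within them can be checked
{code}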


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. Need to figure out whether it's the case design problem or there are 
> some bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3095: [FLINK-5452] [table] Fix SortITCase which will fail under...

2017-01-12 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3095
  
Thanks for looking into this issue @KurtYoung 
I agree that this is rather an issue with the tests and not with the actual 
code. However, I would fix the tests a bit differently.
The goal of the code that you removed was to validate that each partition 
is correctly sorted and that the partitions themselves are correctly sorted, 
i.e., for a descending sort, the highest values should be in partition 0 and 
the lowest in partition n.

In order to ensure parallel execution, we cannot execute the sort tests in 
a collection environment but need a cluster environment. Moreover, we should 
explicitly set a default parallelism on the ExecutionEnvironment to prevent 
the program from being executed with parallelism 1 (a parallelism of 3 should 
suffice). Once we do that, we must ensure that the started minicluster offers 
enough slots to run the program.

I'll add a few more inline comments to the tests.

Thanks, Fabian


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820593#comment-15820593
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95754128
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/table/SortITCase.scala
 ---
@@ -57,14 +57,8 @@ class SortITCase(
   - x.productElement(0).asInstanceOf[Int] )
 
 val expected = sortExpectedly(tupleDataSetStrings)
-val results = t.toDataSet[Row].mapPartition(rows => 
Seq(rows.toSeq)).collect()
-
-val result = results
-  .filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

We should change this to ` .sortBy(_.head)` and provide an implicit 
ordering for `Row`. 
When we changed `Row` to not extend `Product`, we should have added the 
implicit ordering for `Row` instead of sorting by String.

The same applies to the other tests in this class.
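
For illustration, a minimal sketch of such an implicit ordering, assuming 
`org.apache.flink.types.Row` with `getArity`/`getField` and non-null, 
mutually comparable fields:

{code}
import org.apache.flink.types.Row

implicit val rowOrdering: Ordering[Row] = new Ordering[Row] {
  // Lexicographic comparison over the Row's fields; assumes every field
  // implements Comparable and is non-null (illustrative only).
  override def compare(r1: Row, r2: Row): Int = {
    val arity = math.min(r1.getArity, r2.getArity)
    var i = 0
    while (i < arity) {
      val c = r1.getField(i).asInstanceOf[Comparable[Any]].compareTo(r2.getField(i))
      if (c != 0) return c
      i += 1
    }
    r1.getArity - r2.getArity // shorter row sorts first
  }
}
{code}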


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. Need to figure out whether it's the case design problem or there are 
> some bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3095: [FLINK-5452] [table] Fix SortITCase which will fai...

2017-01-12 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95754128
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/table/SortITCase.scala
 ---
@@ -57,14 +57,8 @@ class SortITCase(
   - x.productElement(0).asInstanceOf[Int] )
 
 val expected = sortExpectedly(tupleDataSetStrings)
-val results = t.toDataSet[Row].mapPartition(rows => 
Seq(rows.toSeq)).collect()
-
-val result = results
-  .filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

We should change this to ` .sortBy(_.head)` and provide an implicit 
ordering for `Row`. 
When we changed `Row` to not extend `Product`, we should have added the 
implicit ordering for `Row` instead of sorting by String.

The same applies to the other tests in this class.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820628#comment-15820628
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user KurtYoung commented on the issue:

https://github.com/apache/flink/pull/3095
  
@fhueske Ah, I see, thanks for the explanation. I will try to fix this 
another way after #3099 is in.


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. Need to figure out whether it's the case design problem or there are 
> some bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3095: [FLINK-5452] [table] Fix SortITCase which will fail under...

2017-01-12 Thread KurtYoung
Github user KurtYoung commented on the issue:

https://github.com/apache/flink/pull/3095
  
@fhueske Ah, I see, thanks for the explanation. I will try to fix this 
another way after #3099 is in.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-5464) MetricQueryService throws NullPointerException on JobManager

2017-01-12 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-5464:
---

Assignee: Chesnay Schepler

> MetricQueryService throws NullPointerException on JobManager
> 
>
> Key: FLINK-5464
> URL: https://issues.apache.org/jira/browse/FLINK-5464
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>
> I'm using Flink 699f4b0.
> My JobManager log contains many of these log entries:
> {code}
> 2017-01-11 19:42:05,778 WARN  
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher - Fetching 
> metrics failed.
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/MetricQueryService#-970662317]] after [1 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:474)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:425)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:429)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:381)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-01-11 19:42:07,765 WARN  
> org.apache.flink.runtime.metrics.dump.MetricQueryService  - An exception 
> occurred while processing a message.
> java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.serializeGauge(MetricDumpSerialization.java:162)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.access$300(MetricDumpSerialization.java:47)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization$MetricDumpSerializer.serialize(MetricDumpSerialization.java:90)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricQueryService.onReceive(MetricQueryService.java:109)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5407) Savepoint for iterative Task fails.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820786#comment-15820786
 ] 

ASF GitHub Bot commented on FLINK-5407:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3088
  
Changes look very good! I fixed the formatting of the newly added methods 
in `TestingCluster` to conform to Scala coding guidelines.

I rebased on master, will wait for Travis to give the green light and then 
merge.


> Savepoint for iterative Task fails.
> ---
>
> Key: FLINK-5407
> URL: https://issues.apache.org/jira/browse/FLINK-5407
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.2.0, 1.3.0
>
> Attachments: SavepointBug.java
>
>
> Flink 1.2-SNAPSHOT (Commit: 5b54009) on Windows.
> Triggering a savepoint for a streaming job, both the savepoint and the job 
> failed.
> The job failed with the following exception:
> {code}
> java.lang.RuntimeException: Error while triggering checkpoint for 
> IterationSource-7 (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1026)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createOperatorIdentifier(StreamTask.java:767)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.access$500(StreamTask.java:115)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.createStreamFactory(StreamTask.java:986)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:956)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:583)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:551)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:511)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1019)
>   ... 5 more
> And the savepoint failed with the following exception:
> Using address /127.0.0.1:6123 to connect to JobManager.
> Triggering savepoint for job 153310c4a836a92ce69151757c6b73f1.
> Waiting for response...
> 
>  The program finished with the following exception:
> java.lang.Exception: Failed to complete savepoint
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:793)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:782)
> at 
> org.apache.flink.runtime.concurrent.impl.FlinkFuture$6.recover(FlinkFuture.java:263)
> at akka.dispatch.Recover.internal(Future.scala:267)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:183)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:181)
> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
> at scala.util.Try$.apply(Try.scala:161)
> at scala.util.Failure.recover(Try.scala:185)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> 

[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820598#comment-15820598
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3095
  
It might make sense to put this PR on hold until #3099 is in


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. Need to figure out whether it's the case design problem or there are 
> some bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5466:
--

 Summary: Make production environment default in gulpfile
 Key: FLINK-5466
 URL: https://issues.apache.org/jira/browse/FLINK-5466
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Affects Versions: 1.1.4, 1.2.0
Reporter: Ufuk Celebi


Currently the default environment set in our gulpfile is development, which 
leads to very large generated JS files. When building the web UI we apparently 
forgot to set the environment to production (built via gulp production).

Since this is likely to occur again, we should make production the default 
environment and require development to be selected manually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3088: [FLINK-5407] Fix savepoints for iterative jobs

2017-01-12 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3088
  
Changes look very good! I fixed the formatting of the newly added methods 
in `TestingCluster` to conform to Scala coding guidelines.

I rebased on master, will wait for Travis to give the green light and then 
merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3102: [FLINK-5467] Avoid legacy state for CheckpointedRestoring...

2017-01-12 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/3102
  
cc @uce @aljoscha 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread joerg84
Github user joerg84 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95741031
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Thx, @sachingoel0101, could you provide some more details on why that is 
preferable? Happy to change it, just want to understand the details :-).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820456#comment-15820456
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user joerg84 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95741031
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Thx, @sachingoel0101, could you provide some more details on why that is 
preferable? Happy to change it, just want to understand the details :-).


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g., the DC/OS UI, relative links 
> are preferred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5462) Flink job fails due to java.util.concurrent.CancellationException while snapshotting

2017-01-12 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-5462:
--
Attachment: application-1484132267957-0005

I've attached the full log.

> Flink job fails due to java.util.concurrent.CancellationException while 
> snapshotting
> 
>
> Key: FLINK-5462
> URL: https://issues.apache.org/jira/browse/FLINK-5462
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
> Attachments: application-1484132267957-0005
>
>
> I'm using Flink 699f4b0.
> My restored, rescaled Flink job failed while creating a checkpoint with the 
> following exception:
> {code}
> 2017-01-11 18:46:49,853 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering 
> checkpoint 3 @ 1484160409846
> 2017-01-11 18:49:50,111 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- 
> TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream
> .apply(AllWindowedStream.java:440)) (1/1) (2accc6ca2727c4f7ec963318fbd237e9) 
> switched from RUNNING to FAILED.
> AsynchronousException{java.lang.Exception: Could not materialize checkpoint 3 
> for operator TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.ap
> ply(AllWindowedStream.java:440)) (1/1).}
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:939)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not materialize checkpoint 3 for 
> operator TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.apply(AllWind
> owedStream.java:440)) (1/1).
> ... 6 more
> Caused by: java.util.concurrent.CancellationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:121)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at 
> org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:899)
> ... 5 more
> 2017-01-11 18:49:50,113 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- Job Generate 
> Event Window stream (90859d392c1da472e07695f434b332ef) switched from state 
> RUNNING to FAILING.
> AsynchronousException{java.lang.Exception: Could not materialize checkpoint 3 
> for operator TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.ap
> ply(AllWindowedStream.java:440)) (1/1).}
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:939)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not materialize checkpoint 3 for 
> operator TriggerWindow(TumblingEventTimeWindows(4), 
> ListStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@2edcd071},
>  EventTimeTrigger(), WindowedStream.apply(AllWindowedStream.java:440)) (1/1).
> ... 6 more
> Caused by: java.util.concurrent.CancellationException
> at java.util.concurrent.FutureTask.report(FutureTask.java:121)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at 
> org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:40)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:899)
> ... 5 more
> 2017-01-11 18:49:50,122 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   

[jira] [Commented] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820648#comment-15820648
 ] 

ASF GitHub Bot commented on FLINK-5466:
---

Github user joerg84 commented on the issue:

https://github.com/apache/flink/pull/3100
  
LGTM, thanks for fixing this!


> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>
> Currently the default environment set in our gulpfile is development, which 
> leads to very large generated JS files. When building the web UI we 
> apparently forgot to set the environment to production (built via gulp 
> production).
> Since this is likely to occur again, we should make production the default 
> environment and require development to be selected manually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3100: [FLINK-5466] [webfrontend] Set environment to production ...

2017-01-12 Thread joerg84
Github user joerg84 commented on the issue:

https://github.com/apache/flink/pull/3100
  
LGTM, thanks for fixing this!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3101: [FLINK-5406] [table] add normalization phase for p...

2017-01-12 Thread godfreyhe
GitHub user godfreyhe opened a pull request:

https://github.com/apache/flink/pull/3101

[FLINK-5406] [table] add normalization phase for predicate logical plan 
rewriting

The normalization phase rewrites predicate logical plans and is independent 
of the cost module. Unlike the volcano optimization phase, its rules do not 
need to be applied repeatedly to different logical plans. The benefit of the 
normalization phase is that it reduces the running time of the volcano 
planner.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/godfreyhe/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3101.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3101


commit c9f11e7c38f921a403b56a5c124974eb66bddcac
Author: godfreyhe 
Date:   2017-01-12T10:42:49Z

add normalization phase for predicate logical plan rewriting




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3080: [FLINK-4920] Add a Scala Function Gauge

2017-01-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3080#discussion_r95773028
  
--- Diff: 
flink-scala/src/main/scala/org/apache/flink/api/scala/metrics/ScalaGauge.scala 
---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.metrics
+
+import org.apache.flink.metrics.Gauge
+
--- End diff --

Well, it is certainly easy to see what it does rather quickly. For the 
description I was thinking more along the lines that it allows the concise 
definition of a Gauge in Scala, which isn't that obvious for non-Scala 
programmers, I suppose.
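
For illustration, a sketch of the kind of concise definition being discussed 
(`ScalaGauge` here is reconstructed from the conversation, not copied from 
the PR):

```scala
import org.apache.flink.metrics.Gauge

// Wrap a Scala function as a Flink Gauge, so a metric can be defined
// with a one-line lambda instead of an anonymous class.
case class ScalaGauge[T](getValueFn: () => T) extends Gauge[T] {
  override def getValue: T = getValueFn()
}

// Concise Scala form:
val scalaStyle = ScalaGauge(() => System.currentTimeMillis())

// Versus the anonymous-class form the Java-oriented API requires:
val javaStyle = new Gauge[Long] {
  override def getValue: Long = System.currentTimeMillis()
}
```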


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread sachingoel0101
Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95749646
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Primarily, this makes all server-side requests consistent. Further, if you 
check index.coffee, the job server URL is set to an empty string for 
production and to a localhost string for development. This streamlines 
dashboard development without having to rebuild the Maven modules again and 
again. This particular case was breaking development mode on the submit page. 
Best to get this in while you're fixing all the URLs.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820553#comment-15820553
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95749646
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Primarily, this makes all server-side requests consistent. Further, if you 
check index.coffee, the job server URL is set to an empty string for 
production and to a localhost string for development. This streamlines 
dashboard development without having to rebuild the Maven modules again and 
again. This particular case was breaking development mode on the submit page. 
Best to get this in while you're fixing all the URLs.


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g., the DC/OS UI, relative links 
> are preferred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3095: [FLINK-5452] [table] Fix SortITCase which will fail under...

2017-01-12 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3095
  
It might make sense to put this PR on hold until #3099 is in


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-5467) Stateless chained tasks set legacy operator state

2017-01-12 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5467:
--

 Summary: Stateless chained tasks set legacy operator state
 Key: FLINK-5467
 URL: https://issues.apache.org/jira/browse/FLINK-5467
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Reporter: Ufuk Celebi


I discovered this while trying to rescale a job with a Kafka source and a 
chained stateless operator.

Looking into it, it turns out that this fails because the checkpointed state 
contains legacy operator state for the chained operator although it is 
stateless.

/cc [~aljoscha] You mentioned that this might be a possible duplicate?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5406) add normalization phase for predicate logical plan rewriting between decorrelate query phase and volcano optimization phase

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820769#comment-15820769
 ] 

ASF GitHub Bot commented on FLINK-5406:
---

GitHub user godfreyhe opened a pull request:

https://github.com/apache/flink/pull/3101

[FLINK-5406] [table] add normalization phase for predicate logical plan 
rewriting

The normalization phase rewrites predicate logical plans and is independent 
of the cost module. Unlike the volcano optimization phase, its rules do not 
need to be applied repeatedly to different logical plans. The benefit of the 
normalization phase is that it reduces the running time of the volcano 
planner.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/godfreyhe/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3101.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3101


commit c9f11e7c38f921a403b56a5c124974eb66bddcac
Author: godfreyhe 
Date:   2017-01-12T10:42:49Z

add normalization phase for predicate logical plan rewriting




> add normalization phase for predicate logical plan rewriting between 
> decorrelate query phase and volcano optimization phase
> ---
>
> Key: FLINK-5406
> URL: https://issues.apache.org/jira/browse/FLINK-5406
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: godfrey he
>Assignee: godfrey he
>
> The normalization phase rewrites predicate logical plans and is independent 
> of the cost module. Unlike the volcano optimization phase, its rules do not 
> need to be applied repeatedly to different logical plans. The benefit of the 
> normalization phase is that it reduces the running time of the volcano 
> planner.
> *ReduceExpressionsRule* can apply various simplifying transformations on 
> RexNode trees. Currently, there are two transformations:
> 1) Constant reduction, which evaluates constant subtrees, replacing them with 
> a corresponding RexLiteral
> 2) Removal of redundant casts, which occurs when the argument into the cast 
> is the same as the type of the resulting cast expression
> the above transformations do not depend on the cost module,  so we can move 
> the rules in *ReduceExpressionsRule* from 
> DATASET_OPT_RULES/DATASTREAM_OPT_RULES to DataSet/DataStream Normalization 
> Rules.
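
For context, a hedged sketch of such a normalization pass using Apache 
Calcite's rule-based HepPlanner; the rule choice is illustrative, not the 
PR's actual rule set:

{code}
import org.apache.calcite.plan.hep.{HepPlanner, HepProgramBuilder}
import org.apache.calcite.rel.RelNode
import org.apache.calcite.rel.rules.ReduceExpressionsRule

// Apply cost-independent rewrite rules once, before the cost-based volcano
// phase, so the volcano planner starts from a smaller search space.
def normalize(plan: RelNode): RelNode = {
  val program = new HepProgramBuilder()
    .addRuleInstance(ReduceExpressionsRule.FILTER_INSTANCE)  // filters
    .addRuleInstance(ReduceExpressionsRule.PROJECT_INSTANCE) // projections
    .build()
  val planner = new HepPlanner(program)
  planner.setRoot(plan)
  planner.findBestExp()
}
{code}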



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5468) Restoring from a semi async rocksdb statebackend (1.1) to 1.2 fails with ClassNotFoundException

2017-01-12 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5468:
-

 Summary: Restoring from a semi async rocksdb statebackend (1.1) to 
1.2 fails with ClassNotFoundException
 Key: FLINK-5468
 URL: https://issues.apache.org/jira/browse/FLINK-5468
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.2.0
Reporter: Robert Metzger


I think we should catch this exception and explain what's going on and how 
users can resolve the issue.
{code}
org.apache.flink.client.program.ProgramInvocationException: The program 
execution failed: Job execution failed
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
at 
org.apache.flink.yarn.YarnClusterClient.submitJob(YarnClusterClient.java:210)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
at 
org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
at com.dataartisans.eventwindow.Generator.main(Generator.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
at 
org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
at 
org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
at 
org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution 
failed
at 
org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:328)
at 
org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:382)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
... 22 more
Caused by: java.io.IOException: java.lang.ClassNotFoundException: 
org.apache.flink.contrib.streaming.state.RocksDBStateBackend$FinalSemiAsyncSnapshot
at 
org.apache.flink.migration.runtime.checkpoint.savepoint.SavepointV0Serializer.deserialize(SavepointV0Serializer.java:162)
at 
org.apache.flink.migration.runtime.checkpoint.savepoint.SavepointV0Serializer.deserialize(SavepointV0Serializer.java:70)
at 
org.apache.flink.runtime.checkpoint.savepoint.SavepointStore.loadSavepoint(SavepointStore.java:138)
at 
org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:64)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1348)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1330)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1330)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassNotFoundException: 

[jira] [Commented] (FLINK-5464) MetricQueryService throws NullPointerException on JobManager

2017-01-12 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820801#comment-15820801
 ] 

Chesnay Schepler commented on FLINK-5464:
-

There are two possible cases for this: either the supplied gauge was null, or 
it was not null but the value it supplies is null. Neither of these cases is 
checked at the moment.
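
A minimal sketch of the missing checks, assuming Flink's 
`org.apache.flink.metrics.Gauge` interface (the surrounding serialization 
code is omitted):

{code}
import org.apache.flink.metrics.Gauge

// Guard both cases before serializing: a null gauge, and a gauge whose
// current value is null.
def gaugeValueOrDefault(gauge: Gauge[_], default: String = "null"): String =
  if (gauge == null) default
  else Option(gauge.getValue).map(_.toString).getOrElse(default)
{code}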

> MetricQueryService throws NullPointerException on JobManager
> 
>
> Key: FLINK-5464
> URL: https://issues.apache.org/jira/browse/FLINK-5464
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>
> I'm using Flink 699f4b0.
> My JobManager log contains many of these log entries:
> {code}
> 2017-01-11 19:42:05,778 WARN  
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher - Fetching 
> metrics failed.
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/MetricQueryService#-970662317]] after [1 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:474)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:425)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:429)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:381)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-01-11 19:42:07,765 WARN  
> org.apache.flink.runtime.metrics.dump.MetricQueryService  - An exception 
> occurred while processing a message.
> java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.serializeGauge(MetricDumpSerialization.java:162)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.access$300(MetricDumpSerialization.java:47)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization$MetricDumpSerializer.serialize(MetricDumpSerialization.java:90)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricQueryService.onReceive(MetricQueryService.java:109)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820588#comment-15820588
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95753661
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/SortITCase.scala
 ---
@@ -79,14 +73,8 @@ class SortITCase(
 tEnv.registerDataSet("MyTable", ds)
 
 val expected = sortExpectedly(tupleDataSetStrings, 2, 21)
-val results = tEnv.sql(sqlQuery).toDataSet[Row].mapPartition(rows => 
Seq(rows.toSeq)).collect()
-
-val result = results.
-  filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

same here


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. Need to figure out whether it's the case design problem or there are 
> some bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3095: [FLINK-5452] [table] Fix SortITCase which will fai...

2017-01-12 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95753661
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/SortITCase.scala
 ---
@@ -79,14 +73,8 @@ class SortITCase(
 tEnv.registerDataSet("MyTable", ds)
 
 val expected = sortExpectedly(tupleDataSetStrings, 2, 21)
-val results = tEnv.sql(sqlQuery).toDataSet[Row].mapPartition(rows => 
Seq(rows.toSeq)).collect()
-
-val result = results.
-  filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

same here


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5452) Make table unit tests pass under cluster mode

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820585#comment-15820585
 ] 

ASF GitHub Bot commented on FLINK-5452:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95753596
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/SortITCase.scala
 ---
@@ -55,14 +55,8 @@ class SortITCase(
 tEnv.registerDataSet("MyTable", ds)
 
 val expected = sortExpectedly(tupleDataSetStrings)
-val results = tEnv.sql(sqlQuery).toDataSet[Row].mapPartition(rows => Seq(rows.toSeq)).collect()
-
-val result = results
-  .filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

The problem here is the string conversion, which results in a 
lexicographical order ("21,..." sorts before "5,...").
We should change this to 
`.sortBy(_.head)(Ordering.by(r => (r.getField(0), r.getField(1))))`.
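
To make the pitfall concrete, here is a minimal, self-contained sketch in plain Scala (the tuples are hypothetical stand-ins for the partition heads, not the ITCase code):

```scala
object SortOrderingSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical partition heads: (int field, string field).
    val heads = Seq((21, "u"), (5, "e"), (12, "l"))

    // Comparing the string form yields lexicographical order:
    // "(12,l)" < "(21,u)" < "(5,e)", so 21 wrongly sorts before 5.
    println(heads.sortBy(_.toString))

    // Comparing the fields themselves, analogous to
    // Ordering.by(r => (r.getField(0), r.getField(1))), gives numeric order.
    println(heads.sortBy(identity)) // List((5,e), (12,l), (21,u))
  }
}
```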


> Make table unit tests pass under cluster mode
> -
>
> Key: FLINK-5452
> URL: https://issues.apache.org/jira/browse/FLINK-5452
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> Currently, if we change the test execution mode to 
> {{TestExecutionMode.CLUSTER}} in {{TableProgramsTestBase}}, some cases will 
> fail. We need to figure out whether this is a problem with the test case 
> design or whether there are actual bugs.





[GitHub] flink pull request #3095: [FLINK-5452] [table] Fix SortITCase which will fai...

2017-01-12 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3095#discussion_r95753596
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/SortITCase.scala
 ---
@@ -55,14 +55,8 @@ class SortITCase(
 tEnv.registerDataSet("MyTable", ds)
 
 val expected = sortExpectedly(tupleDataSetStrings)
-val results = tEnv.sql(sqlQuery).toDataSet[Row].mapPartition(rows => Seq(rows.toSeq)).collect()
-
-val result = results
-  .filterNot(_.isEmpty)
-  .sortBy(_.head)(Ordering.by(f=> f.toString))
--- End diff --

The problem here is the string conversion, which results in a 
lexicographical order ("21,..." sorts before "5,...").
We should change this to 
`.sortBy(_.head)(Ordering.by(r => (r.getField(0), r.getField(1))))`.




[GitHub] flink issue #3090: [FLINK-5432] Fix nested files enumeration in ContinuousFi...

2017-01-12 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3090
  
The changes look very good! I think it would be good to add a test for 
nested reading in `ContinuousFileProcessingTest`.
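
For reference, a sketch of the behaviour such a test would pin down (the format setup below is illustrative, not the PR's actual test code):

```scala
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path

object NestedEnumerationSketch {
  def format(basePath: String): TextInputFormat = {
    val fmt = new TextInputFormat(new Path(basePath))
    // The behaviour under test: with nested enumeration enabled, files in
    // sub-directories of basePath must be enumerated as well.
    fmt.setNestedFileEnumeration(true)
    fmt
  }
}
```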




[jira] [Commented] (FLINK-4920) Add a Scala Function Gauge

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820796#comment-15820796
 ] 

ASF GitHub Bot commented on FLINK-4920:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3080#discussion_r95773028
  
--- Diff: 
flink-scala/src/main/scala/org/apache/flink/api/scala/metrics/ScalaGauge.scala 
---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.metrics
+
+import org.apache.flink.metrics.Gauge
+
--- End diff --

Well, it is certainly easy to see what it does rather quickly. For the 
description I was thinking more along the lines that it allows the concise 
definition of a Gauge in Scala, which isn't that obvious for non-Scala 
programmers, I suppose.
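
For illustration, a rough sketch of the conciseness argument (the FunctionGauge below is a hypothetical stand-in for the PR's ScalaGauge):

```scala
import org.apache.flink.metrics.Gauge

// Wrap a Scala function as a Gauge.
class FunctionGauge[T](f: () => T) extends Gauge[T] {
  override def getValue: T = f()
}

object GaugeSketch {
  var recordsSeen: Long = 0L

  def main(args: Array[String]): Unit = {
    // Without the helper: a verbose anonymous subclass.
    val verbose = new Gauge[Long] { override def getValue: Long = recordsSeen }
    // With a function-based gauge: a lambda suffices.
    val concise = new FunctionGauge(() => recordsSeen)
    recordsSeen = 42
    println(s"${verbose.getValue} == ${concise.getValue}")
  }
}
```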


> Add a Scala Function Gauge
> --
>
> Key: FLINK-4920
> URL: https://issues.apache.org/jira/browse/FLINK-4920
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics, Scala API
>Reporter: Stephan Ewen
>Assignee: Pattarawat Chormai
>  Labels: easyfix, starter
>
> A useful metrics utility for the Scala API would be to add a Gauge that 
> obtains its value by calling a Scala Function0.
> That way, one can add Gauges in Scala programs using Scala lambda notation or 
> function references.





[GitHub] flink pull request #3099: [FLINK-5268] Split TableProgramsTestBase into Tabl...

2017-01-12 Thread mtunique
GitHub user mtunique opened a pull request:

https://github.com/apache/flink/pull/3099

[FLINK-5268] Split TableProgramsTestBase into 
TableProgramsCollectionTestBase and TableProgramsClusterTestBase


Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mtunique/flink flink-5268

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3099.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3099


commit a99b2bfe5a06677163cf62dcc6716bf641422d5e
Author: mtunique 
Date:   2017-01-12T09:30:57Z

[FLINK-5268] Split TableProgramsTestBase into 
TableProgramsCollectionTestBase and TableProgramsClusterTestBase






[jira] [Commented] (FLINK-5268) Split TableProgramsTestBase into TableProgramsCollectionTestBase and TableProgramsClusterTestBase

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820589#comment-15820589
 ] 

ASF GitHub Bot commented on FLINK-5268:
---

GitHub user mtunique opened a pull request:

https://github.com/apache/flink/pull/3099

[FLINK-5268] Split TableProgramsTestBase into 
TableProgramsCollectionTestBase and TableProgramsClusterTestBase


Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mtunique/flink flink-5268

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3099.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3099


commit a99b2bfe5a06677163cf62dcc6716bf641422d5e
Author: mtunique 
Date:   2017-01-12T09:30:57Z

[FLINK-5268] Split TableProgramsTestBase into 
TableProgramsCollectionTestBase and TableProgramsClusterTestBase




> Split TableProgramsTestBase into TableProgramsCollectionTestBase and 
> TableProgramsClusterTestBase
> -
>
> Key: FLINK-5268
> URL: https://issues.apache.org/jira/browse/FLINK-5268
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Priority: Minor
>
> Currently the {{TableProgramsTestBase}} allows running tests on a collection 
> environment and a MiniCluster by setting a testing parameter. This was done 
> to cover different execution paths. However, testing on a MiniCluster is quite 
> expensive and should only be done in rare cases.
> I propose to split the {{TableProgramsTestBase}} into 
> * {{TableProgramsCollectionTestBase}} and
> * {{TableProgramsClusterTestBase}}
> to make the separation of the two execution backends clearer.





[GitHub] flink pull request #3100: [FLINK-5466] [webfrontend] Set environment to prod...

2017-01-12 Thread uce
GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/3100

[FLINK-5466] [webfrontend] Set environment to production in gulpfile

The default environment was set to `development`, which leads to very large 
generated JS files. When building the web UI we apparently forgot to set the 
environment to `production` (build via `gulp production`).

Since this is likely to occur again, I set the default environment to 
`production` and let users set the environment to `development` manually (via 
`gulp dev`).

Now:
```
-rw-r--r--  1 uce  staff    42K Jan 12 10:43 index.js
-rw-r--r--  1 uce  staff   931K Jan 12 10:43 vendor.js
```

Before:
```
-rw-r--r--  1 uce  staff   328K Jan 12 10:49 index.js
-rw-r--r--  1 uce  staff   8.4M Jan 12 10:49 vendor.js
```

I would like to merge this to the 1.1 and 1.2 branches as well.

/cc @joerg84


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 5466-prod

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3100.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3100


commit bd666810aa2df86206b607199139dedb23910b84
Author: Ufuk Celebi 
Date:   2017-01-12T09:46:31Z

[FLINK-5466] [webfrontend] Set environment to production in gulpfile






[jira] [Commented] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820615#comment-15820615
 ] 

ASF GitHub Bot commented on FLINK-5466:
---

GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/3100

[FLINK-5466] [webfrontend] Set environment to production in gulpfile

The default environment was set to `development`, which leads to very large 
generated JS files. When building the web UI we apparently forgot to set the 
environment to `production` (build via `gulp production`).

Since this is likely to occur again, I set the default environment to 
`production` and let users set the environment to `development` manually (via 
`gulp dev`).

Now:
```
-rw-r--r--  1 uce  staff    42K Jan 12 10:43 index.js
-rw-r--r--  1 uce  staff   931K Jan 12 10:43 vendor.js
```

Before:
```
-rw-r--r--  1 uce  staff   328K Jan 12 10:49 index.js
-rw-r--r--  1 uce  staff   8.4M Jan 12 10:49 vendor.js
```

I would like to merge this to the 1.1 and 1.2 branches as well.

/cc @joerg84


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 5466-prod

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3100.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3100


commit bd666810aa2df86206b607199139dedb23910b84
Author: Ufuk Celebi 
Date:   2017-01-12T09:46:31Z

[FLINK-5466] [webfrontend] Set environment to production in gulpfile




> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>
> Currently the default environment set in our gulpfile is development, which 
> leads to very large generated JS files. When building the web UI we apparently 
> forgot to set the environment to production (build via gulp production).
> Since this is likely to occur again, we should make production the default 
> environment and make sure to enable development manually.





[jira] [Assigned] (FLINK-5467) Stateless chained tasks set legacy operator state

2017-01-12 Thread Stefan Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter reassigned FLINK-5467:
-

Assignee: Stefan Richter

> Stateless chained tasks set legacy operator state
> -
>
> Key: FLINK-5467
> URL: https://issues.apache.org/jira/browse/FLINK-5467
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> I discovered this while trying to rescale a job with a Kafka source with a 
> chained stateless operator.
> Looking into it, it turns out that this fails because the checkpointed state 
> contains legacy operator state for the chained operator although it is 
> stateless.
> /cc [~aljoscha] You mentioned that this might be a possible duplicate?





[jira] [Commented] (FLINK-5464) MetricQueryService throws NullPointerException on JobManager

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820823#comment-15820823
 ] 

ASF GitHub Bot commented on FLINK-5464:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3103

[FLINK-5464] [metrics] Prevent some NPEs

This PR prevents some NullPointerExceptions from occurring in the metric 
system.

- When registering a metric that is null, the metric is ignored and a 
warning is logged.
  - e.g. ```group.counter("counter", null);```
- The MetricDumpSerialization completely ignores gauges whose value is null.
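
A minimal sketch of the null-guard idea (a hypothetical registry, not Flink's actual MetricRegistry code):

```scala
object NullGuardSketch {
  private var metrics = Map.empty[String, AnyRef]

  def register(name: String, metric: AnyRef): Unit = {
    if (metric == null) {
      // Skip the metric and warn, instead of failing later with an NPE
      // when the dump is serialized.
      Console.err.println(s"WARN: ignoring null metric registered as '$name'")
    } else {
      metrics += name -> metric
    }
  }
}
```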

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 5464_mqs_npe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3103


commit 0912848b3ce54842fc6810aa0b041db5547ac690
Author: zentol 
Date:   2017-01-12T11:41:56Z

[FLINK-5464] [metrics] Ignore metrics that are null

commit 941c83a599221fc57c02605e2c3bc348d70aa8b2
Author: zentol 
Date:   2017-01-12T11:42:26Z

[FLINK-5464] [metrics] Prevent Gauge NPE in serialization




> MetricQueryService throws NullPointerException on JobManager
> 
>
> Key: FLINK-5464
> URL: https://issues.apache.org/jira/browse/FLINK-5464
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>
> I'm using Flink 699f4b0.
> My JobManager log contains many of these log entries:
> {code}
> 2017-01-11 19:42:05,778 WARN  
> org.apache.flink.runtime.webmonitor.metrics.MetricFetcher - Fetching 
> metrics failed.
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/MetricQueryService#-970662317]] after [1 ms]
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)
>   at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:691)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:474)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:425)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:429)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:381)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-01-11 19:42:07,765 WARN  
> org.apache.flink.runtime.metrics.dump.MetricQueryService  - An exception 
> occurred while processing a message.
> java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.serializeGauge(MetricDumpSerialization.java:162)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.access$300(MetricDumpSerialization.java:47)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricDumpSerialization$MetricDumpSerializer.serialize(MetricDumpSerialization.java:90)
>   at 
> org.apache.flink.runtime.metrics.dump.MetricQueryService.onReceive(MetricQueryService.java:109)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}





[jira] [Assigned] (FLINK-5441) Directly allow SQL queries on a Table

2017-01-12 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-5441:
--

Assignee: Jark Wu

> Directly allow SQL queries on a Table
> -
>
> Key: FLINK-5441
> URL: https://issues.apache.org/jira/browse/FLINK-5441
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Right now a user has to register a table before it can be used in SQL 
> queries. In order to allow more fluent programming we propose calling SQL 
> directly on a table. An underscore can be used to reference the current table:
> {code}
> myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
> {code}





[jira] [Commented] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820658#comment-15820658
 ] 

ASF GitHub Bot commented on FLINK-5466:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3100
  
+1 to merge.


> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>
> Currently the default environment set in our gulpfile is development, which 
> leads to very large generated JS files. When building the web UI we apparently 
> forgot to set the environment to production (build via gulp production).
> Since this is likely to occur again, we should make production the default 
> environment and make sure to enable development manually.





[GitHub] flink issue #3100: [FLINK-5466] [webfrontend] Set environment to production ...

2017-01-12 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/3100
  
+1 to merge.




[GitHub] flink pull request #3102: [FLINK-5467] Avoid legacy state for CheckpointedRe...

2017-01-12 Thread StefanRRichter
GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/3102

[FLINK-5467] Avoid legacy state for CheckpointedRestoring operators

This PR fixes [FLINK-5467].

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink FixCheckointedRestoring

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3102


commit efcb07198caafe4f28688775ea00e1048d08d532
Author: Stefan Richter 
Date:   2017-01-12T11:24:34Z

[FLINK-5467] Avoid legacy state for CheckpointedRestoring operators






[jira] [Commented] (FLINK-5467) Stateless chained tasks set legacy operator state

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820791#comment-15820791
 ] 

ASF GitHub Bot commented on FLINK-5467:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/3102
  
cc @uce @aljoscha 


> Stateless chained tasks set legacy operator state
> -
>
> Key: FLINK-5467
> URL: https://issues.apache.org/jira/browse/FLINK-5467
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> I discovered this while trying to rescale a job with a Kafka source with a 
> chained stateless operator.
> Looking into it, it turns out that this fails because the checkpointed state 
> contains legacy operator state for the chained operator although it is 
> stateless.
> /cc [~aljoscha] You mentioned that this might be a possible duplicate?





[jira] [Commented] (FLINK-5467) Stateless chained tasks set legacy operator state

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820790#comment-15820790
 ] 

ASF GitHub Bot commented on FLINK-5467:
---

GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/3102

[FLINK-5467] Avoid legacy state for CheckpointedRestoring operators

This PR fixes [FLINK-5467].

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink FixCheckointedRestoring

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3102


commit efcb07198caafe4f28688775ea00e1048d08d532
Author: Stefan Richter 
Date:   2017-01-12T11:24:34Z

[FLINK-5467] Avoid legacy state for CheckpointedRestoring operators




> Stateless chained tasks set legacy operator state
> -
>
> Key: FLINK-5467
> URL: https://issues.apache.org/jira/browse/FLINK-5467
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> I discovered this while trying to rescale a job with a Kafka source with a 
> chained stateless operator.
> Looking into it, it turns out that this fails because the checkpointed state 
> contains legacy operator state for the chained operator although it is 
> stateless.
> /cc [~aljoscha] You mentioned that this might be a possible duplicate?





[GitHub] flink issue #2991: [FLINK-5321] [metrics] LocalFlinkMiniCluster starts JM Me...

2017-01-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2991
  
@StephanEwen Could you take another look?




[jira] [Created] (FLINK-5469) build_docs.sh -p fails on Windows Subsystem for Linux

2017-01-12 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-5469:
---

 Summary: build_docs.sh -p fails on Windows Subsystem for Linux
 Key: FLINK-5469
 URL: https://issues.apache.org/jira/browse/FLINK-5469
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.2.0, 1.3.0
Reporter: Chesnay Schepler
Priority: Trivial


As described [here|https://github.com/jekyll/jekyll/issues/5233], jekyll --watch 
(which is executed within build_docs.sh) fails when run within Ubuntu on 
Windows. Adding --force_polling resolves this issue.

I was wondering whether we couldn't add --force_polling by default.





[GitHub] flink pull request #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3093




[jira] [Closed] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-5444.
--
   Resolution: Fixed
Fix Version/s: 1.3.0
   1.2.0

Fixed in
dff58df, 80f1517 (release-1.2),
42b53e6, d7e862a (master).

> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
> Fix For: 1.2.0, 1.3.0
>
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:|borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g. the DC/OS UI, relative links 
> are preferred.





[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821183#comment-15821183
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/3093
  
Thanks! LGTM, going to merge this now.


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:|borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g. the DC/OS UI, relative links 
> are preferred.





[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821199#comment-15821199
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3093


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:|borderStyle=solid}
> <link rel="icon" type="image/png" href="/images/favicon-16x16.png" sizes="16x16">
> {code}
> When referencing the UI from another UI, e.g. the DC/OS UI, relative links 
> are preferred.





[jira] [Commented] (FLINK-5118) Inconsistent records sent/received metrics

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821232#comment-15821232
 ] 

ASF GitHub Bot commented on FLINK-5118:
---

Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/3106#discussion_r95812448
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpanningRecordSerializer.java
 ---
@@ -99,7 +99,7 @@ public SerializationResult addRecord(T record) throws IOException {
this.lengthBuffer.putInt(0, len);

if (numBytesOut != null) {
-   numBytesOut.inc(len);
+   numBytesOut.inc(len + 4);
--- End diff --

I think this warrants both a comment and a test.
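
The `+ 4` accounts for the 4-byte length field that `lengthBuffer.putInt(0, len)` writes ahead of each record's payload. A sketch of the invariant such a test could assert (names are hypothetical, not the serializer's real API):

```scala
object LengthHeaderSketch {
  // Size of the int length field written ahead of every serialized record.
  val LengthHeaderBytes = 4

  def bytesOutFor(payloadLength: Int): Int = payloadLength + LengthHeaderBytes

  def main(args: Array[String]): Unit = {
    // A test along these lines would pin down the metric's meaning.
    assert(bytesOutFor(100) == 104)
  }
}
```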


> Inconsistent records sent/received metrics
> --
>
> Key: FLINK-5118
> URL: https://issues.apache.org/jira/browse/FLINK-5118
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Webfrontend
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.3.0
>
>
> In 1.2-SNAPSHOT, running a large-scale job, you see that the counts for 
> sent/received records are inconsistent, e.g. in a simple word count job we 
> see more received records/bytes than sent. This is a regression from 1.1, 
> where everything works as expected.





[jira] [Commented] (FLINK-5441) Directly allow SQL queries on a Table

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821118#comment-15821118
 ] 

ASF GitHub Bot commented on FLINK-5441:
---

GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/3107

[FLINK-5441] [table] Directly allow SQL queries on a Table

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

This PR allows calling SQL directly on a table:

```scala
myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
```
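
One way such a method could work, sketched below (an assumption, not necessarily what this PR implements; `registerTable` and `runSql` are hypothetical stand-ins for the corresponding TableEnvironment calls): register the table under a fresh unique name and substitute it for the `_` placeholder.

```scala
import java.util.concurrent.atomic.AtomicLong

object TableSqlSketch {
  private val counter = new AtomicLong()

  def sqlOnTable(query: String,
                 registerTable: String => Unit,
                 runSql: String => Unit): Unit = {
    // Register the receiving table under a fresh, unique name ...
    val name = s"_tmp_table_${counter.incrementAndGet()}"
    registerTable(name)
    // ... and substitute that name for the `_` placeholder in the query.
    runSql(query.replaceAll("(?i)FROM\\s+_\\b", s"FROM $name"))
  }
}
```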

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink sql-FLINK-5441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3107.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3107


commit 528d7c2ad61b0247d296d083d3d4942621701cf8
Author: Jark Wu 
Date:   2017-01-12T14:26:52Z

[FLINK-5441] [table] Directly allow SQL queries on a Table




> Directly allow SQL queries on a Table
> -
>
> Key: FLINK-5441
> URL: https://issues.apache.org/jira/browse/FLINK-5441
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Right now a user has to register a table before it can be used in SQL 
> queries. In order to allow more fluent programming we propose calling SQL 
> directly on a table. An underscore can be used to reference the current table:
> {code}
> myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
> {code}





[GitHub] flink pull request #3107: [FLINK-5441] [table] Directly allow SQL queries on...

2017-01-12 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/3107

[FLINK-5441] [table] Directly allow SQL queries on a Table

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

This PR allows calling SQL directly on a table:

```scala
myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink sql-FLINK-5441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3107.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3107


commit 528d7c2ad61b0247d296d083d3d4942621701cf8
Author: Jark Wu 
Date:   2017-01-12T14:26:52Z

[FLINK-5441] [table] Directly allow SQL queries on a Table






[jira] [Assigned] (FLINK-3871) Add Kafka TableSource with Avro serialization

2017-01-12 Thread Ivan Mushketyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mushketyk reassigned FLINK-3871:
-

Assignee: Ivan Mushketyk

> Add Kafka TableSource with Avro serialization
> -
>
> Key: FLINK-3871
> URL: https://issues.apache.org/jira/browse/FLINK-3871
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> Add a Kafka TableSource which supports Avro serialized data.
> The KafkaAvroTableSource should support two modes:
> # SpecificRecord Mode: In this case the user specifies a class which was 
> code-generated by Avro depending on a schema. Flink treats these classes as 
> regular POJOs. Hence, they are also natively supported by the Table API and 
> SQL. Classes generated by Avro contain their Schema in a static field. The 
> schema should be used to automatically derive field names and types. Hence, 
> there is no additional information required than the name of the class.
> # GenericRecord Mode: In this case the user specifies an Avro Schema. The 
> schema is used to deserialize the data into a GenericRecord, which must be 
> translated into a possibly nested {{Row}} based on the schema information. 
> Again, the Avro Schema is used to automatically derive the field names and 
> types. This mode is less efficient than the SpecificRecord mode because the 
> {{GenericRecord}} needs to be converted into {{Row}}.
> This feature depends on FLINK-5280, i.e., support for nested data in 
> {{TableSource}}.
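
A sketch of the GenericRecord-to-Row conversion described above, assuming a flat (non-nested) schema; illustrative only, not the eventual KafkaAvroTableSource code:

```scala
import scala.collection.JavaConverters._
import org.apache.avro.generic.GenericRecord
import org.apache.flink.types.Row

object AvroRowSketch {
  def toRow(record: GenericRecord): Row = {
    val fields = record.getSchema.getFields.asScala
    val row = new Row(fields.size)
    // Copy each Avro field value into the corresponding Row position.
    fields.zipWithIndex.foreach { case (field, i) =>
      row.setField(i, record.get(field.pos()))
    }
    row
  }
}
```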





[jira] [Commented] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821161#comment-15821161
 ] 

ASF GitHub Bot commented on FLINK-5466:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3100


> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>
> Currently the default environment set in our gulpfile is development, which 
> leads to very large generated JS files. When building the web UI we apparently 
> forgot to set the environment to production (build via gulp production).
> Since this is likely to occur again, we should make production the default 
> environment and make sure to enable development manually.





[jira] [Created] (FLINK-5471) Properly inform JobClientActor about terminated Mesos framework

2017-01-12 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5471:


 Summary: Properly inform JobClientActor about terminated Mesos 
framework
 Key: FLINK-5471
 URL: https://issues.apache.org/jira/browse/FLINK-5471
 Project: Flink
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 1.2.0, 1.3.0
Reporter: Till Rohrmann
Priority: Minor
 Fix For: 1.3.0


In case the Mesos framework running Flink terminates (e.g. after exceeding the 
number of allowed container restarts), the {{JobClientActor}} is not properly 
informed. As a consequence, the client only terminates after the 
{{JobClientActor}} detects that it lost the connection to the JobManager 
({{JobClientActorConnectionTimeoutException}}). The current default timeout is 
60s, which is quite long for detecting the connection loss in case of a 
termination.

I think it would be better to notify the {{JobClientActor}} directly, which 
would allow it to print a better message for the user and to react more quickly.





[jira] [Updated] (FLINK-5118) Inconsistent records sent/received metrics

2017-01-12 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-5118:

Fix Version/s: 1.3.0
   1.2.0

> Inconsistent records sent/received metrics
> --
>
> Key: FLINK-5118
> URL: https://issues.apache.org/jira/browse/FLINK-5118
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Webfrontend
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.3.0
>
>
> In 1.2-SNAPSHOT, running a large-scale job, you see that the counts for 
> sent/received records are inconsistent, e.g. in a simple word count job we 
> see more received records/bytes than sent. This is a regression from 1.1, 
> where everything works as expected.





[jira] [Commented] (FLINK-5118) Inconsistent records sent/received metrics

2017-01-12 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821166#comment-15821166
 ] 

Chesnay Schepler commented on FLINK-5118:
-

PR at https://github.com/apache/flink/pull/3106

> Inconsistent records sent/received metrics
> --
>
> Key: FLINK-5118
> URL: https://issues.apache.org/jira/browse/FLINK-5118
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Webfrontend
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.3.0
>
>
> In 1.2-SNAPSHOT, running a large-scale job, you see that the counts for 
> sent/received records are inconsistent, e.g. in a simple word count job we 
> see more received records/bytes than sent. This is a regression from 1.1, 
> where everything works as expected.





[jira] [Updated] (FLINK-5118) Inconsistent records sent/received metrics

2017-01-12 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-5118:

Affects Version/s: 1.3.0
   1.2.0

> Inconsistent records sent/received metrics
> --
>
> Key: FLINK-5118
> URL: https://issues.apache.org/jira/browse/FLINK-5118
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Webfrontend
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
> Fix For: 1.2.0, 1.3.0
>
>
> In 1.2-SNAPSHOT, running a large-scale job, you see that the counts for 
> sent/received records are inconsistent, e.g. in a simple word count job we 
> see more received records/bytes than sent. This is a regression from 1.1, 
> where everything works as expected.





[GitHub] flink pull request #3100: [FLINK-5466] [webfrontend] Set environment to prod...

2017-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3100




[GitHub] flink issue #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/3093
  
Thanks! LGTM, going to merge this now.




[jira] [Created] (FLINK-5472) Flink's web server does not support https requests

2017-01-12 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5472:


 Summary: Flink's web server does not support https requests
 Key: FLINK-5472
 URL: https://issues.apache.org/jira/browse/FLINK-5472
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.2.0, 1.3.0
Reporter: Till Rohrmann
 Fix For: 1.3.0


Flink's webserver does not support HTTPS requests. When trying to access 
{{https://jobmanager:port}}, Chrome says that the webserver answered with an 
invalid response {{ERR_SSL_PROTOCOL_ERROR}}.

This happens, for example, when one tries to access Flink's web UI from the 
DC/OS dashboard via the endpoint links.

I think we should add an SSL handler to Flink's web server pipeline (even though 
the certificates might not be trusted).
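
A minimal sketch of wiring an SslHandler into a Netty pipeline (assuming an SSLContext is already configured; this is not the actual web server code):

```scala
import javax.net.ssl.SSLContext
import io.netty.channel.socket.SocketChannel
import io.netty.handler.ssl.SslHandler

object SslPipelineSketch {
  def addSsl(ch: SocketChannel, sslContext: SSLContext): Unit = {
    val engine = sslContext.createSSLEngine()
    engine.setUseClientMode(false) // server-side handshake
    // The SslHandler must sit at the front of the pipeline so that all
    // traffic is decrypted before the HTTP handlers see it.
    ch.pipeline().addFirst("ssl", new SslHandler(engine))
  }
}
```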





[GitHub] flink issue #3100: [FLINK-5466] [webfrontend] Set environment to production ...

2017-01-12 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3100
  
+1 to merge and backport. This is a big improvement when accessing the web 
UI over a slow connection.




[jira] [Commented] (FLINK-5321) FlinkMiniCluster does not start Jobmanager MetricQueryService

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821128#comment-15821128
 ] 

ASF GitHub Bot commented on FLINK-5321:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2991
  
@StephanEwen Could you take another look?


> FlinkMiniCluster does not start Jobmanager MetricQueryService
> -
>
> Key: FLINK-5321
> URL: https://issues.apache.org/jira/browse/FLINK-5321
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager, Metrics
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.2.0
>
>
> The JobManager MetricQueryService is never started when using the 
> LocalFlinkMiniCluster. It lacks the call to 
> MetricRegistry#startQueryService().
> As a result, JobManager metrics aren't reported to the web frontend, and it 
> causes repeated logging of exceptions.





[jira] [Commented] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821103#comment-15821103
 ] 

ASF GitHub Bot commented on FLINK-5466:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3100
  
+1 to merge and backport. This is a big improvement when accessing the web 
UI over a slow connection.


> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>
> Currently the default environment set in our gulpfile is development, which 
> lead to very large created JS files. When building the web UI we apparently 
> forgot to set the environment to production (build via gulp production).
> Since this is likely to occur again, we should make the default environment 
> production and make sure to use development manually.





[jira] [Closed] (FLINK-5466) Make production environment default in gulpfile

2017-01-12 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi closed FLINK-5466.
--
   Resolution: Fixed
 Assignee: Ufuk Celebi
Fix Version/s: 1.1.5
   1.3.0
   1.2.0

Fixed in
12cf5dc, 4ea52d6 (release-1.1),
e55d426, 624f8ae (release-1.2),
408f6ea, e1181f6 (master).

> Make production environment default in gulpfile
> ---
>
> Key: FLINK-5466
> URL: https://issues.apache.org/jira/browse/FLINK-5466
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
> Fix For: 1.2.0, 1.3.0, 1.1.5
>
>
> Currently the default environment set in our gulpfile is development, which 
> lead to very large created JS files. When building the web UI we apparently 
> forgot to set the environment to production (build via gulp production).
> Since this is likely to occur again, we should make the default environment 
> production and make sure to use development manually.





[jira] [Created] (FLINK-5470) Requesting non-existing log/stdout file from TM crashes the it

2017-01-12 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5470:


 Summary: Requesting non-existing log/stdout file from TM crashes 
the it
 Key: FLINK-5470
 URL: https://issues.apache.org/jira/browse/FLINK-5470
 Project: Flink
  Issue Type: Bug
  Components: TaskManager
Affects Versions: 1.2.0, 1.3.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Critical
 Fix For: 1.2.0, 1.3.0


Requesting the TM log/stdout file via the web interface crashes the TM if the 
respective file does not exist. This is, for example, the case when running 
Flink via DC/OS.

{code}
java.io.FileNotFoundException: flink-taskmanager.out (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at 
org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at 
org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at 
org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at 
org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
at akka.dispatch.Mailbox.run(Mailbox.scala:221)
at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}
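
A defensive sketch of the fix direction (a hypothetical helper, not the actual TaskManager code): check for the file before opening it and answer with an error instead of letting the exception escalate.

```scala
import java.io.File

object LogRequestSketch {
  def resolveLogFile(path: String): Either[String, File] = {
    val file = new File(path)
    if (file.isFile) Right(file)
    else Left(s"Requested file '$path' does not exist on this TaskManager")
  }
}
```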





[GitHub] flink pull request #3106: [FLINK-5118] Fix inconsistent numBytesIn/Out metri...

2017-01-12 Thread uce
Github user uce commented on a diff in the pull request:

https://github.com/apache/flink/pull/3106#discussion_r95812448
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpanningRecordSerializer.java
 ---
@@ -99,7 +99,7 @@ public SerializationResult addRecord(T record) throws IOException {
this.lengthBuffer.putInt(0, len);

if (numBytesOut != null) {
-   numBytesOut.inc(len);
+   numBytesOut.inc(len + 4);
--- End diff --

I think this warrants both a comment and a test.




[jira] [Updated] (FLINK-5470) Requesting non-existing log/stdout file from TM crashes the TM

2017-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-5470:
-
Summary: Requesting non-existing log/stdout file from TM crashes the TM  
(was: Requesting non-existing log/stdout file from TM crashes the it)

> Requesting non-existing log/stdout file from TM crashes the TM
> --
>
> Key: FLINK-5470
> URL: https://issues.apache.org/jira/browse/FLINK-5470
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
> Fix For: 1.2.0, 1.3.0
>
>
> Requesting the TM log/stdout file via the web interface crashes the TM if the 
> respective file does not exist. This is, for example, the case when running 
> Flink via DC/OS.
> {code}
> java.io.FileNotFoundException: flink-taskmanager.out (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
>   at org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}
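
A minimal sketch of the defensive check this calls for (illustrative names only, not the actual TaskManager handler code): verify that the requested file exists before opening it, so a missing log/stdout file becomes a regular error result instead of an uncaught {{FileNotFoundException}} that kills the actor.

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;

public final class LogFileFetcher {

    /** Returns a stream for the requested file, or null if it is unavailable. */
    public static InputStream openIfPresent(String path) {
        File file = new File(path);
        if (!file.exists() || !file.isFile()) {
            // Missing file: let the caller answer with an error message
            // instead of crashing on an uncaught exception.
            return null;
        }
        try {
            return new FileInputStream(file);
        } catch (FileNotFoundException e) {
            // The file was deleted between the check and the open.
            return null;
        }
    }
}
{code}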



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4973) Flakey Yarn tests due to recently added latency marker

2017-01-12 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820897#comment-15820897
 ] 

Till Rohrmann commented on FLINK-4973:
--

I think that exactly for these cases it is not good to remove the logging of 
exceptions. Instead we should check the lifecycle assumptions of the timer 
service and the buffer pools. If that cannot be corrected and does not indicate 
a bug, then we could think about filtering out these specific exceptions. But 
the filtering could also happen at the user level when parsing the actual log.

> Flakey Yarn tests due to recently added latency marker
> --
>
> Key: FLINK-4973
> URL: https://issues.apache.org/jira/browse/FLINK-4973
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.2.0
>
>
> The newly introduced {{LatencyMarksEmitter}} emits latency markers on the 
> {{Output}}. This can still happen after the underlying {{BufferPool}} has 
> been destroyed. The occurring exception is then logged:
> {code}
> 2016-10-29 15:00:48,088 INFO  org.apache.flink.runtime.taskmanager.Task - Source: Custom File Source (1/1) switched to FINISHED
> 2016-10-29 15:00:48,089 INFO  org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Custom File Source (1/1)
> 2016-10-29 15:00:48,089 INFO  org.apache.flink.yarn.YarnTaskManager - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Custom File Source (8fe0f817fa6d960ea33f6e57e0c3891c)
> 2016-10-29 15:00:48,101 WARN  org.apache.flink.streaming.api.operators.AbstractStreamOperator - Error while emitting latency marker
> java.lang.RuntimeException: Buffer pool is destroyed.
>   at org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:99)
>   at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.emitLatencyMarker(AbstractStreamOperator.java:734)
>   at org.apache.flink.streaming.api.operators.StreamSource$LatencyMarksEmitter$1.run(StreamSource.java:134)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
>   at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBuffer(LocalBufferPool.java:144)
>   at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:133)
>   at org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:118)
>   at org.apache.flink.runtime.io.network.api.writer.RecordWriter.randomEmit(RecordWriter.java:103)
>   at org.apache.flink.streaming.runtime.io.StreamRecordWriter.randomEmit(StreamRecordWriter.java:104)
>   at org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:96)
>   ... 9 more
> {code}
> This exception is clearly related to the shutdown of a stream operator and 
> does not indicate wrong behaviour. Since the yarn tests simply scan the log 
> for certain keywords (including "exception"), such a case can make them fail.
> It would be best if we could make sure that the {{LatencyMarksEmitter}} only 
> emits latency markers while the {{Output}} is still active. But we could also 
> simply not log exceptions which occur after the stream operator has been 
> stopped.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/171578846/log.txt
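
A hedged sketch of the first option above (hypothetical names, not Flink's actual {{LatencyMarksEmitter}}): gate the emission on an active flag that the operator clears on shutdown, so markers scheduled after the buffer pool is destroyed are dropped quietly.

{code}
import java.util.concurrent.atomic.AtomicBoolean;

final class GuardedLatencyEmitter {

    private final AtomicBoolean active = new AtomicBoolean(true);

    /** Invoked by the periodic timer task. */
    void emitLatencyMarker(long markedTime) {
        if (!active.get()) {
            return; // operator already stopped; drop the marker silently
        }
        // ... forward the marker to the output here ...
    }

    /** Invoked once when the stream operator shuts down. */
    void stop() {
        active.set(false);
    }
}
{code}

Note the remaining race: a marker already past the check can still hit a destroyed buffer pool, so the shutdown path would additionally have to swallow that specific exception.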



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3105: [FLINK-4641] Support branching CEP patterns

2017-01-12 Thread chermenin
GitHub user chermenin opened a pull request:

https://github.com/apache/flink/pull/3105

[FLINK-4641] Support branching CEP patterns

Support for branched CEP patterns was added in this PR.
After merging it we will be able to use the following code to define more 
complex patterns:

```
Pattern pattern = EventPattern.event("start")
    .next(
        Pattern.or(
            EventPattern.event("middle_1").subtype(F.class),
            EventPattern.event("middle_2").where(new MyFilterFunction())
        ))
    .followedBy(EventPattern.event("end"));
```

This PR will close https://issues.apache.org/jira/browse/FLINK-4641.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chermenin/flink flink-4641

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3105


commit 026ada648d1277fd57f2fb2361a36bf0c8f5e57b
Author: Aleksandr Chermenin 
Date:   2017-01-12T09:54:44Z

[FLINK-4641] Base Java implementation.

commit f82fc8386493e84e824110a26d5e059333efaec0
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:07:53Z

[FLINK-4641] Fixed branching pattern.

commit ad074e2e2c1faf8571b8b8e7ce3144c0fbc5e31d
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:21:15Z

[FLINK-4641] Fixed Scala API.

commit 38e14a89b001bd443133746216d422ac46176c3f
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:56:22Z

[FLINK-4641] Fixed tests for Scala API.

commit 9ba130df964ece5b8756e8b46b6ec22dcde69877
Author: Aleksandr Chermenin 
Date:   2017-01-12T12:15:01Z

[FLINK-4641] Fixed CEP Java 8 lambda test.

commit 8d490aae497e85003a402ca6c1fd687e30c3b55f
Author: Aleksandr Chermenin 
Date:   2017-01-12T12:24:52Z

[FLINK-4641] Improved code documentation.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5417) Fix the wrong config file name

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820953#comment-15820953
 ] 

ASF GitHub Bot commented on FLINK-5417:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3071
  
When viewing the documentation in Firefox there is now a lot of whitespace 
above the diagram. The dimensions have changed slightly as well :/


> Fix the wrong config file name 
> ---
>
> Key: FLINK-5417
> URL: https://issues.apache.org/jira/browse/FLINK-5417
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Tao Wang
>Priority: Trivial
>
> As the config file name is conf/flink-conf.yaml, the usage 
> "conf/flink-config.yaml" in the documentation is wrong and easily confuses 
> users. We should correct it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3103: [FLINK-5464] [metrics] Prevent some NPEs

2017-01-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3103

[FLINK-5464] [metrics] Prevent some NPEs

This PR prevents some NullPointerExceptions from occurring in the metric 
system.

- When registering a metric that is null, the metric is ignored and a 
warning is logged (see the sketch after this list).
  - i.e. ```group.counter("counter", null);```
- The MetricDumpSerialization completely ignores gauges if their value is 
null.
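
A minimal sketch of that guard, under assumed names (this is not the actual MetricGroup code):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class SafeMetricRegistry {

    private static final Logger LOG = LoggerFactory.getLogger(SafeMetricRegistry.class);

    /** Registers the metric; a null metric is ignored with a warning instead of failing later with an NPE. */
    <M> void register(String name, M metric) {
        if (metric == null) {
            LOG.warn("Ignoring attempted registration of a null metric under name '{}'.", name);
            return;
        }
        // ... actual registration logic ...
    }
}
```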

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 5464_mqs_npe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3103


commit 0912848b3ce54842fc6810aa0b041db5547ac690
Author: zentol 
Date:   2017-01-12T11:41:56Z

[FLINK-5464] [metrics] Ignore metrics that are null

commit 941c83a599221fc57c02605e2c3bc348d70aa8b2
Author: zentol 
Date:   2017-01-12T11:42:26Z

[FLINK-5464] [metrics] Prevent Gauge NPE in serialization




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5417) Fix the wrong config file name

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820913#comment-15820913
 ] 

ASF GitHub Bot commented on FLINK-5417:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3071
  
I will take a look at these changes.

Just a small note: we are not notified of pushed changes; a small 
comment is always good to get our attention ;)


> Fix the wrong config file name 
> ---
>
> Key: FLINK-5417
> URL: https://issues.apache.org/jira/browse/FLINK-5417
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Tao Wang
>Priority: Trivial
>
> As the config file name is conf/flink-conf.yaml, the usage 
> "conf/flink-config.yaml" in the documentation is wrong and easily confuses 
> users. We should correct it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3071: [FLINK-5417][DOCUMENTATION]correct the wrong config file ...

2017-01-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3071
  
I will take a look at these changes.

Just a small note: we are not notified of pushed changes; a small 
comment is always good to get our attention ;)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3071: [FLINK-5417][DOCUMENTATION]correct the wrong config file ...

2017-01-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3071
  
When viewing the documentation in Firefox there is now a lot of whitespace 
above the diagram. The dimensions have changed slightly as well :/


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3106: [FLINK-????] Fix inconsistent numBytesIn/Out metri...

2017-01-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3106

[FLINK-] Fix inconsistent numBytesIn/Out metrics

This PR fixes _some_ JIRA issue (I can't find the correct issue ID right 
now since JIRA is being stupid) regarding inconsistent numBytesIn/Out metrics.

Given 2 consecutive tasks A->B, the numBytesOut count of A was lower than the 
numBytesIn(Local/Remote) count of B by a huge margin, although they should be 
(nearly) identical.

The problem is that A was counting how many bytes the serialized records 
were using, whereas B was counting how large the ```Buffer```s were that it 
received. A ```Buffer``` contains the following data:
```
|size_of_R1|R1|size_of_R2|R2|...
```
where R1/R2 are serialized records and the sizes denote the serialized 
length of their respective record.

So, while A was adding ```sizeOf(R1)```, B was adding ```sizeOf(size_of_R1) + 
sizeOf(R1)```.

A was simply not accounting for the bytes that the size_of_RX fields were 
using, which is what this PR fixes by adding ```4``` after the serialization of 
each record.
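
The fix itself is a one-liner, but as a hedged illustration of the accounting (simplified names, not the actual SpanningRecordSerializer):

```java
import java.util.concurrent.atomic.AtomicLong;

final class BytesOutCounter {

    /** Size of the int length field written before every record: |size_of_R|R|. */
    private static final int LENGTH_FIELD_BYTES = 4;

    private final AtomicLong numBytesOut = new AtomicLong();

    /** Called once per serialized record on the sender side. */
    void onRecordSerialized(byte[] payload) {
        // Count the payload *and* its length prefix, matching what the
        // receiver later measures on the whole Buffer.
        numBytesOut.addAndGet(payload.length + LENGTH_FIELD_BYTES);
    }

    long get() {
        return numBytesOut.get();
    }
}
```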

cc @uce 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink _metrics_incon

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3106


commit 5300a35ac6ee8815d07a739c9b370ae475f0d9a0
Author: zentol 
Date:   2017-01-12T13:59:22Z

[FLINK-] Fix inconsistent numBytesIn/Out metrics




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3074: [FLINK-5421] Explicit restore method in Snapshotable inte...

2017-01-12 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3074
  
Thanks for your work! 👍 

I merged, could you please close this PR?



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-5150) WebUI metric-related resource leak

2017-01-12 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-5150:
---

Assignee: Chesnay Schepler

> WebUI metric-related resource leak
> --
>
> Key: FLINK-5150
> URL: https://issues.apache.org/jira/browse/FLINK-5150
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.1.3
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.0
>
>
> The WebUI maintains a list of selected metrics for all jobs and vertices. 
> When a metric is selected in the metric screen it is added to this list, and 
> removed if it is unselected.
> The contents of this list are stored in the browser's localStorage. This 
> allows a user to set up a metric screen, move to another page, and return to 
> the original screen completely intact.
> However, if the metrics are never *unselected* by the user they will remain 
> in this list. They will also still be in this list if the WebUI can't even 
> display the corresponding job page anymore, for example because the history 
> size limit was exceeded. They will even survive a browser restart, since they 
> are not stored in a session-based storage.
> Furthermore, the WebUI still tries to update these metrics, adding 
> additional overhead to the WebBackend and potentially the network.
> In other words, if you _ever_ checked out the metrics tab for some job, 
> chances are that the next time you start the WebInterface it will still try 
> to update the metrics for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5380) Number of outgoing records not reported in web interface

2017-01-12 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-5380:

Priority: Blocker  (was: Major)

> Number of outgoing records not reported in web interface
> 
>
> Key: FLINK-5380
> URL: https://issues.apache.org/jira/browse/FLINK-5380
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Streaming, Webfrontend
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.2.0, 1.3.0
>
> Attachments: outRecordsNotreported.png
>
>
> The web frontend does not report any outgoing records in the web frontend.
> The amount of data in MB is reported correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3102: [FLINK-5467] Avoid legacy state for CheckpointedRestoring...

2017-01-12 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/3102
  
I'm not aware of all implications of this change, but this fixes the 
problem I had. The operator does not checkpoint any legacy operator state.

+1 to merge from my side.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread joerg84
Github user joerg84 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95783168
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Thx for the explanation!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820921#comment-15820921
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user joerg84 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3093#discussion_r95783168
  
--- Diff: 
flink-runtime-web/web-dashboard/app/scripts/modules/submit/submit.ctrl.coffee 
---
@@ -167,7 +167,7 @@ angular.module('flinkApp')
 $scope.uploader['success'] = null
   else
 $scope.uploader['success'] = "Uploaded!"
-  xhr.open("POST", "/jars/upload")
+  xhr.open("POST", "jars/upload")
--- End diff --

Thx for the explanation!


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:|borderStyle=solid}
>  sizes="16x16"> 
> 
> {code}
> When referencing the UI from another UI, e.g., the DC/OS UI, relative links 
> are preferred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3093: [FLINK-5444] Made Flink UI links relative.

2017-01-12 Thread joerg84
Github user joerg84 commented on the issue:

https://github.com/apache/flink/pull/3093
  
@sachingoel0101 PTAL


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5444) Flink UI uses absolute URLs.

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15821007#comment-15821007
 ] 

ASF GitHub Bot commented on FLINK-5444:
---

Github user joerg84 commented on the issue:

https://github.com/apache/flink/pull/3093
  
@sachingoel0101 PTAL


> Flink UI uses absolute URLs.
> 
>
> Key: FLINK-5444
> URL: https://issues.apache.org/jira/browse/FLINK-5444
> Project: Flink
>  Issue Type: Bug
>Reporter: Joerg Schad
>Assignee: Joerg Schad
>
> The Flink UI has a mixed use of absolute and relative links. See for example 
> [here](https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/web/index.html)
> {code:|borderStyle=solid}
>  sizes="16x16"> 
> 
> {code}
> When referencing the UI from another UI, e.g., the DC/OS UI, relative links 
> are preferred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5150) WebUI metric-related resource leak

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820872#comment-15820872
 ] 

ASF GitHub Bot commented on FLINK-5150:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3104

[FLINK-5150] [webui] Store metrics in sessionStorage

This PR modifies the webfrontend to no longer store the metrics setup 
(selected metrics + their values) in localStorage but in sessionStorage instead.

Using localStorage means that data is never deleted unless explicitly removed. 
It survives moving across pages, but also browser restarts. We currently lack 
an automatic explicit removal, which was problematic since metrics for previous 
jobs (that may even have been executed on a completely different cluster) were 
still being updated. For example, if I ran any job on my local machine it would 
regularly fire 30+ requests for dead metrics.

By moving to sessionStorage this issue is solved, since the data is cleared 
when the page is closed. However, you can still navigate to other pages and the 
setup will survive. As a bonus, you can now have 2 tabs for the same task with 
different metric setups!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 5150_webui_rl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3104


commit bb0415c2e879e3a2b5e977484c529d3d6a94c657
Author: zentol 
Date:   2017-01-12T12:12:24Z

[FLINK-5150] [webui] Store metrics in sessionStorage




> WebUI metric-related resource leak
> --
>
> Key: FLINK-5150
> URL: https://issues.apache.org/jira/browse/FLINK-5150
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.1.3
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.0
>
>
> The WebUI maintains a list of selected metrics for all jobs and vertices. 
> When a metric is selected in the metric screen it is added to this list, and 
> removed if it is unselected.
> The contents of this list are stored in the browser's localStorage. This 
> allows a user to set up a metric screen, move to another page, and return to 
> the original screen completely intact.
> However, if the metrics are never *unselected* by the user they will remain 
> in this list. They will also still be in this list if the WebUI can't even 
> display the corresponding job page anymore, for example because the history 
> size limit was exceeded. They will even survive a browser restart, since they 
> are not stored in a session-based storage.
> Furthermore, the WebUI still tries to update these metrics, adding 
> additional overhead to the WebBackend and potentially the network.
> In other words, if you _ever_ checked out the metrics tab for some job, 
> chances are that the next time you start the WebInterface it will still try 
> to update the metrics for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3104: [FLINK-5150] [webui] Store metrics in sessionStora...

2017-01-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/3104

[FLINK-5150] [webui] Store metrics in sessionStorage

This PR modifies the webfrontend to no longer store the metrics setup 
(selected metrics + their values) in localStorage but in sessionStorage instead.

Using localStorage means that data is never deleted unless explicitly removed. 
It survives moving across pages, but also browser restarts. We currently lack 
an automatic explicit removal, which was problematic since metrics for previous 
jobs (that may even have been executed on a completely different cluster) were 
still being updated. For example, if I ran any job on my local machine it would 
regularly fire 30+ requests for dead metrics.

By moving to sessionStorage this issue is solved, since the data is cleared 
when the page is closed. However, you can still navigate to other pages and the 
setup will survive. As a bonus, you can now have 2 tabs for the same task with 
different metric setups!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 5150_webui_rl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3104


commit bb0415c2e879e3a2b5e977484c529d3d6a94c657
Author: zentol 
Date:   2017-01-12T12:12:24Z

[FLINK-5150] [webui] Store metrics in sessionStorage




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5421) Explicit restore method in Snapshotable

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820881#comment-15820881
 ] 

ASF GitHub Bot commented on FLINK-5421:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3074
  
Thanks for your work!  

I merged, could you please close this PR?



> Explicit restore method in Snapshotable
> ---
>
> Key: FLINK-5421
> URL: https://issues.apache.org/jira/browse/FLINK-5421
> Project: Flink
>  Issue Type: Improvement
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> We should introduce an explicit {{restore(...)}} method to match the 
> {{snapshot(...)}} method in this interface.
> Currently, restore happens implicitly in backends, i.e. when state handles 
> are provided, backends execute restore logic in their constructors. This 
> behaviour makes it hard for backends to participate in the task's lifecycle 
> through {{CloseableRegistry}}, because we can only register backend objects 
> after they have been constructed. As a result, for example, all restore 
> operations that happen in the constructor are not responsive to cancellation.
> When we introduce an explicit restore, we can first create a backend object, 
> then register it, and only then run restore.
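
A sketch of the intended lifecycle, under assumed interfaces (not the actual {{Snapshotable}} signatures):

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

interface RestorableBackend extends Closeable {
    // Restore is now a separate step instead of constructor work.
    void restore(Iterable<byte[]> stateHandles) throws IOException;
}

final class SimpleCloseableRegistry implements Closeable {
    private final Deque<Closeable> resources = new ArrayDeque<>();

    void register(Closeable resource) {
        resources.push(resource);
    }

    @Override
    public void close() throws IOException {
        for (Closeable resource : resources) {
            resource.close(); // reaches restores that are still in flight
        }
    }
}

// Usage: create first, register second, restore last, so that a cancellation
// (registry.close()) can interrupt a long-running restore:
//
//   SimpleCloseableRegistry registry = new SimpleCloseableRegistry();
//   RestorableBackend backend = createBackend(); // constructor does no I/O
//   registry.register(backend);
//   backend.restore(stateHandles);
{code}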



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5421) Explicit restore method in Snapshotable

2017-01-12 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820882#comment-15820882
 ] 

Aljoscha Krettek commented on FLINK-5421:
-

Implemented on master in:
aaf8e09d8f9ee7f04cb79d317e3122282153858c
499aea0d834ad8e8ef34fb80ffd89b6482062e37

> Explicit restore method in Snapshotable
> ---
>
> Key: FLINK-5421
> URL: https://issues.apache.org/jira/browse/FLINK-5421
> Project: Flink
>  Issue Type: Improvement
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> We should introduce an explicit {{restore(...)}} method to match the 
> {{snapshot(...)}} method in this interface.
> Currently, restore happens implicitly in backends, i.e. when state handles 
> are provided, backends execute restore logic in their constructors. This 
> behaviour makes it hard for backends to participate in the task's lifecycle 
> through {{CloseableRegistry}}, because we can only register backend objects 
> after they have been constructed. As a result, for example, all restore 
> operations that happen in the constructor are not responsive to cancellation.
> When we introduce an explicit restore, we can first create a backend object, 
> then register it, and only then run restore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4641) Support branching CEP patterns

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820896#comment-15820896
 ] 

ASF GitHub Bot commented on FLINK-4641:
---

GitHub user chermenin opened a pull request:

https://github.com/apache/flink/pull/3105

[FLINK-4641] Support branching CEP patterns

Support for branched CEP patterns was added in this PR.
After merging it we will be able to use the following code to define more 
complex patterns:

```
Pattern pattern = EventPattern.event("start")
    .next(
        Pattern.or(
            EventPattern.event("middle_1").subtype(F.class),
            EventPattern.event("middle_2").where(new MyFilterFunction())
        ))
    .followedBy(EventPattern.event("end"));
```

This PR will close https://issues.apache.org/jira/browse/FLINK-4641.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chermenin/flink flink-4641

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3105


commit 026ada648d1277fd57f2fb2361a36bf0c8f5e57b
Author: Aleksandr Chermenin 
Date:   2017-01-12T09:54:44Z

[FLINK-4641] Base Java implementation.

commit f82fc8386493e84e824110a26d5e059333efaec0
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:07:53Z

[FLINK-4641] Fixed branching pattern.

commit ad074e2e2c1faf8571b8b8e7ce3144c0fbc5e31d
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:21:15Z

[FLINK-4641] Fixed Scala API.

commit 38e14a89b001bd443133746216d422ac46176c3f
Author: Aleksandr Chermenin 
Date:   2017-01-12T10:56:22Z

[FLINK-4641] Fixed tests for Scala API.

commit 9ba130df964ece5b8756e8b46b6ec22dcde69877
Author: Aleksandr Chermenin 
Date:   2017-01-12T12:15:01Z

[FLINK-4641] Fixed CEP Java 8 lambda test.

commit 8d490aae497e85003a402ca6c1fd687e30c3b55f
Author: Aleksandr Chermenin 
Date:   2017-01-12T12:24:52Z

[FLINK-4641] Improved code documentation.




> Support branching CEP patterns 
> ---
>
> Key: FLINK-4641
> URL: https://issues.apache.org/jira/browse/FLINK-4641
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Reporter: Till Rohrmann
>Assignee: Alexander Chermenin
>
> We should add support for branching CEP patterns to the Pattern API. 
> {code}
>     |--> B --|
>     |        |
> A --|        |--> D
>     |        |
>     |--> C --|
> {code}
> This feature will require changes to the {{Pattern}} class and the 
> {{NFACompiler}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5421) Explicit restore method in Snapshotable

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820907#comment-15820907
 ] 

ASF GitHub Bot commented on FLINK-5421:
---

Github user StefanRRichter closed the pull request at:

https://github.com/apache/flink/pull/3074


> Explicit restore method in Snapshotable
> ---
>
> Key: FLINK-5421
> URL: https://issues.apache.org/jira/browse/FLINK-5421
> Project: Flink
>  Issue Type: Improvement
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> We should introduce an explicit {{restore(...)}} method to match the 
> {{snapshot(...)}} method in this interface.
> Currently, restore happens implicitly in backends, i.e. when state handles 
> are provided, backends execute restore logic in their constructors. This 
> behaviour makes it hard for backends to participate in the task's lifecycle 
> through {{CloseableRegistry}}, because we can only register backend objects 
> after they have been constructed. As a result, for example, all restore 
> operations that happen in the constructor are not responsive to cancellation.
> When we introduce an explicit restore, we can first create a backend object, 
> then register it, and only then run restore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820908#comment-15820908
 ] 

ASF GitHub Bot commented on FLINK-5432:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3090#discussion_r95782207
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
 ---
@@ -282,7 +282,7 @@ private void monitorDirAndForwardSplits(FileSystem fs,
 * Returns the paths of the files not yet processed.
 * @param fileSystem The filesystem where the monitored directory 
resides.
 */
-   private Map listEligibleFiles(FileSystem fileSystem) 
throws IOException {
+   private Map listEligibleFiles(FileSystem fileSystem, 
String path) throws IOException {
--- End diff --

I would suggest passing a `Path` here. It is always a safer option to rely 
on this class than on strings.


> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
> Fix For: 1.2.0, 1.3.0
>
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the input format has {{NestedFileEnumeration}} set to true. This can be 
> fixed by enabling a recursive scan of the directories in the 
> {{listEligibleFiles}} method.
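
A hedged sketch of such a recursive scan (using JDK NIO rather than Flink's FileSystem API, with a simplified return type):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

final class NestedFileScanner {

    /** Maps every file under dir (including nested ones) to its last-modified time. */
    static Map<Path, Long> listEligibleFiles(Path dir) throws IOException {
        Map<Path, Long> eligible = new HashMap<>();
        try (Stream<Path> entries = Files.list(dir)) {
            for (Path entry : (Iterable<Path>) entries::iterator) {
                if (Files.isDirectory(entry)) {
                    // Recurse into nested directories instead of skipping them.
                    eligible.putAll(listEligibleFiles(entry));
                } else {
                    eligible.put(entry, Files.getLastModifiedTime(entry).toMillis());
                }
            }
        }
        return eligible;
    }
}
{code}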



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3090: [FLINK-5432] Fix nested files enumeration in Conti...

2017-01-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/3090#discussion_r95782207
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/ContinuousFileMonitoringFunction.java
 ---
@@ -282,7 +282,7 @@ private void monitorDirAndForwardSplits(FileSystem fs,
 * Returns the paths of the files not yet processed.
 * @param fileSystem The filesystem where the monitored directory 
resides.
 */
-   private Map listEligibleFiles(FileSystem fileSystem) 
throws IOException {
+   private Map listEligibleFiles(FileSystem fileSystem, 
String path) throws IOException {
--- End diff --

I would suggest passing a `Path` here. It is always a safer option to rely 
on this class than on strings.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3074: [FLINK-5421] Explicit restore method in Snapshotab...

2017-01-12 Thread StefanRRichter
Github user StefanRRichter closed the pull request at:

https://github.com/apache/flink/pull/3074


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5467) Stateless chained tasks set legacy operator state

2017-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15820936#comment-15820936
 ] 

ASF GitHub Bot commented on FLINK-5467:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/3102
  
I'm not aware of all implications of this change, but this fixes the 
problem I had. The operator does not checkpoint any legacy operator state.

+1 to merge from my side.


> Stateless chained tasks set legacy operator state
> -
>
> Key: FLINK-5467
> URL: https://issues.apache.org/jira/browse/FLINK-5467
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Stefan Richter
>
> I discovered this while trying to rescale a job with a Kafka source with a 
> chained stateless operator.
> Looking into it, it turns out that this fails because the checkpointed state 
> contains legacy operator state for the chained operator although it is 
> stateless.
> /cc [~aljoscha] You mentioned that this might be a possible duplicate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

