[jira] [Created] (FLINK-5440) Misleading error message when migrating and scaling down from savepoint

2017-01-10 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5440:
--

 Summary: Misleading error message when migrating and scaling down 
from savepoint
 Key: FLINK-5440
 URL: https://issues.apache.org/jira/browse/FLINK-5440
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Reporter: Ufuk Celebi
Priority: Minor


When resuming from a 1.1 savepoint with 1.2 and reducing the parallelism (while 
correctly setting the max parallelism), the error message says something about 
a missing operator, which is misleading. Restoring from the same savepoint with 
the savepoint's parallelism works as expected.

Instead, it should state that this kind of operation is not possible.
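The proposed fail-fast message could look roughly like this. This is a purely illustrative sketch with made-up names, not Flink's actual restore code; it only shows the kind of explicit rejection being asked for:

```python
# Hypothetical check, illustrative only -- not Flink's restore path.
# The point: reject a rescaled 1.1-savepoint restore with a clear message
# instead of reporting an unrelated missing operator.

def check_restore(savepoint_version, savepoint_parallelism, requested_parallelism):
    if savepoint_version == "1.1" and requested_parallelism != savepoint_parallelism:
        raise ValueError(
            "Rescaling a Flink 1.1 savepoint is not supported: the savepoint "
            f"was taken with parallelism {savepoint_parallelism}, but "
            f"{requested_parallelism} was requested. Restore with the original "
            "parallelism instead."
        )

# Restoring with the original parallelism passes; scaling down fails clearly.
check_restore("1.1", 4, 4)
try:
    check_restore("1.1", 4, 2)
    failed = False
except ValueError as e:
    failed = "not supported" in str(e)
```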



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5441) Directly allow SQL queries on a Table

2017-01-10 Thread Timo Walther (JIRA)
Timo Walther created FLINK-5441:
---

 Summary: Directly allow SQL queries on a Table
 Key: FLINK-5441
 URL: https://issues.apache.org/jira/browse/FLINK-5441
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Timo Walther


Right now, a user has to register a table before it can be used in SQL queries. 
In order to allow more fluent programming, we propose calling SQL directly on a 
table. An underscore can be used to reference the current table:

{code}
myTable.sql("SELECT a, b, c FROM _ WHERE d = 12")
{code}
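One way such a feature could work is to register the table under a generated internal name and rewrite the `_` placeholder before handing the query to the SQL planner. A minimal sketch of that substitution idea (hypothetical names; not the actual Table API implementation):

```python
import re

def sql_on_table(query, registered_name="_tmp_table_1"):
    # Replace the standalone `_` placeholder with the generated registration
    # name of the current table. `\b_\b` matches only a bare underscore token,
    # leaving identifiers such as `a_b` untouched (underscore is a word char).
    return re.sub(r"\b_\b", registered_name, query)

rewritten = sql_on_table("SELECT a, b, c FROM _ WHERE d = 12")
# rewritten == "SELECT a, b, c FROM _tmp_table_1 WHERE d = 12"
```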





[GitHub] flink issue #3040: [FLINK-3850] Add forward field annotations to DataSet

2017-01-10 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3040
  
I will also take a look at it soon. Unfortunately, the build failed. Could 
you have a look at it? I also recommend changing the `TableProgramsTestBase` 
from `TestExecutionMode.COLLECTION` to `TestExecutionMode.CLUSTER` temporarily, 
because I'm not sure whether the properties are considered in collection execution.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3850) Add forward field annotations to DataSet operators generated by the Table API

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815389#comment-15815389
 ] 

ASF GitHub Bot commented on FLINK-3850:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3040
  
I will also take a look at it soon. Unfortunately, the build failed. Could 
you have a look at it? I also recommend changing the `TableProgramsTestBase` 
from `TestExecutionMode.COLLECTION` to `TestExecutionMode.CLUSTER` temporarily, 
because I'm not sure whether the properties are considered in collection execution.


> Add forward field annotations to DataSet operators generated by the Table API
> -
>
> Key: FLINK-3850
> URL: https://issues.apache.org/jira/browse/FLINK-3850
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Nikolay Vasilishin
>
> The DataSet API features semantic annotations [1] to hint the optimizer which 
> input fields an operator copies. This information is valuable for the 
> optimizer because it can infer that certain physical properties such as 
> partitioning or sorting are not destroyed by user functions and thus generate 
> more efficient execution plans.
> The Table API is built on top of the DataSet API and generates DataSet 
> programs and code for user-defined functions. Hence, it knows exactly which 
> fields are modified and which are not. We should use this information to 
> automatically generate forward field annotations and attach them to the 
> operators. This can significantly improve the performance of certain 
> jobs.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/index.html#semantic-annotations
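The optimizer benefit described above can be sketched conceptually. This is not Flink code, just an illustration of why forwarded-field information matters: if an operator declares that it forwards a field unchanged, a downstream partitioning on that field is still valid and need not be re-established.

```python
# Conceptual sketch: an optimizer decision based on forwarded fields.
def needs_repartition(partition_field, forwarded_fields):
    # Re-partitioning is only needed if the function may have modified
    # the field the data is currently partitioned on.
    return partition_field not in forwarded_fields

# A map function that rewrites field 1 but forwards field 0 unchanged:
forwarded = {0}
assert not needs_repartition(0, forwarded)  # partitioning on field 0 survives
assert needs_repartition(1, forwarded)      # field 1 was modified
```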





[GitHub] flink pull request #3062: [FLINK-5144] Fix error while applying rule Aggrega...

2017-01-10 Thread KurtYoung
Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95373645
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/CorrelateITCase.scala
 ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.scala.batch.sql
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.ExecutionEnvironment
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.table.api.TableEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.apache.flink.test.util.TestBaseUtils
+import org.apache.flink.types.Row
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+import scala.collection.JavaConverters._
+
+@RunWith(classOf[Parameterized])
+class CorrelateITCase(mode: TestExecutionMode, configMode: TableConfigMode)
--- End diff --

Ok, will change that.




[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815110#comment-15815110
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95373645
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/CorrelateITCase.scala
 ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.scala.batch.sql
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.ExecutionEnvironment
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.table.api.TableEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.apache.flink.test.util.TestBaseUtils
+import org.apache.flink.types.Row
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+import scala.collection.JavaConverters._
+
+@RunWith(classOf[Parameterized])
+class CorrelateITCase(mode: TestExecutionMode, configMode: TableConfigMode)
--- End diff --

Ok, will change that.


> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate 
> whether this is a Flink or a Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at 

[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815109#comment-15815109
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

Github user KurtYoung commented on the issue:

https://github.com/apache/flink/pull/3062
  
Thanks @twalthr for the review. I have opened 
https://issues.apache.org/jira/browse/FLINK-5435 to track the cleanup work.


> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate 
> whether this is a Flink or a Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(BIGINT l_partkey, BIGINT p_partkey) NOT NULL
> rowtype of set:
> RecordType(BIGINT p_partkey) NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1838)
>   at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:273)
>   at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:148)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1820)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1032)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1052)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1942)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:136)
>   ... 17 more
> {code}
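The root cause in the trace above is Calcite's row-type equality check: the relational expression produced by the rule exposes an extra column (`l_partkey`) that the equivalence set it joins does not have. A toy version of that check, illustrative only:

```python
# Sketch of the invariant Calcite enforces (RelOptUtil.equal / Litmus):
# a transformed rel must produce exactly the row type of its equivalence set.
def rowtypes_equal(new_rel_fields, set_fields):
    # Any extra or missing column fails the check.
    return new_rel_fields == set_fields

new_rel = [("l_partkey", "BIGINT"), ("p_partkey", "BIGINT")]
rel_set = [("p_partkey", "BIGINT")]

mismatch = not rowtypes_equal(new_rel, rel_set)
# mismatch is True -> Calcite raises the "Type mismatch" AssertionError above.
```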





[jira] [Commented] (FLINK-4604) Add support for standard deviation/variance

2017-01-10 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815229#comment-15815229
 ] 

Timo Walther commented on FLINK-4604:
-

Any news on this [~anmu]? You could solve this issue temporarily by doing it 
similarly to FLINK-5144.

> Add support for standard deviation/variance
> ---
>
> Key: FLINK-4604
> URL: https://issues.apache.org/jira/browse/FLINK-4604
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Anton Mushin
> Attachments: 1.jpg
>
>
> Calcite's {{AggregateReduceFunctionsRule}} can convert SQL {{AVG, STDDEV_POP, 
> STDDEV_SAMP, VAR_POP, VAR_SAMP}} to sum/count functions. We should add, test, 
> and document this rule. 
> Whether we also want to add these aggregates to the Table API is up for discussion.
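The reduction performed by such a rule can be illustrated numerically. The sketch below is not Calcite code; it just shows the standard algebraic identity that lets VAR_POP and STDDEV_POP be computed from SUM and COUNT alone:

```python
import math

def var_pop(xs):
    # VAR_POP(x) = (SUM(x*x) - SUM(x)**2 / COUNT(x)) / COUNT(x)
    n = len(xs)
    s = sum(xs)
    ss = sum(x * x for x in xs)
    return (ss - s * s / n) / n

def stddev_pop(xs):
    # STDDEV_POP is simply the square root of VAR_POP.
    return math.sqrt(var_pop(xs))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
# For this classic example: var_pop(xs) == 4.0 and stddev_pop(xs) == 2.0
```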





[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95384257
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,9 +22,9 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ENV FLINK_VERSION=1.1.1
-ENV HADOOP_VERSION=27
-ENV SCALA_VERSION=2.11
+ARG FLINK_VERSION=1.1.3
--- End diff --

Thus, this was basically a bug.
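For context, the practical difference between the two directives (general Docker behavior, not specific to this PR): an `ARG` exists only at build time and can be overridden per build, while an `ENV` value is baked into the resulting image and visible to running containers.

```dockerfile
# ARG is a build-time variable; override it without editing the Dockerfile:
#   docker build --build-arg FLINK_VERSION=1.1.3 .
ARG FLINK_VERSION=1.1.3

# ENV persists into the image and the running container's environment.
ENV FLINK_HOME=/opt/flink
```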




[GitHub] flink pull request #3000: [FLINK-4861] [build] Package optional project arti...

2017-01-10 Thread greghogan
Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/3000




[GitHub] flink issue #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on the issue:

https://github.com/apache/flink/pull/3086
  
I am not sure why the build fails, as I am not deeply familiar with the 
Java/Scala world.




[jira] [Created] (FLINK-5436) UDF state without CheckpointedRestoring can result in restarting loop

2017-01-10 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5436:
--

 Summary: UDF state without CheckpointedRestoring can result in 
restarting loop
 Key: FLINK-5436
 URL: https://issues.apache.org/jira/browse/FLINK-5436
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Reporter: Ufuk Celebi
Priority: Minor


When restoring a job that has Checkpointed state but does not implement the new 
CheckpointedRestoring interface, the job will be restarted over and over again 
(given the respective restart strategy).

Since this is not recoverable, we should immediately fail the job.
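The proposed behavior can be sketched as follows. Names and structure are hypothetical, purely for illustration: an unrecoverable restore error should bypass the restart strategy instead of looping forever.

```python
class UnrecoverableRestoreError(Exception):
    """Restore cannot succeed on any retry, so restarting is pointless."""

def run_with_restarts(job, max_restarts=3):
    # A toy fixed-delay restart strategy: retry ordinary failures,
    # but fail the job immediately on unrecoverable ones.
    for attempt in range(max_restarts + 1):
        try:
            return job()
        except UnrecoverableRestoreError:
            raise                       # no restart: fail fast
        except Exception:
            if attempt == max_restarts:
                raise

attempts = 0
def bad_restore():
    global attempts
    attempts += 1
    raise UnrecoverableRestoreError(
        "Checkpointed state present but CheckpointedRestoring not implemented")

try:
    run_with_restarts(bad_restore)
except UnrecoverableRestoreError:
    pass
# attempts == 1: the job failed immediately rather than restarting repeatedly
```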





[jira] [Commented] (FLINK-5407) Savepoint for iterative Task fails.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815285#comment-15815285
 ] 

ASF GitHub Bot commented on FLINK-5407:
---

GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/3088

[FLINK-5407] Fix savepoints for iterative jobs

This PR fixes savepoints for iterative jobs. Savepoints failed with an NPE 
because the code assumed that operators in an operator chain are never null. 
For iterative jobs, this can happen.
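A minimal sketch of the kind of guard such a fix adds (illustrative only, not the actual StreamTask code): tolerate null entries when snapshotting an operator chain.

```python
def snapshot_chain(operator_chain):
    # For iterative jobs, entries in the chain can legitimately be None;
    # skipping them avoids the NullPointerException seen in the stack trace.
    snapshots = []
    for op in operator_chain:
        if op is None:
            continue
        snapshots.append(op["name"])
    return snapshots

chain = [None, {"name": "map"}, {"name": "sink"}]
result = snapshot_chain(chain)  # -> ["map", "sink"]
```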


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink NPE-Iterative-Snapshot

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3088.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3088


commit 984d596c063b5082520d8d58baa6b7361b1e9921
Author: Stefan Richter 
Date:   2017-01-05T13:28:50Z

[FLINK-5407] Handle snapshoting null-operator in chain

commit c96fe7ba35764b4f9e05ed61199b2027981daa54
Author: Stefan Richter 
Date:   2017-01-10T15:08:06Z

[FLINK-5407] IT case for savepoint with iterative job




> Savepoint for iterative Task fails.
> ---
>
> Key: FLINK-5407
> URL: https://issues.apache.org/jira/browse/FLINK-5407
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.2.0, 1.3.0
>
> Attachments: SavepointBug.java
>
>
> Flink 1.2-SNAPSHOT (Commit: 5b54009) on Windows.
> Triggering a savepoint for a streaming job, both the savepoint and the job 
> failed.
> The job failed with the following exception:
> {code}
> java.lang.RuntimeException: Error while triggering checkpoint for 
> IterationSource-7 (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1026)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createOperatorIdentifier(StreamTask.java:767)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.access$500(StreamTask.java:115)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.createStreamFactory(StreamTask.java:986)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:956)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:583)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:551)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:511)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1019)
>   ... 5 more
> And the savepoint failed with the following exception:
> Using address /127.0.0.1:6123 to connect to JobManager.
> Triggering savepoint for job 153310c4a836a92ce69151757c6b73f1.
> Waiting for response...
> 
>  The program finished with the following exception:
> java.lang.Exception: Failed to complete savepoint
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:793)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:782)
> at 
> org.apache.flink.runtime.concurrent.impl.FlinkFuture$6.recover(FlinkFuture.java:263)
> at akka.dispatch.Recover.internal(Future.scala:267)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:183)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:181)
> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
> at scala.util.Try$.apply(Try.scala:161)
> at scala.util.Failure.recover(Try.scala:185)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
> at 
> 

[jira] [Commented] (FLINK-5407) Savepoint for iterative Task fails.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815287#comment-15815287
 ] 

ASF GitHub Bot commented on FLINK-5407:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/3088
  
cc @aljoscha 


> Savepoint for iterative Task fails.
> ---
>
> Key: FLINK-5407
> URL: https://issues.apache.org/jira/browse/FLINK-5407
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.2.0, 1.3.0
>
> Attachments: SavepointBug.java
>
>
> Flink 1.2-SNAPSHOT (Commit: 5b54009) on Windows.
> Triggering a savepoint for a streaming job, both the savepoint and the job 
> failed.
> The job failed with the following exception:
> {code}
> java.lang.RuntimeException: Error while triggering checkpoint for 
> IterationSource-7 (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1026)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createOperatorIdentifier(StreamTask.java:767)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.access$500(StreamTask.java:115)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.createStreamFactory(StreamTask.java:986)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:956)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:583)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:551)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:511)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1019)
>   ... 5 more
> And the savepoint failed with the following exception:
> Using address /127.0.0.1:6123 to connect to JobManager.
> Triggering savepoint for job 153310c4a836a92ce69151757c6b73f1.
> Waiting for response...
> 
>  The program finished with the following exception:
> java.lang.Exception: Failed to complete savepoint
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:793)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:782)
> at 
> org.apache.flink.runtime.concurrent.impl.FlinkFuture$6.recover(FlinkFuture.java:263)
> at akka.dispatch.Recover.internal(Future.scala:267)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:183)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:181)
> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
> at scala.util.Try$.apply(Try.scala:161)
> at scala.util.Failure.recover(Try.scala:185)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.Exception: Checkpoint 

[jira] [Commented] (FLINK-4917) Deprecate "CheckpointedAsynchronously" interface

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815300#comment-15815300
 ] 

ASF GitHub Bot commented on FLINK-4917:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3087
  
Per Fabian's comment from the Jira, we should also document the deprecation 
in the javadoc with the recommended replacement functionality.


> Deprecate "CheckpointedAsynchronously" interface
> 
>
> Key: FLINK-4917
> URL: https://issues.apache.org/jira/browse/FLINK-4917
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Stephan Ewen
>  Labels: easyfix, starter
>
> The {{CheckpointedAsynchronously}} should be deprecated, as it is no longer 
> part of the new operator state abstraction.





[GitHub] flink issue #3086: Improve docker setup

2017-01-10 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3086
  
For creating tickets, it's actually a requirement from Apache Legal, as PR 
comments are then automatically added to the associated Jira, which documents 
the contribution of the code to the project.

The build is passing sufficiently; several of the tests are unstable on 
TravisCI.




[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95401803
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,9 +22,9 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ENV FLINK_VERSION=1.1.1
-ENV HADOOP_VERSION=27
-ENV SCALA_VERSION=2.11
+ARG FLINK_VERSION=1.1.3
--- End diff --

This does look to be broken. @mxm, what was the issue with backward 
compatibility?




[jira] [Resolved] (FLINK-4673) TypeInfoFactory for Either type

2017-01-10 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-4673.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in 1.3.0: d4d7cc32667016d66c65a7d64601cabd101a0c4d

> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.3.0
>
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4673) TypeInfoFactory for Either type

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815094#comment-15815094
 ] 

ASF GitHub Bot commented on FLINK-4673:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2545


> TypeInfoFactory for Either type
> ---
>
> Key: FLINK-4673
> URL: https://issues.apache.org/jira/browse/FLINK-4673
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.3.0
>
>
> I was able to resolve the requirement to specify an explicit 
> {{TypeInformation}} in the pull request for FLINK-4624 by creating a 
> {{TypeInfoFactory}} for the {{Either}} type.





[jira] [Created] (FLINK-5435) Cleanup the rules introduced by FLINK-5144 when calcite releases 1.12

2017-01-10 Thread Kurt Young (JIRA)
Kurt Young created FLINK-5435:
-

 Summary: Cleanup the rules introduced by FLINK-5144 when calcite 
releases 1.12
 Key: FLINK-5435
 URL: https://issues.apache.org/jira/browse/FLINK-5435
 Project: Flink
  Issue Type: Task
  Components: Table API & SQL
Reporter: Kurt Young
Assignee: Kurt Young
Priority: Minor


When fixing https://issues.apache.org/jira/browse/FLINK-5144, we copied some 
classes from Calcite and applied a quick fix in Flink. The fix has since been 
merged into Calcite and will be included in version 1.12; once that is 
released, we should update the Calcite version and remove the copied classes.





[jira] [Commented] (FLINK-5394) the estimateRowCount method of DataSetCalc didn't work

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815100#comment-15815100
 ] 

ASF GitHub Bot commented on FLINK-5394:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3058#discussion_r95372931
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSort.scala
 ---
@@ -71,6 +72,21 @@ class DataSetSort(
 )
   }
 
+  override def estimateRowCount(metadata: RelMetadataQuery): Double = {
+val inputRowCnt = metadata.getRowCount(this.getInput)
+if (inputRowCnt == null) {
+  inputRowCnt
+} else {
+  val rowCount = Math.max(inputRowCnt - limitStart, 0D)
--- End diff --

Returning a cardinality estimate of `0` is not a good idea because all 
remaining operations might appear to have no costs at all. Rather be 
conservative and return `1` which is still low but does not invalidate any 
subsequent costs.
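The reviewer's suggestion can be sketched as a self-contained snippet (hedged: the class and method names are hypothetical stand-ins, not Flink's actual `DataSetSort` code):

```java
// Hedged sketch of the suggested fix: clamp the post-offset row-count
// estimate to 1 instead of 0, so downstream operators never appear free.
class RowCountEstimate {
    static double afterOffset(Double inputRowCount, long limitStart) {
        if (inputRowCount == null) {
            return 1.0; // conservative default when no estimate is available
        }
        // max(..., 1) keeps the estimate low without invalidating later costs
        return Math.max(inputRowCount - limitStart, 1.0);
    }
}
```

With a 1000-row input and an offset of 2000, the estimate stays at 1 rather than dropping to 0.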


> the estimateRowCount method of DataSetCalc didn't work
> --
>
> Key: FLINK-5394
> URL: https://issues.apache.org/jira/browse/FLINK-5394
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: zhangjing
>Assignee: zhangjing
>
> The estimateRowCount method of DataSetCalc currently doesn't work. 
> If I run the following code,
> {code}
> Table table = tableEnv
>   .fromDataSet(data, "a, b, c")
>   .groupBy("a")
>   .select("a, a.avg, b.sum, c.count")
>   .where("a == 1");
> {code}
> the cost of every node in Optimized node tree is :
> {code}
> DataSetAggregate(groupBy=[a], select=[a, AVG(a) AS TMP_0, SUM(b) AS TMP_1, 
> COUNT(c) AS TMP_2]): rowcount = 1000.0, cumulative cost = {3000.0 rows, 
> 5000.0 cpu, 28000.0 io}
>   DataSetCalc(select=[a, b, c], where=[=(a, 1)]): rowcount = 1000.0, 
> cumulative cost = {2000.0 rows, 2000.0 cpu, 0.0 io}
>   DataSetScan(table=[[_DataSetTable_0]]): rowcount = 1000.0, cumulative 
> cost = {1000.0 rows, 1000.0 cpu, 0.0 io}
> {code}
> We expect the input rowcount of DataSetAggregate to be less than 1000; however, 
> the actual input rowcount is still 1000 because the estimateRowCount method of 
> DataSetCalc didn't work. 
> There are two reasons for this:
> 1. No custom metadataProvider is provided yet, so when DataSetAggregate calls 
> RelMetadataQuery.getRowCount(DataSetCalc) to estimate its input rowcount, the 
> call is dispatched to RelMdRowCount.
> 2. DataSetCalc is a subclass of SingleRel, so the previous call matches 
> getRowCount(SingleRel rel, RelMetadataQuery mq), which never uses 
> DataSetCalc.estimateRowCount.
> The same problem applies to all Flink RelNodes that are subclasses of 
> SingleRel.
> I plan to resolve this problem by adding a FlinkRelMdRowCount which contains 
> specific getRowCount of Flink RelNodes.
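
The dispatch problem described in the issue can be illustrated with a minimal, self-contained sketch (hedged: the classes below are simplified stand-ins, not Calcite's or Flink's actual API):

```java
// Simplified stand-ins (not Calcite's API) illustrating why the
// subclass's estimate is never consulted.
class SingleRel { }

class DataSetCalc extends SingleRel {
    // selectivity-aware estimate that the metadata handler never calls
    double estimateRowCount() {
        return 1000.0 * 0.25; // e.g. assumed 25% selectivity of the filter
    }
}

class RelMdRowCount {
    // The generic handler matches getRowCount(SingleRel) and just forwards
    // the input's row count, ignoring DataSetCalc's override.
    static double getRowCount(SingleRel rel) {
        return 1000.0; // input row count passed through unchanged
    }
}
```

Even when a `DataSetCalc` is passed in, the generic handler returns the unchanged input row count, which matches the cumulative costs shown in the optimized plan above.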





[GitHub] flink pull request #3058: [FLINK-5394] [Table API & SQL]the estimateRowCount...

2017-01-10 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3058#discussion_r95372931
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSort.scala
 ---
@@ -71,6 +72,21 @@ class DataSetSort(
 )
   }
 
+  override def estimateRowCount(metadata: RelMetadataQuery): Double = {
+val inputRowCnt = metadata.getRowCount(this.getInput)
+if (inputRowCnt == null) {
+  inputRowCnt
+} else {
+  val rowCount = Math.max(inputRowCnt - limitStart, 0D)
--- End diff --

Returning a cardinality estimate of `0` is not a good idea because all 
remaining operations might appear to have no costs at all. Rather be 
conservative and return `1` which is still low but does not invalidate any 
subsequent costs.




[jira] [Commented] (FLINK-5358) Support RowTypeInfo extraction in TypeExtractor by Row instance

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815183#comment-15815183
 ] 

ASF GitHub Bot commented on FLINK-5358:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3027
  
+1 to merge


> Support RowTypeInfo extraction in TypeExtractor by Row instance
> ---
>
> Key: FLINK-5358
> URL: https://issues.apache.org/jira/browse/FLINK-5358
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>
> {code}
> Row[] data = new Row[]{};
> TypeInformation typeInfo = TypeExtractor.getForObject(data[0]);
> {code}
> method {{getForObject}} wraps it into
> {code}
> GenericTypeInfo
> {code}
> the method should return {{RowTypeInfo}}





[GitHub] flink issue #3027: [FLINK-5358] add RowTypeInfo exctraction in TypeExtractor

2017-01-10 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3027
  
+1 to merge




[jira] [Commented] (FLINK-5358) Support RowTypeInfo extraction in TypeExtractor by Row instance

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815202#comment-15815202
 ] 

ASF GitHub Bot commented on FLINK-5358:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3027
  
merging


> Support RowTypeInfo extraction in TypeExtractor by Row instance
> ---
>
> Key: FLINK-5358
> URL: https://issues.apache.org/jira/browse/FLINK-5358
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>
> {code}
> Row[] data = new Row[]{};
> TypeInformation typeInfo = TypeExtractor.getForObject(data[0]);
> {code}
> method {{getForObject}} wraps it into
> {code}
> GenericTypeInfo
> {code}
> the method should return {{RowTypeInfo}}





[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95381935
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,9 +22,9 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ENV FLINK_VERSION=1.1.1
-ENV HADOOP_VERSION=27
-ENV SCALA_VERSION=2.11
+ARG FLINK_VERSION=1.1.3
--- End diff --

6e1e139 includes the note "replace ARG with ENV for 
backwards-compatibility".




[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95382374
  
--- Diff: flink-contrib/docker-flink/docker-entrypoint.sh ---
@@ -36,9 +39,9 @@ elif [ "$1" == "taskmanager" ]; then
 echo "Starting Task Manager"
 echo "config file: " && grep '^[^\n#]' $FLINK_HOME/conf/flink-conf.yaml
 $FLINK_HOME/bin/taskmanager.sh start
+
+  # prevent script to exit
+  tail -f /dev/null
 else
 $@
--- End diff --

How is this used?




[jira] [Created] (FLINK-5438) Typo in JobGraph generator Exception

2017-01-10 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5438:
--

 Summary: Typo in JobGraph generator Exception 
 Key: FLINK-5438
 URL: https://issues.apache.org/jira/browse/FLINK-5438
 Project: Flink
  Issue Type: Improvement
  Components: Client
Reporter: Ufuk Celebi
Priority: Trivial


When trying to run a job with parallelism > max parallelism, there is a typo in 
the error message:

{code}
Caused by: java.lang.IllegalStateException: The maximum parallelism (1) of the 
stream node Flat Map-3 is smaller than the parallelism (18). Increase the 
maximum parallelism or decrease the parallelism >>>ofthis<<< operator.
at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobVertex(StreamingJobGraphGenerator.java:318)
{code}







[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95410892
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,9 +22,9 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ENV FLINK_VERSION=1.1.1
-ENV HADOOP_VERSION=27
-ENV SCALA_VERSION=2.11
+ARG FLINK_VERSION=1.1.3
--- End diff --

`ARG` is only available in newer versions of Docker. If we want to maintain 
backwards-compatibility, we should adjust the README to state `docker build 
--build-arg FLINK_VERSION=1.0.3`. As far as I know, we don't gain anything by 
using `ARG`.
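
For reference, the two mechanisms differ as follows (a hedged sketch; the version numbers and image tag are illustrative, not from the PR):

```
# ARG is a build-time variable, overridable only while building the image:
#   Dockerfile:   ARG FLINK_VERSION=1.1.3
docker build --build-arg FLINK_VERSION=1.1.3 -t flink-image .

# ENV is baked into the image and can instead be overridden at run time:
#   Dockerfile:   ENV FLINK_VERSION=1.1.3
docker run -e FLINK_VERSION=1.1.3 flink-image
```

This is why replacing `ENV` with `ARG` changes which Docker versions and which workflows the Dockerfile supports.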




[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95411497
  
--- Diff: flink-contrib/docker-flink/docker-entrypoint.sh ---
@@ -36,9 +39,9 @@ elif [ "$1" == "taskmanager" ]; then
 echo "Starting Task Manager"
 echo "config file: " && grep '^[^\n#]' $FLINK_HOME/conf/flink-conf.yaml
 $FLINK_HOME/bin/taskmanager.sh start
+
+  # prevent script to exit
+  tail -f /dev/null
 else
 $@
--- End diff --

@greghogan Seems like a way to execute an arbitrary command inside the Docker 
container, passed as an argument to `docker run`.
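
The `$@` fallback can be illustrated with a minimal entrypoint sketch (hedged: this is not the actual script; note the original could arguably use `exec "$@"` so the command becomes PID 1 and receives signals directly):

```shell
# Minimal sketch of the entrypoint dispatch (not the actual script).
# Known roles would start Flink daemons; anything else is executed
# verbatim, which is what `docker run <image> <cmd>` relies on.
entrypoint() {
  case "$1" in
    jobmanager)  echo "would start jobmanager" ;;
    taskmanager) echo "would start taskmanager" ;;
    *)           "$@" ;;   # run the arbitrary command (ideally: exec "$@")
  esac
}
```

For example, `docker run <image> bash` would drop into a shell instead of starting a Flink daemon.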




[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95411011
  
--- Diff: flink-contrib/docker-flink/docker-entrypoint.sh ---
@@ -28,6 +28,9 @@ if [ "$1" == "jobmanager" ]; then
 
 echo "config file: " && grep '^[^\n#]' $FLINK_HOME/conf/flink-conf.yaml
 $FLINK_HOME/bin/jobmanager.sh start cluster
+
+  # prevent script to exit
+  tail -f /dev/null
--- End diff --

I think the proper way to fix this would be to call a non-daemonized 
startup script.




[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411773
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time sliding {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
+ * With this assigner, the {@code trigger} used is a
+ * {@link 
org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeTrigger
+ * ProcessingTimeTrigger} and no {@code evictor} can be specified.
+ * 
--- End diff --

Missing newline.




[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411689
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/WindowedStream.java
 ---
@@ -977,6 +1012,79 @@ private LegacyWindowOperatorType 
getLegacyWindowType(Function function) {
return LegacyWindowOperatorType.NONE;
}
 
+   private  SingleOutputStreamOperator createFastTimeOperatorIfValid(
--- End diff --

This is simply a copy of the old code, right?




[jira] [Commented] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1581#comment-1581
 ] 

ASF GitHub Bot commented on FLINK-5296:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411689
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/WindowedStream.java
 ---
@@ -977,6 +1012,79 @@ private LegacyWindowOperatorType 
getLegacyWindowType(Function function) {
return LegacyWindowOperatorType.NONE;
}
 
+   private  SingleOutputStreamOperator createFastTimeOperatorIfValid(
--- End diff --

This is simply a copy of the old code, right?


> Expose the old AlignedWindowOperators to the user through explicit commands.
> 
>
> Key: FLINK-5296
> URL: https://issues.apache.org/jira/browse/FLINK-5296
> Project: Flink
>  Issue Type: Bug
>  Components: Windowing Operators
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.2.0
>
>






[jira] [Commented] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815553#comment-15815553
 ] 

ASF GitHub Bot commented on FLINK-5296:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95412039
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time sliding {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
+ * With this assigner, the {@code trigger} used is a
+ * {@link 
org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeTrigger
+ * ProcessingTimeTrigger} and no {@code evictor} can be specified.
+ * 
+ * Bare in mind that no rescaling and no backwards compatibility is 
supported.
--- End diff --

I think we should have a bigger notice here, possibly with `` and 
`WARNING`.

Also, I think it should be "bear in mind". 
(https://www.quora.com/Which-is-correct-bare-in-mind-or-bear-in-mind)


> Expose the old AlignedWindowOperators to the user through explicit commands.
> 
>
> Key: FLINK-5296
> URL: https://issues.apache.org/jira/browse/FLINK-5296
> Project: Flink
>  Issue Type: Bug
>  Components: Windowing Operators
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.2.0
>
>






[jira] [Commented] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815556#comment-15815556
 ] 

ASF GitHub Bot commented on FLINK-5296:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95412067
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time tumbling {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
--- End diff --

Same comments as for the other assigner hold.


> Expose the old AlignedWindowOperators to the user through explicit commands.
> 
>
> Key: FLINK-5296
> URL: https://issues.apache.org/jira/browse/FLINK-5296
> Project: Flink
>  Issue Type: Bug
>  Components: Windowing Operators
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.2.0
>
>






[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95412067
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time tumbling {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
--- End diff --

Same comments as for the other assigner hold.




[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411761
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time sliding {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
--- End diff --

Missing newline.




[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411735
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/BaseAlignedWindowAssigner.java
 ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.windowing.triggers.Trigger;
+import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
+
+import java.util.Collection;
+
+/**
+ * A base {@link WindowAssigner} used to instantiate one of the deprecated
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
--- End diff --

Missing newline.




[jira] [Commented] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815551#comment-15815551
 ] 

ASF GitHub Bot commented on FLINK-5296:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95411735
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/BaseAlignedWindowAssigner.java
 ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.windowing.triggers.Trigger;
+import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
+
+import java.util.Collection;
+
+/**
+ * A base {@link WindowAssigner} used to instantiate one of the deprecated
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
--- End diff --

Missing newline.


> Expose the old AlignedWindowOperators to the user through explicit commands.
> 
>
> Key: FLINK-5296
> URL: https://issues.apache.org/jira/browse/FLINK-5296
> Project: Flink
>  Issue Type: Bug
>  Components: Windowing Operators
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.2.0
>
>






[GitHub] flink pull request #3075: [FLINK-5296] Expose the old AlignedWindowOperators...

2017-01-10 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/3075#discussion_r95412039
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingAlignedProcessingTimeWindows.java
 ---
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.windowing.assigners;
+
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+/**
+ * A processing time sliding {@link WindowAssigner window assigner} used 
to perform windowing using the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AccumulatingProcessingTimeWindowOperator
+ * AccumulatingProcessingTimeWindowOperator} and the
+ * {@link 
org.apache.flink.streaming.runtime.operators.windowing.AggregatingProcessingTimeWindowOperator
+ * AggregatingProcessingTimeWindowOperator}.
+ * 
+ * With this assigner, the {@code trigger} used is a
+ * {@link 
org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeTrigger
+ * ProcessingTimeTrigger} and no {@code evictor} can be specified.
+ * 
+ * Bare in mind that no rescaling and no backwards compatibility is 
supported.
--- End diff --

I think we should have a bigger notice here, possibly with `` and 
`WARNING`.

Also, I think it should be "bear in mind". 
(https://www.quora.com/Which-is-correct-bare-in-mind-or-bear-in-mind)




[GitHub] flink issue #3075: [FLINK-5296] Expose the old AlignedWindowOperators throug...

2017-01-10 Thread kl0u
Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/3075
  
@aljoscha Thanks for the review! 
I will integrate the comments and ping you.




[jira] [Commented] (FLINK-5296) Expose the old AlignedWindowOperators to the user through explicit commands.

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815559#comment-15815559
 ] 

ASF GitHub Bot commented on FLINK-5296:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/3075
  
@aljoscha Thanks for the review! 
I will integrate the comments and ping you.


> Expose the old AlignedWindowOperators to the user through explicit commands.
> 
>
> Key: FLINK-5296
> URL: https://issues.apache.org/jira/browse/FLINK-5296
> Project: Flink
>  Issue Type: Bug
>  Components: Windowing Operators
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.2.0
>
>






[jira] [Commented] (FLINK-1737) Add statistical whitening transformation to machine learning library

2017-01-10 Thread Pattarawat Chormai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816172#comment-15816172
 ] 

Pattarawat Chormai commented on FLINK-1737:
---

[~till.rohrmann] should we close this issue?

> Add statistical whitening transformation to machine learning library
> 
>
> Key: FLINK-1737
> URL: https://issues.apache.org/jira/browse/FLINK-1737
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Till Rohrmann
>Assignee: Daniel Pape
>  Labels: ML, Starter
>
> The statistical whitening transformation [1] is a preprocessing step for 
> different ML algorithms. It decorrelates the individual dimensions and sets 
> its variance to 1.
> Statistical whitening should be implemented as a {{Transfomer}}.
> Resources:
> [1] [http://en.wikipedia.org/wiki/Whitening_transformation]



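The statistical whitening transformation described in the issue above has a compact closed form. Writing \(\mu\) for the data mean and \(\Sigma = E \Lambda E^{\top}\) for the eigendecomposition of the covariance matrix, each sample \(x\) is transformed as

```latex
\hat{x} = \Lambda^{-1/2} E^{\top} (x - \mu)
```

so that the whitened data has zero mean and identity covariance: the individual dimensions are decorrelated and each has unit variance, as the issue describes.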


[GitHub] flink issue #3030: Updated version of #3014

2017-01-10 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3030
  
Has been merged, closing the pull request...




[jira] [Updated] (FLINK-5280) Refactor TableSource

2017-01-10 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5280:
-
Summary: Refactor TableSource  (was: Extend TableSource to support nested 
data)

> Refactor TableSource
> 
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[jira] [Closed] (FLINK-5358) Support RowTypeInfo extraction in TypeExtractor by Row instance

2017-01-10 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-5358.

   Resolution: Fixed
Fix Version/s: 1.3.0

Implemented for 1.3 with 2af939a10288348eedb56fc0959daee77c9cdcf3

> Support RowTypeInfo extraction in TypeExtractor by Row instance
> ---
>
> Key: FLINK-5358
> URL: https://issues.apache.org/jira/browse/FLINK-5358
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
> Fix For: 1.3.0
>
>
> {code}
> Row[] data = new Row[]{};
> TypeInformation typeInfo = TypeExtractor.getForObject(data[0]);
> {code}
> method {{getForObject}} wraps it into
> {code}
> GenericTypeInfo
> {code}
> the method should return {{RowTypeInfo}}





[jira] [Commented] (FLINK-4499) Introduce findbugs maven plugin

2017-01-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816125#comment-15816125
 ] 

Ted Yu commented on FLINK-4499:
---

This should be assigned to [~smarthi], right?

> Introduce findbugs maven plugin
> ---
>
> Key: FLINK-4499
> URL: https://issues.apache.org/jira/browse/FLINK-4499
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> As suggested by Stephan in FLINK-4482, this issue is to add 
> findbugs-maven-plugin into the build process so that we can detect lack of 
> proper locking and other defects automatically.
> We can begin with small set of rules.





[jira] [Commented] (FLINK-3903) Homebrew Installation

2017-01-10 Thread Pattarawat Chormai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816161#comment-15816161
 ] 

Pattarawat Chormai commented on FLINK-3903:
---

[~uce] are you still working on the issue?



> Homebrew Installation
> -
>
> Key: FLINK-3903
> URL: https://issues.apache.org/jira/browse/FLINK-3903
> Project: Flink
>  Issue Type: Task
>  Components: Documentation, release
>Reporter: Eron Wright 
>Assignee: Ufuk Celebi
>Priority: Minor
>  Labels: starter
>
> Recently I submitted a formula for apache-flink to the 
> [homebrew|http://brew.sh/] project.   Homebrew simplifies installation on Mac:
> {code}
> $ brew install apache-flink
> ...
> $ flink --version
> Version: 1.0.2, Commit ID: d39af15
> {code}
> Updates to the formula are adhoc at the moment.  I opened this issue to 
> formalize updating homebrew into the release process.  I drafted a procedure 
> doc here:
> [https://gist.github.com/EronWright/b62bd3b192a15be4c200a2542f7c9376]
>  
> Please also consider updating the website documentation to suggest homebrew 
> as an alternate installation method for Mac users.





[jira] [Commented] (FLINK-5358) Support RowTypeInfo extraction in TypeExtractor by Row instance

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816247#comment-15816247
 ] 

ASF GitHub Bot commented on FLINK-5358:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3027


> Support RowTypeInfo extraction in TypeExtractor by Row instance
> ---
>
> Key: FLINK-5358
> URL: https://issues.apache.org/jira/browse/FLINK-5358
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>
> {code}
> Row[] data = new Row[]{};
> TypeInformation typeInfo = TypeExtractor.getForObject(data[0]);
> {code}
> method {{getForObject}} wraps it into
> {code}
> GenericTypeInfo
> {code}
> the method should return {{RowTypeInfo}}





[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816248#comment-15816248
 ] 

ASF GitHub Bot commented on FLINK-5280:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3039


> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[GitHub] flink pull request #3030: Updated version of #3014

2017-01-10 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/3030




[jira] [Closed] (FLINK-5280) Refactor TableSource

2017-01-10 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-5280.

   Resolution: Fixed
Fix Version/s: 1.3.0
   1.2.0

Fixed for 1.2 with a504abe4656e104e6b63db001542f3180e191740
Fixed for 1.3 with 38ded2bb00aeb5c9581fa7ef313e5b9f803f5c26

> Refactor TableSource
> 
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
> Fix For: 1.2.0, 1.3.0
>
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[GitHub] flink pull request #3027: [FLINK-5358] add RowTypeInfo exctraction in TypeEx...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3027




[GitHub] flink pull request #3039: [FLINK-5280] Update TableSource to support nested ...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3039




[GitHub] flink issue #2075: [FLINK-3867] Provide Vagrant/Ansible based VMs to easily ...

2017-01-10 Thread kempa-liehr
Github user kempa-liehr commented on the issue:

https://github.com/apache/flink/pull/2075
  
I just separated the Flink-VM into a separate repository 
[https://github.com/kempa-liehr/DSVMs], such that you can close this pull request. 

However, could you arrange to link https://github.com/kempa-liehr/DSVMs 
into [https://flink.apache.org/community.html#third-party-packages]?




[jira] [Commented] (FLINK-3867) Provide virtualized Flink architecture for testing purposes

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816532#comment-15816532
 ] 

ASF GitHub Bot commented on FLINK-3867:
---

Github user kempa-liehr commented on the issue:

https://github.com/apache/flink/pull/2075
  
I just separated the Flink-VM into a separate repository 
[https://github.com/kempa-liehr/DSVMs], such that you can close this pull request. 

However, could you arrange to link https://github.com/kempa-liehr/DSVMs 
into [https://flink.apache.org/community.html#third-party-packages]?


> Provide virtualized Flink architecture for testing purposes
> ---
>
> Key: FLINK-3867
> URL: https://issues.apache.org/jira/browse/FLINK-3867
> Project: Flink
>  Issue Type: Test
>  Components: flink-contrib
>Reporter: Andreas Kempa-Liehr
>
> For developers interested in Apache Flink it would be very helpful to deploy 
> an Apache Flink cluster on a set of virtualized machines, in order to get 
> used to the configuration of the system and the development of basic 
> algorithms.
> This kind of setup could also be used for testing purposes.
> An example implementation on the basis of Ansible and Vagrant has been published 
> under https://github.com/kempa-liehr/flinkVM/tree/master/flink-vm.





[GitHub] flink pull request #3090: [FLINK-5432] fix nested files enumeration in Conti...

2017-01-10 Thread ymarzougui
GitHub user ymarzougui opened a pull request:

https://github.com/apache/flink/pull/3090

[FLINK-5432] fix nested files enumeration in 
ContinuousFileMonitoringFunction

This PR fixes reading nested files when the InputFormat has 
NestedFileEnumeration set to true. Nested files were not read because the code 
in listEligibleFiles did not recursively enumerate the input paths.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymarzougui/flink FLINK-5432

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3090


commit c74996b278187e348af7043ddc0aa9a500373502
Author: Yassine Marzougui 
Date:   2017-01-11T00:43:19Z

[FLINK-5432] recursively scan nested files in 
ContinuousFileMonitoringFunction




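The fix described in this PR, recursively enumerating sub-directories in `listEligibleFiles`, can be sketched as follows. This is a simplified illustration using plain `java.io.File` rather than Flink's `FileSystem`/`FileStatus` API, and the class and directory names are made up:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class NestedFileScanSketch {

    // Recursively collect regular files under 'dir'. This mirrors the
    // behaviour the fix adds when NestedFileEnumeration is set to true:
    // instead of listing only the top level, we descend into directories.
    static void listEligibleFiles(File dir, List<File> out) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return; // not a directory, or an I/O error occurred
        }
        for (File entry : entries) {
            if (entry.isDirectory()) {
                listEligibleFiles(entry, out); // recurse into nested directories
            } else {
                out.add(entry);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a small nested layout in a temp directory.
        File root = new File(System.getProperty("java.io.tmpdir"), "flink5432-demo");
        new File(root, "nested/dir").mkdirs();
        new File(root, "top.txt").createNewFile();
        new File(root, "nested/dir/deep.txt").createNewFile();

        List<File> files = new ArrayList<>();
        listEligibleFiles(root, files);
        // Both files are found, including the one two levels deep.
        System.out.println(files.size());
    }
}
```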


[jira] [Commented] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816697#comment-15816697
 ] 

ASF GitHub Bot commented on FLINK-5432:
---

GitHub user ymarzougui opened a pull request:

https://github.com/apache/flink/pull/3090

[FLINK-5432] fix nested files enumeration in 
ContinuousFileMonitoringFunction

This PR fixes reading nested files when the InputFormat has 
NestedFileEnumeration set to true. Nested files were not read because the code 
in listEligibleFiles did not recursively enumerate the input paths.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymarzougui/flink FLINK-5432

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3090.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3090


commit c74996b278187e348af7043ddc0aa9a500373502
Author: Yassine Marzougui 
Date:   2017-01-11T00:43:19Z

[FLINK-5432] recursively scan nested files in 
ContinuousFileMonitoringFunction




> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.





[jira] [Commented] (FLINK-4988) Elasticsearch 5.x support

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816496#comment-15816496
 ] 

ASF GitHub Bot commented on FLINK-4988:
---

Github user mikedias commented on a diff in the pull request:

https://github.com/apache/flink/pull/2767#discussion_r95480042
  
--- Diff: 
flink-connectors/flink-connector-elasticsearch5/src/main/java/org/apache/flink/streaming/connectors/elasticsearch5/ElasticsearchSink.java
 ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.connectors.elasticsearch5;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.util.Preconditions;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.network.NetworkModule;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.common.transport.TransportAddress;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.transport.Netty3Plugin;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Sink that emits its input elements in bulk to an Elasticsearch cluster.
+ * 
+ * 
+ * The first {@link Map} passed to the constructor is forwarded to 
Elasticsearch when creating
+ * {@link TransportClient}. The config keys can be found in the 
Elasticsearch
+ * documentation. An important setting is {@code cluster.name}, this 
should be set to the name
+ * of the cluster that the sink should emit to.
+ * 
+ * Attention:  When using the {@code TransportClient} the sink will 
fail if no cluster
+ * can be connected to.
+ * 
+ * The second {@link Map} is used to configure a {@link BulkProcessor} to 
send {@link IndexRequest IndexRequests}.
+ * This will buffer elements before sending a request to the cluster. The 
behaviour of the
+ * {@code BulkProcessor} can be configured using these config keys:
+ * 
+ *  {@code bulk.flush.max.actions}: Maximum amount of elements to 
buffer
+ *  {@code bulk.flush.max.size.mb}: Maximum amount of data (in 
megabytes) to buffer
+ *  {@code bulk.flush.interval.ms}: Interval at which to flush data 
regardless of the other two
+ * settings in milliseconds
+ * 
+ * 
+ * 
+ * You also have to provide an {@link RequestIndexer}. This is used to 
create an
+ * {@link IndexRequest} from an element that needs to be added to 
Elasticsearch. See
+ * {@link RequestIndexer} for an example.
+ *
+ * @param <T> Type of the elements emitted by this sink
+ */
+public class ElasticsearchSink<T> extends RichSinkFunction<T> {
+
+   public static final String CONFIG_KEY_BULK_FLUSH_MAX_ACTIONS = 
"bulk.flush.max.actions";
+   public static final String CONFIG_KEY_BULK_FLUSH_MAX_SIZE_MB = 
"bulk.flush.max.size.mb";
+   public static final String CONFIG_KEY_BULK_FLUSH_INTERVAL_MS = 
"bulk.flush.interval.ms";
+
+   private static final long serialVersionUID = 1L;
+
+   private static final Logger LOG 

[GitHub] flink pull request #2767: [FLINK-4988] Elasticsearch 5.x support

2017-01-10 Thread mikedias
Github user mikedias commented on a diff in the pull request:

https://github.com/apache/flink/pull/2767#discussion_r95480042
  
--- Diff: 
flink-connectors/flink-connector-elasticsearch5/src/main/java/org/apache/flink/streaming/connectors/elasticsearch5/ElasticsearchSink.java
 ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.connectors.elasticsearch5;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.util.Preconditions;
+import org.elasticsearch.action.bulk.BulkItemResponse;
+import org.elasticsearch.action.bulk.BulkProcessor;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.client.Client;
+import org.elasticsearch.client.transport.TransportClient;
+import org.elasticsearch.common.network.NetworkModule;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.transport.InetSocketTransportAddress;
+import org.elasticsearch.common.transport.TransportAddress;
+import org.elasticsearch.common.unit.ByteSizeUnit;
+import org.elasticsearch.common.unit.ByteSizeValue;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.transport.Netty3Plugin;
+import org.elasticsearch.transport.client.PreBuiltTransportClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Sink that emits its input elements in bulk to an Elasticsearch cluster.
+ * 
+ * 
+ * The first {@link Map} passed to the constructor is forwarded to 
Elasticsearch when creating
+ * {@link TransportClient}. The config keys can be found in the 
Elasticsearch
+ * documentation. An important setting is {@code cluster.name}, this 
should be set to the name
+ * of the cluster that the sink should emit to.
+ * 
+ * Attention:  When using the {@code TransportClient} the sink will 
fail if no cluster
+ * can be connected to.
+ * 
+ * The second {@link Map} is used to configure a {@link BulkProcessor} to 
send {@link IndexRequest IndexRequests}.
+ * This will buffer elements before sending a request to the cluster. The 
behaviour of the
+ * {@code BulkProcessor} can be configured using these config keys:
+ * 
+ *  {@code bulk.flush.max.actions}: Maximum amount of elements to 
buffer
+ *  {@code bulk.flush.max.size.mb}: Maximum amount of data (in 
megabytes) to buffer
+ *  {@code bulk.flush.interval.ms}: Interval at which to flush data 
regardless of the other two
+ * settings in milliseconds
+ * 
+ * 
+ * 
+ * You also have to provide an {@link RequestIndexer}. This is used to 
create an
+ * {@link IndexRequest} from an element that needs to be added to 
Elasticsearch. See
+ * {@link RequestIndexer} for an example.
+ *
+ * @param <T> Type of the elements emitted by this sink
+ */
+public class ElasticsearchSink<T> extends RichSinkFunction<T> {
+
+   public static final String CONFIG_KEY_BULK_FLUSH_MAX_ACTIONS = 
"bulk.flush.max.actions";
+   public static final String CONFIG_KEY_BULK_FLUSH_MAX_SIZE_MB = 
"bulk.flush.max.size.mb";
+   public static final String CONFIG_KEY_BULK_FLUSH_INTERVAL_MS = 
"bulk.flush.interval.ms";
+
+   private static final long serialVersionUID = 1L;
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(ElasticsearchSink.class);
+
+   /**
+* The user specified config map that we forward to Elasticsearch when 
we create the Client.
+*/
+   private final Map<String, String> esConfig;
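For illustration, the bulk-flush keys documented in the quoted Javadoc could be supplied through the user config map like this. This is only a sketch: the cluster name and the chosen values are made up, and only the plain map is shown, not the sink construction itself:

```java
import java.util.HashMap;
import java.util.Map;

public class BulkFlushConfigExample {
    public static void main(String[] args) {
        // User config map forwarded to the sink; the keys match the
        // CONFIG_KEY_BULK_FLUSH_* constants defined in the diff.
        Map<String, String> esConfig = new HashMap<>();
        esConfig.put("cluster.name", "my-cluster");      // hypothetical cluster name
        esConfig.put("bulk.flush.max.actions", "1000");  // flush after 1000 buffered elements
        esConfig.put("bulk.flush.max.size.mb", "5");     // ...or after 5 MB of buffered data
        esConfig.put("bulk.flush.interval.ms", "60000"); // ...or at least once per minute
        System.out.println(esConfig.get("bulk.flush.max.actions"));
    }
}
```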
   

[jira] [Closed] (FLINK-3867) Provide virtualized Flink architecture for testing purposes

2017-01-10 Thread Andreas Kempa-Liehr (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Kempa-Liehr closed FLINK-3867.
--
Resolution: Fixed

Flink VM has been separated into a new repository 
https://github.com/kempa-liehr/DSVMs.

> Provide virtualized Flink architecture for testing purposes
> ---
>
> Key: FLINK-3867
> URL: https://issues.apache.org/jira/browse/FLINK-3867
> Project: Flink
>  Issue Type: Test
>  Components: flink-contrib
>Reporter: Andreas Kempa-Liehr
>
> For developers interested in Apache Flink it would be very helpful to deploy 
> an Apache Flink cluster on a set of virtualized machines, in order to get 
> used to the configuration of the system and the development of basic 
> algorithms.
> This kind of setup could also be used for testing purposes.
> An example implementation on basis of Ansible and Vagrant has been published 
> unter https://github.com/kempa-liehr/flinkVM/tree/master/flink-vm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5320:
-
Fix Version/s: 1.2.0

> Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, 
> WindowFunction)
> 
>
> Key: FLINK-5320
> URL: https://issues.apache.org/jira/browse/FLINK-5320
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>Priority: Blocker
> Fix For: 1.2.0
>
>
> The WindowedStream.fold(ACC, FoldFunction, WindowFunction) does not correctly 
> infer the resultType.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5432:
-
Fix Version/s: 1.3.0
   1.2.0

> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
> Fix For: 1.2.0, 1.3.0
>
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.
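The recursive scan suggested for {{listEligibleFiles}} can be sketched in plain Java. This is a simplified illustration using java.io.File rather than Flink's FileSystem API, so the names and signatures here are not the actual ones from ContinuousFileMonitoringFunction:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class NestedScan {
    // Recursively collect regular files under dir, descending into
    // subdirectories the way NestedFileEnumeration is expected to.
    static List<File> listEligibleFiles(File dir) {
        List<File> out = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries == null) {
            return out; // dir is missing or not a directory
        }
        for (File entry : entries) {
            if (entry.isDirectory()) {
                out.addAll(listEligibleFiles(entry)); // recurse into nested dirs
            } else {
                out.add(entry);
            }
        }
        return out;
    }
}
```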



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4917) Deprecate "CheckpointedAsynchronously" interface

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816780#comment-15816780
 ] 

ASF GitHub Bot commented on FLINK-4917:
---

Github user mtunique commented on the issue:

https://github.com/apache/flink/pull/3087
  
I am sorry about it.


> Deprecate "CheckpointedAsynchronously" interface
> 
>
> Key: FLINK-4917
> URL: https://issues.apache.org/jira/browse/FLINK-4917
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Stephan Ewen
>  Labels: easyfix, starter
>
> The {{CheckpointedAsynchronously}} should be deprecated, as it is no longer 
> part of the new operator state abstraction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3087: [FLINK-4917] Deprecate "CheckpointedAsynchronously" inter...

2017-01-10 Thread mtunique
Github user mtunique commented on the issue:

https://github.com/apache/flink/pull/3087
  
I am sorry about it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4692) Add tumbling group-windows for batch tables

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817257#comment-15817257
 ] 

ASF GitHub Bot commented on FLINK-4692:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2938
  
Thanks @twalthr for your hint. It is a bug in my 
`DataSetTumbleTimeWindowAggReduceCombineFunction#combine(...)` method, that the 
rowtime attribute is dropped when combining. 

The collection environment will not run combine phase, but cluster 
environment will. That's why we can't reproduce the wrong result in test base.

BTW, do we need to activate the cluster execution mode in table IT cases ?  
Currently, only collection execution mode is activated. 


> Add tumbling group-windows for batch tables
> ---
>
> Key: FLINK-4692
> URL: https://issues.apache.org/jira/browse/FLINK-4692
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Add Tumble group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2938: [FLINK-4692] [tableApi] Add tumbling group-windows for ba...

2017-01-10 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2938
  
Thanks @twalthr for your hint. It is a bug in my 
`DataSetTumbleTimeWindowAggReduceCombineFunction#combine(...)` method, that the 
rowtime attribute is dropped when combining. 

The collection environment will not run combine phase, but cluster 
environment will. That's why we can't reproduce the wrong result in test base.

BTW, do we need to activate the cluster execution mode in table IT cases ?  
Currently, only collection execution mode is activated. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5431) time format for akka status

2017-01-10 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817437#comment-15817437
 ] 

Alexey Diomin commented on FLINK-5431:
--

comment Till Rohrmann

---
I agree with your proposal to use "yyyy-MM-dd HH:mm:ss" or even
"yyyy-MM-ddTHH:mm:ss" to follow the ISO standard per default but still give
the user the possibility to configure it.
---

> time format for akka status
> ---
>
> Key: FLINK-5431
> URL: https://issues.apache.org/jira/browse/FLINK-5431
> Project: Flink
>  Issue Type: Improvement
>Reporter: Alexey Diomin
>Assignee: Anton Solovev
>Priority: Minor
>
> In ExecutionGraphMessages we have code
> {code}
> private val DATE_FORMATTER: SimpleDateFormat = new 
> SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
> {code}
> But sometimes it causes confusion when the main logger is configured with 
> "dd/MM/yyyy".
> We need to make this format configurable, or maybe keep only "HH:mm:ss", to 
> prevent misunderstanding the output date-time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5386) Refactoring Window Clause

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814273#comment-15814273
 ] 

ASF GitHub Bot commented on FLINK-5386:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3046#discussion_r95315326
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/windows.scala
 ---
@@ -150,7 +147,7 @@ class TumblingWindow(size: Expression) extends 
GroupWindow {
   def as(alias: String): TumblingWindow = 
as(ExpressionParser.parseExpression(alias))
 
   override private[flink] def toLogicalWindow: LogicalWindow =
-ProcessingTimeTumblingGroupWindow(alias, size)
+ProcessingTimeTumblingGroupWindow(name, size)
--- End diff --

Hi, @shaoxuan-wang  thanks a lot for the review. I have updated the PR 
according to your comments. The change list:
 1. Remove GroupWindowedTable.
 2. Change "name" to "alias".
 3. Add testMultiWindow.
 4. Rebase code and fixed the conflicts.


> Refactoring Window Clause
> -
>
> Key: FLINK-5386
> URL: https://issues.apache.org/jira/browse/FLINK-5386
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Similar to the SQL, window clause is defined "as" a symbol which is 
> explicitly used in groupby/over. We are proposing to refactor the way to 
> write groupby+window tableAPI as follows: 
> val windowedTable = table
>  .window(Slide over 10.milli every 5.milli as 'w1)
>  .window(Tumble over 5.milli  as 'w2)
>  .groupBy('w1, 'key)
>  .select('string, 'int.count as 'count, 'w1.start)
>  .groupBy( 'w2, 'key)
>  .select('string, 'count.sum as sum2)
>  .window(Tumble over 5.milli  as 'w3)
>  .groupBy( 'w3) // windowAll
>  .select('sum2, 'w3.start, 'w3.end)
> In this way, we can remove both GroupWindowedTable and the window() method in 
> GroupedTable which makes the API a bit clean. In addition, for row-window, we 
> anyway need to define window clause as a symbol. This change will make the 
> API of window and row-window consistent, example for row-window:
>   .window(RowXXXWindow as ‘x, RowYYYWindow as ‘y)
>   .select(‘a, ‘b.count over ‘x as ‘xcnt, ‘c.count over ‘y as ‘ycnt, ‘x.start, 
> ‘x.end)
> What do you think? [~fhueske] [~twalthr]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3046: [FLINK-5386][Table API & SQL] refactoring Window C...

2017-01-10 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/3046#discussion_r95315326
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/windows.scala
 ---
@@ -150,7 +147,7 @@ class TumblingWindow(size: Expression) extends 
GroupWindow {
   def as(alias: String): TumblingWindow = 
as(ExpressionParser.parseExpression(alias))
 
   override private[flink] def toLogicalWindow: LogicalWindow =
-ProcessingTimeTumblingGroupWindow(alias, size)
+ProcessingTimeTumblingGroupWindow(name, size)
--- End diff --

Hi, @shaoxuan-wang  thanks a lot for the review. I have updated the PR 
according to your comments. The change list:
 1. Remove GroupWindowedTable.
 2. Change "name" to "alias".
 3. Add testMultiWindow.
 4. Rebase code and fixed the conflicts.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4396) GraphiteReporter class not found at startup of jobmanager

2017-01-10 Thread Steven Ruppert (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816509#comment-15816509
 ] 

Steven Ruppert commented on FLINK-4396:
---

I just ran into this today.

Reading: 
https://ci.apache.org/projects/flink/flink-docs-release-1.2/monitoring/metrics.html

it doesn't say anywhere that you need to download extra libs.

> GraphiteReporter class not found at startup of jobmanager
> -
>
> Key: FLINK-4396
> URL: https://issues.apache.org/jira/browse/FLINK-4396
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Metrics
>Affects Versions: 1.1.1
> Environment: Windows and Unix
>Reporter: RWenden
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> For Flink 1.1.1 we configured Graphite metrics settings on the 
> flink-conf.yaml (for job manager (and taskmanager)).
> We see the following error in the log:
> 2016-08-15 14:20:34,167 ERROR org.apache.flink.runtime.metrics.MetricRegistry 
>   - Could not instantiate metrics reporter my_reporter. Metrics 
> might not be exposed/reported.
> java.lang.ClassNotFoundException: 
> org.apache.flink.metrics.graphite.GraphiteReporter
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at 
> org.apache.flink.runtime.metrics.MetricRegistry.<init>(MetricRegistry.java:119)
> We found out that this class is not packaged inside flink-dist_2.11-1.1.1.jar.
> Long story short: we had to install/provide the following jars into the lib 
> folder to make Graphite metrics to work:
> flink-metrics-graphite-1.1.1.jar
> flink-metrics-dropwizard-1.1.1.jar
> metrics-graphite-3.1.0.jar (from dropwizard)
> We think these libraries (and the ones for Ganglia,StatsD,...) should be 
> included in flink-dist_2.11-1.1.1.jar, for these are needed at manager 
> startup time.
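For reference, a reporter configuration of the kind that triggers the error above looks roughly like this in flink-conf.yaml. The reporter name my_reporter matches the log message; the exact keys and values are a sketch and should be verified against the metrics documentation for the Flink version in use:

```yaml
metrics.reporters: my_reporter
metrics.reporter.my_reporter.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.my_reporter.host: localhost
metrics.reporter.my_reporter.port: 2003
```

Without the Graphite metrics jars in lib/, this configuration fails at startup with the ClassNotFoundException shown above.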



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on the issue:

https://github.com/apache/flink/pull/3086
  
@mxn One drawback of using `ENV` might be with docker-compose, see 
https://docs.docker.com/compose/compose-file/#/args which states

> You can omit the value when specifying a build argument, in which case 
its value at build time is the value in the environment where Compose is 
running.

> Note: If your service specifies a build option, variables defined in 
environment will not be automatically visible during the build. Use the args 
sub-option of build to define build-time environment variables.

On the other hand the Dockerfile reference says 
(https://docs.docker.com/engine/reference/builder/#/arg)

> Unlike an ARG instruction, ENV values are always persisted in the built 
image.

Maybe something like this can be used to support both
```
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER ${CONT_IMG_VER:-v1.0.0}
RUN echo $CONT_IMG_VER
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3089: remove duplicated tests

2017-01-10 Thread xhumanoid
Github user xhumanoid commented on the issue:

https://github.com/apache/flink/pull/3089
  
@aljoscha  @rmetzger


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins

2017-01-10 Thread Anton Mushin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817549#comment-15817549
 ] 

Anton Mushin commented on FLINK-5256:
-

Thanks for your comment!
I updated my [test 
branch|https://github.com/apache/flink/compare/master...ex00:FLINK-5256-tests], 
could you take a look at it, please?
I suspect that the following line is not exactly correct
{code:title=DataSetSingleRowJoin.scala#176}
s"${conversion.resultTerm}.setField($i,null);")
{code}
What do you think about these changes?

bq.2) the MapJoinLeftRunner and MapJoinRightRunner: Right now both runners do 
only call the join function if the single input is set (not null). For outer 
joins we also need to produce output if the single input is null.

How can I test this case? I tried to test it in [this 
case|https://github.com/apache/flink/compare/master...ex00:FLINK-5256-tests#diff-102e5d9e330260c0acf5e4e54ff3bdceR438].
 This case is passing now.

> Extend DataSetSingleRowJoin to support Left and Right joins
> ---
>
> Key: FLINK-5256
> URL: https://issues.apache.org/jira/browse/FLINK-5256
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Anton Mushin
>
> The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary 
> inner joins where one input is a single row.
> I found that Calcite translates certain subqueries into non-equi left and 
> right joins with single input. These cases can be handled if the  
> {{DataSetSingleRowJoin}} is extended to support outer joins on the 
> non-single-row input, i.e., left joins if the right side is single input and 
> vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2545: [FLINK-4673] [core] TypeInfoFactory for Either typ...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2545


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3062: [FLINK-5144] Fix error while applying rule AggregateJoinT...

2017-01-10 Thread KurtYoung
Github user KurtYoung commented on the issue:

https://github.com/apache/flink/pull/3062
  
Thanks @twalthr for the reviewing. I have opened 
https://issues.apache.org/jira/browse/FLINK-5435 to track the cleanup work.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95383742
  
--- Diff: flink-contrib/docker-flink/docker-entrypoint.sh ---
@@ -36,9 +39,9 @@ elif [ "$1" == "taskmanager" ]; then
 echo "Starting Task Manager"
 echo "config file: " && grep '^[^\n#]' $FLINK_HOME/conf/flink-conf.yaml
 $FLINK_HOME/bin/taskmanager.sh start
+
+  # prevent script to exit
+  tail -f /dev/null
 else
 $@
--- End diff --

When this image is used as a one-off container, e.g. I want to run 
`bin/flink list`, then this line would prevent the one-off container from 
exiting correctly - instead it would hang forever.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4861) Package optional project artifacts

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815251#comment-15815251
 ] 

ASF GitHub Bot commented on FLINK-4861:
---

Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/3000


> Package optional project artifacts
> --
>
> Key: FLINK-4861
> URL: https://issues.apache.org/jira/browse/FLINK-4861
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Per the mailing list 
> [discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Additional-project-downloads-td13223.html],
>  package the Flink libraries and connectors into subdirectories of a new 
> {{opt}} directory in the release/snapshot tarballs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815115#comment-15815115
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95374019
  
--- Diff: tools/maven/suppressions.xml ---
@@ -23,6 +23,8 @@ under the License.
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
 
 
-   
-   
+
--- End diff --

Can you show me how to disable the style check in the flink-table module, or 
show me a similar example and I can figure it out by myself.


> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate if 
> this is a Flink or Calcite error. Here a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(BIGINT l_partkey, BIGINT p_partkey) NOT NULL
> rowtype of set:
> RecordType(BIGINT p_partkey) NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1838)
>   at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:273)
>   at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:148)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1820)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1032)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1052)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1942)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:136)
>   ... 17 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3080: [FLINK-4920] Add a Scala Function Gauge

2017-01-10 Thread KurtYoung
Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3080#discussion_r95375413
  
--- Diff: 
flink-scala/src/main/scala/org/apache/flink/api/scala/metrics/ScalaGauge.scala 
---
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.metrics
+
+import org.apache.flink.metrics.Gauge
+
+class ScalaGauge[T](value : T) extends Gauge[T] {
--- End diff --

Yeah that sounds good to me. Let us see what the code looks like after you 
modify this.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2938: [FLINK-4692] [tableApi] Add tumbling group-windows for ba...

2017-01-10 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2938
  
@fhueske and I looked into this again. It seems that the result depends on the 
batch ExecutionEnvironment. I used a regular environment while the test base 
uses a CollectionExecutionEnvironment. We don't know if this is a problem of 
your implementation or a bug in the collection environment.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4692) Add tumbling group-windows for batch tables

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815149#comment-15815149
 ] 

ASF GitHub Bot commented on FLINK-4692:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2938
  
@fhueske and I looked into this again. It seems that the result depends on the 
batch ExecutionEnvironment. I used a regular environment while the test base 
uses a CollectionExecutionEnvironment. We don't know if this is a problem of 
your implementation or a bug in the collection environment.


> Add tumbling group-windows for batch tables
> ---
>
> Key: FLINK-4692
> URL: https://issues.apache.org/jira/browse/FLINK-4692
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> Add Tumble group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3062: [FLINK-5144] Fix error while applying rule Aggrega...

2017-01-10 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95381307
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/CorrelateITCase.scala
 ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.scala.batch.sql
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.ExecutionEnvironment
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.table.api.TableEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.apache.flink.test.util.TestBaseUtils
+import org.apache.flink.types.Row
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+import scala.collection.JavaConverters._
+
+@RunWith(classOf[Parameterized])
+class CorrelateITCase(mode: TestExecutionMode, configMode: TableConfigMode)
--- End diff --

Yes, you are right. Query decorrelation has to be tested well, but 
everything happens logically. ITCases are basically only necessary to test the 
translation of FlinkRels such as `DataSetJoin`, `DataStreamUnion`, etc.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3062: [FLINK-5144] Fix error while applying rule Aggrega...

2017-01-10 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95381802
  
--- Diff: tools/maven/suppressions.xml ---
@@ -23,6 +23,8 @@ under the License.
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
 
 
-   
-   
+
--- End diff --

It should look similar to:

```java
//CHECKSTYLE.OFF: AvoidStarImport - Needed for TupleGenerator
import org.apache.flink.api.java.tuple.*;
//CHECKSTYLE.ON: AvoidStarImport
```

You have to specify which checkstyle rule you want to disable and give a 
explanation why.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815217#comment-15815217
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95381802
  
--- Diff: tools/maven/suppressions.xml ---
@@ -23,6 +23,8 @@ under the License.
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
 
 
-   
-   
+
--- End diff --

It should look similar to:

```java
//CHECKSTYLE.OFF: AvoidStarImport - Needed for TupleGenerator
import org.apache.flink.api.java.tuple.*;
//CHECKSTYLE.ON: AvoidStarImport
```

You have to specify which checkstyle rule you want to disable and give an 
explanation why.


> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate if 
> this is a Flink or Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(BIGINT l_partkey, BIGINT p_partkey) NOT NULL
> rowtype of set:
> RecordType(BIGINT p_partkey) NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1838)
>   at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:273)
>   at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:148)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1820)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1766)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1032)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1052)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1942)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:136)
>   ... 17 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on the issue:

https://github.com/apache/flink/pull/3086
  
@greghogan No, I didn't create a ticket (sorry, I was not sure this change 
justified one). But I would love to see a ticket for an automated Flink build 
on Docker Hub.




[GitHub] flink pull request #3087: [FLINK-4917] Deprecate "CheckpointedAsynchronously...

2017-01-10 Thread mtunique
GitHub user mtunique opened a pull request:

https://github.com/apache/flink/pull/3087

[FLINK-4917] Deprecate "CheckpointedAsynchronously" interface

- [x] General
  - The pull request references the related JIRA issue ([FLINK-4917] 
Deprecate "CheckpointedAsynchronously" interface)
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mtunique/flink flink-4917

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3087.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3087


commit e4ed2142447daf585cdd0c76fadc559429f8ac11
Author: mtunique 
Date:   2017-01-10T15:05:44Z

[FLINK-4917] Deprecate "CheckpointedAsynchronously" interface






[GitHub] flink pull request #2545: [FLINK-4673] [core] TypeInfoFactory for Either typ...

2017-01-10 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/2545#discussion_r95386388
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java 
---
@@ -675,38 +673,6 @@ else if (isClassType(t) && 
Tuple.class.isAssignableFrom(typeToClass(t))) {
return new TupleTypeInfo(typeToClass(t), subTypesInfo);

}
-   // check if type is a subclass of Either
-   else if (isClassType(t) && 
Either.class.isAssignableFrom(typeToClass(t))) {
-   Type curT = t;
-
-   // go up the hierarchy until we reach Either (with or 
without generics)
-   // collect the types while moving up for a later 
top-down
-   while (!(isClassType(curT) && 
typeToClass(curT).equals(Either.class))) {
-   typeHierarchy.add(curT);
-   curT = typeToClass(curT).getGenericSuperclass();
-   }
-
-   // check if Either has generics
-   if (curT instanceof Class) {
-   throw new InvalidTypesException("Either needs 
to be parameterized by using generics.");
-   }
-
-   typeHierarchy.add(curT);
-
-   // create the type information for the subtypes
-   final TypeInformation[] subTypesInfo = 
createSubTypesInfo(t, (ParameterizedType) curT, typeHierarchy, in1Type, 
in2Type, false);
-   // type needs to be treated a pojo due to additional 
fields
-   if (subTypesInfo == null) {
-   if (t instanceof ParameterizedType) {
-   return (TypeInformation) 
analyzePojo(typeToClass(t), new ArrayList(typeHierarchy), 
(ParameterizedType) t, in1Type, in2Type);
--- End diff --

@twalthr thanks for checking this! Glad to hear that your factories 
implementation has exceeded expectations :)




[GitHub] flink issue #3087: [FLINK-4917] Deprecate "CheckpointedAsynchronously" inter...

2017-01-10 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3087
  
Per Fabian's comment from the Jira, we should also document the deprecation 
in the javadoc with the recommended replacement functionality.
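As a rough illustration of what such a deprecation note might look like (the interface bodies and the wording of the replacement hint below are assumptions for the sketch, not Flink's actual code):

```java
// Minimal stand-in declaration; the real interface lives in flink-streaming-java.
interface Checkpointed<T> {
    T snapshotState(long checkpointId, long checkpointTimestamp) throws Exception;
}

/**
 * @deprecated No longer part of the new operator state abstraction; implement
 *             the new checkpointing interfaces instead (see FLINK-4917).
 */
@Deprecated
interface CheckpointedAsynchronously<T> extends Checkpointed<T> {}

public class DeprecationDemo {
    public static void main(String[] args) {
        // @Deprecated has runtime retention, so the marker is visible via reflection.
        System.out.println(
            CheckpointedAsynchronously.class.isAnnotationPresent(Deprecated.class));
    }
}
```

Running the demo prints `true`, confirming that tools (and javadoc) can see the deprecation marker on the type.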




[jira] [Assigned] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui reassigned FLINK-5432:


Assignee: Yassine Marzougui

> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.
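The fix described above can be sketched with plain `java.io.File` (Flink's real implementation goes through its own `FileSystem`/`FileStatus` abstraction, so the names and types here are illustrative assumptions):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class RecursiveScan {

    // Recursively collect regular files, descending into nested directories:
    // the behavior the ContinuousFileMonitoringFunction fix should enable.
    static List<File> listEligibleFiles(File dir) {
        List<File> result = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries == null) {
            return result; // not a directory, or an I/O error occurred
        }
        for (File entry : entries) {
            if (entry.isDirectory()) {
                result.addAll(listEligibleFiles(entry));
            } else {
                result.add(entry);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        File root = new File(args.length > 0 ? args[0] : ".");
        System.out.println(listEligibleFiles(root).size() + " files found");
    }
}
```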





[GitHub] flink pull request #3062: [FLINK-5144] Fix error while applying rule Aggrega...

2017-01-10 Thread KurtYoung
Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95374019
  
--- Diff: tools/maven/suppressions.xml ---
@@ -23,6 +23,8 @@ under the License.
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
 
 
-   
-   
+
--- End diff --

Can you show me how to disable the style check in the flink-table module, or 
point me to a similar example so I can figure it out myself?




[jira] [Commented] (FLINK-4920) Add a Scala Function Gauge

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815130#comment-15815130
 ] 

ASF GitHub Bot commented on FLINK-4920:
---

Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/3080#discussion_r95375413
  
--- Diff: 
flink-scala/src/main/scala/org/apache/flink/api/scala/metrics/ScalaGauge.scala 
---
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.metrics
+
+import org.apache.flink.metrics.Gauge
+
+class ScalaGauge[T](value : T) extends Gauge[T] {
--- End diff --

Yeah, that sounds good to me. Let's see what the code looks like after you 
modify this.


> Add a Scala Function Gauge
> --
>
> Key: FLINK-4920
> URL: https://issues.apache.org/jira/browse/FLINK-4920
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics, Scala API
>Reporter: Stephan Ewen
>Assignee: Pattarawat Chormai
>  Labels: easyfix, starter
>
> A useful metrics utility for the Scala API would be to add a Gauge that 
> obtains its value by calling a Scala Function0.
> That way, one can add Gauges in Scala programs using Scala lambda notation or 
> function references.
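In Java terms the idea reduces to wrapping a no-argument function (a `Supplier` here; the Scala version would take a `Function0`). The `Gauge` interface below is a minimal stand-in for `org.apache.flink.metrics.Gauge`, assumed to expose a single `getValue` method:

```java
import java.util.function.Supplier;

// Minimal stand-in for org.apache.flink.metrics.Gauge.
interface Gauge<T> {
    T getValue();
}

// A gauge that evaluates the wrapped function on every read, so it always
// reports the current value rather than the value at registration time.
class SupplierGauge<T> implements Gauge<T> {
    private final Supplier<T> supplier;

    SupplierGauge(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    @Override
    public T getValue() {
        return supplier.get();
    }
}

public class GaugeDemo {
    static int requestCount = 0;

    public static void main(String[] args) {
        Gauge<Integer> gauge = new SupplierGauge<>(() -> requestCount);
        requestCount = 42;
        System.out.println(gauge.getValue()); // prints 42
    }
}
```

This is what makes lambda notation and function references work naturally at the call site, which is the point of the proposal.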





[GitHub] flink issue #3039: [FLINK-5280] Update TableSource to support nested data

2017-01-10 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3039
  
Merging




[GitHub] flink issue #3027: [FLINK-5358] add RowTypeInfo exctraction in TypeExtractor

2017-01-10 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3027
  
merging




[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815203#comment-15815203
 ] 

ASF GitHub Bot commented on FLINK-5280:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3039
  
Merging


> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[jira] [Commented] (FLINK-5144) Error while applying rule AggregateJoinTransposeRule

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815208#comment-15815208
 ] 

ASF GitHub Bot commented on FLINK-5144:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/3062#discussion_r95381307
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/CorrelateITCase.scala
 ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.scala.batch.sql
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.ExecutionEnvironment
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.table.api.TableEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.table.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.apache.flink.test.util.TestBaseUtils
+import org.apache.flink.types.Row
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+import scala.collection.JavaConverters._
+
+@RunWith(classOf[Parameterized])
+class CorrelateITCase(mode: TestExecutionMode, configMode: TableConfigMode)
--- End diff --

Yes, you are right. Query decorrelation has to be tested well, but it all 
happens at the logical plan level. ITCases are basically only necessary to test 
the translation of FlinkRels such as `DataSetJoin`, `DataStreamUnion`, etc.


> Error while applying rule AggregateJoinTransposeRule
> 
>
> Key: FLINK-5144
> URL: https://issues.apache.org/jira/browse/FLINK-5144
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Kurt Young
>
> AggregateJoinTransposeRule seems to cause errors. We have to investigate if 
> this is a Flink or Calcite error. Here is a simplified example:
> {code}
> select
>   sum(l_extendedprice)
> from
>   lineitem,
>   part
> where
>   p_partkey = l_partkey
>   and l_quantity < (
> select
>   avg(l_quantity)
> from
>   lineitem
> where
>   l_partkey = p_partkey
>   )
> {code}
> Exception:
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: Error 
> occurred while applying rule AggregateJoinTransposeRule
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
>   at 
> org.apache.calcite.rel.rules.AggregateJoinTransposeRule.onMatch(AggregateJoinTransposeRule.java:342)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:251)
>   at 
> org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:286)
>   at 
> org.apache.flink.api.scala.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:139)
>   at 
> org.apache.flink.api.scala.table.package$.table2RowDataSet(package.scala:77)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.runQ17(TPCHQueries.scala:826)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries$.main(TPCHQueries.scala:57)
>   at 
> org.apache.flink.api.scala.sql.tpch.TPCHQueries.main(TPCHQueries.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[GitHub] flink issue #3030: Updated version of #3014

2017-01-10 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3030
  
Has this feature been merged?




[GitHub] flink pull request #3086: Improve docker setup

2017-01-10 Thread kaelumania
Github user kaelumania commented on a diff in the pull request:

https://github.com/apache/flink/pull/3086#discussion_r95384182
  
--- Diff: flink-contrib/docker-flink/Dockerfile ---
@@ -22,9 +22,9 @@ FROM java:8-jre-alpine
 RUN apk add --no-cache bash snappy
 
 # Configure Flink version
-ENV FLINK_VERSION=1.1.1
-ENV HADOOP_VERSION=27
-ENV SCALA_VERSION=2.11
+ARG FLINK_VERSION=1.1.3
--- End diff --

The `README` says I can specify a `FLINK_VERSION` using `docker build 
--build-arg FLINK_VERSION=1.0.3 flink`, but that is only possible by declaring 
the variable with `ARG` rather than `ENV`, see 
https://docs.docker.com/engine/reference/builder/#/arg




[jira] [Commented] (FLINK-4917) Deprecate "CheckpointedAsynchronously" interface

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815258#comment-15815258
 ] 

ASF GitHub Bot commented on FLINK-4917:
---

GitHub user mtunique opened a pull request:

https://github.com/apache/flink/pull/3087

[FLINK-4917] Deprecate "CheckpointedAsynchronously" interface

- [x] General
  - The pull request references the related JIRA issue ([FLINK-4917] 
Deprecate "CheckpointedAsynchronously" interface)
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mtunique/flink flink-4917

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3087.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3087


commit e4ed2142447daf585cdd0c76fadc559429f8ac11
Author: mtunique 
Date:   2017-01-10T15:05:44Z

[FLINK-4917] Deprecate "CheckpointedAsynchronously" interface




> Deprecate "CheckpointedAsynchronously" interface
> 
>
> Key: FLINK-4917
> URL: https://issues.apache.org/jira/browse/FLINK-4917
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Stephan Ewen
>  Labels: easyfix, starter
>
> The {{CheckpointedAsynchronously}} should be deprecated, as it is no longer 
> part of the new operator state abstraction.




